Columns: id (string, 40 characters), pid (string, 42 characters), input (string, 8.37k–169k characters), output (string, 1–1.63k characters).
3786164eaf3965c11c9969c4463b8c3223627067
3786164eaf3965c11c9969c4463b8c3223627067_0
Q: What annotations are available in ICSI meeting corpus? Text: Introduction and Prior Work A definition of the meeting “hot spots” was first introduced in BIBREF2, where it was investigated whether human annotators could reliably identify regions in which participants are “highly involved in the discussion”. The motivation was that meetings generally have low information density and are tedious to review verbatim after the fact. An automatic system that could detect regions of high interest (as indicated by the involvement of the participants during the meeting) would thus be useful. Relatedly, automatic meeting summarization could also benefit from such information to give extra weight to hot spot regions in selecting or abstracting material for inclusion in the summary. Later work on the relationship between involvement and summarization BIBREF3 defined a different approach: hot spots are those regions chosen for inclusion in a summary by human annotators (“summarization hot spots”). In the present work we stick with the original “involvement hot spot” notion, and refer to such regions simply as “hot spots”, regardless of their possible role in summarization. We note that high involvement may be triggered both by a meeting's content (“what is being talked about”, and “what may be included in a textual summary”), as well as behavioral and social factors, such as a desire to participate, to stake out a position, or to oppose another participant. As related notion in dialog system research is “level of interest” BIBREF4. The initial research on hot spots focused on the reliability of human annotators and correlations with certain low-level acoustic features, such as pitch BIBREF2. Also investigated were the correlation between hot spots and dialog acts BIBREF5 and hot spots and speaker overlap BIBREF6, without however conducting experiments in automatic hot spot prediction using machine learning techniques. Laskowski BIBREF7 redefined the hot spot annotations in terms of time-based windows over meetings, and investigated various classifier models to detect “hotness” (i.e., elevated involvement). However, that work focused on only two types of speech features: presence of laughter and the temporal patterns of speech activity across the various participants, both of which were found to be predictive of involvement. For the related problem of level-of-interest prediction in dialog systems BIBREF8, it was found that content-based classification can also be effective, using both a discriminative TF-IDF model and lexical affect scores, as well as prosodic features. In line with the earlier hot spot research on interaction patterns and speaker overlap, turn-taking features were shown to be helpful for spotting summarization hot spots, in BIBREF3, and even more so than the human involvement annotations. The latter result confirms our intuition that summarization-worthiness and involvement are different notions of “hotness”. In this paper, following Laskowski, we focus on the automatic prediction of the speakers' involvement in sliding-time windows/segments. We evaluate machine learning models based on a range of features that can be extracted automatically from audio recordings, either directly via signal processing or via the use of automatic transcriptions (ASR outputs). 
In particular, we investigate the relative contributions of three classes of information: low-level acoustic-prosodic features, such as those commonly used in other paralinguistic tasks, such as sentiment analysis (extracted using openSMILE BIBREF0); spoken word content, as encoded with a state-of-the-art lexical embedding approach such as BERT BIBREF1; speaker interaction, based on speech activity over time and across different speakers. We attach lower importance to laughter, even though it was found to be highly predictive of involvement in the ICSI corpus, partly because we believe it would not transfer well to more general types of (e.g., business) meetings, and partly because laughter detection is still a hard problem in itself BIBREF9. Generation of speaker-attributed meeting transcriptions, on the other hand, has seen remarkable progress BIBREF10 and could support the features we focus on here. Data The ICSI Meeting Corpus BIBREF11 is a collection of meeting recordings that has been thoroughly annotated, including annotations for involvement hot spots BIBREF12, linguistic utterance units, and word time boundaries based on forced alignment. The dataset is comprised of 75 meetings and about 70 hours of real-time audio duration, with 6 speakers per meeting on average. Most of the participants are well-acquainted and friendly with each other. Hot spots were originally annotated with 8 levels and degrees, ranging from `not hot' to `luke warm' to `hot +'. Every utterance was labeled with one of these discrete labels by a single annotator. Hightened involvement is rare, being marked on only 1% of utterances. Due to the severe imbalance in the label distribution, Laskowski BIBREF13 proposed extending the involvement, or hotness, labels to sliding time windows. In our implementation (details below), this resulted in 21.7% of samples (windows) being labeled as “involved”. We split the corpus into three subsets: training, development, and evaluation, keeping meetings intact. Table TABREF4 gives statistics of these partitions. We were concerned with the relatively small number of meetings in the test sets, and repeated several of our experiments with a (jackknifing) cross-validation setup over the training set. The results obtained were very similar to those with the fixed train/test split results that we report here. Data ::: Time Windowing As stated above, the corpus was originally labeled for hot spots at the utterance level, where involvement was marked by either a `b' or a `b+' label. Training and test samples for our experiments correspond to 60 s-long sliding windows, with a 15 s step size. If a certain window, e.g., a segment spanning the times 15 s ...75 s, overlaps with any involved speech utterance, then we label that whole window as `hot'. Fig. FIGREF6 gives a visual representation. Data ::: Metric In spite of the windowing approach, the class distribution is still skewed, and an accuracy metric would reflect the particular class distribution in our data set. Therefore, we adopt the unweighted average recall (UAR) metric commonly used in emotion classification research. UAR is a reweighted accuracy where the samples of both classes are weighted equally in aggregate. UAR thus simulates a uniform class distribution. To match the objective, our classifiers are trained on appropriately weighted training data. Note that chance performance for UAR is by definition 50%, making results more comparable across different data sets. 
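To make the windowing and metric concrete, the following is a minimal Python sketch of the 60 s / 15 s sliding-window labeling and of UAR; the function names and the (start, end, label) utterance format are illustrative assumptions, not the authors' implementation.

    WINDOW = 60.0  # window length in seconds
    STEP = 15.0    # hop size in seconds

    def window_labels(meeting_duration, utterances):
        """utterances: list of (start, end, label) with label in {'b', 'b+', ...}.
        Returns a list of (win_start, win_end, is_hot) tuples."""
        windows = []
        start = 0.0
        while start + WINDOW <= meeting_duration:
            end = start + WINDOW
            # a window is 'hot' if it overlaps any involved ('b' or 'b+') utterance
            hot = any(u_start < end and u_end > start and label in ("b", "b+")
                      for (u_start, u_end, label) in utterances)
            windows.append((start, end, hot))
            start += STEP
        return windows

    def uar(y_true, y_pred):
        """Unweighted average recall: mean of per-class recalls (chance = 50% for two classes)."""
        classes = sorted(set(y_true))
        recalls = []
        for c in classes:
            idx = [i for i, y in enumerate(y_true) if y == c]
            recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
        return sum(recalls) / len(recalls)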
Feature Description ::: Acoustic-Prosodic Features Prosody encompasses pitch, energy, and durational features of speech. Prosody is thought to convey emphasis, sentiment, and emotion, all of which are presumably correlated with expressions of involvement. We used the openSMILE toolkit BIBREF0 to compute 988 features as defined by the emobase988 configuration file, operating on the close-talking meeting recordings. This feature set consists of low-level descriptors such as intensity, loudness, Mel-frequency cepstral coefficients, and pitch. For each low-level descriptor, functionals such as max/min value, mean, standard deviation, kurtosis, and skewness are computed. Finally, global mean and variance normalization are applied to each feature, using training set statistics. The feature vector thus captures acoustic-prosodic features aggregated over what are typically utterances. We tried extracting openSMILE features directly from 60 s windows, but found better results by extracting subwindows of 5 s, followed by pooling over the longer 60 s duration. We attribute this to the fact that emobase features are designed to operate on individual utterances, which have durations closer to 5 s than 60 s. Feature Description ::: Word-Based Features ::: Bag of words with TF-IDF Initially, we investigated a simple bag-of-words model including all unigrams, bigrams, and trigrams found in the training set. Occurrences of the top 10,000 n-grams were encoded to form a 10,000-dimensional vector, with values weighted according to TF-IDF. TF-IDF weights n-grams according to both their frequency (TF) and their salience (inverse document frequency, IDF) in the data, where each utterance was treated as a separate document. The resulting feature vectors are very sparse. Feature Description ::: Word-Based Features ::: Embeddings The ICSI dataset is too small to train a neural embedding model from scratch. Therefore, it is convenient to use the pre-trained BERT embedding architecture BIBREF1 to create an utterance-level embedding vector for each region of interest. Having been trained on a large text corpus, the resulting embeddings encode semantic similarities among utterances, and would enable generalization from word patterns seen in the ICSI training data to those that have not been observed in that limited corpus. We had previously also created an adapted version of the BERT model, tuned to perform utterance-level sentiment classification on a separate dataset BIBREF14. As proposed in BIBREF1, we fine-tuned all layers of the pre-trained BERT model by adding a single fully-connected layer and classifying using only the embedding corresponding to the classification ([CLS]) token prepended to each utterance. The difference in UAR between the hot spot classifiers using the pre-trained embeddings and those using the sentiment-adapted embeddings is small. Since the classifier using embeddings extracted by the sentiment-adapted model yielded slightly better performance, we report all results using these as input. To obtain a single embedding for each 60 s window, we experimented with various approaches to pooling the token and utterance-level embeddings. For our first approach, we ignored the ground-truth utterance segmentation and speaker information. We merged all words spoken within a particular window into a single contiguous span. 
Following BIBREF1, we added the appropriate classification and separation tokens to the text and selected the embedding corresponding to the [CLS] token as the window-level embedding. Our second approach used the ground-truth segmentation of the dialogue. Each speaker turn was independently modeled, and utterance-level embeddings were extracted using the representation corresponding to the [CLS] token. Utterances that cross window boundaries are truncated using the word timestamps, so only words spoken within the given time window are considered. For all reported experiments, we use L2-norm pooling to form the window-level embeddings for the final classifier, as this performed better than either mean or max pooling. Feature Description ::: Speaker Activity Features These features were a compilation of three different feature types: Speaker overlap percentages: Based on the available word-level times, we computed a 6-dimensional feature vector, where the $i$th index indicates the fraction of time that $i$ or more speakers are talking within a given window. This can be expressed by $\frac{t_i}{60}$ with $t_i$ indicating the time in seconds that $i$ or more people were speaking at the same time. Unique speaker count: Counts the unique speakers within a window, as a useful metric to track the diversity of participation within a certain window. Turn switch count: Counts the number of times a speaker begins talking within a window. This is a similar metric to the number of utterances. However, unlike utterance count, turn switches can be computed entirely from speech activity, without requiring a linguistic segmentation. Feature Description ::: Laughter Count Laskowski found that laughter is highly predictive of involvement in the ICSI data. Laughter is annotated on an utterance level and falls into two categories: laughter solely on its own (no words) or laughter contained within an utterance (i.e. during speech). The feature is a simple tally of the number of times people laughed within a window. We include it in some of our experiments for comparison purposes, though we do not trust it as general feature. (The participants in the ICSI meetings are far too familiar and at ease with each other to be representative with regards to laughter.) Modeling ::: Non-Neural Models In preliminary experiments, we compared several non-neural classifiers, including logistic regression (LR), random forests, linear support vector machines, and multinomial naive Bayes. Logistic regression gave the best results all around, and we used it exclusively for the results shown here, unless neural networks are used instead. Modeling ::: Feed-Forward Neural Networks ::: Pooling Techniques For BERT and openSMILE vector classification, we designed two different feed-forward neural network architectures. The sentiment-adapted embeddings described in Section SECREF3 produce one 1024-dimensional vector per utterance. Since all classification operates on time windows, we had to pool over all utterances falling withing a given window, taking care to truncate words falling outside the window. We tested four pooling methods: L2-norm, mean, max, and min, with L2-norm giving the best results. As for the prosodic model, each vector extracted from openSMILE represents a 5 s interval. Since there was both a channel/speaker-axis and a time-axis, we needed to pool over both dimensions in order to have a single vector representing the prosodic features of a 60 s window. 
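Returning briefly to the speaker activity features described above, they can be sketched as follows; this is a rough illustration assuming speech activity is available as per-speaker talk spurts with start and end times, and the names and input format are assumptions rather than the authors' code.

    def speaker_activity_features(spurts, win_start, win_end, max_speakers=6, step=0.01):
        """spurts: list of (speaker_id, start, end). Returns (overlap_vec, n_unique, n_turns)."""
        duration = win_end - win_start
        # overlap percentages: fraction of time that >= i speakers are active, i = 1..max_speakers
        n_steps = int(duration / step)
        counts = [0] * n_steps
        for _, s, e in spurts:
            lo = max(int((max(s, win_start) - win_start) / step), 0)
            hi = min(int((min(e, win_end) - win_start) / step), n_steps)
            for t in range(lo, hi):
                counts[t] += 1
        overlap_vec = [sum(c >= i for c in counts) * step / duration
                       for i in range(1, max_speakers + 1)]
        # unique speaker count: speakers with any activity inside the window
        n_unique = len({spk for spk, s, e in spurts if s < win_end and e > win_start})
        # turn switches: number of talk-spurt onsets inside the window
        n_turns = sum(1 for _, s, _ in spurts if win_start <= s < win_end)
        return overlap_vec, n_unique, n_turns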
The second to last layer is the pooling layer, max-pooling across all the channels, and then mean-pooling over time. The output of the pooling layer is directly fed into the classifier. Modeling ::: Feed-Forward Neural Networks ::: Hyperparameters The hyperparameters of the neural networks (hidden layer number and sizes) were also tuned in preliminary experiments. Details are given in Section SECREF5. Modeling ::: Model Fusion Fig. FIGREF19 depicts the way features from multiple categories are combined. Speech activity and word features are fed directly into a final LR step. Acoustic-prosodic features are first combined in a feed-forward neural classifier, whose output log posteriors are in turn fed into the LR step for fusion. (When using only prosodic features, the ANN outputs are used directly.) Experiments We group experiments by the type of feaures they are based on: acoustic-prosodic, word-based, and speech activity, evaluating each group first by itself, and then in combination with others. Experiments ::: Speech Feature Results As discussed in Section SECREF3, a multitude of input features were investigated, with some being more discriminative. The most useful speech activity features were speaker overlap percentage, number of unique speakers, and number of turn switches, giving evaluation set UARs of 63.5%, 63.9%, and 66.6%, respectively. When combined the UAR improved to 68.0%, showing that these features are partly complementary. Experiments ::: Word-Based Results The TF-IDF model alone gave a UAR of 59.8%. A drastic increase in performance to 70.5% was found when using the BERT embeddings instead. Therefore we adopted embeddings for all further experiments based on word information. Three different types of embeddings were investigated, i.e. sentiment-adapted embeddings at an utterance-level, unadapted embeddings at the utterance-level, and unadapted embeddings over time windows. The adapted embeddings (on utterances) performed best, indicating that adaptation to sentiment task is useful for involvement classification. It is important to note, however, that the utterance-level embeddings are larger than the window-level embeddings. This is due to there being more utterances than windows in the meeting corpus. The best neural architecture we found for these embeddings is a 5-layer neural network with sizes 1024-64-32-12-2. Other hyperparameters for this model are dropout rate = 0.4, learning rate = $10^{-7}$ and activation function “tanh”. The UAR on the evaluation set with just BERT embeddings as input is 65.2%. Interestingly, the neural model was outperformed by a LR directly on the embedding vectors. Perhaps the neural network requires further fine-tuning, or the neural model is too prone to overfitting, given the small training corpus. In any case, we use LR on embeddings for all subsequent results. Experiments ::: Acoustic-Prosodic Feature Results Our prosodic model is a 5-layer ANN, as described in Section SECREF15. The architecture is: 988-512-128-16-Pool-2. The hyperparameters are: dropout rate = 0.4, learning rate = $10^{-7}$, activation = “tanh". The UAR on the evaluation set with just openSMILE features is 62.0%. Experiments ::: Fusion Results and Discussion Table TABREF24 gives the UAR for each feature subset individually, for all features combined, and for a combination in which one feature subset in turn is left out. The one-feature-set-at-time results suggest that prosody, speech activity and words are of increasing importance in that order. 
The leave-one-out analysis agrees that the words are the most important (largest drop in accuracy when removed), but on that criterion the prosodic features are more important than speech activity. The combination of all features is 0.4% absolute better than any other subset, showing that all feature subsets are partly complementary. Fig. FIGREF25 shows the same results in histogram form, but also adds results that include laughter information. Laughter count by itself is the strongest cue to involvement, as Laskowski BIBREF7 had found. However, even given the strong individual laughter feature, the other features add information, pushing the UAR from 75.1% to 77.5%. Conclusion We studied detection of areas of high involvement, or “hot spots”, within meetings using the ICSI corpus. The features that yielded the best results are in line with our intuitions. Word embeddings, speech activity features such as the number of turn changes, and prosodic features are all plausible indicators of high involvement. Furthermore, the feature sets are partly complementary and yield the best results when combined using a simple logistic regression model. The combined model achieves 72.6% UAR, or 77.5% with the laughter feature. For future work, we would want to see a validation on an independent meeting collection, such as business meetings. Some features, in particular laughter, are bound not to be as useful in this case. More data could also enable the training of joint models that perform an early fusion of the different feature types. Also, the present study still relied on human transcripts, and it would be important to know how much UAR suffers with a realistic amount of speech recognition error. Transcription errors are expected to boost the importance of the feature types that do not rely on words. Acknowledgments We thank Britta Wrede, Elizabeth Shriberg and Kornel Laskowski for explanations concerning the details of the data.
8 levels and degrees, ranging from `not hot' to `luke warm' to `hot +'. Every utterance was labeled with one of these discrete labels by a single annotator
2fd8688c8f475ab43edaf5d189567f8799b018e1
2fd8688c8f475ab43edaf5d189567f8799b018e1_0
Q: Is such bias caused by bad annotation? Text: Introduction Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts BIBREF0 , BIBREF1 . In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone). The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI. However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts BIBREF2 , BIBREF3 , BIBREF4 . For instance, in some datasets, negation words like “not” and “nobody” are often associated with a relationship of contradiction. As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases. Recent studies have tried to create new NLI datasets that do not contain such artifacts, but many approaches to dealing with this issue remain unsatisfactory: constructing new datasets BIBREF5 is costly and may still result in other artifacts; filtering “easy” examples and defining a harder subset is useful for evaluation purposes BIBREF2 , but difficult to do on a large scale that enables training; and compiling adversarial examples BIBREF6 is informative but again limited by scale or diversity. Instead, our goal is to develop methods that overcome these biases as datasets may still contain undesired artifacts despite annotation efforts. Typical NLI models learn to predict an entailment label discriminatively given a premise-hypothesis pair (Figure 1 ), enabling them to learn hypothesis-only biases. Instead, we predict the premise given the hypothesis and the entailment label, which by design cannot be solved using data artifacts. While this objective is intractable, it motivates two approximate training methods for standard NLI classifiers that are more resistant to biases. Our first method uses a hypothesis-only classifier (Figure 1 ) and the second uses negative sampling by swapping premises between premise-hypothesis pairs (Figure 1 ). We evaluate the ability of our methods to generalize better in synthetic and naturalistic settings. First, using a controlled, synthetic dataset, we demonstrate that, unlike the baseline, our methods enable a model to ignore the artifacts and learn to correctly identify the desired relationship between the two texts. Second, we train models on an NLI dataset that is known to be biased and evaluate on other datasets that may have different or no biases. We observe improved results compared to a fully discriminative baseline in 9 out of 12 target datasets, indicating that our methods generate models that are more robust to annotation artifacts. An extensive analysis reveals that our methods are most effective when the target datasets have different biases from the source dataset or no noticeable biases. We also observe that the more we encourage the model to ignore biases, the better it transfers, but this comes at the expense of performance on the source dataset. Finally, we show that our methods can better exploit small amounts of training data in a target dataset, especially when it has different biases from the source data. In this paper, we focus on the transferability of our methods from biased datasets to ones having different or no biases. 
Elsewhere BIBREF7 , we have analyzed the effect of these methods on the learned language representations, suggesting that they may indeed be less biased. However, we caution that complete removal of biases remains difficult and is dependent on the techniques used. The choice of whether to remove bias also depends on the goal; in an in-domain scenario certain biases may be helpful and should not necessarily be removed. In summary, in this paper we make the following contributions: Motivation A training instance for NLI consists of a hypothesis sentence $H$ , a premise statement $P$ , and an inference label $y$ . A probabilistic NLI model aims to learn a parameterized distribution $p_{\theta }(y \,|\,P, H)$ to compute the probability of the label given the two sentences. We consider NLI models with premise and hypothesis encoders, $f_{P,\theta }$ and $f_{H,\theta }$ , which learn representations of $P$ and $H$ , and a classification layer, $g_\theta $ , which learns a distribution over $y$ . Typically, this is done by maximizing this discriminative likelihood directly, which will act as our baseline (Figure 1 ). However, many NLI datasets contain biases that allow models to perform non-trivially well when accessing just the hypotheses BIBREF4 , BIBREF2 , BIBREF3 . This allows models to leverage hypothesis-only biases that may be present in a dataset. A model may perform well on a specific dataset, without identifying whether $P $ entails $H $ . gururangan-EtAl:2018:N18-2 argue that “the bulk” of many models' “success [is] attribute[d] to the easy examples”. Consequently, this may limit how well a model trained on one dataset would perform on other datasets that may have different artifacts. Consider an example where $P $ and $H $ are strings from $\lbrace a, b, c\rbrace $ , and an environment where $P $ entails $H $ if and only if the first letters are the same, as in synthetic dataset A. In such a setting, a model should be able to learn the correct condition for $P $ to entail $H $ . Synthetic dataset A $(a, a)$ $\rightarrow $ True $(a, b)$ $\rightarrow $ False $(b, b)$ $\rightarrow $ True $(b, a)$ $\rightarrow $ False Imagine now that an artifact $c$ is appended to every entailed $H $ (synthetic dataset B). A model of $y$ with access only to the hypothesis side can fit the data perfectly by detecting the presence or absence of $c$ in $H $ , ignoring the more general pattern. Therefore, we hypothesize that a model that learns $p_{\theta }(y \,|\,P, H)$ by training on such data would be misled by the bias $c$ and would fail to learn the relationship between the premise and the hypothesis. Consequently, the model would not perform well on the unbiased synthetic dataset A. Synthetic dataset B (with artifact) $(a, ac)$ $\rightarrow $ True $(a, b)$ $\rightarrow $ False $(b, bc)$ $\rightarrow $ True $(b, a)$ $\rightarrow $ False Instead of maximizing the discriminative likelihood $p_{\theta }(y \,|\,P, H)$ directly, we consider maximizing the likelihood of generating the premise $P$ conditioned on the hypothesis $H$ and the label $y$ : $p(P \,|\,H, y)$ . This objective cannot be fooled by hypothesis-only features, and it requires taking the premise into account. For example, a model that only looks for $c$ in the above example cannot do better than chance on this objective. However, as $P$ comes from the space of all sentences, this objective is much more difficult to estimate. Training Methods Our goal is to maximize $\log p(P \,|\,H, y)$ on the training data. 
While we could in theory directly parameterize this distribution, for efficiency and simplicity we instead write it in terms of the standard $p_{\theta }(y \,|\,P, H)$ and introduce a new term to approximate the normalization: $ \log p(P \,|\,y, H) = \log \dfrac{p_{\theta }(y \,|\,P, H) p(P \,|\,H)}{p(y \,|\,H)}. $ Throughout we will assume $p(P \,|\,H)$ is a fixed constant ( justified by the dataset assumption that, lacking $y$ , $P$ and $H$ are independent and drawn at random). Therefore, to approximately maximize this objective we need to estimate $p(y \,|\,H)$ . We propose two methods for doing so. Method 1: Hypothesis-only Classifier Our first approach is to estimate the term $p(y \,|\,H)$ directly. In theory, if labels in an NLI dataset depend on both premises and hypothesis (which poliak-EtAl:2018:S18-2 call “interesting NLI”), this should be a uniform distribution. However, as discussed above, it is often possible to correctly predict $y$ based only on the hypothesis. Intuitively, this model can be interpreted as training a classifier to identify the (latent) artifacts in the data. We define this distribution using a shared representation between our new estimator $p_{\phi ,\theta }(y \,|\,H)$ and $p_{\theta }(y \,|\,P, H)$ . In particular, the two share an embedding of $H$ from the hypothesis encoder $f_{H,\theta }$ . The additional parameters $\phi $ are in the final layer $g_{\phi }$ , which we call the hypothesis-only classifier. The parameters of this layer $\phi $ are updated to fit $p(y \,|\,H)$ whereas the rest of the parameters in $\theta $ are updated based on the gradients of $\log p(P \,|\,y, H)$ . Training is illustrated in Figure 1 . This interplay is controlled by two hyper-parameters. First, the negative term is scaled by a hyper-parameter $\alpha $ . Second, the updates of $g_\phi $ are weighted by $\beta $ . We therefore minimize the following multitask loss functions (shown for a single example): $ \max _{\theta } L_1(\theta ) &= \log {p_{\theta }(y \,|\,P, H) } - \alpha \log {p_{\phi ,\theta }(y \,|\,H)} \\ \max _{\phi } L_2(\phi ) &= \beta \log {p_{\phi , \theta }(y \,|\,H) } $ We implement these together with a gradient reversal layer BIBREF8 . As illustrated in Figure 1 , during back-propagation, we first pass gradients through the hypothesis-only classifier $g_{\phi }$ and then reverse the gradients going to the hypothesis encoder $g_{H,\theta }$ (potentially scaling them by $\beta $ ). Method 2: Negative Sampling As an alternative to the hypothesis-only classifier, our second method attempts to remove annotation artifacts from the representations by sampling alternative premises. Consider instead writing the normalization term above as, $ -\log p(y \,|\,H) &= -\log \sum _{P^{\prime }} p(P^{\prime } \,|\,H) p(y \,|\,P^{\prime }, H) \\ &= -\log {\mathbb {E}}_{P^{\prime }} p(y \,|\,P^{\prime }, H) \\ &\ge - {\mathbb {E}}_{P^{\prime }} \log p(y \,|\,P^{\prime }, H), $ where the expectation is uniform and the last step is from Jensen's inequality. As in Method 1, we define a separate $p_{\phi ,\theta }(y \,|\,P^{\prime }, H)$ which shares the embedding layers from $\theta $ , $f_{P,\theta }$ and $f_{H,\theta }$ . However, as we are attempting to unlearn hypothesis bias, we block the gradients and do not let it update the premise encoder $f_{P,\theta }$ . The full setting is shown in Figure 1 . 
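Both approximation methods use a gradient reversal layer to control how the bias term updates the shared encoders. The snippet below is a minimal PyTorch-style sketch of such a layer and of how the hypothesis-only head might use it; it illustrates the general technique (BIBREF8), not the authors' code, and the names g_phi and f_H are only meant to mirror the notation above.

    import torch

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; multiplies gradients by -beta in the backward pass."""
        @staticmethod
        def forward(ctx, x, beta):
            ctx.beta = beta
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # reverse (and scale) the gradient flowing back to the shared encoder
            return -ctx.beta * grad_output, None

    def grad_reverse(x, beta=1.0):
        return GradReverse.apply(x, beta)

    # usage sketch: the bias head g_phi is trained on the hypothesis embedding,
    # while the reversed gradient discourages the shared encoder f_H from
    # retaining hypothesis-only cues
    # h = f_H(hypothesis)                            # shared hypothesis embedding
    # y_hat_hyp_only = g_phi(grad_reverse(h, beta))  # hypothesis-only prediction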
To approximate the expectation, we use uniform samples $P^{\prime }$ (from other training examples) to replace the premise in a ( $P $ , $H $ )-pair, while keeping the label $y$ . We also maximize $p_{\theta , \phi }(y \,|\,P^{\prime }, H)$ to learn the artifacts in the hypotheses. We use $\alpha \in [0, 1]$ to control the fraction of randomly sampled $P $ 's (so the total number of examples remains the same). As before, we implement this using gradient reversal scaled by $\beta $ . $ \max _{\theta } L_1(\theta ) &= (1- \alpha ) \log {p_{\theta }(y \,|\,P, H) } \\ & \hspace{26.00006pt} - \alpha \log {p_{\theta , \phi }(y \,|\,P^{\prime }, H)} \\ \max _{\phi } L_2(\phi ) &= \beta \log {p_{\theta , \phi }(y \,|\,P^{\prime }, H) } $ Finally, we share the classifier weights between $p_{\theta }(y \,|\,P, H)$ and $p_{\phi ,\theta }(y \,|\,P^{\prime }, H)$ . In a sense this is counter-intuitive, since $p_{\theta }$ is being trained to unlearn bias, while $p_{\phi ,\theta }$ is being trained to learn it. However, if the models are trained separately, they may learn to co-adapt with each other BIBREF13 . If $p_{\phi ,\theta }$ is not trained well, we might be fooled to think that the representation does not contain any biases, while in fact they are still hidden in the representation. For some evidence that this indeed happens when the models are trained separately, see BIBREF7 . Experimental Setup To evaluate how well our methods can overcome hypothesis-only biases, we test our methods on a synthetic dataset as well as on a wide range of existing NLI datasets. The scenario we aim to address is when training on a source dataset with biases and evaluating on a target dataset with different or no biases. We first describe the data and experimental setup before discussing the results.
No
b68d2549431c524a86a46c63960b3b283f61f445
b68d2549431c524a86a46c63960b3b283f61f445_0
Q: How do they determine similar environments for fragments in their data augmentation scheme? Text: Introduction This paper proposes a data augmentation protocol for sequence modeling problems. Our approach aims to supply a simple and model-agnostic bias toward compositional reuse of previously observed sequence fragments in novel environments. Consider a language modeling task in which we wish to estimate a probability distribution over a family of sentences with the following finite sample as training data: In language processing problems, we often want models to analyze this dataset compositionally and infer that ( SECREF6 ) is also probable but ( UID7 ) is not: This generalization amounts to to an inference about syntactic categories BIBREF0 : because cat and wug are interchangeable in the environment the...sang, they are also likely interchangeable elsewhere. Human learners make judgments like ( SECREF5 ) about novel lexical items BIBREF1 and fragments of novel languages BIBREF2 . But we do not expect such judgments from unstructured sequence models trained to maximize the likelihood of the training data in ( SECREF1 ). A large body of work in natural language processing provides generalization to data like ( SECREF6 ) by adding structure to the learned predictor BIBREF3 , BIBREF4 , BIBREF5 . But on real-world datasets, such models are typically worse than “black-box” function approximators like neural networks even when the black-box models fail to place probability mass on either example in ( SECREF5 ) BIBREF6 . To the extent that we believe ( SECREF6 ) to capture an important inductive bias, we would like to find a way of softly encouraging it without tampering with the structure of predictors that work well at scale. In this paper, we introduce a procedure for generating synthetic training examples by recombining real ones, such that ( SECREF6 ) is assigned nontrivial probability because it already appears in the training dataset. The basic operation underlying our proposal (which we call geca, for “good-enough compositional augmentation”) is depicted in fig:teaser: if two (possibly discontinuous) fragments of training examples appear in some common environment, then any additional environment where the first fragment appears is also a valid environment for the second. geca is crude: as a linguistic principle, it is both limited and imprecise. As discussed in Sections UID17 and SECREF5 , it captures a narrow slice of the many phenomena studied under the heading of “compositionality”, while also making a number of incorrect predictions about real languages. Nevertheless, geca appears to be quite effective across a range of learning problems. In semantic parsing, it gives improvements comparable to the data augmentation approach of BIBREF7 on INLINEFORM0 -calculus expressions, better performance than that approach on a different split of the data designed to test generalization more rigorously, and better performance on a different meaning representation language. Outside of semantic parsing, it solves two representative problems from the scan dataset of BIBREF8 that are synthetic but precise in the notion of compositionality they test. Finally, it helps with some (unconditional) low-resource language modeling problems in a typologically diverse set of languages. Background Recent years have seen tremendous success at natural language transduction and generation tasks using black-box function approximators, especially recurrent BIBREF9 and attentional BIBREF10 neural models. 
With enough training data, these models are often more accurate than than approaches built on traditional tools from the computational linguistics literature—formal models like regular transducers or context-free grammars BIBREF11 can be brittle and challenging to efficiently infer from large datasets. However, models equipped with an explicit (symbolic) generative process have at least one significant advantage over the aforementioned black-box approaches: given a grammar, it is straightforward to precisely characterize how that grammar will extrapolate beyond the examples in a given training set to out-of-distribution data. Indeed, it is often possible for researchers to design the form that this extrapolation will take: smoothed n-gram language models guarantee that no memorization is possible beyond a certain length BIBREF12 ; CCG-based semantic parsers can make immediate use of entity lexicons without having ever seen the lexicon entries used in real sentences BIBREF13 . It is not the case, as sometimes claimed BIBREF14 , that black-box neural models are fundamentally incapable of this kind of predictable generalization—the success of these models at capturing long-range structure in text BIBREF15 and controlled algorithmic data BIBREF16 indicate that some representation of hierarchical structure can be learned given enough data. But the precise point at which this transition occurs is not well-characterized; it is evidently beyond the scale available in many real-world problems. How can we improve the behavior of high-quality black-box models in these settings? There are many sophisticated tools available for improving the function approximators or loss functions themselves—regularization BIBREF17 , posterior regularization BIBREF18 , BIBREF19 , explicit stacks BIBREF20 and composition operators BIBREF21 ; these existing proposals tend to be task- and architecture-specific. But to the extent that the generalization problem can be addressed by increasing the scale of the training data, it is natural to ask whether we can address the problem by increasing this scale artificially—in other words, via data augmentation. Previous work BIBREF7 also studied data augmentation and compositionality in specific setting of learning language-to-logical-form mappings, beginning from the principle that data is compositional if it is generated by a synchronous grammar that relates strings to meanings. The specific approach proposed by BIBREF7 is effective but tailored for semantic parsing; it requires access to structured meaning representations with explicit types and bracketings, which are not available in most NLP applications. Here we aim at a notion of compositionality that is simpler and more general: a bias toward identifying recurring fragments seen at training time, and re-using them in environments distinct from the environments in which they were first observed. This view makes no assumptions about the availability of brackets and types, and is synchronous only to the extent that the notion of a fragment is permitted to include content from both the source and target sides. We will find that it is nearly as effective as the approach of BIBREF7 in the settings for which the latter was designed, but also effective on a variety of problems where it cannot be applied. Approach Consider again the example in fig:teaser. 
Our data augmentation protocol aims to discover substitutable sentence fragments (highlighted), with the fact a pair of fragments appear in some common sub-sentential environment (underlined) taken as evidence that the fragments belong to a common category. To generate a new examples for the model, an occurrence of one fragment is removed from a sentence to produce a sentence template, which is then populated with the other fragment. Why should we expect this procedure to produce well-formed training examples? The existence of syntactic categories, and the expressibility of well-formedness rules in terms of these abstract categories, is one of the foundational principles of generative approaches to syntax BIBREF22 . The observation that sentence context provides a strong signal about a constitutent's category is in turn the foundation of distributional approaches to language processing BIBREF23 . Combining the two gives the outlines of the above procedure. This combination has a productive history in natural language processing: when fragments are single words, it yields class-based language models BIBREF24 ; when fragments are contiguous spans it yields unsupervised parsers BIBREF0 , BIBREF25 . The present data augmentation scenario is distinguished mainly by the fact that we are unconcerned with producing a complete generative model of data, or with recovering the latent structure implied by the presence of nested syntactic categories. We can still synthesize high-precision examples of well-formed sequences by identifying individual substitutions that are likely to be correct without understanding how they fit into the grammar as a whole. Indeed, if we are not concerned with recovering linguistically plausible analyses, we need not limit ourselves to words or contiguous sentence fragments. We can take as evidence that we can use picks...up wherever we can use puts...down. Indeed, given a translation dataset: we can apply the same principle to synthesize I dax. INLINEFORM0 Dajo. based on the common environment ...marvelously INLINEFORM1 ...maravillosamente. From the perspective of a generalized substitution principle, the alignment problem in machine translation is the same as the class induction problem in language modeling, but with sequences featuring large numbers of gappy fragments and a boundary symbol INLINEFORM2 . The only remaining question is what makes two environments similar enough to infer the existence of a common category. There is, again, a large literature on this question (including the aforementioned language modeling, unsupervised parsing, and alignment work), but in the current work we will make use of a very simple criterion: fragments are interchangeable if they occur in at least one lexical environment that is exactly the same. Given a window size INLINEFORM0 , a sequence of INLINEFORM1 words INLINEFORM2 , and a fragment consisting of a set of INLINEFORM3 spans INLINEFORM4 , the environment is given by INLINEFORM5 , i.e. a INLINEFORM6 -word window around each span of the fragment. The data augmentation operation that defines geca is formally stated as follows: let INLINEFORM0 denote the substitution of the fragment INLINEFORM1 into the template INLINEFORM2 , and INLINEFORM3 be a representation of the environment in which INLINEFORM4 occurs in INLINEFORM5 . Then, If the training data contains sequences INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , and that INLINEFORM3 and INLINEFORM4 , synthesize a new training example INLINEFORM5 . 
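Before turning to the implementation, here is a minimal Python sketch of the fragment, template, and environment operations just described (a w-word window around each fragment span, and filling a template's gaps with a fragment); the span representation and helper names are illustrative assumptions rather than the paper's actual code.

    GAP = "<gap>"  # placeholder marking removed fragment spans in a template

    def environment(words, spans, w):
        """words: list of tokens; spans: sorted, non-overlapping (start, end) spans (end exclusive).
        Returns the w-word lexical context around each span, used to test interchangeability."""
        ctx = []
        for start, end in spans:
            left = tuple(words[max(0, start - w):start])
            right = tuple(words[end:end + w])
            ctx.append(left + (GAP,) + right)
        return tuple(ctx)

    def extract(words, spans):
        """Split a sentence into (template, fragment) for a given set of spans."""
        template, fragment = [], []
        prev = 0
        for start, end in spans:
            template.extend(words[prev:start])
            template.append(GAP)
            fragment.append(tuple(words[start:end]))
            prev = end
        template.extend(words[prev:])
        return tuple(template), tuple(fragment)

    def fill(template, fragment):
        """Substitute a fragment's spans back into a template's gaps."""
        out, parts = [], list(fragment)
        for tok in template:
            if tok == GAP:
                out.extend(parts.pop(0))
            else:
                out.append(tok)
        return out

    # e.g. extract("the cat sang".split(), [(1, 2)])  ->  (("the", "<gap>", "sang"), (("cat",),))
    #      fill(("the", "<gap>", "sang"), (("wug",),)) -> ["the", "wug", "sang"]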
Implementation Naïve implementation of the boxed operation takes INLINEFORM0 time (where INLINEFORM1 is the number of distinct templates in the dataset and INLINEFORM2 the number of distinct fragments). This can be improved to INLINEFORM3 (where INLINEFORM4 is the number of templates that map to the same environment) by building appropriate data structures:

    # index the training data
    f2t = dict(default=set())   # fragment -> templates
    t2f = dict(default=set())   # template -> fragments
    e2t = dict(default=set())   # environment -> templates
    for sentence in dataset:
        for template, fragment in fragments(sentence):
            add(f2t[fragment], template)
            add(t2f[template], fragment)
            add(e2t[env(template)], template)

    # link templates whose fragments are licensed to be exchanged
    t2t = dict(default=set())
    for fragment in keys(f2t):
        for template1 in f2t[fragment]:
            for template2 in f2t[fragment]:
                for newtemplate in e2t[env(template2)]:
                    add(t2t[template1], newtemplate)

    # synthesize new examples
    for template1 in keys(t2t):
        for template2 in t2t[template1]:
            for arg in t2f[template1]:   # fragments of template1 serve as candidate arguments
                if arg not in t2f[template2]:
                    yield fill(template2, arg)

Sample geca implementation. Space requirements might still be considerable (comparable to those used by n-gram language models), and similar tricks can be used to reduce memory usage BIBREF27. The above pseudocode is agnostic with respect to the choice of fragmentation and environment functions; task-specific choices are described in more detail for each experiment below. Discussion We introduced geca, a simple data augmentation scheme based on identifying local phrase substitutions that are licensed by common context, and demonstrated that extra training examples generated with geca lead to improvements on both diagnostic and natural datasets for semantic parsing and language modeling. While the approach is surprisingly effective in its current form, we view these results mostly as an invitation to consider more carefully the role played by representations of sentence fragments in larger questions about compositionality in black-box sequence models. The experiments in this paper all rely on exact string matching; future work might take advantage of learned representations of spans and their environments BIBREF32, BIBREF33. More generally, the present results underline the extent to which current models fail to learn simple, context-independent notions of reuse, but also how easy it is to make progress towards addressing this problem without fundamental changes in model architecture.
fragments are interchangeable if they occur in at least one lexical environment that is exactly the same
7f5059b4b5e84b7705835887f02a51d4d016316a
7f5059b4b5e84b7705835887f02a51d4d016316a_0
Q: Do they experiment with language modeling on large datasets? Text: Introduction This paper proposes a data augmentation protocol for sequence modeling problems. Our approach aims to supply a simple and model-agnostic bias toward compositional reuse of previously observed sequence fragments in novel environments. Consider a language modeling task in which we wish to estimate a probability distribution over a family of sentences with the following finite sample as training data: In language processing problems, we often want models to analyze this dataset compositionally and infer that ( SECREF6 ) is also probable but ( UID7 ) is not: This generalization amounts to to an inference about syntactic categories BIBREF0 : because cat and wug are interchangeable in the environment the...sang, they are also likely interchangeable elsewhere. Human learners make judgments like ( SECREF5 ) about novel lexical items BIBREF1 and fragments of novel languages BIBREF2 . But we do not expect such judgments from unstructured sequence models trained to maximize the likelihood of the training data in ( SECREF1 ). A large body of work in natural language processing provides generalization to data like ( SECREF6 ) by adding structure to the learned predictor BIBREF3 , BIBREF4 , BIBREF5 . But on real-world datasets, such models are typically worse than “black-box” function approximators like neural networks even when the black-box models fail to place probability mass on either example in ( SECREF5 ) BIBREF6 . To the extent that we believe ( SECREF6 ) to capture an important inductive bias, we would like to find a way of softly encouraging it without tampering with the structure of predictors that work well at scale. In this paper, we introduce a procedure for generating synthetic training examples by recombining real ones, such that ( SECREF6 ) is assigned nontrivial probability because it already appears in the training dataset. The basic operation underlying our proposal (which we call geca, for “good-enough compositional augmentation”) is depicted in fig:teaser: if two (possibly discontinuous) fragments of training examples appear in some common environment, then any additional environment where the first fragment appears is also a valid environment for the second. geca is crude: as a linguistic principle, it is both limited and imprecise. As discussed in Sections UID17 and SECREF5 , it captures a narrow slice of the many phenomena studied under the heading of “compositionality”, while also making a number of incorrect predictions about real languages. Nevertheless, geca appears to be quite effective across a range of learning problems. In semantic parsing, it gives improvements comparable to the data augmentation approach of BIBREF7 on INLINEFORM0 -calculus expressions, better performance than that approach on a different split of the data designed to test generalization more rigorously, and better performance on a different meaning representation language. Outside of semantic parsing, it solves two representative problems from the scan dataset of BIBREF8 that are synthetic but precise in the notion of compositionality they test. Finally, it helps with some (unconditional) low-resource language modeling problems in a typologically diverse set of languages. Background Recent years have seen tremendous success at natural language transduction and generation tasks using black-box function approximators, especially recurrent BIBREF9 and attentional BIBREF10 neural models. 
With enough training data, these models are often more accurate than than approaches built on traditional tools from the computational linguistics literature—formal models like regular transducers or context-free grammars BIBREF11 can be brittle and challenging to efficiently infer from large datasets. However, models equipped with an explicit (symbolic) generative process have at least one significant advantage over the aforementioned black-box approaches: given a grammar, it is straightforward to precisely characterize how that grammar will extrapolate beyond the examples in a given training set to out-of-distribution data. Indeed, it is often possible for researchers to design the form that this extrapolation will take: smoothed n-gram language models guarantee that no memorization is possible beyond a certain length BIBREF12 ; CCG-based semantic parsers can make immediate use of entity lexicons without having ever seen the lexicon entries used in real sentences BIBREF13 . It is not the case, as sometimes claimed BIBREF14 , that black-box neural models are fundamentally incapable of this kind of predictable generalization—the success of these models at capturing long-range structure in text BIBREF15 and controlled algorithmic data BIBREF16 indicate that some representation of hierarchical structure can be learned given enough data. But the precise point at which this transition occurs is not well-characterized; it is evidently beyond the scale available in many real-world problems. How can we improve the behavior of high-quality black-box models in these settings? There are many sophisticated tools available for improving the function approximators or loss functions themselves—regularization BIBREF17 , posterior regularization BIBREF18 , BIBREF19 , explicit stacks BIBREF20 and composition operators BIBREF21 ; these existing proposals tend to be task- and architecture-specific. But to the extent that the generalization problem can be addressed by increasing the scale of the training data, it is natural to ask whether we can address the problem by increasing this scale artificially—in other words, via data augmentation. Previous work BIBREF7 also studied data augmentation and compositionality in specific setting of learning language-to-logical-form mappings, beginning from the principle that data is compositional if it is generated by a synchronous grammar that relates strings to meanings. The specific approach proposed by BIBREF7 is effective but tailored for semantic parsing; it requires access to structured meaning representations with explicit types and bracketings, which are not available in most NLP applications. Here we aim at a notion of compositionality that is simpler and more general: a bias toward identifying recurring fragments seen at training time, and re-using them in environments distinct from the environments in which they were first observed. This view makes no assumptions about the availability of brackets and types, and is synchronous only to the extent that the notion of a fragment is permitted to include content from both the source and target sides. We will find that it is nearly as effective as the approach of BIBREF7 in the settings for which the latter was designed, but also effective on a variety of problems where it cannot be applied. Approach Consider again the example in fig:teaser. 
Our data augmentation protocol aims to discover substitutable sentence fragments (highlighted), with the fact a pair of fragments appear in some common sub-sentential environment (underlined) taken as evidence that the fragments belong to a common category. To generate a new examples for the model, an occurrence of one fragment is removed from a sentence to produce a sentence template, which is then populated with the other fragment. Why should we expect this procedure to produce well-formed training examples? The existence of syntactic categories, and the expressibility of well-formedness rules in terms of these abstract categories, is one of the foundational principles of generative approaches to syntax BIBREF22 . The observation that sentence context provides a strong signal about a constitutent's category is in turn the foundation of distributional approaches to language processing BIBREF23 . Combining the two gives the outlines of the above procedure. This combination has a productive history in natural language processing: when fragments are single words, it yields class-based language models BIBREF24 ; when fragments are contiguous spans it yields unsupervised parsers BIBREF0 , BIBREF25 . The present data augmentation scenario is distinguished mainly by the fact that we are unconcerned with producing a complete generative model of data, or with recovering the latent structure implied by the presence of nested syntactic categories. We can still synthesize high-precision examples of well-formed sequences by identifying individual substitutions that are likely to be correct without understanding how they fit into the grammar as a whole. Indeed, if we are not concerned with recovering linguistically plausible analyses, we need not limit ourselves to words or contiguous sentence fragments. We can take as evidence that we can use picks...up wherever we can use puts...down. Indeed, given a translation dataset: we can apply the same principle to synthesize I dax. INLINEFORM0 Dajo. based on the common environment ...marvelously INLINEFORM1 ...maravillosamente. From the perspective of a generalized substitution principle, the alignment problem in machine translation is the same as the class induction problem in language modeling, but with sequences featuring large numbers of gappy fragments and a boundary symbol INLINEFORM2 . The only remaining question is what makes two environments similar enough to infer the existence of a common category. There is, again, a large literature on this question (including the aforementioned language modeling, unsupervised parsing, and alignment work), but in the current work we will make use of a very simple criterion: fragments are interchangeable if they occur in at least one lexical environment that is exactly the same. Given a window size INLINEFORM0 , a sequence of INLINEFORM1 words INLINEFORM2 , and a fragment consisting of a set of INLINEFORM3 spans INLINEFORM4 , the environment is given by INLINEFORM5 , i.e. a INLINEFORM6 -word window around each span of the fragment. The data augmentation operation that defines geca is formally stated as follows: let INLINEFORM0 denote the substitution of the fragment INLINEFORM1 into the template INLINEFORM2 , and INLINEFORM3 be a representation of the environment in which INLINEFORM4 occurs in INLINEFORM5 . Then, If the training data contains sequences INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , and that INLINEFORM3 and INLINEFORM4 , synthesize a new training example INLINEFORM5 . 
Implementation Naïve implementation of the boxed operation takes INLINEFORM0 time (where INLINEFORM1 is the number of distinct templates in the dataset and INLINEFORM2 the number of distinct fragments). This can be improved to INLINEFORM3 (where INLINEFORM4 is the number of templates that map to the same environment) by building appropriate data structures:

    # index the training data
    f2t = dict(default=set())   # fragment -> templates
    t2f = dict(default=set())   # template -> fragments
    e2t = dict(default=set())   # environment -> templates
    for sentence in dataset:
        for template, fragment in fragments(sentence):
            add(f2t[fragment], template)
            add(t2f[template], fragment)
            add(e2t[env(template)], template)

    # link templates whose fragments are licensed to be exchanged
    t2t = dict(default=set())
    for fragment in keys(f2t):
        for template1 in f2t[fragment]:
            for template2 in f2t[fragment]:
                for newtemplate in e2t[env(template2)]:
                    add(t2t[template1], newtemplate)

    # synthesize new examples
    for template1 in keys(t2t):
        for template2 in t2t[template1]:
            for arg in t2f[template1]:   # fragments of template1 serve as candidate arguments
                if arg not in t2f[template2]:
                    yield fill(template2, arg)

Sample geca implementation. Space requirements might still be considerable (comparable to those used by n-gram language models), and similar tricks can be used to reduce memory usage BIBREF27. The above pseudocode is agnostic with respect to the choice of fragmentation and environment functions; task-specific choices are described in more detail for each experiment below. Discussion We introduced geca, a simple data augmentation scheme based on identifying local phrase substitutions that are licensed by common context, and demonstrated that extra training examples generated with geca lead to improvements on both diagnostic and natural datasets for semantic parsing and language modeling. While the approach is surprisingly effective in its current form, we view these results mostly as an invitation to consider more carefully the role played by representations of sentence fragments in larger questions about compositionality in black-box sequence models. The experiments in this paper all rely on exact string matching; future work might take advantage of learned representations of spans and their environments BIBREF32, BIBREF33. More generally, the present results underline the extent to which current models fail to learn simple, context-independent notions of reuse, but also how easy it is to make progress towards addressing this problem without fundamental changes in model architecture.
No
df79d04cc10a01d433bb558d5f8a51bfad29f46b
df79d04cc10a01d433bb558d5f8a51bfad29f46b_0
Q: Which languages do they test on? Text: Introduction This paper proposes a data augmentation protocol for sequence modeling problems. Our approach aims to supply a simple and model-agnostic bias toward compositional reuse of previously observed sequence fragments in novel environments. Consider a language modeling task in which we wish to estimate a probability distribution over a family of sentences with the following finite sample as training data: In language processing problems, we often want models to analyze this dataset compositionally and infer that ( SECREF6 ) is also probable but ( UID7 ) is not: This generalization amounts to to an inference about syntactic categories BIBREF0 : because cat and wug are interchangeable in the environment the...sang, they are also likely interchangeable elsewhere. Human learners make judgments like ( SECREF5 ) about novel lexical items BIBREF1 and fragments of novel languages BIBREF2 . But we do not expect such judgments from unstructured sequence models trained to maximize the likelihood of the training data in ( SECREF1 ). A large body of work in natural language processing provides generalization to data like ( SECREF6 ) by adding structure to the learned predictor BIBREF3 , BIBREF4 , BIBREF5 . But on real-world datasets, such models are typically worse than “black-box” function approximators like neural networks even when the black-box models fail to place probability mass on either example in ( SECREF5 ) BIBREF6 . To the extent that we believe ( SECREF6 ) to capture an important inductive bias, we would like to find a way of softly encouraging it without tampering with the structure of predictors that work well at scale. In this paper, we introduce a procedure for generating synthetic training examples by recombining real ones, such that ( SECREF6 ) is assigned nontrivial probability because it already appears in the training dataset. The basic operation underlying our proposal (which we call geca, for “good-enough compositional augmentation”) is depicted in fig:teaser: if two (possibly discontinuous) fragments of training examples appear in some common environment, then any additional environment where the first fragment appears is also a valid environment for the second. geca is crude: as a linguistic principle, it is both limited and imprecise. As discussed in Sections UID17 and SECREF5 , it captures a narrow slice of the many phenomena studied under the heading of “compositionality”, while also making a number of incorrect predictions about real languages. Nevertheless, geca appears to be quite effective across a range of learning problems. In semantic parsing, it gives improvements comparable to the data augmentation approach of BIBREF7 on INLINEFORM0 -calculus expressions, better performance than that approach on a different split of the data designed to test generalization more rigorously, and better performance on a different meaning representation language. Outside of semantic parsing, it solves two representative problems from the scan dataset of BIBREF8 that are synthetic but precise in the notion of compositionality they test. Finally, it helps with some (unconditional) low-resource language modeling problems in a typologically diverse set of languages. Background Recent years have seen tremendous success at natural language transduction and generation tasks using black-box function approximators, especially recurrent BIBREF9 and attentional BIBREF10 neural models. 
With enough training data, these models are often more accurate than than approaches built on traditional tools from the computational linguistics literature—formal models like regular transducers or context-free grammars BIBREF11 can be brittle and challenging to efficiently infer from large datasets. However, models equipped with an explicit (symbolic) generative process have at least one significant advantage over the aforementioned black-box approaches: given a grammar, it is straightforward to precisely characterize how that grammar will extrapolate beyond the examples in a given training set to out-of-distribution data. Indeed, it is often possible for researchers to design the form that this extrapolation will take: smoothed n-gram language models guarantee that no memorization is possible beyond a certain length BIBREF12 ; CCG-based semantic parsers can make immediate use of entity lexicons without having ever seen the lexicon entries used in real sentences BIBREF13 . It is not the case, as sometimes claimed BIBREF14 , that black-box neural models are fundamentally incapable of this kind of predictable generalization—the success of these models at capturing long-range structure in text BIBREF15 and controlled algorithmic data BIBREF16 indicate that some representation of hierarchical structure can be learned given enough data. But the precise point at which this transition occurs is not well-characterized; it is evidently beyond the scale available in many real-world problems. How can we improve the behavior of high-quality black-box models in these settings? There are many sophisticated tools available for improving the function approximators or loss functions themselves—regularization BIBREF17 , posterior regularization BIBREF18 , BIBREF19 , explicit stacks BIBREF20 and composition operators BIBREF21 ; these existing proposals tend to be task- and architecture-specific. But to the extent that the generalization problem can be addressed by increasing the scale of the training data, it is natural to ask whether we can address the problem by increasing this scale artificially—in other words, via data augmentation. Previous work BIBREF7 also studied data augmentation and compositionality in specific setting of learning language-to-logical-form mappings, beginning from the principle that data is compositional if it is generated by a synchronous grammar that relates strings to meanings. The specific approach proposed by BIBREF7 is effective but tailored for semantic parsing; it requires access to structured meaning representations with explicit types and bracketings, which are not available in most NLP applications. Here we aim at a notion of compositionality that is simpler and more general: a bias toward identifying recurring fragments seen at training time, and re-using them in environments distinct from the environments in which they were first observed. This view makes no assumptions about the availability of brackets and types, and is synchronous only to the extent that the notion of a fragment is permitted to include content from both the source and target sides. We will find that it is nearly as effective as the approach of BIBREF7 in the settings for which the latter was designed, but also effective on a variety of problems where it cannot be applied. Approach Consider again the example in fig:teaser. 
Our data augmentation protocol aims to discover substitutable sentence fragments (highlighted), with the fact a pair of fragments appear in some common sub-sentential environment (underlined) taken as evidence that the fragments belong to a common category. To generate a new examples for the model, an occurrence of one fragment is removed from a sentence to produce a sentence template, which is then populated with the other fragment. Why should we expect this procedure to produce well-formed training examples? The existence of syntactic categories, and the expressibility of well-formedness rules in terms of these abstract categories, is one of the foundational principles of generative approaches to syntax BIBREF22 . The observation that sentence context provides a strong signal about a constitutent's category is in turn the foundation of distributional approaches to language processing BIBREF23 . Combining the two gives the outlines of the above procedure. This combination has a productive history in natural language processing: when fragments are single words, it yields class-based language models BIBREF24 ; when fragments are contiguous spans it yields unsupervised parsers BIBREF0 , BIBREF25 . The present data augmentation scenario is distinguished mainly by the fact that we are unconcerned with producing a complete generative model of data, or with recovering the latent structure implied by the presence of nested syntactic categories. We can still synthesize high-precision examples of well-formed sequences by identifying individual substitutions that are likely to be correct without understanding how they fit into the grammar as a whole. Indeed, if we are not concerned with recovering linguistically plausible analyses, we need not limit ourselves to words or contiguous sentence fragments. We can take as evidence that we can use picks...up wherever we can use puts...down. Indeed, given a translation dataset: we can apply the same principle to synthesize I dax. INLINEFORM0 Dajo. based on the common environment ...marvelously INLINEFORM1 ...maravillosamente. From the perspective of a generalized substitution principle, the alignment problem in machine translation is the same as the class induction problem in language modeling, but with sequences featuring large numbers of gappy fragments and a boundary symbol INLINEFORM2 . The only remaining question is what makes two environments similar enough to infer the existence of a common category. There is, again, a large literature on this question (including the aforementioned language modeling, unsupervised parsing, and alignment work), but in the current work we will make use of a very simple criterion: fragments are interchangeable if they occur in at least one lexical environment that is exactly the same. Given a window size INLINEFORM0 , a sequence of INLINEFORM1 words INLINEFORM2 , and a fragment consisting of a set of INLINEFORM3 spans INLINEFORM4 , the environment is given by INLINEFORM5 , i.e. a INLINEFORM6 -word window around each span of the fragment. The data augmentation operation that defines geca is formally stated as follows: let INLINEFORM0 denote the substitution of the fragment INLINEFORM1 into the template INLINEFORM2 , and INLINEFORM3 be a representation of the environment in which INLINEFORM4 occurs in INLINEFORM5 . Then, If the training data contains sequences INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , and that INLINEFORM3 and INLINEFORM4 , synthesize a new training example INLINEFORM5 . 
Implementation Naïve implementation of the boxed operation takes INLINEFORM0 time (where INLINEFORM1 is the number of distinct templates in the dataset and INLINEFORM2 the number of distinct fragments). This can be improved to INLINEFORM3 (where INLINEFORM4 is the number of templates that map to the same environment) by building appropriate data structures:

from collections import defaultdict

def geca(dataset, fragments, env, fill):
    f2t = defaultdict(set)  # fragment -> templates it occurs in
    t2f = defaultdict(set)  # template -> fragments that fill it
    e2t = defaultdict(set)  # environment -> templates with that environment
    for sentence in dataset:
        for template, fragment in fragments(sentence):
            f2t[fragment].add(template)
            t2f[template].add(fragment)
            e2t[env(template)].add(template)
    # t2t[template1] holds the templates that fragments observed in template1 may
    # also fill: if `fragment` occurs in both `witness` and `target`, then any
    # template sharing an environment with `witness` donates its fragments to `target`.
    t2t = defaultdict(set)
    for fragment in f2t:
        for target in f2t[fragment]:
            for witness in f2t[fragment]:
                for template1 in e2t[env(witness)]:
                    t2t[template1].add(target)
    for template1, targets in t2t.items():
        for template2 in targets:
            for arg in t2f[template1]:
                if arg not in t2f[template2]:
                    yield fill(template2, arg)

Sample geca implementation. Space requirements might still be considerable (comparable to those used by n-gram language models), and similar tricks can be used to reduce memory usage BIBREF27. The above code is agnostic with respect to the choice of fragmentation and environment functions (passed in as fragments and env); task-specific choices are described in more detail for each experiment below. Discussion We introduced geca, a simple data augmentation scheme based on identifying local phrase substitutions that are licensed by common context, and demonstrated that extra training examples generated with geca lead to improvements on both diagnostic and natural datasets for semantic parsing and language modeling. While the approach is surprisingly effective in its current form, we view these results mostly as an invitation to consider more carefully the role played by representations of sentence fragments in larger questions about compositionality in black-box sequence models. The experiments in this paper all rely on exact string matching; future work might take advantage of learned representations of spans and their environments BIBREF32, BIBREF33. More generally, the present results underline the extent to which current models fail to learn simple, context-independent notions of reuse, but also how easy it is to make progress towards addressing this problem without fundamental changes in model architecture.
Answer with content missing: (Applications section) We use Wikipedia articles in five languages (Kinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English) as well as the Na dataset of Adams et al. (2017). Select: Kinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English
182b6d77b51fa83102719a81862891f49c23a025
182b6d77b51fa83102719a81862891f49c23a025_0
Q: What limitations are mentioned? Text: Introduction In a survey across 38 countries, the Pew Research Center reported that the global public opposed partisanship in news media BIBREF0 . It is, however, challenging to assess the partisanship of news articles on a large scale. We thus made an effort to create a dataset of articles annotated with political partisanship so that content analysis systems can benefit from it. To construct a dataset of news articles labeled with partisanship, it is required that some annotators read each article and decide whether it is partisan. This is an expensive annotation process. Another way to derive a label for an article is by using the partisanship of the publisher of the article. Previous work has used this method BIBREF1 , BIBREF2 , BIBREF3 . This labeling paradigm is premised on that partisan publishers publish more partisan articles and non-partisan publishers publish more non-partisan articles. Although there would be non-partisan articles published by partisan publishers (and vice versa), and thus labeled wrongly, the assumption ensures more information than noise. Once the partisanship of a publisher is known, the labels of all its articles are known, which is fast and cheap. We created a dataset of two parts. The first part contains a large number of articles that were labeled using the partisanship of publishers. The second part contains a few hundreds of articles that were annotated by readers who were asked to read each article and answer survey questions. In the following sections, we describe the collection and annotation of both parts of the dataset. Dataset description DpgMedia2019 is a Dutch dataset that was collected from the publications within DPG Media. We took 11 publishers in the Netherlands for the dataset. These publishers include 4 national publishers, Algemeen Dagblad (AD), de Volkskrant (VK), Trouw, and Het Parool, and 7 regional publishers, de Gelderlander, Tubantia, Brabants Dagblad, Eindhovens Dagblad, BN/De Stem PZC, and de Stentor. The regional publishers are collectively called Algemeen Dagblad Regionaal (ADR). A summary of the dataset is shown in Table TABREF3 . Publisher-level data We used an internal database that stores all articles written by journalists and ready to be published to collect the articles. From the database, we queried all articles that were published between 2017 and 2019. We filtered articles to be non-advertisement. We also filtered on the main sections so that the articles were not published under the sports and entertainment sections, which we assumed to be less political. After collecting, we found that a lot of the articles were published by several publishers, especially a large overlap existed between AD and ADR. To deal with the problem without losing many articles, we decided that articles that appeared in both AD and its regional publications belonged to AD. Therefore, articles were processed in the following steps: Remove any article that was published by more than one national publisher (VK, AD, Trouw, and Het Parool). This gave us a list of unique articles from the largest 4 publishers. Remove any article from ADR that overlapped with the articles from national publishers. Remove any article that was published by more than one regional publisher (ADR). The process assured that most of the articles are unique to one publisher. The only exceptions were the AD articles, of which some were also published by ADR. 
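As a rough sketch of the de-duplication steps just listed (the publisher names come from the text; representing an article as a mapping from its text to the set of publishers that ran it is an assumption made only for illustration):

NATIONAL = {"AD", "VK", "Trouw", "Het Parool"}

def deduplicate(articles):
    # articles: dict mapping article text -> set of publishers that published it.
    kept = {}
    for text, pubs in articles.items():
        national = pubs & NATIONAL
        regional = pubs - NATIONAL
        if len(national) > 1:
            continue                     # step 1: drop articles shared by several national titles
        if national:
            kept[text] = national.pop()  # step 2: an AD/ADR overlap is credited to the national title
        elif len(regional) == 1:
            kept[text] = regional.pop()  # step 3: drop articles shared by several regional titles
    return kept

Step 2 is what leaves the remaining AD/ADR overlap in place.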
This is not ideal but acceptable as we show in the section UID8 that AD and ADR publishers would have the same partisanship labels. In the end, we have 103,812 articles. To our knowledge, there is no comprehensive research about the partisanship of Dutch publishers. We thus adopted the audience-based method to decide the partisanship of publishers. Within the survey that will be explained in section SECREF11 , we asked the annotators to rate their political leanings. The question asked an annotator to report his or her political standpoints to be extreme-left, left, neutral, right, or extreme-right. We mapped extreme-left to -2, left to -1, center to 0, right to 1, extremely-right to 2, and assigned the value to each annotator. Since each annotator is subscribed to one of the publishers in our survey, we calculated the partisanship score of a publisher by averaging the scores of all annotators that subscribed to the publisher. The final score of the 11 publishers are listed in Table TABREF9 , sorted from the most left-leaning to the most right-leaning. We decided to treat VK, Trouw, and Het Parool as partisan publishers and the rest non-partisan. This result largely accords with that from the news media report from the Pew Research Center in 2018 BIBREF4 , which found that VK is left-leaning and partisan while AD is less partisan. Table TABREF10 shows the final publisher-level dataset of dpgMedia2019, with the number of articles and class distribution. Article-level data To collect article-level labels, we utilized a platform in the company that has been used by the market research team to collect surveys from the subscribers of different news publishers. The survey works as follows: The user is first presented with a set of selected pages (usually 4 pages and around 20 articles) from the print paper the day before. The user can select an article each time that he or she has read, and answer some questions about it. We added 3 questions to the existing survey that asked the level of partisanship, the polarity of partisanship, and which pro- or anti- entities the article presents. We also asked the political standpoint of the user. The complete survey can be found in Appendices. The reason for using this platform was two-fold. First, the platform provided us with annotators with a higher probability to be competent with the task. Since the survey was distributed to subscribers that pay for reading news, it's more likely that they regularly read newspapers and are more familiar with the political issues and parties in the Netherlands. On the other hand, if we use crowdsourcing platforms, we need to design process to select suitable annotators, for example by nationality or anchor questions to test the annotator's ability. Second, the platform gave us more confidence that an annotator had read the article before answering questions. Since the annotators could choose which articles to annotate, it is more likely that they would rate an article that they had read and had some opinions about. The annotation task ran for around two months in February to April 2019. We collected annotations for 1,536 articles from 3,926 annotators. For the first question, where we asked about the intensity of partisanship, more than half of the annotations were non-partisan. About 1% of the annotation indicated an extreme partisanship, as shown in Table TABREF13 . For the polarity of partisanship, most of the annotators found it not applicable or difficult to decide, as shown in Table TABREF14 . 
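A minimal sketch of the audience-based publisher scoring described above; the leaning-to-score mapping follows the text, while the data layout (a list of publisher/leaning pairs) is an assumption:

from collections import defaultdict
from statistics import mean

LEANING_SCORE = {"extreme-left": -2, "left": -1, "neutral": 0, "right": 1, "extreme-right": 2}

def publisher_partisanship(annotators):
    # annotators: iterable of (publisher, self_reported_leaning) pairs.
    by_pub = defaultdict(list)
    for publisher, leaning in annotators:
        by_pub[publisher].append(LEANING_SCORE[leaning])
    # Average self-reported leaning of each publisher's subscribers.
    return {pub: mean(scores) for pub, scores in by_pub.items()}

The paper then treats VK, Trouw, and Het Parool as partisan based on the resulting scores.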
For annotations that indicated a polarity, the highest percentage was given to progressive. Progressive and conservative seemed to be more relevant terms in the Netherlands as they are used more than their counterparts, left and right, respectively. As for the self-rated political standpoint of the annotators, nearly half of the annotators identified themselves as left-leaning, while only around 20% were right-leaning. This is interesting because when deciding the polarity of articles, left and progressive ratings were given much more often than right and conservative ones. This shows that these left-leaning annotators were able to identify their partisanship and rate the articles accordingly. We suspected that the annotators would induce bias in ratings based on their political leaning and we might want to normalize it. To check whether this was the case, we grouped annotators based on their political leaning and calculate the percentage of each option being annotated. In Figure FIGREF16 , we grouped options and color-coded political leanings to compare whether there are differences in the annotation between the groups. We observe that the "extreme-right" group used less "somewhat partisan", "partisan", and "extremely-partisan" annotations. This might mean that articles that were considered partisan by other groups were considered "non-partisan" or "impossible to decide" by this group. We didn't observe a significant difference between the groups. Figure FIGREF17 shows the same for the second question. Interestingly, the "extreme-right" group gave a lot more "right" and slightly more "progressive" ratings than other groups. In the end, we decided to use the raw ratings. How to scale the ratings based on self-identified political leaning needs more investigation. The main question that we are interested in is the first question in our survey. In addition to the 5-point Likert scale that an annotator could choose from (non-partisan to extremely partisan), we also provided the option to choose "impossible to decide" because the articles could be about non-political topics. When computing inter-rater agreement, this option was ignored. The remaining 5 ratings were treated as ordinal ratings. The initial Krippendorff's alpha was 0.142, using the interval metric. To perform quality control, we devised some filtering steps based on the information we had. These steps are as follows: Remove uninterested annotators: we assumed that annotators that provided no information were not interested in participating in the task. These annotators always rated "not possible to decide" for Q1, 'not applicable' or "unknown" for Q2, and provide no textual comment for Q3. There were in total 117 uninterested annotators and their answers were discarded. Remove unreliable annotators: as we didn't have "gold data" to evaluate reliability, we used the free text that an annotator provided in Q3 to compute a reliability score. The assumption was that if an annotator was able to provide texts with meaningful partisanship description, he or she was more reliable in performing the task. To do this, we collected the text given by each annotator. We filtered out text that didn't answer the question, such as symbols, 'no idea', 'see above', etc. Then we calculated the reliability score of annotator INLINEFORM0 with equation EQREF21 , where INLINEFORM1 is the number of clean texts that annotator INLINEFORM2 provided in total and INLINEFORM3 is the number of articles that annotator INLINEFORM4 rated. 
DISPLAYFORM0 We added one to INLINEFORM0 so that annotators that gave no clean texts would not all end up with a zero score but would have different scores based on how many articles they rated. In other words, if an annotator only rated one article and didn't give textual information, we considered he or she reliable since we had little information. However, an annotator that rated ten articles but never gave useful textual information was more likely to be unreliable. The reliability score was used to filter out annotators that rarely gave meaningful text. The threshold of the filtering was decided by the Krippendorff's alpha that would be achieved after discarding the annotators with a score below the threshold. Remove articles with too few annotations: articles with less than 3 annotations were discarded because we were not confident with a label that was derived from less than 3 annotations. Remove unreliable articles: if at least half of the annotations of an article were "impossible to decide", we assumed that the article was not about issues of which partisanship could be decided. Finally, we mapped ratings of 1 and 2 to non-partisan, and 3 to 5 to partisan. A majority vote was used to derive the final label. Articles with no majority were discarded. In the end, 766 articles remained, of which 201 were partisan. Table TABREF24 shows the number of articles and the percentage of partisan articles per publisher. The final alpha value is 0.180. Analysis of the datasets In this section, we analyze the properties and relationship of the two parts (publisher-level and article-level) of the datasets. In Table TABREF25 , we listed the length of articles of the two parts. The reason that this is important is to check whether there are apparent differences between the articles in the two parts of the dataset. We see that the lengths are comparable, which is desired. The second analysis is the relationship between publisher and article partisanship. We want to check whether the assumption of partisan publishers publish more partisan articles is valid for our dataset. To do this, we used the article-level labels and calculated the percentage of partisan articles for each publisher. This value was then compared with the publisher partisanship. We calculated Spearsman's correlation between the publisher partisanship derived from the audience and article content. We take the absolute value of the partisanship in table TABREF9 and that in table TABREF24 . The correlation is 0.21. This low correlation resulted from the nature of the task and publishers that were considered. The partisan publishers in DPG Media publish news articles that are reviewed by professional editors. The publishers are often partisan only on a portion of the articles and on certain topics. Limitations We identified some limitations during the process, which we describe in this section. When deciding publisher partisanship, the number of people from whom we computed the score was small. For example, de Stentor is estimated to reach 275K readers each day on its official website. Deciding the audience leaning from 55 samples was subject to sampling bias. Besides, the scores differ very little between publishers. None of the publishers had an absolute score higher than 1, meaning that even the most partisan publisher was only slightly partisan. Deciding which publishers we consider as partisan and which not is thus not very reliable. The article-level annotation task was not as well-defined as on a crowdsourcing platform. 
We included the questions as part of an existing survey and didn't want to create much burden to the annotators. Therefore, we did not provide long descriptive text that explained how a person should annotate an article. We thus run under the risk of annotator bias. This is one of the reasons for a low inter-rater agreement. Dataset Application This dataset is aimed to contribute to developing a partisan news detector. There are several ways that the dataset can be used to devise the system. For example, it is possible to train the detector using publisher-level labels and test with article-level labels. It is also possible to use semi-supervised learning and treat the publisher-level part as unsupervised, or use only the article-level part. We also released the raw survey data so that new mechanisms to decide the article-level labels can be devised. Acknowledgements We would like to thank Johannes Kiesel and the colleagues from Factmata for providing us with the annotation questions they used when creating a hyperpartisan news dataset. We would also like to thank Jaron Harambam, Judith Möller for helping us in asking the right questions for our annotations and Nava Tintarev for sharing her insights in the domain. We list the questions we asked in the partisanship annotation survey, in the original Dutch language and an English translation. Translated
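A compact sketch of the quality-control steps described in the article-level annotation section above, covering only the reliability filter and the article-level filters (removing uninterested annotators needs the full Q1-Q3 answers and is omitted). The reliability score is written here as (t + 1) / n, which is how the add-one explanation accompanying the equation reads, and the alpha-driven cutoff is kept as a free threshold parameter:

from collections import Counter, defaultdict

def reliability(clean_texts, articles_rated):
    # Assumed form of the reliability score: (t_i + 1) / n_i.
    return (clean_texts + 1) / articles_rated

def label_articles(ratings, reliab, threshold):
    # ratings: list of (annotator, article, rating) with rating in 1..5,
    # or None for "impossible to decide"; reliab: annotator -> reliability score.
    per_article = defaultdict(list)
    for annotator, article, rating in ratings:
        if reliab.get(annotator, 1.0) < threshold:
            continue                                    # drop unreliable annotators
        per_article[article].append(rating)
    labels = {}
    for article, rs in per_article.items():
        if len(rs) < 3:
            continue                                    # too few annotations
        if sum(r is None for r in rs) >= len(rs) / 2:
            continue                                    # mostly "impossible to decide"
        votes = Counter("partisan" if r >= 3 else "non-partisan"
                        for r in rs if r is not None)
        (top, n), *rest = votes.most_common()
        if rest and rest[0][1] == n:
            continue                                    # no majority
        labels[article] = top
    return labels

And, as an illustration of the first application mentioned in the Dataset Application section (train on publisher-level labels, evaluate on the reader-annotated articles), a small hypothetical baseline; the TF-IDF and logistic-regression choices are illustrative assumptions, not the authors' method:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def weak_supervision_baseline(pub_texts, pub_labels, art_texts, art_labels):
    # Fit on publisher-derived labels, score on the article-level annotations.
    vec = TfidfVectorizer(max_features=50000)
    clf = LogisticRegression(max_iter=1000, class_weight="balanced")
    clf.fit(vec.fit_transform(pub_texts), pub_labels)
    return clf.score(vec.transform(art_texts), art_labels)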
small samples for deciding publisher partisanship; risk of annotator bias because of the short description text provided to annotators
441886f0497dc84f46ed8c32e8fa32983b5db42e
441886f0497dc84f46ed8c32e8fa32983b5db42e_0
Q: What examples of applications are mentioned? Text: Introduction In a survey across 38 countries, the Pew Research Center reported that the global public opposed partisanship in news media BIBREF0 . It is, however, challenging to assess the partisanship of news articles on a large scale. We thus made an effort to create a dataset of articles annotated with political partisanship so that content analysis systems can benefit from it. To construct a dataset of news articles labeled with partisanship, it is required that some annotators read each article and decide whether it is partisan. This is an expensive annotation process. Another way to derive a label for an article is by using the partisanship of the publisher of the article. Previous work has used this method BIBREF1 , BIBREF2 , BIBREF3 . This labeling paradigm is premised on that partisan publishers publish more partisan articles and non-partisan publishers publish more non-partisan articles. Although there would be non-partisan articles published by partisan publishers (and vice versa), and thus labeled wrongly, the assumption ensures more information than noise. Once the partisanship of a publisher is known, the labels of all its articles are known, which is fast and cheap. We created a dataset of two parts. The first part contains a large number of articles that were labeled using the partisanship of publishers. The second part contains a few hundreds of articles that were annotated by readers who were asked to read each article and answer survey questions. In the following sections, we describe the collection and annotation of both parts of the dataset. Dataset description DpgMedia2019 is a Dutch dataset that was collected from the publications within DPG Media. We took 11 publishers in the Netherlands for the dataset. These publishers include 4 national publishers, Algemeen Dagblad (AD), de Volkskrant (VK), Trouw, and Het Parool, and 7 regional publishers, de Gelderlander, Tubantia, Brabants Dagblad, Eindhovens Dagblad, BN/De Stem PZC, and de Stentor. The regional publishers are collectively called Algemeen Dagblad Regionaal (ADR). A summary of the dataset is shown in Table TABREF3 . Publisher-level data We used an internal database that stores all articles written by journalists and ready to be published to collect the articles. From the database, we queried all articles that were published between 2017 and 2019. We filtered articles to be non-advertisement. We also filtered on the main sections so that the articles were not published under the sports and entertainment sections, which we assumed to be less political. After collecting, we found that a lot of the articles were published by several publishers, especially a large overlap existed between AD and ADR. To deal with the problem without losing many articles, we decided that articles that appeared in both AD and its regional publications belonged to AD. Therefore, articles were processed in the following steps: Remove any article that was published by more than one national publisher (VK, AD, Trouw, and Het Parool). This gave us a list of unique articles from the largest 4 publishers. Remove any article from ADR that overlapped with the articles from national publishers. Remove any article that was published by more than one regional publisher (ADR). The process assured that most of the articles are unique to one publisher. The only exceptions were the AD articles, of which some were also published by ADR. 
This is not ideal but acceptable as we show in the section UID8 that AD and ADR publishers would have the same partisanship labels. In the end, we have 103,812 articles. To our knowledge, there is no comprehensive research about the partisanship of Dutch publishers. We thus adopted the audience-based method to decide the partisanship of publishers. Within the survey that will be explained in section SECREF11 , we asked the annotators to rate their political leanings. The question asked an annotator to report his or her political standpoints to be extreme-left, left, neutral, right, or extreme-right. We mapped extreme-left to -2, left to -1, center to 0, right to 1, extremely-right to 2, and assigned the value to each annotator. Since each annotator is subscribed to one of the publishers in our survey, we calculated the partisanship score of a publisher by averaging the scores of all annotators that subscribed to the publisher. The final score of the 11 publishers are listed in Table TABREF9 , sorted from the most left-leaning to the most right-leaning. We decided to treat VK, Trouw, and Het Parool as partisan publishers and the rest non-partisan. This result largely accords with that from the news media report from the Pew Research Center in 2018 BIBREF4 , which found that VK is left-leaning and partisan while AD is less partisan. Table TABREF10 shows the final publisher-level dataset of dpgMedia2019, with the number of articles and class distribution. Article-level data To collect article-level labels, we utilized a platform in the company that has been used by the market research team to collect surveys from the subscribers of different news publishers. The survey works as follows: The user is first presented with a set of selected pages (usually 4 pages and around 20 articles) from the print paper the day before. The user can select an article each time that he or she has read, and answer some questions about it. We added 3 questions to the existing survey that asked the level of partisanship, the polarity of partisanship, and which pro- or anti- entities the article presents. We also asked the political standpoint of the user. The complete survey can be found in Appendices. The reason for using this platform was two-fold. First, the platform provided us with annotators with a higher probability to be competent with the task. Since the survey was distributed to subscribers that pay for reading news, it's more likely that they regularly read newspapers and are more familiar with the political issues and parties in the Netherlands. On the other hand, if we use crowdsourcing platforms, we need to design process to select suitable annotators, for example by nationality or anchor questions to test the annotator's ability. Second, the platform gave us more confidence that an annotator had read the article before answering questions. Since the annotators could choose which articles to annotate, it is more likely that they would rate an article that they had read and had some opinions about. The annotation task ran for around two months in February to April 2019. We collected annotations for 1,536 articles from 3,926 annotators. For the first question, where we asked about the intensity of partisanship, more than half of the annotations were non-partisan. About 1% of the annotation indicated an extreme partisanship, as shown in Table TABREF13 . For the polarity of partisanship, most of the annotators found it not applicable or difficult to decide, as shown in Table TABREF14 . 
For annotations that indicated a polarity, the highest percentage was given to progressive. Progressive and conservative seemed to be more relevant terms in the Netherlands as they are used more than their counterparts, left and right, respectively. As for the self-rated political standpoint of the annotators, nearly half of the annotators identified themselves as left-leaning, while only around 20% were right-leaning. This is interesting because when deciding the polarity of articles, left and progressive ratings were given much more often than right and conservative ones. This shows that these left-leaning annotators were able to identify their partisanship and rate the articles accordingly. We suspected that the annotators would induce bias in ratings based on their political leaning and we might want to normalize it. To check whether this was the case, we grouped annotators based on their political leaning and calculate the percentage of each option being annotated. In Figure FIGREF16 , we grouped options and color-coded political leanings to compare whether there are differences in the annotation between the groups. We observe that the "extreme-right" group used less "somewhat partisan", "partisan", and "extremely-partisan" annotations. This might mean that articles that were considered partisan by other groups were considered "non-partisan" or "impossible to decide" by this group. We didn't observe a significant difference between the groups. Figure FIGREF17 shows the same for the second question. Interestingly, the "extreme-right" group gave a lot more "right" and slightly more "progressive" ratings than other groups. In the end, we decided to use the raw ratings. How to scale the ratings based on self-identified political leaning needs more investigation. The main question that we are interested in is the first question in our survey. In addition to the 5-point Likert scale that an annotator could choose from (non-partisan to extremely partisan), we also provided the option to choose "impossible to decide" because the articles could be about non-political topics. When computing inter-rater agreement, this option was ignored. The remaining 5 ratings were treated as ordinal ratings. The initial Krippendorff's alpha was 0.142, using the interval metric. To perform quality control, we devised some filtering steps based on the information we had. These steps are as follows: Remove uninterested annotators: we assumed that annotators that provided no information were not interested in participating in the task. These annotators always rated "not possible to decide" for Q1, 'not applicable' or "unknown" for Q2, and provide no textual comment for Q3. There were in total 117 uninterested annotators and their answers were discarded. Remove unreliable annotators: as we didn't have "gold data" to evaluate reliability, we used the free text that an annotator provided in Q3 to compute a reliability score. The assumption was that if an annotator was able to provide texts with meaningful partisanship description, he or she was more reliable in performing the task. To do this, we collected the text given by each annotator. We filtered out text that didn't answer the question, such as symbols, 'no idea', 'see above', etc. Then we calculated the reliability score of annotator INLINEFORM0 with equation EQREF21 , where INLINEFORM1 is the number of clean texts that annotator INLINEFORM2 provided in total and INLINEFORM3 is the number of articles that annotator INLINEFORM4 rated. 
DISPLAYFORM0 We added one to INLINEFORM0 so that annotators that gave no clean texts would not all end up with a zero score but would have different scores based on how many articles they rated. In other words, if an annotator only rated one article and didn't give textual information, we considered he or she reliable since we had little information. However, an annotator that rated ten articles but never gave useful textual information was more likely to be unreliable. The reliability score was used to filter out annotators that rarely gave meaningful text. The threshold of the filtering was decided by the Krippendorff's alpha that would be achieved after discarding the annotators with a score below the threshold. Remove articles with too few annotations: articles with less than 3 annotations were discarded because we were not confident with a label that was derived from less than 3 annotations. Remove unreliable articles: if at least half of the annotations of an article were "impossible to decide", we assumed that the article was not about issues of which partisanship could be decided. Finally, we mapped ratings of 1 and 2 to non-partisan, and 3 to 5 to partisan. A majority vote was used to derive the final label. Articles with no majority were discarded. In the end, 766 articles remained, of which 201 were partisan. Table TABREF24 shows the number of articles and the percentage of partisan articles per publisher. The final alpha value is 0.180. Analysis of the datasets In this section, we analyze the properties and relationship of the two parts (publisher-level and article-level) of the datasets. In Table TABREF25 , we listed the length of articles of the two parts. The reason that this is important is to check whether there are apparent differences between the articles in the two parts of the dataset. We see that the lengths are comparable, which is desired. The second analysis is the relationship between publisher and article partisanship. We want to check whether the assumption of partisan publishers publish more partisan articles is valid for our dataset. To do this, we used the article-level labels and calculated the percentage of partisan articles for each publisher. This value was then compared with the publisher partisanship. We calculated Spearsman's correlation between the publisher partisanship derived from the audience and article content. We take the absolute value of the partisanship in table TABREF9 and that in table TABREF24 . The correlation is 0.21. This low correlation resulted from the nature of the task and publishers that were considered. The partisan publishers in DPG Media publish news articles that are reviewed by professional editors. The publishers are often partisan only on a portion of the articles and on certain topics. Limitations We identified some limitations during the process, which we describe in this section. When deciding publisher partisanship, the number of people from whom we computed the score was small. For example, de Stentor is estimated to reach 275K readers each day on its official website. Deciding the audience leaning from 55 samples was subject to sampling bias. Besides, the scores differ very little between publishers. None of the publishers had an absolute score higher than 1, meaning that even the most partisan publisher was only slightly partisan. Deciding which publishers we consider as partisan and which not is thus not very reliable. The article-level annotation task was not as well-defined as on a crowdsourcing platform. 
We included the questions as part of an existing survey and didn't want to create much burden to the annotators. Therefore, we did not provide long descriptive text that explained how a person should annotate an article. We thus run under the risk of annotator bias. This is one of the reasons for a low inter-rater agreement. Dataset Application This dataset is aimed to contribute to developing a partisan news detector. There are several ways that the dataset can be used to devise the system. For example, it is possible to train the detector using publisher-level labels and test with article-level labels. It is also possible to use semi-supervised learning and treat the publisher-level part as unsupervised, or use only the article-level part. We also released the raw survey data so that new mechanisms to decide the article-level labels can be devised. Acknowledgements We would like to thank Johannes Kiesel and the colleagues from Factmata for providing us with the annotation questions they used when creating a hyperpartisan news dataset. We would also like to thank Jaron Harambam, Judith Möller for helping us in asking the right questions for our annotations and Nava Tintarev for sharing her insights in the domain. We list the questions we asked in the partisanship annotation survey, in the original Dutch language and an English translation. Translated
partisan news detector
62afbf8b1090e56fdd2a2fa2bdb687c3995477f6
62afbf8b1090e56fdd2a2fa2bdb687c3995477f6_0
Q: Did they crowdsource the annotations? Text: Introduction In a survey across 38 countries, the Pew Research Center reported that the global public opposed partisanship in news media BIBREF0 . It is, however, challenging to assess the partisanship of news articles on a large scale. We thus made an effort to create a dataset of articles annotated with political partisanship so that content analysis systems can benefit from it. To construct a dataset of news articles labeled with partisanship, it is required that some annotators read each article and decide whether it is partisan. This is an expensive annotation process. Another way to derive a label for an article is by using the partisanship of the publisher of the article. Previous work has used this method BIBREF1 , BIBREF2 , BIBREF3 . This labeling paradigm is premised on that partisan publishers publish more partisan articles and non-partisan publishers publish more non-partisan articles. Although there would be non-partisan articles published by partisan publishers (and vice versa), and thus labeled wrongly, the assumption ensures more information than noise. Once the partisanship of a publisher is known, the labels of all its articles are known, which is fast and cheap. We created a dataset of two parts. The first part contains a large number of articles that were labeled using the partisanship of publishers. The second part contains a few hundreds of articles that were annotated by readers who were asked to read each article and answer survey questions. In the following sections, we describe the collection and annotation of both parts of the dataset. Dataset description DpgMedia2019 is a Dutch dataset that was collected from the publications within DPG Media. We took 11 publishers in the Netherlands for the dataset. These publishers include 4 national publishers, Algemeen Dagblad (AD), de Volkskrant (VK), Trouw, and Het Parool, and 7 regional publishers, de Gelderlander, Tubantia, Brabants Dagblad, Eindhovens Dagblad, BN/De Stem PZC, and de Stentor. The regional publishers are collectively called Algemeen Dagblad Regionaal (ADR). A summary of the dataset is shown in Table TABREF3 . Publisher-level data We used an internal database that stores all articles written by journalists and ready to be published to collect the articles. From the database, we queried all articles that were published between 2017 and 2019. We filtered articles to be non-advertisement. We also filtered on the main sections so that the articles were not published under the sports and entertainment sections, which we assumed to be less political. After collecting, we found that a lot of the articles were published by several publishers, especially a large overlap existed between AD and ADR. To deal with the problem without losing many articles, we decided that articles that appeared in both AD and its regional publications belonged to AD. Therefore, articles were processed in the following steps: Remove any article that was published by more than one national publisher (VK, AD, Trouw, and Het Parool). This gave us a list of unique articles from the largest 4 publishers. Remove any article from ADR that overlapped with the articles from national publishers. Remove any article that was published by more than one regional publisher (ADR). The process assured that most of the articles are unique to one publisher. The only exceptions were the AD articles, of which some were also published by ADR. 
This is not ideal but acceptable as we show in the section UID8 that AD and ADR publishers would have the same partisanship labels. In the end, we have 103,812 articles. To our knowledge, there is no comprehensive research about the partisanship of Dutch publishers. We thus adopted the audience-based method to decide the partisanship of publishers. Within the survey that will be explained in section SECREF11 , we asked the annotators to rate their political leanings. The question asked an annotator to report his or her political standpoints to be extreme-left, left, neutral, right, or extreme-right. We mapped extreme-left to -2, left to -1, center to 0, right to 1, extremely-right to 2, and assigned the value to each annotator. Since each annotator is subscribed to one of the publishers in our survey, we calculated the partisanship score of a publisher by averaging the scores of all annotators that subscribed to the publisher. The final score of the 11 publishers are listed in Table TABREF9 , sorted from the most left-leaning to the most right-leaning. We decided to treat VK, Trouw, and Het Parool as partisan publishers and the rest non-partisan. This result largely accords with that from the news media report from the Pew Research Center in 2018 BIBREF4 , which found that VK is left-leaning and partisan while AD is less partisan. Table TABREF10 shows the final publisher-level dataset of dpgMedia2019, with the number of articles and class distribution. Article-level data To collect article-level labels, we utilized a platform in the company that has been used by the market research team to collect surveys from the subscribers of different news publishers. The survey works as follows: The user is first presented with a set of selected pages (usually 4 pages and around 20 articles) from the print paper the day before. The user can select an article each time that he or she has read, and answer some questions about it. We added 3 questions to the existing survey that asked the level of partisanship, the polarity of partisanship, and which pro- or anti- entities the article presents. We also asked the political standpoint of the user. The complete survey can be found in Appendices. The reason for using this platform was two-fold. First, the platform provided us with annotators with a higher probability to be competent with the task. Since the survey was distributed to subscribers that pay for reading news, it's more likely that they regularly read newspapers and are more familiar with the political issues and parties in the Netherlands. On the other hand, if we use crowdsourcing platforms, we need to design process to select suitable annotators, for example by nationality or anchor questions to test the annotator's ability. Second, the platform gave us more confidence that an annotator had read the article before answering questions. Since the annotators could choose which articles to annotate, it is more likely that they would rate an article that they had read and had some opinions about. The annotation task ran for around two months in February to April 2019. We collected annotations for 1,536 articles from 3,926 annotators. For the first question, where we asked about the intensity of partisanship, more than half of the annotations were non-partisan. About 1% of the annotation indicated an extreme partisanship, as shown in Table TABREF13 . For the polarity of partisanship, most of the annotators found it not applicable or difficult to decide, as shown in Table TABREF14 . 
For annotations that indicated a polarity, the highest percentage was given to progressive. Progressive and conservative seemed to be more relevant terms in the Netherlands as they are used more than their counterparts, left and right, respectively. As for the self-rated political standpoint of the annotators, nearly half of the annotators identified themselves as left-leaning, while only around 20% were right-leaning. This is interesting because when deciding the polarity of articles, left and progressive ratings were given much more often than right and conservative ones. This shows that these left-leaning annotators were able to identify their partisanship and rate the articles accordingly. We suspected that the annotators would induce bias in ratings based on their political leaning and we might want to normalize it. To check whether this was the case, we grouped annotators based on their political leaning and calculate the percentage of each option being annotated. In Figure FIGREF16 , we grouped options and color-coded political leanings to compare whether there are differences in the annotation between the groups. We observe that the "extreme-right" group used less "somewhat partisan", "partisan", and "extremely-partisan" annotations. This might mean that articles that were considered partisan by other groups were considered "non-partisan" or "impossible to decide" by this group. We didn't observe a significant difference between the groups. Figure FIGREF17 shows the same for the second question. Interestingly, the "extreme-right" group gave a lot more "right" and slightly more "progressive" ratings than other groups. In the end, we decided to use the raw ratings. How to scale the ratings based on self-identified political leaning needs more investigation. The main question that we are interested in is the first question in our survey. In addition to the 5-point Likert scale that an annotator could choose from (non-partisan to extremely partisan), we also provided the option to choose "impossible to decide" because the articles could be about non-political topics. When computing inter-rater agreement, this option was ignored. The remaining 5 ratings were treated as ordinal ratings. The initial Krippendorff's alpha was 0.142, using the interval metric. To perform quality control, we devised some filtering steps based on the information we had. These steps are as follows: Remove uninterested annotators: we assumed that annotators that provided no information were not interested in participating in the task. These annotators always rated "not possible to decide" for Q1, 'not applicable' or "unknown" for Q2, and provide no textual comment for Q3. There were in total 117 uninterested annotators and their answers were discarded. Remove unreliable annotators: as we didn't have "gold data" to evaluate reliability, we used the free text that an annotator provided in Q3 to compute a reliability score. The assumption was that if an annotator was able to provide texts with meaningful partisanship description, he or she was more reliable in performing the task. To do this, we collected the text given by each annotator. We filtered out text that didn't answer the question, such as symbols, 'no idea', 'see above', etc. Then we calculated the reliability score of annotator INLINEFORM0 with equation EQREF21 , where INLINEFORM1 is the number of clean texts that annotator INLINEFORM2 provided in total and INLINEFORM3 is the number of articles that annotator INLINEFORM4 rated. 
DISPLAYFORM0 We added one to INLINEFORM0 so that annotators that gave no clean texts would not all end up with a zero score but would have different scores based on how many articles they rated. In other words, if an annotator only rated one article and didn't give textual information, we considered he or she reliable since we had little information. However, an annotator that rated ten articles but never gave useful textual information was more likely to be unreliable. The reliability score was used to filter out annotators that rarely gave meaningful text. The threshold of the filtering was decided by the Krippendorff's alpha that would be achieved after discarding the annotators with a score below the threshold. Remove articles with too few annotations: articles with less than 3 annotations were discarded because we were not confident with a label that was derived from less than 3 annotations. Remove unreliable articles: if at least half of the annotations of an article were "impossible to decide", we assumed that the article was not about issues of which partisanship could be decided. Finally, we mapped ratings of 1 and 2 to non-partisan, and 3 to 5 to partisan. A majority vote was used to derive the final label. Articles with no majority were discarded. In the end, 766 articles remained, of which 201 were partisan. Table TABREF24 shows the number of articles and the percentage of partisan articles per publisher. The final alpha value is 0.180. Analysis of the datasets In this section, we analyze the properties and relationship of the two parts (publisher-level and article-level) of the datasets. In Table TABREF25 , we listed the length of articles of the two parts. The reason that this is important is to check whether there are apparent differences between the articles in the two parts of the dataset. We see that the lengths are comparable, which is desired. The second analysis is the relationship between publisher and article partisanship. We want to check whether the assumption of partisan publishers publish more partisan articles is valid for our dataset. To do this, we used the article-level labels and calculated the percentage of partisan articles for each publisher. This value was then compared with the publisher partisanship. We calculated Spearsman's correlation between the publisher partisanship derived from the audience and article content. We take the absolute value of the partisanship in table TABREF9 and that in table TABREF24 . The correlation is 0.21. This low correlation resulted from the nature of the task and publishers that were considered. The partisan publishers in DPG Media publish news articles that are reviewed by professional editors. The publishers are often partisan only on a portion of the articles and on certain topics. Limitations We identified some limitations during the process, which we describe in this section. When deciding publisher partisanship, the number of people from whom we computed the score was small. For example, de Stentor is estimated to reach 275K readers each day on its official website. Deciding the audience leaning from 55 samples was subject to sampling bias. Besides, the scores differ very little between publishers. None of the publishers had an absolute score higher than 1, meaning that even the most partisan publisher was only slightly partisan. Deciding which publishers we consider as partisan and which not is thus not very reliable. The article-level annotation task was not as well-defined as on a crowdsourcing platform. 
We included the questions as part of an existing survey and did not want to place too much of a burden on the annotators. Therefore, we did not provide long descriptive text that explained how a person should annotate an article. We thus ran the risk of annotator bias. This is one of the reasons for the low inter-rater agreement. Dataset Application This dataset is intended to contribute to the development of a partisan news detector. There are several ways in which the dataset can be used to build such a system. For example, it is possible to train the detector using publisher-level labels and test it with article-level labels. It is also possible to use semi-supervised learning and treat the publisher-level part as unsupervised, or to use only the article-level part. We also released the raw survey data so that new mechanisms for deriving the article-level labels can be devised. Acknowledgements We would like to thank Johannes Kiesel and the colleagues from Factmata for providing us with the annotation questions they used when creating a hyperpartisan news dataset. We would also like to thank Jaron Harambam and Judith Möller for helping us ask the right questions for our annotations, and Nava Tintarev for sharing her insights in the domain. We list the questions we asked in the partisanship annotation survey, in the original Dutch language and in English translation.
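To make the aggregation procedure described in the annotation section above concrete, here is a minimal sketch of the reliability scoring and the article-level label derivation. The function and variable names are hypothetical, this is not the authors' code, and counting "impossible to decide" answers toward the 3-annotation minimum is an assumption.

```python
from collections import Counter

def reliability_score(n_clean_texts: int, n_rated: int) -> float:
    # r_i = (c_i + 1) / a_i, as reconstructed above: an annotator with no
    # clean texts scores lower the more articles he or she rated.
    return (n_clean_texts + 1) / n_rated

def derive_article_label(ratings, min_annotations=3):
    """ratings: list of Q1 answers, 1..5 or 'impossible to decide'."""
    if len(ratings) < min_annotations:
        return None                        # too few annotations
    undecidable = sum(r == "impossible to decide" for r in ratings)
    if undecidable * 2 >= len(ratings):
        return None                        # at least half 'impossible to decide'
    # Map 1-2 -> non-partisan, 3-5 -> partisan, then take a majority vote.
    mapped = ["partisan" if r >= 3 else "non-partisan"
              for r in ratings if r != "impossible to decide"]
    (top, n_top), *rest = Counter(mapped).most_common()
    if rest and rest[0][1] == n_top:
        return None                        # no majority -> discard
    return top

print(reliability_score(0, 1), reliability_score(0, 10))          # 1.0 vs 0.1
print(derive_article_label([4, 3, "impossible to decide", 2]))    # 'partisan'
```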
Yes
d3341eefe4188ee8a68914a2e8c9047334997e84
d3341eefe4188ee8a68914a2e8c9047334997e84_0
Q: Why they conclude that the usage of Gated-Attention provides no competitive advantage against concatenation in this setting? Text: Introduction Task-oriented language grounding refers to the process of extracting semantically meaningful representations of language by mapping it to visual elements and actions in the environment in order to perform the task specified by the instruction BIBREF0. Recent works in this paradigm focus on wide spectrum of natural language instructions' semantics: different characteristics of referents (colors, relative positions, relative sizes), multiple tasks, and multiple sub-goals BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. In this work, we are interested in the language input with the semantics of multiple sub-goals, focusing on the order of execution, as the natural language contains elements that may lead to the non-linear order of execution (e.g. “Take the garbage out, but first wash the dishes“). We refer to this kind of elements as non-linear order connectors in the setting of task-oriented language grounding. In particular, we want to answer: what is the performance of general deep reinforcement learning algorithms with this kind of language input? Can it successfully learn all of the training instructions? Can it generalize to an unseen number of sub-goals? To answer these questions, we generate an instruction's language for a modified GridWorld environment, where the agent needs to visit items specified in a given instruction. The language is built around three order connectors: one linear – “comma“, and two non-linear – “but first“, and “but before“, producing instructions like “Go to the red, go to the blue, but first go to the green“. In contrast to BIBREF6, where the sub-goals are separated in advance and the order is known, in this work, we specifically aim to study whether the agent can learn to determine the order of execution based on the connectors present in the language input. We apply one of the offline deep reinforcement learning baselines – Dueling DQN and examine the impact of several extensions such as Gated-Attention architecture BIBREF0 and Prioritized Experience Replay BIBREF7. First, we discover that training for both non-linear and linear order connectors at the same time improves the performance on the latter when compared to training using only linear ones. Second, we observe that the generalization to an unseen number of sub-goals using general methods of deep reinforcement learning is possible but still very limited for both linear and non-linear connectors. Third, we find that there is no advantage of using gated-attention against simple concatenation in this setting. And fourth, we observe that the usage of prioritized experience replay in this setup may be enough to achieve the training performance near to perfect. Problem Formulation ::: Environment The target environment is the modified version of GridWorld. Figure FIGREF3 illustrates one of the possible layouts. The environment is episodic and bounded by a maximum number of steps which can be restarted indefinitely. The goal of the agent is to visit every object in the right order. For example, in Figure FIGREF3, if the provided linguistic instruction is “Go to the blue object, but first go to the green object“ then the agent must visit the green object, and then move to the blue object. If the agent violates the order of visitation or arrives at the wrong object, the episode terminates. 
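A minimal sketch of this visit-order rule follows (hypothetical code, not the authors' environment implementation); it assumes the instruction has already been resolved into an expected visit order, which the Language subsection below describes.

```python
def check_visit(expected_order, visited_so_far, new_object):
    """expected_order: e.g. ['green', 'blue'];
    visited_so_far: objects already visited, in order;
    new_object: the object the agent just reached.
    Returns 'fail', 'success', or 'continue'."""
    next_target = expected_order[len(visited_so_far)]
    if new_object != next_target:
        return "fail"                      # wrong object or wrong order
    if len(visited_so_far) + 1 == len(expected_order):
        return "success"                   # all sub-goals visited
    return "continue"

# "Go to the blue object, but first go to the green object" -> ['green', 'blue']
print(check_visit(["green", "blue"], [], "green"))        # continue
print(check_visit(["green", "blue"], ["green"], "blue"))  # success
print(check_visit(["green", "blue"], [], "blue"))         # fail
```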
Problem Formulation ::: Language The semantics of interest are the presence of multiple sub-goals and the order of their execution. To capture them, we generate an instruction's language, where every instruction describes what objects (referents) the agent should visit and in what order. The generated language operates on only one task – “Go to the ...“, and three referents: red, blue, and green. An instruction may contain multiple sub-goals, and these sub-goals are connected in special ways. In the setting of task-oriented language grounding, we denote the connections between sub-goals that define the order of execution as the order connectors. We distinguish between linear and non-linear order connectors. The former refers to the connectors that preserve the order as they appear in the language input, e.g. “Go to the red, go to the blue“, “comma“ is a linear connector, as the order of execution is the same as the sub-goals ordered in the input. The non-linear connectors may change the order of execution in any way, e.g. “Go to the red, go to the green, but first go to the blue“, “but first“ is a non-linear connector as the last sub-goal in the language input should be executed the first. The generated language contains three order connectors: one linear – “comma“, and two non-linear – “but first“, “but before“. The connector “but before“ swaps the order of two consecutive sub-goals, e.g. “Go to the red, go to the green, but before go to the blue“ resolves to [red, blue, green]. Figure FIGREF5 depicts how the language is generated on the level of order connectors. If “but before“ or “but first“ connectors are present, we restrict them to be the last order connector in the instruction to avoid ambiguity in the language. The generated language excludes all the instructions that resolve to the visitation of the same item two or more times in a row (e.g “Go to the red, go to the blue, but first go to the red“ is not allowed). Problem Formulation ::: Evaluation The evaluation is built around instructions' allocation in order to assess the effect of different order connectors. We extract three subsets of the language: Comma, Comma-ButFirst, Comma-ButBefore. The first subset is designated to assess the performance on the language input with linear order connectors only. The goal of the two last subsets is to measure the performance in the presence of non-linear connectors of different type: absolute position change (Comma-ButFirst) and the relative position change (Comma-ButBefore). For every subset, we make two splits, one – for the evaluation of training performance, the other – for the evaluation of generalization to a higher number of sub-goals. The splits are obtained as depicted in Figure FIGREF9. We limit every subset to include instructions with six sub-goals at max. Then we split it into the training and testing parts. The training part contains all the instructions bounded by 3 sub-goals, and the testing part contains the rest. Table TABREF8 describes what order connectors are present in every subset, and how many instructions are in the training and testing splits. The Comma is a subset of both Comma-ButFirst and Comma-ButBefore. To measure the training performance, we vary the proportion of training instructions from 0.1 to 0.9 with a step of 0.1. The performance on the training instructions is quantified as the success rate on these instructions. To measure the testing performance, we train the models on the biggest proportion of the training instructions. 
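The connector semantics described above can be made concrete with a small parsing sketch (hypothetical code, not the authors' generator). It assumes every sub-goal has the surface form "go to the <color>" and that a non-linear connector, if present, is the last connector in the instruction, as in the generated language; the general placement rule used for "but before" (inserting the extra sub-goal just before the final listed one) is an assumption consistent with the example above.

```python
import re

def resolve_order(instruction: str):
    """Map an instruction to the ordered list of referents to visit."""
    text = instruction.lower().rstrip(".")
    nonlinear = None
    for connector in (", but first ", ", but before "):
        if connector in text:
            text, tail = text.split(connector, 1)
            nonlinear = (connector.strip(" ,"), tail)
            break
    # Linear part: sub-goals separated by commas, executed in the order given.
    order = re.findall(r"go to the (\w+)", text)
    if nonlinear is None:
        return order
    kind, tail = nonlinear
    referent = re.findall(r"go to the (\w+)", tail)[0]
    if kind == "but first":
        return [referent] + order          # the last sub-goal is executed first
    # "but before": the extra sub-goal is executed just before the last
    # listed one, i.e. the last two surface sub-goals are swapped.
    return order[:-1] + [referent] + order[-1:]

print(resolve_order("Go to the red, go to the blue, but first go to the green"))
# ['green', 'red', 'blue']
print(resolve_order("Go to the red, go to the green, but before go to the blue"))
# ['red', 'blue', 'green']
```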
As for the environment's layout, we randomly sampled only one instance and used it in both the training and testing scenarios for all of the algorithms to reduce the computing time, as layout generalization is not within the scope of this study. Methods ::: Algorithms We focus on one of the offline reinforcement learning baseline algorithms – single-actor Dueling DQN (DDQN) BIBREF8. In particular, we vary two components: the network's architecture and the form of the experience replay. For the latter, we examine Prioritized Experience Replay BIBREF7, which is hypothesized to provide a form of implicit curriculum. For the former, we experiment with Gated-Attention (GA) and Concatenation (Cat) architectures BIBREF0. GA was shown to be advantageous for language input in the VizDOOM environment BIBREF9, BIBREF0; however, since the language input in BIBREF0 primarily concentrated on different object attributes, we are interested in whether this mechanism has a positive impact on instructions with multiple sub-goals as well. The network architecture is as in BIBREF0, but instead of the recurrent part, we stack the 4 previous observations as in BIBREF10. The training loop is organized such that the number of the target network's updates is kept constant for all instructions. That is, instead of randomly sampling instructions, we iterate over all training instructions in a similar way to BIBREF10 and only then update the target network. Methods ::: Reward Shaping The problem is tackled under the most informative reward shaping scheme. It incorporates information on how many steps are left until the successful execution of the current sub-goal. In order to preserve the optimal policy for the original Markov Decision Process, we apply a potential-based reward transformation BIBREF11. Results and Discussion The training and testing performance of the Dueling DQN algorithm with different extensions can be found in Tables TABREF12 and TABREF13. Training on the language subsets Comma-ButFirst and Comma-ButBefore substantially improves the training and generalization performance on the Comma subset when compared to training on the Comma subset alone. This is quite an unexpected outcome, and it suggests that exposure to the non-linear order connectors may improve the performance on linear connectors. We notice that concatenation consistently outperforms the gated-attention mechanism for both training and testing instructions. We suspect that gated-attention is useful in scenarios where objects are described in terms of multiple attributes, but that it has no effect, or even a harmful one, when it comes to the order connectors. Frame stacking was enough to achieve success on the training instructions and on part of the testing instructions. The reason is not clear, but we hypothesize that it can be explained by the lack of layout variability and by the offloading mechanism BIBREF12. This requires further investigation. The usage of prioritized experience replay outperforms the simple replay buffer by a large margin – a 10% to 20% improvement in success rate. This is well established in the Atari domain BIBREF7, but not well explored in the areas of multi-task reinforcement learning or task-oriented language grounding. Conclusion In this work, we applied baseline methods of general deep reinforcement learning to the problem of task-oriented language grounding, where the language input contains linear and non-linear order connectors.
We found that even baseline models can capture the semantics of linear and non-linear order connectors on the training instructions, but this is not enough to achieve high generalization performance, even up to six sub-goals. The best results are achieved with the prioritized experience replay mechanism, which suggests its potential application to general multi-task reinforcement learning and task-oriented language grounding. Most importantly, we found that training on both linear and non-linear order connectors helps to capture the semantics of the order connectors better than training on linear connectors alone – the generalization performance increases by a factor of 2-3. These findings suggest that we should look for a model that generalizes better even in such a simple setting: for instance, by introducing recurrence BIBREF0, BIBREF13, BIBREF1 or hindsight experience replay BIBREF14. But since the training performance is shown to be near perfect, it would also be interesting to investigate the order connectors in visually rich environments and/or in the presence of other natural language instruction semantics: multiple tasks, multiple referent attributes, etc.
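To illustrate the two fusion mechanisms compared in this entry, here is a minimal PyTorch-style sketch of gated-attention versus plain concatenation for combining image features with an instruction embedding, in the spirit of BIBREF0. Layer sizes and shapes are hypothetical; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedAttentionFusion(nn.Module):
    """Hadamard-product gating of image feature maps by the instruction."""
    def __init__(self, text_dim: int, n_channels: int):
        super().__init__()
        self.gate = nn.Linear(text_dim, n_channels)

    def forward(self, image_feats, text_emb):
        # image_feats: (B, C, H, W); text_emb: (B, text_dim)
        g = torch.sigmoid(self.gate(text_emb))          # (B, C)
        g = g.unsqueeze(-1).unsqueeze(-1)               # (B, C, 1, 1)
        return image_feats * g                          # broadcast over H, W

class ConcatFusion(nn.Module):
    """Flatten the image features and concatenate the instruction embedding."""
    def forward(self, image_feats, text_emb):
        flat = image_feats.flatten(start_dim=1)         # (B, C*H*W)
        return torch.cat([flat, text_emb], dim=1)       # (B, C*H*W + text_dim)

# Toy usage with hypothetical sizes.
img = torch.randn(2, 64, 7, 7)
txt = torch.randn(2, 32)
print(GatedAttentionFusion(32, 64)(img, txt).shape)  # torch.Size([2, 64, 7, 7])
print(ConcatFusion()(img, txt).shape)                # torch.Size([2, 3168])
```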
concatenation consistently outperforms the gated-attention mechanism for both training and testing instructions
334f90bb715d8950ead1be0742d46a3b889744e7
334f90bb715d8950ead1be0742d46a3b889744e7_0
Q: What semantic features help in detecting whether a piece of text is genuine or generated? of Text: Introduction Building on a long history of language generation models that are based on statistical knowledge that people have BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, large-scale, neural network-based language models (LMs) that write paragraph-length text with the coherence of human writing have emerged BIBREF6, BIBREF7, BIBREF8. Such models have raised concerns about misuse in generating fake news, misleading reviews, and hate speech BIBREF9, BIBREF10, BIBREF8, BIBREF11, BIBREF12. The alarming consequences of such machine-generated misinformation present an urgent need to discern fake content from genuine, as it is becoming more and more difficult for people to do so without cognitive support tools BIBREF13. Several recent studies have used supervised learning to develop classifiers for this task BIBREF8, BIBREF14, BIBREF9, BIBREF15, BIBREF16 and interpreted their properties. Here we take inspiration from our recent work on information-theoretic limits for detecting audiovisual deepfakes generated by GANs BIBREF17 to develop information-theoretic limits for detecting the outputs of language models. In particular, we build on the information-theoretic study of authentication BIBREF18 to use a formal hypothesis testing framework for detecting the outputs of language models. In establishing fundamental limits of detection, we consider two settings. First, we characterize the error exponent for a particular language model in terms of standard performance metrics such as cross-entropy and perplexity. As far as we know, these informational performance metrics had not previously emerged from a formal operational theorem. Second, we consider not just a setting with a specific language model with given performance metrics, but rather consider a universal setting where we take a generic view of language models as empirical maximum likelihood $k$-order Markov approximations of stationary, ergodic random processes. Results on estimation of such random processes are revisited in the context of the error probability, using a conjectured extension of the reverse Pinsker inequality. In closing, we discuss how the semantics of generated text may be a form of side information in detection. Problem Formulation and Basics ::: Language Models and their Performance Metrics Consider a language $L$ like English, which has tokens drawn from a finite alphabet $\mathcal {A}$; tokens can be letters, words, or other such symbols. A language model assigns probabilities to sequences of tokens $(a_1,a_2,\ldots ,a_m)$ so the more likely a sequence is in $L$, the greater its probability. Language models discussed in Sec. SECREF1 estimate this probability $Q$ as a product of each token's probability $q$ given its preceding tokens: a Markov structure. Some language models including BERT BIBREF19 consider tokens on both sides in assessing probabilities (cloze task) and are Markov random field models rather than Markov language models BIBREF20. Using training data, a language model aims to learn a distribution $Q$ that is close to the empirical distribution $P$ of the language $L$. Basic language models can be extended to be conditional language models so as to allow control of style. In certain models, such as CTRL, one can generate text only from a conditioning variable without the need for an initial prompt BIBREF7. 
An example of text generated from the conditioning text: https://www.cnn.com/2007/09/20/us-president-meets-british-pm is the following: LONDON, England (CNN) – U.S. President George W. Bush met with British Prime Minister Tony Blair on Monday to discuss the war in Iraq, according to a statement from Blair's office.\n\nThe meeting was held at 10 Downing Street and lasted about an hour.\n\n"The two leaders discussed Iraq and other international issues of mutual concern," said Blair.\n\nBush, who is scheduled to meet Wednesday with Russian President Vladimir Putin, will also visit Germany for talks later this week.\n\nIn his statement, Blair said, "We agreed that we should continue our efforts together to bring peace and stability to Iraq. We both reaffirmed our commitment to working closely together, as well as to continuing to work constructively toward achieving lasting security and prosperity throughout the Middle East region."\n\nBush's trip comes after he visited Britain last week where he spoke out against terrorism while visiting Buckingham Palace.\n\nHe has been criticized by some lawmakers over what they say are insufficient military resources being devoted to fighting terrorism. Notwithstanding their limitations BIBREF21, BIBREF22, the standard performance metrics used for assessing language models are the cross-entropy and the perplexity, which quantify how close $Q$ is to $P$. As far as we know, these performance measures have been proposed through the intuitive notion that small values of these quantities seem to correspond, empirically, to higher-quality generated text as judged by people. Within the common task framework BIBREF10, there are leaderboards that assess the perplexity of language models over standard datasets such as WikiText-103 BIBREF23. The cross-entropy of $Q$ with respect to $P$ is defined as: which simplifies, using standard information-theoretic identities, to: where $H(\cdot )$ with one argument is the Shannon entropy and $D_{\mathrm {KL}}( \cdot || \cdot )$ is the Kullback-Leibler divergence (relative entropy). For a given language $L$ being modeled, the first term $H(P)$ can be thought of as fixed BIBREF24. The second term $D_{\mathrm {KL}}(P || Q)$ can be interpreted as the excess information rate needed to represent a language using a mismatched probability distribution BIBREF25. Perplexity is also a measure of uncertainty in predicting the next letter and is simply defined as: when entropies are measured in nats, rather than bits. For a given language, we can consider the ratio of perplexity values or the difference of cross-entropy values of two models $Q_1$ and $Q_2$ as a language-independent notion of performance gap: Problem Formulation and Basics ::: Hypothesis Test and General Error Bounds Recall that the distribution of authentic text is denoted $P$ and the distribution of text generated by the language model is $Q$. Suppose we have access to $n$ tokens of generated text from the language model, which we call $Y_1, Y_2, Y_3, \ldots , Y_n$. We can then formalize a hypothesis test as: If we assume the observed tokens are i.i.d., that only makes the hypothesis test easier than the non-i.i.d. case seen in realistic text samples, and therefore its performance acts as a bound. There are general characterizations of error probability of hypothesis tests as follows BIBREF26. 
For the Neyman-Pearson formulation of fixing the false alarm probability at $\epsilon $ and maximizing the true detection probability, it is known that the error probability satisfies: for $n$ i.i.d. samples, where $\stackrel{.}{=}$ indicates exponential equality. Thus the error exponent is just the divergence $D_{\mathrm {KL}}(P || Q))$. For more general settings (including ergodic settings), the error exponent is given by the asymptotic Kullback-Leibler divergence rate, defined as the almost-sure limit of: if the limit exists, where $P_n$ and $Q_n$ are the null and alternate joint densities of $(Y_1,\ldots ,Y_n)$, respectively, see further details in BIBREF27, BIBREF28. When considering Bayesian error rather than Neyman-Pearson error, for i.i.d. samples, we have the following upper bound: where $C(\cdot ,\cdot )$ is Chernoff information. Here we will focus on the Neyman-Pearson formulation rather than the Bayesian one. Limits Theorems With the preparation of Sec. SECREF3, we can now establish statistical limits for detection of LM-generated texts. We first consider a given language model, and then introduce a generic model of language models. Limits Theorems ::: Given Language Model Suppose we are given a specific language model such as GPT-2 BIBREF6, GROVER BIBREF8, or CTRL BIBREF7, and it is characterized in terms of estimates of either cross-entropy $H(P,Q)$ or perplexity $\mathrm {PPL}(P,Q)$. We can see directly that the Neyman-Pearson error of detection in the case of i.i.d. tokens is: and similar results hold for ergodic observations. Since we think of $H(P)$ as a constant, we observe that the error exponent for the decision problem is precisely an affine shift of the cross-entropy. Outputs from models that are better in the sense of cross-entropy or perplexity are harder to distinguish from authentic text. Thus we see that intuitive measures of generative text quality match a formal operational measure of indistinguishability that comes from the hypothesis testing limit. Limits Theorems ::: Optimal Language Model Now rather than considering a particular language model, we consider bounding the error probability in detection of the outputs of an empirical maximum likelihood (ML) language model. We specifically consider the empirical ML model among the class of models that are $k$-order Markov approximations of language $L$, which is simply the empirical plug-in estimate. Manning and Schütze argue that, even though not quite correct, language text can be modeled as stationary, ergodic random processes BIBREF29, an assumption that we follow. Moreover, given the diversity of language production, we assume this stationary ergodic random process with finite alphabet $\mathcal {A}$ denoted $X = \lbrace X_i, -\infty < i < \infty \rbrace $ is non-null in the sense that always $P(x_{-m}^{-1}) > 0$ and This is sometimes called the smoothing requirement. We further introduce an additional property of random processes that we assume for language $L$. We define the continuity rate of the process $X$ as: We further let $\gamma = \sum _{k=1}^{\infty } \gamma (k)$, and If $\gamma < \infty $, then the process has summable continuity rate. These specific technical notions of smoothing and continuity are taken from the literature on estimation of stationary, ergodic random processes BIBREF30. 
As such, the hypothesis test we aim to consider here is between a non-null, stationary, ergodic process with summable continuity rate (genuine language) and its empirical $k$-order Markov approximation based on training data (language model output). We think of the setting where the language model is trained on data with many tokens, a sequence of very long length $m$. For example, the CTRL language model was trained using 140 GB of text BIBREF7. We think of the Markov order $k$ as a large value and so the family of empirical $k$-order Markov approximations encompasses the class of neural language models like GPT-2 and CTRL, which are a fortiori Markov in structure. Empirical perplexity comparisons show that LSTM and similar neural language models have Markov order as small as $k = 13$ BIBREF31. The appropriate Markov order for large-scale neural language models has not been investigated empirically, but is thought to scale with the neural network size. Now we aim to bound the error exponent in hypothesis testing, by first drawing on a bound for the Ornstein $\bar{d}$-distance between a stationary, ergodic process and its Markov approximation, due to Csiszar and Talata BIBREF30. Then we aim to relate the Ornstein $\bar{d}$-distance to the Kullback-Leibler divergence (from error exponent expressions), using a generalization of the so-called reverse Pinsker inequality BIBREF32, BIBREF33. Before proceeding, let us formalize a few measures. Let the per-letter Hamming distance between two strings $x_1^m$ and $y_1^m$ be $d_m(x_1^m,y_1^m)$. Then the Ornstein $\bar{d}$-distance between two random sequences $X_1^m$ and $Y_1^m$ with distributions $P_X$ and $P_Y$ is defined as: where the minimization is over all joint distributions whose marginals equal $P_X$ and $P_Y$. Let $N_m(a_1^k)$ be the number of occurrences of the string $a_1^k$ in the sample $X_1^m$. Then the empirical $k$-order Markov approximation of a random process $X$ based on the sample $X_1^m$ is the stationary Markov chain of order $k$ whose transition probabilities are the following empirical conditional probabilities: We refer to this empirical approximation as $\hat{X}[k]_1^m$. Although they give more refined finitary versions, let us restate Csiszár and Talata's asymptotic result on estimating Markov approximations of stationary, ergodic processes from data. The asymptotics are in the size of the training set, $m \rightarrow \infty $, and we let the Markov order scale logarithmically with $m$. Theorem 1 (BIBREF30) Let $X$ be a non-null stationary ergodic process with summable continuity rate. Then for any $\nu > 0$, the empirical $(\nu \log m)$-order Markov approximation $\hat{X}$ satisfies: eventually almost surely as $m\rightarrow \infty $ if $\nu < \tfrac{\mu }{|\log p_m|}$. Now we consider Kullback-Leibler divergence. Just as Marton had extended Pinsker's inequality between variational distance and Kullback-Leibler divergence to an inequality between Ornstein's $\bar{d}$-distance and Kullback-Leibler divergence BIBREF34, BIBREF35 as given in Theorem UNKREF7 below, is it possible to make a similar conversion for the reverse Pinsker inequality when there is a common finite alphabet $\mathcal {A}$? Theorem 2 (BIBREF35) Let $X$ be a stationary random process from a discrete alphabet $\mathcal {A}$. Then for any other random process $Y$ defined on the same alphabet $\mathcal {A}$, for a computable constant $u$. 
We conjecture that one can indeed convert the reverse Pinsker inequality BIBREF32: for two probability distributions $P$ and $Q$ defined on a common finite alphabet $\mathcal {A}$, where $Q_{\min } = \min _{a\in \mathcal {A}} Q(a)$. That is, we make the following conjecture. Conjecture 1 Let $X$ be a stationary random process from a finite alphabet $\mathcal {A}$. Then for any other random process $Y$ defined on the same alphabet $\mathcal {A}$, for some constant $\tilde{K}$. If this generalized reverse Pinsker inequality holds, it implies the following further bound on the Kullback-Leibler divergence and therefore the error exponent of the detection problem for the empirical maximum likelihood Markov language model. Conjecture 2 Let $X$ be a non-null stationary ergodic process with summable continuity rate defined on the finite alphabet $\mathcal {A}$. Then for any $\nu > 0$, the empirical $(\nu \log m)$-order Markov approximation $\hat{X}$ satisfies: eventually almost surely as $m\rightarrow \infty $ if $\nu < \tfrac{\mu }{|\log p_m|}$, for some constant $\hat{K}$. Under the conjecture, we have a precise asymptotic characterization of the error exponent in deciding between genuine text and text generated from the empirical maximum likelihood language model, expressed in terms of basic parameters of the language, and of the training data set. Discussion Motivated by the problem of detecting machine-generated misinformation text that may have deleterious societal consequences, we have developed a formal hypothesis testing framework and established limits on the error exponents. For the case of specific language models such as GPT-2 or CTRL, we provide a precise operational interpretation for the perplexity and cross-entropy. For any future large-scale language model, we also conjecture a precise upper bound on the error exponent. It has been said that “in AI circles, identifying fake media has long received less attention, funding and institutional backing than creating it: Why sniff out other people’s fantasy creations when you can design your own? `There's no money to be made out of detecting these things,' [Nasir] Memon said” BIBREF36. Here we have tried to demonstrate that there are, at least, interesting research questions on the detection side, which may also inform practice. As we had considered previously in the context of deepfake images BIBREF17, it is also of interest to understand how error probability in detection parameterizes the dynamics of information spreading processes in social networks, e.g. in determining epidemic thresholds. Many practical fake news detection algorithms use a kind of semantic side information, such as whether the generated text is factually correct, in addition to its statistical properties. Although statistical side information would be straightforward to incorporate in the hypothesis testing framework, it remains to understand how to cast such semantic knowledge in a statistical decision theory framework. Acknowledgment Discussions with Bryan McCann, Kathy Baxter, and Miles Brundage are appreciated.
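As a small numerical companion to the metrics discussed above, the standard definitions $H(P,Q) = -\sum_a P(a) \log Q(a) = H(P) + D_{\mathrm{KL}}(P \Vert Q)$ and $\mathrm{PPL}(P,Q) = e^{H(P,Q)}$ (in nats) can be evaluated directly for toy distributions; the sketch below (illustrative values, not from the paper) also reports $D_{\mathrm{KL}}(P \Vert Q)$, which plays the role of the error exponent in the i.i.d. setting.

```python
import numpy as np

def cross_entropy(p, q):
    """H(P, Q) = -sum_a P(a) log Q(a), in nats."""
    return -np.sum(p * np.log(q))

def entropy(p):
    return -np.sum(p * np.log(p))

def kl_divergence(p, q):
    """D_KL(P || Q) = H(P, Q) - H(P); the i.i.d. error exponent."""
    return cross_entropy(p, q) - entropy(p)

def perplexity(p, q):
    """PPL(P, Q) = exp(H(P, Q)) when entropies are measured in nats."""
    return np.exp(cross_entropy(p, q))

# Toy 'language' P and two 'language models' Q over a 4-symbol alphabet.
P = np.array([0.4, 0.3, 0.2, 0.1])
Q_good = np.array([0.38, 0.32, 0.19, 0.11])   # close to P
Q_poor = np.array([0.25, 0.25, 0.25, 0.25])   # uniform, further from P

for name, Q in (("good", Q_good), ("poor", Q_poor)):
    print(f"{name}: PPL = {perplexity(P, Q):.3f}, "
          f"error exponent D(P||Q) = {kl_divergence(P, Q):.4f}")
# The better model (lower perplexity) has the smaller error exponent,
# i.e. its output is harder to distinguish from genuine text.
```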
No feature is given; there is only a discussion that semantic features are used in practice and that it is yet to be discovered how to embed that knowledge into a statistical decision theory framework.
53c8416f2983e07a7fa33bcb4c4281bbf49c8164
53c8416f2983e07a7fa33bcb4c4281bbf49c8164_0
Q: Which language models generate text that can be easier to classify as genuine or generated? Text: Introduction Building on a long history of language generation models that are based on statistical knowledge that people have BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, large-scale, neural network-based language models (LMs) that write paragraph-length text with the coherence of human writing have emerged BIBREF6, BIBREF7, BIBREF8. Such models have raised concerns about misuse in generating fake news, misleading reviews, and hate speech BIBREF9, BIBREF10, BIBREF8, BIBREF11, BIBREF12. The alarming consequences of such machine-generated misinformation present an urgent need to discern fake content from genuine, as it is becoming more and more difficult for people to do so without cognitive support tools BIBREF13. Several recent studies have used supervised learning to develop classifiers for this task BIBREF8, BIBREF14, BIBREF9, BIBREF15, BIBREF16 and interpreted their properties. Here we take inspiration from our recent work on information-theoretic limits for detecting audiovisual deepfakes generated by GANs BIBREF17 to develop information-theoretic limits for detecting the outputs of language models. In particular, we build on the information-theoretic study of authentication BIBREF18 to use a formal hypothesis testing framework for detecting the outputs of language models. In establishing fundamental limits of detection, we consider two settings. First, we characterize the error exponent for a particular language model in terms of standard performance metrics such as cross-entropy and perplexity. As far as we know, these informational performance metrics had not previously emerged from a formal operational theorem. Second, we consider not just a setting with a specific language model with given performance metrics, but rather consider a universal setting where we take a generic view of language models as empirical maximum likelihood $k$-order Markov approximations of stationary, ergodic random processes. Results on estimation of such random processes are revisited in the context of the error probability, using a conjectured extension of the reverse Pinsker inequality. In closing, we discuss how the semantics of generated text may be a form of side information in detection. Problem Formulation and Basics ::: Language Models and their Performance Metrics Consider a language $L$ like English, which has tokens drawn from a finite alphabet $\mathcal {A}$; tokens can be letters, words, or other such symbols. A language model assigns probabilities to sequences of tokens $(a_1,a_2,\ldots ,a_m)$ so the more likely a sequence is in $L$, the greater its probability. Language models discussed in Sec. SECREF1 estimate this probability $Q$ as a product of each token's probability $q$ given its preceding tokens: a Markov structure. Some language models including BERT BIBREF19 consider tokens on both sides in assessing probabilities (cloze task) and are Markov random field models rather than Markov language models BIBREF20. Using training data, a language model aims to learn a distribution $Q$ that is close to the empirical distribution $P$ of the language $L$. Basic language models can be extended to be conditional language models so as to allow control of style. In certain models, such as CTRL, one can generate text only from a conditioning variable without the need for an initial prompt BIBREF7. 
An example of text generated from the conditioning text: https://www.cnn.com/2007/09/20/us-president-meets-british-pm is the following: LONDON, England (CNN) – U.S. President George W. Bush met with British Prime Minister Tony Blair on Monday to discuss the war in Iraq, according to a statement from Blair's office.\n\nThe meeting was held at 10 Downing Street and lasted about an hour.\n\n"The two leaders discussed Iraq and other international issues of mutual concern," said Blair.\n\nBush, who is scheduled to meet Wednesday with Russian President Vladimir Putin, will also visit Germany for talks later this week.\n\nIn his statement, Blair said, "We agreed that we should continue our efforts together to bring peace and stability to Iraq. We both reaffirmed our commitment to working closely together, as well as to continuing to work constructively toward achieving lasting security and prosperity throughout the Middle East region."\n\nBush's trip comes after he visited Britain last week where he spoke out against terrorism while visiting Buckingham Palace.\n\nHe has been criticized by some lawmakers over what they say are insufficient military resources being devoted to fighting terrorism. Notwithstanding their limitations BIBREF21, BIBREF22, the standard performance metrics used for assessing language models are the cross-entropy and the perplexity, which quantify how close $Q$ is to $P$. As far as we know, these performance measures have been proposed through the intuitive notion that small values of these quantities seem to correspond, empirically, to higher-quality generated text as judged by people. Within the common task framework BIBREF10, there are leaderboards that assess the perplexity of language models over standard datasets such as WikiText-103 BIBREF23. The cross-entropy of $Q$ with respect to $P$ is defined as: which simplifies, using standard information-theoretic identities, to: where $H(\cdot )$ with one argument is the Shannon entropy and $D_{\mathrm {KL}}( \cdot || \cdot )$ is the Kullback-Leibler divergence (relative entropy). For a given language $L$ being modeled, the first term $H(P)$ can be thought of as fixed BIBREF24. The second term $D_{\mathrm {KL}}(P || Q)$ can be interpreted as the excess information rate needed to represent a language using a mismatched probability distribution BIBREF25. Perplexity is also a measure of uncertainty in predicting the next letter and is simply defined as: when entropies are measured in nats, rather than bits. For a given language, we can consider the ratio of perplexity values or the difference of cross-entropy values of two models $Q_1$ and $Q_2$ as a language-independent notion of performance gap: Problem Formulation and Basics ::: Hypothesis Test and General Error Bounds Recall that the distribution of authentic text is denoted $P$ and the distribution of text generated by the language model is $Q$. Suppose we have access to $n$ tokens of generated text from the language model, which we call $Y_1, Y_2, Y_3, \ldots , Y_n$. We can then formalize a hypothesis test as: If we assume the observed tokens are i.i.d., that only makes the hypothesis test easier than the non-i.i.d. case seen in realistic text samples, and therefore its performance acts as a bound. There are general characterizations of error probability of hypothesis tests as follows BIBREF26. 
For the Neyman-Pearson formulation of fixing the false alarm probability at $\epsilon $ and maximizing the true detection probability, it is known that the error probability satisfies: for $n$ i.i.d. samples, where $\stackrel{.}{=}$ indicates exponential equality. Thus the error exponent is just the divergence $D_{\mathrm {KL}}(P || Q))$. For more general settings (including ergodic settings), the error exponent is given by the asymptotic Kullback-Leibler divergence rate, defined as the almost-sure limit of: if the limit exists, where $P_n$ and $Q_n$ are the null and alternate joint densities of $(Y_1,\ldots ,Y_n)$, respectively, see further details in BIBREF27, BIBREF28. When considering Bayesian error rather than Neyman-Pearson error, for i.i.d. samples, we have the following upper bound: where $C(\cdot ,\cdot )$ is Chernoff information. Here we will focus on the Neyman-Pearson formulation rather than the Bayesian one. Limits Theorems With the preparation of Sec. SECREF3, we can now establish statistical limits for detection of LM-generated texts. We first consider a given language model, and then introduce a generic model of language models. Limits Theorems ::: Given Language Model Suppose we are given a specific language model such as GPT-2 BIBREF6, GROVER BIBREF8, or CTRL BIBREF7, and it is characterized in terms of estimates of either cross-entropy $H(P,Q)$ or perplexity $\mathrm {PPL}(P,Q)$. We can see directly that the Neyman-Pearson error of detection in the case of i.i.d. tokens is: and similar results hold for ergodic observations. Since we think of $H(P)$ as a constant, we observe that the error exponent for the decision problem is precisely an affine shift of the cross-entropy. Outputs from models that are better in the sense of cross-entropy or perplexity are harder to distinguish from authentic text. Thus we see that intuitive measures of generative text quality match a formal operational measure of indistinguishability that comes from the hypothesis testing limit. Limits Theorems ::: Optimal Language Model Now rather than considering a particular language model, we consider bounding the error probability in detection of the outputs of an empirical maximum likelihood (ML) language model. We specifically consider the empirical ML model among the class of models that are $k$-order Markov approximations of language $L$, which is simply the empirical plug-in estimate. Manning and Schütze argue that, even though not quite correct, language text can be modeled as stationary, ergodic random processes BIBREF29, an assumption that we follow. Moreover, given the diversity of language production, we assume this stationary ergodic random process with finite alphabet $\mathcal {A}$ denoted $X = \lbrace X_i, -\infty < i < \infty \rbrace $ is non-null in the sense that always $P(x_{-m}^{-1}) > 0$ and This is sometimes called the smoothing requirement. We further introduce an additional property of random processes that we assume for language $L$. We define the continuity rate of the process $X$ as: We further let $\gamma = \sum _{k=1}^{\infty } \gamma (k)$, and If $\gamma < \infty $, then the process has summable continuity rate. These specific technical notions of smoothing and continuity are taken from the literature on estimation of stationary, ergodic random processes BIBREF30. 
As such, the hypothesis test we aim to consider here is between a non-null, stationary, ergodic process with summable continuity rate (genuine language) and its empirical $k$-order Markov approximation based on training data (language model output). We think of the setting where the language model is trained on data with many tokens, a sequence of very long length $m$. For example, the CTRL language model was trained using 140 GB of text BIBREF7. We think of the Markov order $k$ as a large value and so the family of empirical $k$-order Markov approximations encompasses the class of neural language models like GPT-2 and CTRL, which are a fortiori Markov in structure. Empirical perplexity comparisons show that LSTM and similar neural language models have Markov order as small as $k = 13$ BIBREF31. The appropriate Markov order for large-scale neural language models has not been investigated empirically, but is thought to scale with the neural network size. Now we aim to bound the error exponent in hypothesis testing, by first drawing on a bound for the Ornstein $\bar{d}$-distance between a stationary, ergodic process and its Markov approximation, due to Csiszar and Talata BIBREF30. Then we aim to relate the Ornstein $\bar{d}$-distance to the Kullback-Leibler divergence (from error exponent expressions), using a generalization of the so-called reverse Pinsker inequality BIBREF32, BIBREF33. Before proceeding, let us formalize a few measures. Let the per-letter Hamming distance between two strings $x_1^m$ and $y_1^m$ be $d_m(x_1^m,y_1^m)$. Then the Ornstein $\bar{d}$-distance between two random sequences $X_1^m$ and $Y_1^m$ with distributions $P_X$ and $P_Y$ is defined as: where the minimization is over all joint distributions whose marginals equal $P_X$ and $P_Y$. Let $N_m(a_1^k)$ be the number of occurrences of the string $a_1^k$ in the sample $X_1^m$. Then the empirical $k$-order Markov approximation of a random process $X$ based on the sample $X_1^m$ is the stationary Markov chain of order $k$ whose transition probabilities are the following empirical conditional probabilities: We refer to this empirical approximation as $\hat{X}[k]_1^m$. Although they give more refined finitary versions, let us restate Csiszár and Talata's asymptotic result on estimating Markov approximations of stationary, ergodic processes from data. The asymptotics are in the size of the training set, $m \rightarrow \infty $, and we let the Markov order scale logarithmically with $m$. Theorem 1 (BIBREF30) Let $X$ be a non-null stationary ergodic process with summable continuity rate. Then for any $\nu > 0$, the empirical $(\nu \log m)$-order Markov approximation $\hat{X}$ satisfies: eventually almost surely as $m\rightarrow \infty $ if $\nu < \tfrac{\mu }{|\log p_m|}$. Now we consider Kullback-Leibler divergence. Just as Marton had extended Pinsker's inequality between variational distance and Kullback-Leibler divergence to an inequality between Ornstein's $\bar{d}$-distance and Kullback-Leibler divergence BIBREF34, BIBREF35 as given in Theorem UNKREF7 below, is it possible to make a similar conversion for the reverse Pinsker inequality when there is a common finite alphabet $\mathcal {A}$? Theorem 2 (BIBREF35) Let $X$ be a stationary random process from a discrete alphabet $\mathcal {A}$. Then for any other random process $Y$ defined on the same alphabet $\mathcal {A}$, for a computable constant $u$. 
We conjecture that one can indeed convert the reverse Pinsker inequality BIBREF32: for two probability distributions $P$ and $Q$ defined on a common finite alphabet $\mathcal {A}$, where $Q_{\min } = \min _{a\in \mathcal {A}} Q(a)$. That is, we make the following conjecture. Conjecture 1 Let $X$ be a stationary random process from a finite alphabet $\mathcal {A}$. Then for any other random process $Y$ defined on the same alphabet $\mathcal {A}$, for some constant $\tilde{K}$. If this generalized reverse Pinsker inequality holds, it implies the following further bound on the Kullback-Leibler divergence and therefore the error exponent of the detection problem for the empirical maximum likelihood Markov language model. Conjecture 2 Let $X$ be a non-null stationary ergodic process with summable continuity rate defined on the finite alphabet $\mathcal {A}$. Then for any $\nu > 0$, the empirical $(\nu \log m)$-order Markov approximation $\hat{X}$ satisfies: eventually almost surely as $m\rightarrow \infty $ if $\nu < \tfrac{\mu }{|\log p_m|}$, for some constant $\hat{K}$. Under the conjecture, we have a precise asymptotic characterization of the error exponent in deciding between genuine text and text generated from the empirical maximum likelihood language model, expressed in terms of basic parameters of the language, and of the training data set. Discussion Motivated by the problem of detecting machine-generated misinformation text that may have deleterious societal consequences, we have developed a formal hypothesis testing framework and established limits on the error exponents. For the case of specific language models such as GPT-2 or CTRL, we provide a precise operational interpretation for the perplexity and cross-entropy. For any future large-scale language model, we also conjecture a precise upper bound on the error exponent. It has been said that “in AI circles, identifying fake media has long received less attention, funding and institutional backing than creating it: Why sniff out other people’s fantasy creations when you can design your own? `There's no money to be made out of detecting these things,' [Nasir] Memon said” BIBREF36. Here we have tried to demonstrate that there are, at least, interesting research questions on the detection side, which may also inform practice. As we had considered previously in the context of deepfake images BIBREF17, it is also of interest to understand how error probability in detection parameterizes the dynamics of information spreading processes in social networks, e.g. in determining epidemic thresholds. Many practical fake news detection algorithms use a kind of semantic side information, such as whether the generated text is factually correct, in addition to its statistical properties. Although statistical side information would be straightforward to incorporate in the hypothesis testing framework, it remains to understand how to cast such semantic knowledge in a statistical decision theory framework. Acknowledgment Discussions with Bryan McCann, Kathy Baxter, and Miles Brundage are appreciated.
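A toy sketch of the empirical $k$-order Markov approximation described above follows (hypothetical code, not from the paper): the transition probabilities are taken to be the standard plug-in conditional frequencies estimated from the counts $N_m(\cdot)$ in the sample, which is the usual reading of the definition.

```python
import random
from collections import Counter, defaultdict

def empirical_markov(sample: str, k: int):
    """Plug-in k-order Markov approximation from a token sample.
    Returns {context: {token: P(token | context)}}."""
    context_counts = Counter()
    transition_counts = defaultdict(Counter)
    for i in range(len(sample) - k):
        context, nxt = sample[i:i + k], sample[i + k]
        context_counts[context] += 1
        transition_counts[context][nxt] += 1
    return {
        ctx: {tok: c / context_counts[ctx] for tok, c in nxts.items()}
        for ctx, nxts in transition_counts.items()
    }

def generate(model, seed: str, length: int, rng):
    """Sample from the approximation, i.e. 'language model output'."""
    out = list(seed)
    k = len(seed)                             # seed length doubles as the order
    for _ in range(length):
        dist = model.get("".join(out[-k:]))
        if dist is None:                      # unseen context: stop early
            break
        toks, probs = zip(*dist.items())
        out.append(rng.choices(toks, weights=probs)[0])
    return "".join(out)

sample = "abracadabraabracadabra"
model = empirical_markov(sample, k=2)
print(model["ab"])                            # {'r': 1.0} for this toy sample
print(generate(model, "ab", 10, random.Random(0)))
```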
Outputs from models that are better in the sense of cross-entropy or perplexity are harder to distinguish from authentic text.
5b2480c6533696271ae6d91f2abe1e3a25c4ae73
5b2480c6533696271ae6d91f2abe1e3a25c4ae73_0
Q: Is the assumption that natural language is stationary and ergodic valid? Text: Introduction Building on a long history of language generation models that are based on statistical knowledge that people have BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, large-scale, neural network-based language models (LMs) that write paragraph-length text with the coherence of human writing have emerged BIBREF6, BIBREF7, BIBREF8. Such models have raised concerns about misuse in generating fake news, misleading reviews, and hate speech BIBREF9, BIBREF10, BIBREF8, BIBREF11, BIBREF12. The alarming consequences of such machine-generated misinformation present an urgent need to discern fake content from genuine, as it is becoming more and more difficult for people to do so without cognitive support tools BIBREF13. Several recent studies have used supervised learning to develop classifiers for this task BIBREF8, BIBREF14, BIBREF9, BIBREF15, BIBREF16 and interpreted their properties. Here we take inspiration from our recent work on information-theoretic limits for detecting audiovisual deepfakes generated by GANs BIBREF17 to develop information-theoretic limits for detecting the outputs of language models. In particular, we build on the information-theoretic study of authentication BIBREF18 to use a formal hypothesis testing framework for detecting the outputs of language models. In establishing fundamental limits of detection, we consider two settings. First, we characterize the error exponent for a particular language model in terms of standard performance metrics such as cross-entropy and perplexity. As far as we know, these informational performance metrics had not previously emerged from a formal operational theorem. Second, we consider not just a setting with a specific language model with given performance metrics, but rather consider a universal setting where we take a generic view of language models as empirical maximum likelihood $k$-order Markov approximations of stationary, ergodic random processes. Results on estimation of such random processes are revisited in the context of the error probability, using a conjectured extension of the reverse Pinsker inequality. In closing, we discuss how the semantics of generated text may be a form of side information in detection. Problem Formulation and Basics ::: Language Models and their Performance Metrics Consider a language $L$ like English, which has tokens drawn from a finite alphabet $\mathcal {A}$; tokens can be letters, words, or other such symbols. A language model assigns probabilities to sequences of tokens $(a_1,a_2,\ldots ,a_m)$ so the more likely a sequence is in $L$, the greater its probability. Language models discussed in Sec. SECREF1 estimate this probability $Q$ as a product of each token's probability $q$ given its preceding tokens: a Markov structure. Some language models including BERT BIBREF19 consider tokens on both sides in assessing probabilities (cloze task) and are Markov random field models rather than Markov language models BIBREF20. Using training data, a language model aims to learn a distribution $Q$ that is close to the empirical distribution $P$ of the language $L$. Basic language models can be extended to be conditional language models so as to allow control of style. In certain models, such as CTRL, one can generate text only from a conditioning variable without the need for an initial prompt BIBREF7. 
An example of text generated from the conditioning text: https://www.cnn.com/2007/09/20/us-president-meets-british-pm is the following: LONDON, England (CNN) – U.S. President George W. Bush met with British Prime Minister Tony Blair on Monday to discuss the war in Iraq, according to a statement from Blair's office.\n\nThe meeting was held at 10 Downing Street and lasted about an hour.\n\n"The two leaders discussed Iraq and other international issues of mutual concern," said Blair.\n\nBush, who is scheduled to meet Wednesday with Russian President Vladimir Putin, will also visit Germany for talks later this week.\n\nIn his statement, Blair said, "We agreed that we should continue our efforts together to bring peace and stability to Iraq. We both reaffirmed our commitment to working closely together, as well as to continuing to work constructively toward achieving lasting security and prosperity throughout the Middle East region."\n\nBush's trip comes after he visited Britain last week where he spoke out against terrorism while visiting Buckingham Palace.\n\nHe has been criticized by some lawmakers over what they say are insufficient military resources being devoted to fighting terrorism. Notwithstanding their limitations BIBREF21, BIBREF22, the standard performance metrics used for assessing language models are the cross-entropy and the perplexity, which quantify how close $Q$ is to $P$. As far as we know, these performance measures have been proposed through the intuitive notion that small values of these quantities seem to correspond, empirically, to higher-quality generated text as judged by people. Within the common task framework BIBREF10, there are leaderboards that assess the perplexity of language models over standard datasets such as WikiText-103 BIBREF23. The cross-entropy of $Q$ with respect to $P$ is defined as: which simplifies, using standard information-theoretic identities, to: where $H(\cdot )$ with one argument is the Shannon entropy and $D_{\mathrm {KL}}( \cdot || \cdot )$ is the Kullback-Leibler divergence (relative entropy). For a given language $L$ being modeled, the first term $H(P)$ can be thought of as fixed BIBREF24. The second term $D_{\mathrm {KL}}(P || Q)$ can be interpreted as the excess information rate needed to represent a language using a mismatched probability distribution BIBREF25. Perplexity is also a measure of uncertainty in predicting the next letter and is simply defined as: when entropies are measured in nats, rather than bits. For a given language, we can consider the ratio of perplexity values or the difference of cross-entropy values of two models $Q_1$ and $Q_2$ as a language-independent notion of performance gap: Problem Formulation and Basics ::: Hypothesis Test and General Error Bounds Recall that the distribution of authentic text is denoted $P$ and the distribution of text generated by the language model is $Q$. Suppose we have access to $n$ tokens of generated text from the language model, which we call $Y_1, Y_2, Y_3, \ldots , Y_n$. We can then formalize a hypothesis test as: If we assume the observed tokens are i.i.d., that only makes the hypothesis test easier than the non-i.i.d. case seen in realistic text samples, and therefore its performance acts as a bound. There are general characterizations of error probability of hypothesis tests as follows BIBREF26. 
For the Neyman-Pearson formulation of fixing the false alarm probability at $\epsilon $ and maximizing the true detection probability, it is known that the error probability satisfies: for $n$ i.i.d. samples, where $\stackrel{.}{=}$ indicates exponential equality. Thus the error exponent is just the divergence $D_{\mathrm {KL}}(P || Q))$. For more general settings (including ergodic settings), the error exponent is given by the asymptotic Kullback-Leibler divergence rate, defined as the almost-sure limit of: if the limit exists, where $P_n$ and $Q_n$ are the null and alternate joint densities of $(Y_1,\ldots ,Y_n)$, respectively, see further details in BIBREF27, BIBREF28. When considering Bayesian error rather than Neyman-Pearson error, for i.i.d. samples, we have the following upper bound: where $C(\cdot ,\cdot )$ is Chernoff information. Here we will focus on the Neyman-Pearson formulation rather than the Bayesian one. Limits Theorems With the preparation of Sec. SECREF3, we can now establish statistical limits for detection of LM-generated texts. We first consider a given language model, and then introduce a generic model of language models. Limits Theorems ::: Given Language Model Suppose we are given a specific language model such as GPT-2 BIBREF6, GROVER BIBREF8, or CTRL BIBREF7, and it is characterized in terms of estimates of either cross-entropy $H(P,Q)$ or perplexity $\mathrm {PPL}(P,Q)$. We can see directly that the Neyman-Pearson error of detection in the case of i.i.d. tokens is: and similar results hold for ergodic observations. Since we think of $H(P)$ as a constant, we observe that the error exponent for the decision problem is precisely an affine shift of the cross-entropy. Outputs from models that are better in the sense of cross-entropy or perplexity are harder to distinguish from authentic text. Thus we see that intuitive measures of generative text quality match a formal operational measure of indistinguishability that comes from the hypothesis testing limit. Limits Theorems ::: Optimal Language Model Now rather than considering a particular language model, we consider bounding the error probability in detection of the outputs of an empirical maximum likelihood (ML) language model. We specifically consider the empirical ML model among the class of models that are $k$-order Markov approximations of language $L$, which is simply the empirical plug-in estimate. Manning and Schütze argue that, even though not quite correct, language text can be modeled as stationary, ergodic random processes BIBREF29, an assumption that we follow. Moreover, given the diversity of language production, we assume this stationary ergodic random process with finite alphabet $\mathcal {A}$ denoted $X = \lbrace X_i, -\infty < i < \infty \rbrace $ is non-null in the sense that always $P(x_{-m}^{-1}) > 0$ and This is sometimes called the smoothing requirement. We further introduce an additional property of random processes that we assume for language $L$. We define the continuity rate of the process $X$ as: We further let $\gamma = \sum _{k=1}^{\infty } \gamma (k)$, and If $\gamma < \infty $, then the process has summable continuity rate. These specific technical notions of smoothing and continuity are taken from the literature on estimation of stationary, ergodic random processes BIBREF30. 
As such, the hypothesis test we aim to consider here is between a non-null, stationary, ergodic process with summable continuity rate (genuine language) and its empirical $k$-order Markov approximation based on training data (language model output). We think of the setting where the language model is trained on data with many tokens, a sequence of very long length $m$. For example, the CTRL language model was trained using 140 GB of text BIBREF7. We think of the Markov order $k$ as a large value and so the family of empirical $k$-order Markov approximations encompasses the class of neural language models like GPT-2 and CTRL, which are a fortiori Markov in structure. Empirical perplexity comparisons show that LSTM and similar neural language models have Markov order as small as $k = 13$ BIBREF31. The appropriate Markov order for large-scale neural language models has not been investigated empirically, but is thought to scale with the neural network size. Now we aim to bound the error exponent in hypothesis testing, by first drawing on a bound for the Ornstein $\bar{d}$-distance between a stationary, ergodic process and its Markov approximation, due to Csiszar and Talata BIBREF30. Then we aim to relate the Ornstein $\bar{d}$-distance to the Kullback-Leibler divergence (from error exponent expressions), using a generalization of the so-called reverse Pinsker inequality BIBREF32, BIBREF33. Before proceeding, let us formalize a few measures. Let the per-letter Hamming distance between two strings $x_1^m$ and $y_1^m$ be $d_m(x_1^m,y_1^m)$. Then the Ornstein $\bar{d}$-distance between two random sequences $X_1^m$ and $Y_1^m$ with distributions $P_X$ and $P_Y$ is defined as: where the minimization is over all joint distributions whose marginals equal $P_X$ and $P_Y$. Let $N_m(a_1^k)$ be the number of occurrences of the string $a_1^k$ in the sample $X_1^m$. Then the empirical $k$-order Markov approximation of a random process $X$ based on the sample $X_1^m$ is the stationary Markov chain of order $k$ whose transition probabilities are the following empirical conditional probabilities: We refer to this empirical approximation as $\hat{X}[k]_1^m$. Although they give more refined finitary versions, let us restate Csiszár and Talata's asymptotic result on estimating Markov approximations of stationary, ergodic processes from data. The asymptotics are in the size of the training set, $m \rightarrow \infty $, and we let the Markov order scale logarithmically with $m$. Theorem 1 (BIBREF30) Let $X$ be a non-null stationary ergodic process with summable continuity rate. Then for any $\nu > 0$, the empirical $(\nu \log m)$-order Markov approximation $\hat{X}$ satisfies: eventually almost surely as $m\rightarrow \infty $ if $\nu < \tfrac{\mu }{|\log p_m|}$. Now we consider Kullback-Leibler divergence. Just as Marton had extended Pinsker's inequality between variational distance and Kullback-Leibler divergence to an inequality between Ornstein's $\bar{d}$-distance and Kullback-Leibler divergence BIBREF34, BIBREF35 as given in Theorem UNKREF7 below, is it possible to make a similar conversion for the reverse Pinsker inequality when there is a common finite alphabet $\mathcal {A}$? Theorem 2 (BIBREF35) Let $X$ be a stationary random process from a discrete alphabet $\mathcal {A}$. Then for any other random process $Y$ defined on the same alphabet $\mathcal {A}$, for a computable constant $u$. 
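The empirical $k$-order Markov approximation itself is straightforward to construct from a single sample; a minimal sketch of the plug-in estimate (toy corpus, no smoothing, purely illustrative) is:

```python
from collections import Counter, defaultdict

def empirical_markov_approximation(sample, k):
    """Empirical k-order Markov approximation of a process from one sample X_1^m.

    Transition probabilities are the plug-in estimates
        P_hat(a | context) = N_m(context + a) / N_m(context),
    where N_m(s) counts occurrences of the string s in the sample.
    """
    context_counts = Counter(tuple(sample[i:i + k]) for i in range(len(sample) - k))
    extended_counts = Counter(tuple(sample[i:i + k + 1]) for i in range(len(sample) - k))
    transitions = defaultdict(dict)
    for ctx_and_next, n in extended_counts.items():
        ctx, nxt = ctx_and_next[:k], ctx_and_next[k]
        transitions[ctx][nxt] = n / context_counts[ctx]
    return transitions

# Toy training "corpus" over a finite alphabet (purely illustrative).
training_sample = list("abracadabra" * 50)
model = empirical_markov_approximation(training_sample, k=2)
print(model[("a", "b")])   # {'r': 1.0}: after "ab" the sample always continues with "r"
```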
We conjecture that one can indeed convert the reverse Pinsker inequality BIBREF32: for two probability distributions $P$ and $Q$ defined on a common finite alphabet $\mathcal {A}$, where $Q_{\min } = \min _{a\in \mathcal {A}} Q(a)$. That is, we make the following conjecture. Conjecture 1 Let $X$ be a stationary random process from a finite alphabet $\mathcal {A}$. Then for any other random process $Y$ defined on the same alphabet $\mathcal {A}$, for some constant $\tilde{K}$. If this generalized reverse Pinsker inequality holds, it implies the following further bound on the Kullback-Leibler divergence and therefore the error exponent of the detection problem for the empirical maximum likelihood Markov language model. Conjecture 2 Let $X$ be a non-null stationary ergodic process with summable continuity rate defined on the finite alphabet $\mathcal {A}$. Then for any $\nu > 0$, the empirical $(\nu \log m)$-order Markov approximation $\hat{X}$ satisfies: eventually almost surely as $m\rightarrow \infty $ if $\nu < \tfrac{\mu }{|\log p_m|}$, for some constant $\hat{K}$. Under the conjecture, we have a precise asymptotic characterization of the error exponent in deciding between genuine text and text generated from the empirical maximum likelihood language model, expressed in terms of basic parameters of the language, and of the training data set. Discussion Motivated by the problem of detecting machine-generated misinformation text that may have deleterious societal consequences, we have developed a formal hypothesis testing framework and established limits on the error exponents. For the case of specific language models such as GPT-2 or CTRL, we provide a precise operational interpretation for the perplexity and cross-entropy. For any future large-scale language model, we also conjecture a precise upper bound on the error exponent. It has been said that “in AI circles, identifying fake media has long received less attention, funding and institutional backing than creating it: Why sniff out other people’s fantasy creations when you can design your own? `There's no money to be made out of detecting these things,' [Nasir] Memon said” BIBREF36. Here we have tried to demonstrate that there are, at least, interesting research questions on the detection side, which may also inform practice. As we had considered previously in the context of deepfake images BIBREF17, it is also of interest to understand how error probability in detection parameterizes the dynamics of information spreading processes in social networks, e.g. in determining epidemic thresholds. Many practical fake news detection algorithms use a kind of semantic side information, such as whether the generated text is factually correct, in addition to its statistical properties. Although statistical side information would be straightforward to incorporate in the hypothesis testing framework, it remains to understand how to cast such semantic knowledge in a statistical decision theory framework. Acknowledgment Discussions with Bryan McCann, Kathy Baxter, and Miles Brundage are appreciated.
It is not completely valid for natural languages because of the diversity of language production; this assumption is called the smoothing requirement.
Q: Which models do they try out? Text: Introduction Machine reading comprehension (MRC) is a central task in natural language understanding, with techniques lately driven by a surge of large-scale datasets BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , usually formalized as a task of answering questions given a passage. An increasing number of analyses BIBREF5 , BIBREF6 , BIBREF7 have revealed that a large portion of questions in these datasets can be answered by simply matching the patterns between the question and the answer sentence in the passage. While systems may match or even outperform humans on these datasets, our intuition suggests that there are at least some instances in human reading comprehension that require more than what existing challenge tasks are emphasizing. One primary type of question these datasets lack is one that requires reasoning over common sense or understanding across multiple sentences in the passage BIBREF2 , BIBREF3 .
To overcome this limitation, we introduce a large-scale dataset for reading comprehension, ReCoRD, which consists of over 120,000 examples, most of which require deep commonsense reasoning. ReCoRD is an acronym for the Reading Comprehension with Commonsense Reasoning Dataset. fig:example shows a ReCoRD example: the passage describes a lawsuit claiming that the band “Led Zeppelin” had plagiarized the song “Taurus” in their most iconic song, “Stairway to Heaven”. The cloze-style query asks what “Stairway to Heaven” sounds similar to. To find the correct answer, we need to understand from the passage that “a copyright infringement case alleges that `Stairway to Heaven' was taken from `Taurus'”, and from the bullet point that “these two songs are claimed similar”. Then, based on the commonsense knowledge that “if two songs are claimed similar, it is likely that (parts of) these songs sound almost identical”, we can reasonably infer that the answer is “Taurus”. Differing from most of the existing MRC datasets, all queries and passages in ReCoRD are automatically mined from news articles, which maximally reduces the human elicitation bias BIBREF8 , BIBREF9 , BIBREF10 , and the data collection method we propose is cost-efficient. Further analysis shows that a large portion of ReCoRD requires commonsense reasoning. Experiments on ReCoRD demonstrate that human readers are able to achieve a high performance at 91.69 F1, whereas state-of-the-art MRC models fall far behind at 46.65 F1. Thus, ReCoRD presents a real challenge for future research to bridge the gap between human and machine commonsense reading comprehension.
Better would be "JHU (modification of Google Brain system)", "JHU (modification of IBM Watson system)", ... . Task Motivation A program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows. – mccarthy59 Commonsense Reasoning in MRC As illustrated by the example in fig:example, the commonsense knowledge “if two songs are claimed similar, it is likely that (parts of) these songs sound almost identica” is not explicitly described in the passage, but is necessary to acquire in order to generate the answer. Human is able to infer the answer because the commonsense knowledge is commonly known by nearly all people. Our goal is to evaluate whether a machine is able to learn such knowledge. However, since commonsense knowledge is massive and mostly implicit, defining an explicit free-form evaluation is challenging BIBREF11 . Motivated by mccarthy59, we instead evaluate a machine's ability of commonsense reasoning – a reasoning process requiring commonsense knowledge; that is, if a machine has common sense, it can deduce for itself the likely consequences or details of anything it is told and what it already knows rather than the unlikely ones. To formalize it in MRC, given a passage $\mathbf {p}$ (i.e., “anything it is told” and “what it already knows”), and a set of consequences or details $\mathcal {C}$ which are factually supported by the passage $\mathbf {p}$ with different likelihood, if a machine $\mathbf {M}$ has common sense, it can choose the most likely consequence or detail $\mathbf {c}^*$ from $\mathcal {C}$ , i.e., $$\mathbf {c}^* = \operatornamewithlimits{arg\,max}_{\mathbf {c} \in \mathcal {C}}P(\mathbf {c}\mid \mathbf {p},\mathbf {M}).$$ (Eq. 2) [color=purple!20,size=,fancyline,caption=,disable]kev:What are the properties of $o$ ? What can be a consequence? Be more specific or give examples. Task Definition With the above discussion, we propose a specific task to evaluate a machine's ability of commonsense reasoning in MRC: as shown in fig:example, given a passage $\mathbf {p}$ describing an event, a set of text spans $\mathbf {E}$ marked in $\mathbf {p}$ , and a cloze-style query $Q(\mathbf {X})$ with a missing text span indicated by $\mathbf {X}$ , a machine $\mathbf {M}$ is expected to act like human, reading the passage $\mathbf {p}$ and then using its hidden commonsense knowledge to choose a text span $\mathbf {e}\in \mathbf {E}$ that best fits $\mathbf {X}$ , i.e., $$\mathbf {e}^* = \operatornamewithlimits{arg\,max}_{\mathbf {e} \in \mathbf {E}}P(Q(\mathbf {e})\mid \mathbf {p},\mathbf {M}).$$ (Eq. 3) Once the cloze-style query $Q(\mathbf {X})$ is filled in by a text span $\mathbf {e}$ , the resulted statement $Q(\mathbf {e})$ becomes a consequence or detail $\mathbf {c}$ as described in eq:csr-in-mrc, which is factually supported by the passage with certain likelihood. [color=purple!20,size=,fancyline,caption=,disable]kev:There's a disconnect between this paragraph and the previous one. How do you jump from $o$ to Q(e) and the ineqality to argmax? Also, I'm not sure if "cloze" is defined anywhere: you might need a one-sentence explanation in case the reader is not familiar. 
Data Collection [color=purple!20,size=,fancyline,caption=,disable]kev:First add motivation about general philosophy of data collection We describe the framework for automatically generating the dataset, ReCoRD, for our task defined in eq:task, which consists of passages with text spans marked, cloze-style queries, and reference answers. We collect ReCoRD in four stages as shown in Figure 2 : (1) curating CNN/Daily Mail news articles, (2) generating passage-query-answers triples based on the news articles, (3) filtering out the queries that can be easily answered by state-of-the-art MRC models, and (4) filtering out the queries ambiguous to human readers. News Article Curation We choose to create ReCoRD by exploiting news articles, because the structure of news makes it a good source for our task: normally, the first few paragraphs of a news article summarize the news event, which can be used to generate passages of the task; and the rest of the news article provides consequences or details of the news event, which can be used to generate queries of the task. In addition, news providers such as CNN and Daily Mail supplement their articles with a number of bullet points BIBREF12 , BIBREF13 , BIBREF0 , which outline the highlights of the news and hence form a supplemental source for generating passages. We first downloaded CNN and Daily Mail news articles using the script provided by BIBREF0 , and then sampled 148K articles from CNN and Daily Mail. In these articles, named entities and their coreference information have been annotated by a Google NLP pipeline, and will be used in the second stage of our data collection. Since these articles can be easily downloaded using the public script, we are concerned about potential cheating if using them as the source for generating the dev./test datasets. Therefore, we crawled additional 22K news articles from the CNN and Daily Mail websites. These crawled articles have no overlap with the articles used in BIBREF0 . We then ran the state-of-the-art named entity recognition model BIBREF14 and the end-to-end coreference resolution model BIBREF15 provided by AllenNLP BIBREF16 to annotate the crawled articles. Overall, we have collected 170K CNN/Daily Mail news articles with their named entities and coreference information annotated. Passage-Query-Answers Generation All passages, queries and answers in ReCoRD were automatically generated from the curated news articles. fig:example-for-stage2 illustrates the generation process. (1) we split each news article into two parts as described in sec:news-curation: the first few paragraphs which summarize the news event, and the rest of the news which provides the details or consequences of the news event. These two parts make a good source for generating passages and queries of our task respectively. (2) we enriched the first part of news article with the bullet points provided by the news editors. The first part of news article, together with the bullet points, is considered as a candidate passage. To ensure that the candidate passages are informative enough, we required the first part of news article to have at least 100 tokens and contain at least four different entities. (3) for each candidate passage, the second part of its corresponding news article was split into sentences by Stanford CoreNLP BIBREF17 . 
Then we selected the sentences that satisfy the following conditions as potential details or consequences of the news event described by the passage: [itemsep=0pt,topsep=6pt,leftmargin=10pt] Sentences should have at least 10 tokens, as longer sentences contain more information and thus are more likely to be inferrable details or consequences. Sentences should not be questions, as we only consider details or consequences of a news event, not questions. Sentences should not have 3-gram overlap with the corresponding passage, so they are less likely to be paraphrase of sentences in the passage. Sentences should have at least one named entity, so that we can replace it with $\mathbf {X}$ to generate a cloze-style query. All named entities in sentences should have precedents in the passage according to coreference, so that the sentences are not too disconnected from the passage, and the correct entity can be found in the passage to fill in $\mathbf {X}$ . Finally, we generated queries by replacing entities in the selected sentences with $\mathbf {X}$ . We only replaced one entity in the selected sentence each time, and generated one cloze-style query. Based on coreference, the precedents of the replaced entity in the passage became reference answers to the query. The passage-query-answers generation process matched our task definition in sec:task, and therefore created queries that require some aspect of reasoning beyond immediate pattern matching. In total, we generated 770k (passage, query, answers) triples. Machine Filtering As discussed in BIBREF5 , BIBREF6 , BIBREF18 , BIBREF7 , existing MRC models mostly learn to predict the answer by simply paraphrasing questions into declarative forms, and then matching them with the sentences in the passages. To overcome this limitation, we filtered out triples whose queries can be easily answered by the state-of-the-art MRC architecture, Stochastic Answer Networks (SAN) BIBREF19 . We choose SAN because it is competitive on existing MRC datasets, and it has components widely used in many MRC architectures such that low bias was anticipated in the filtering (which is confirmed by evaluation in sec:evaluation). We used SAN to perform a five-fold cross validation on all 770k triples. The SAN models correctly answered 68% of these triples. We excluded those triples, and only kept 244k triples that could not be answered by SAN. These triples contain queries which could not be answered by simple paraphrasing, and other types of reasoning such as commonsense reasoning and multi-sentence reasoning are needed. [color=purple!20,size=,fancyline,caption=,disable]kev:Briefly mention why you use SAN, i.e. it's competitive on current benchmarks like SQuAD. Also mention whether this may cause some bias in the filtering, compared to using some other system, and why your methodology is still ok. Human Filtering Since the first three stages of data collection were fully automated, the resulted triples could be noisy and ambiguous to human readers. Therefore, we employed crowdworkers to validate these triples. We used Amazon Mechanical Turk for validation. Crowdworkers were required to: 1) have a 95% HIT acceptance rate, 2) a minimum of 50 HITs, 3) be located in the United States, Canada, or Great Britain, and 4) not be granted the qualification of poor quality (which we will explain later in this section). Workers were asked to spend at least 30 seconds on each assignment, and paid $3.6 per hour on average. fig:hit shows the crowdsourcing web interface. 
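Returning to the sentence-selection stage, the conditions listed above can be read as a single filtering predicate. The sketch below approximates them; the entity and coreference inputs are hypothetical stand-ins for the output of the NER and coreference models, not actual interfaces of those tools.

```python
def is_candidate_sentence(sentence_tokens, sentence_entities, passage_tokens,
                          passage_entity_clusters):
    """Approximate the sentence-selection conditions described above."""
    # 1. At least 10 tokens.
    if len(sentence_tokens) < 10:
        return False
    # 2. Not a question.
    if "?" in sentence_tokens:
        return False
    # 3. No 3-gram overlap with the passage.
    trigrams = lambda toks: {tuple(toks[i:i + 3]) for i in range(len(toks) - 2)}
    if trigrams(sentence_tokens) & trigrams(passage_tokens):
        return False
    # 4. At least one named entity (needed to create the cloze placeholder X).
    if not sentence_entities:
        return False
    # 5. Every named entity must have a precedent in the passage via coreference.
    return all(ent in passage_entity_clusters for ent in sentence_entities)
```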
Each HIT corresponds to a triple in our data collection. In each HIT assignment, we first showed the expandable instructions for first-time workers, to help them better understand our task (see the sec:hit-instructions). Then we presented workers with a passage in which the named entities are highlighted and clickable. After reading the passage, workers were given a supported statement with a placeholder (i.e., a cloze-style query) indicating a missing entity. Based on their understanding of the events that might be inferred from the passage, workers were asked to find the correct entity in the passage that best fits the placeholder. If workers thought the answer is not obvious, they were allowed to guess one, and were required to report that case in the feedback box. Workers were also encouraged to write other feedback. To ensure quality and prevent spamming, we used the reference answers in the triples to compute workers' average performance after every 1000 submissions. While there might be coreference or named entity recognition errors in the reference answers, as reported in BIBREF20 (also confirmed by our analysis in sec:data-analysis), they only accounted for a very small portion of all the reference answers. Thus, the reference answers could be used for comparing workers' performance. Specifically, if a worker's performance was significantly lower than the average performance of all workers, we blocked the worker by granting the qualification of poor quality. In practice, workers were able to correctly answer about 50% of all queries. We blocked workers if their average accuracy was lower than 20%, and then republished their HIT assignments. Overall, 2,257 crowdworkers have participated in our task, and 51 of them have been granted the qualification of poor quality. Train / Dev. / Test Splits Among all the 244k triples collected from the third stage, we first obtained one worker answer for each triple. Compared to the reference answers, workers correctly answered queries in 122k triples. We then selected around 100k correctly-answered triples as the training set, restricting the origins of these triples to the news articles used in BIBREF0 . As for the development and test sets, we solicited another worker answer to further ensure their quality. Therefore, each of the rest 22k triples has been validated by two workers. We only kept 20k triples that were correctly answered by both workers. The origins of these triples are either articles used in BIBREF0 or articles crawled by us (as described in sec:news-curation), with a ratio of 3:7. Finally, we randomly split the 20k triples into development and test sets, with 10k triples for each set. tab:statistics summarizes the statistics of our dataset, ReCoRD.
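The worker quality control described above (re-checking average accuracy against the reference answers after every 1000 submissions and blocking workers below 20%) can be sketched as follows; only the thresholds come from the text, while the data structures and the republishing hook are illustrative assumptions.

```python
from collections import defaultdict

class WorkerMonitor:
    """Rolling crowdworker quality control in the spirit of the setup above."""

    def __init__(self, check_every=1000, block_below=0.20):
        self.check_every = check_every
        self.block_below = block_below
        self.stats = defaultdict(lambda: {"correct": 0, "total": 0})
        self.blocked = set()
        self.total_submissions = 0

    def record(self, worker_id, answer, reference_answers):
        self.total_submissions += 1
        s = self.stats[worker_id]
        s["total"] += 1
        s["correct"] += int(answer in reference_answers)
        if self.total_submissions % self.check_every == 0:
            self._review()

    def _review(self):
        for worker_id, s in self.stats.items():
            accuracy = s["correct"] / s["total"]
            if accuracy < self.block_below and worker_id not in self.blocked:
                self.blocked.add(worker_id)   # grant the "poor quality" qualification
                # HIT assignments of blocked workers would then be republished.

monitor = WorkerMonitor()
monitor.record("worker_42", "Taurus", {"Taurus", "the song Taurus"})
```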
DocQA, SAN, QANet, ASReader, LM, Random Guess
Q: Do they compare execution time of their model against other models? Text: Introduction Speech processing enables natural communication with smart phones or smart home assistants, e.g., Amazon Echo, Google Home. However, continuously performing speech recognition is not energy-efficient and would drain batteries of smart devices. Instead, most speech recognition systems passively listen for utterances of certain wake words such as “Ok Google", “Hey Siri", “Alexa", etc. to trigger the continuous speech recognition system on demand. This task is referred to as keyword spotting (KWS). There are also uses of KWS where a few simple speech commands (e.g. “on", “off") are enough to interact with a device such as a voice-controlled light bulb. Conventional hybrid approaches to KWS first divide their audio signal into time frames to extract features, e.g., Mel Frequency Cepstral Coefficients (MFCC). A neural net then estimates phoneme or state posteriors of the keyword Hidden Markov Model in order to calculate the keyword probability using a Viterbi search. In recent years, end-to-end architectures gained traction that directly classify keyword posterior probabilities based on the previously extracted features, e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. Typical application scenarios imply that the device is powered by a battery, and possesses restricted hardware resources to reduce costs. Therefore, previous works optimized towards memory footprint and operations per second. In contrast to this, we tune our neural network towards energy conservation in microcontrollers, motivated by observations on power consumption, as detailed in Sec. SECREF4. To extract meaningful and representative features from raw audio, our architecture uses parametrized Sinc-convolutions (SincConv) from SincNet proposed by Ravanelli et al. BIBREF5. We use Depthwise Separable Convolutions (DSConv) BIBREF6, BIBREF7 that preserve time-context information while at the same time comparing features in different channels. To further reduce the number of network parameters, which is key for energy efficiency, we group DSConv-layers, a technique we refer to as Grouped DSConv (GDSConv). Our key contributions are: We propose a neural network architecture tuned towards energy efficiency in microcontrollers grounded on the observation that memory access is costly, while computation is cheap BIBREF8. Our keyword-spotting network classifies on raw audio employing SincConvs while at the same time reducing the number of parameters using (G)DSConvs. Our base model with 122k parameters performs with the state-of-the-art accuracy of $96.6\%$ on the test set of Google’s Speech Commands dataset, on par with TC-ResNet BIBREF3 that has 305k parameters and requires separate preprocessing. Our low-parameter model achieves $96.4\%$ with only 62k parameters. Related Work Recently, CNNs have been successfully applied to KWS BIBREF1, BIBREF2, BIBREF3. Zhang et al. evaluated different neural network architectures (such as CNNs, LSTMs, GRUs) in terms of accuracy, computational operations and memory footprint as well as their deployment on embedded hardware BIBREF1. They achieved their best results using a CNN with DSConvs. Tang et al. explored the use of Deep Residual Networks with dilated convolutions to achieve a high accuracy of $95.8\%$ BIBREF2, while keeping the number of parameters comparable to BIBREF1. Choi et al. build on this work as they also use a ResNet-inspired architecture.
Instead of using 2D convolution over a time-frequency representation of the data they convolve along the time dimension and treat the frequency dimension as channels BIBREF3. This bears similarities with our approach as we are using 1D convolution along the time dimension as well. However, all the approaches mentioned classify from MFCCs or similar preprocessed features. Our architecture works directly on raw audio signals. There is a recent trend towards using CNNs on raw audio data directly BIBREF5, BIBREF9, BIBREF10, BIBREF11. Ravanelli et al. present an effective method of processing raw audio with CNNs, called SincNet. Kernels of the first convolutional layer are restricted to only learn shapes of parametrized sinc functions. This method was first introduced for Speaker Recognition BIBREF5 and later also used for Phoneme Recognition BIBREF9. To the best of our knowledge, we are the first to apply this method to the task of KWS. The first convolutional layer of our model is inspired by SincNet and we combine it with DSCconv. DSCconvs have first been introduced in the domain of Image Processing BIBREF7, BIBREF12 and have been applied to other domains since: Zhang et al. applied DSCconv to KWS BIBREF1. Kaiser et al. used DSConv for neural machine translation BIBREF6. They also introduce the “super-separable” convolution, a DSConv that also uses grouping and thus reduces the already small number of parameters of DSConv even further. A similar method is used by ShuffleNet where they combine DSConv with grouping and channel shuffling BIBREF13. The idea of Grouped Convolutions was first used in AlexNet BIBREF14 to reduce parameters and operations and to enable distributed computing of the model over multiple GPUs. We denominate the combination of grouping and DSconv as GDSConv in our work and use it for our smallest model. Model ::: Keyword-Spotting on Battery-Powered Devices Typical application scenarios for smart devices imply that the device is powered by a battery, and possesses restricted hardware resources. The requirements for a KWS system in these scenarios are (1) very low power consumption to maximize battery life, (2) real-time or near real-time capability, (3) low memory footprint and (4) high accuracy to avoid random activations and to ensure responsiveness. Regarding real-time capability, our model is designed to operate on a single-core microcontroller capable of 50 MOps per second BIBREF1. We assume that in microcontrollers the memory consumption of a KWS neural network is associated with its power consumption: Reading memory values contributes most to power consumption which makes re-use of weights favorable. While in general large memory modules leak more power than small memory modules, one read operation from RAM costs far more energy than the corresponding multiply-and-accumulate computation BIBREF15, BIBREF8. In addition to the parameter-reducing approach in this work, further steps may be employed to reduce power consumption such as quantization, model compression or optimization strategies regarding dataflows that depend on the utilized hardware platform BIBREF15, BIBREF8, BIBREF16, BIBREF17. Model ::: Feature Extraction using SincConvs SincNet BIBREF5 classifies on raw audio by restricting the filters of the first convolutional layer of a CNN to only learn parametrized sinc functions, i.e., $\operatorname{sinc}⁡(x)=\sin (x)/x$. 
One sinc function in the time domain represents a rectangular function in the spectral domain, therefore two sinc functions can be combined to an ideal band-pass filter: Performing convolution with such a filter extracts the parts of the input signal that lie within a certain frequency range. SincNet combines Sinc-convolutions with CNNs; as we only use the feature extraction layer of this architecture, we label this layer as SincConv to establish a distinction to SincNet. Compared to one filter of a regular CNN, the number of parameters is derived from its kernel width, e.g., $k=400$ BIBREF10. Sinc-convolutions only require two parameters to derive each filter, the lower and upper cut-off frequencies ($f_1,f_2$), resulting in a small memory footprint. SincConv filters are initialized with the cutoff frequencies of the mel-scale filter bank and then further adjusted during training. Fig. FIGREF7 visualizes this adjustment from initialization to after training. SincConv filter banks can be easily interpreted, as the two learned parameter correspond to a specific frequency band. Fig. FIGREF8 visualizes how a SincConv layer with 7 filters processes an audio sample containing the word “yes”. Model ::: Low-Parameter GDSConv Layers DSConv have been successfully applied to the domain of computer vision BIBREF7, BIBREF12, neural translation BIBREF6 and KWS BIBREF1. Fig. FIGREF10 provides an overview of the steps from a regular convolution to the GDSConv. The number of parameters of one DSConv layer amounts to $N_{\text{DSConv}}=k\cdot c_{in}+c_{in}\cdot c_{out}$ with the kernel size $k$ and the number of input and output channels $c_{in}$ and $c_{out}$ respectively; the first summand is determined by the depthwise convolution, the second summand by the pointwise convolution BIBREF6. In our model configuration, the depthwise convolution only accounts for roughly $5\%$ of parameters in this layer, the pointwise for $95\%$. We therefore reduced the parameters of the pointwise convolution using grouping by a factor $g$ to $N_{\text{GDSConv}}=k\cdot c_{in}+\frac{c_{in}\cdot c_{out}}{g}$, rather than the parameters in the depthwise convolution. To allow information exchange between groups we alternate the number of groups per layer, namely 2 and 3, as proposed in BIBREF6. Model ::: Two Low-Parameter Architectures The SincConv as the first layer extracts features from the raw input samples, as shown in Fig. FIGREF12. As non-linearity after the SincConv we opt to use log-compression, i.e., $y=\log (\operatorname{abs}(x)+1)$, instead of a common activation function (e.g., ReLU). This has also shown to be effective in other CNN architectures for raw audio processing BIBREF10, BIBREF11. Five (G)DSConv layers are then used to process the features further: The first layer has a larger kernel size and scales the number of channels to 160. The other four layers have each 160 input and output channels. Each (G)DSConv block contains the (G)DSConv layer, batch normalization BIBREF19 and spatial dropout BIBREF20 for regularization, as well as average pooling to reduce temporal resolution. After the (G)DSConv blocks, we use global average pooling to receive a 160-element vector that can be transformed to class posteriors using a Softmax layer to classify 12 classes, i.e., 10 keywords as well as a class for unknown and for silence. The low-parameter model is obtained by grouping the DSConv layers with an alternating number of groups between 2 and 3. For the configuration shown in Fig. 
FIGREF12, the base model has 122k parameters. After grouping, the number of parameters is reduced to a total of 62k. Evaluation ::: Training on the Speech Commands Dataset We train and evaluate our model using Google's Speech Commands data set BIBREF18, an established dataset for benchmarking KWS systems. The first version of the data set consists of 65k one-second long utterances of 30 different keywords spoken by 1881 different speakers. The most common setup consists of a classification of 12 classes: “yes", “no", “up", “down", “left", “right", “on", “off", “stop", “go", unknown, or silence. The remaining 20 keywords are labeled as unknown, samples of provided background noise files as silence. To ensure the benchmark reproducibility, a separate test set was released with a predefined list of samples for the unknown and the silence class. The second version of the dataset contains 105k samples and five additional keywords BIBREF18. However, previous publications on KWS reported only results on the first version, therefore we focused on the first version and additionally report testing results on version 2 of the dataset. Every sample from the training set is used in training, this leads to a class imbalance as there are much more samples for unknown. Class weights in the training phase assign a lower weight to samples labeled as unknown such that the impact on the model is proportional to the other classes. This way, the model can see more unknown word samples during training without getting biased. Our model is trained for 60 epochs with the Adam optimizer BIBREF21 with an initial learning rate of 0.001 and learning rate decay of 0.5 after 10 epochs; the model with the highest validation accuracy is saved to evaluate accuracy on the test set. Evaluation ::: Results and Discussion The base model composed of DSConv layers without grouping achieves the state-of-the-art accuracy of 96.6% on the Speech Commands test set. The low-parameter model with GDSConv achieves almost the same accuracy of 96.4% with only about half the parameters. This validates the effectiveness of GDSConv for model size reduction. Table TABREF15 lists these results in comparison with related work. Compared to the DSConv network in BIBREF1, our network is more efficient in terms of accuracy for a given parameter count. Their biggest model has a 1.2% lower accuracy than our base model while having about 4 times the parameters. Choi et al. BIBREF3 has the most competitive results while we are still able to improve upon their accuracy for a given number of parameters. They are using 1D convolution along the time dimension as well which may be evidence that this yields better performance for audio processing or at least KWS. As opposed to previous works, our architecture does not use preprocessing to extract features, but is able to extract features from raw audio samples with the SincConv layer. That makes it possible to execute a full inference as floating point operations, without requiring additional hardware modules to process or transfer preprocessed features. Furthermore, we deliberately opted to not use residual connections in our network architecture, considering the memory overhead and added difficulty for hardware acceleration modules. For future comparability, we also trained and evaluated our model on the newer version 2 of the Speech Commands data set; see Table TABREF16 for results. 
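Stepping back to the feature extractor: the SincConv parametrization described above can be made concrete with a short sketch that builds one band-pass kernel from its two cutoff frequencies as the difference of two sinc low-pass filters. The kernel size, Hamming window, and normalization used here are assumptions for illustration and may differ from the released SincNet/SincConv code.

```python
import numpy as np

def sinc_bandpass_kernel(f1_hz, f2_hz, kernel_size=401, sample_rate=16000):
    """Band-pass FIR kernel parametrized only by its two cutoff frequencies.

    This mirrors the idea behind the SincConv layer: the kernel is the difference
    of two low-pass sinc filters (an ideal band-pass), so only (f1, f2) need to be
    learned instead of all `kernel_size` taps.
    """
    f1, f2 = f1_hz / sample_rate, f2_hz / sample_rate      # normalized cutoffs
    n = np.arange(kernel_size) - (kernel_size - 1) / 2      # centred time axis
    low_pass_f2 = 2 * f2 * np.sinc(2 * f2 * n)              # np.sinc(x) = sin(pi x)/(pi x)
    low_pass_f1 = 2 * f1 * np.sinc(2 * f1 * n)
    band_pass = (low_pass_f2 - low_pass_f1) * np.hamming(kernel_size)
    return band_pass / np.max(np.abs(band_pass))            # simple normalization

# One filter of a bank initialized on mel-like cutoffs, e.g. the 300-600 Hz band.
kernel = sinc_bandpass_kernel(300.0, 600.0)
print(kernel.shape, float(kernel.max()))
```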
On a side note, we observed that models trained on version 2 of the Speech Commands dataset tend to perform better on both the test set for version 2 and the test set for version 1 BIBREF18. Conclusion Always-on, battery-powered devices running keyword spotting require energy efficient neural networks with high accuracy. For this, we identified the parameter count in a neural network as a main contributor to power consumption, as memory accesses contribute far more to power consumption than the computation. Based on this observation, we proposed an energy efficient KWS neural network architecture by combining feature extraction using SincConvs with GDSConv layers. Starting with the base model composed of DSConvs that have already less parameters than a regular convolution, we achieved state-of-the-art accuracy on Google's Speech Commands dataset. We further reduce the number of parameters by grouping the convolutional channels to GDSConv, resulting in a low-parameter model with only 62k parameters.
No
Q: What is the memory footprint decrease of their model in comparison to other models?
Unanswerable
Q: What architectural factors were investigated? Text: Introduction Any finite training set is consistent with multiple generalizations. Therefore, the way that a learner generalizes to unseen examples depends not only on the training data but also on properties of the learner. Suppose a learner is told that a blue triangle is an example of a blick. A learner preferring shape-based generalizations would conclude that blick means “triangle,” while a learner preferring color-based generalizations would conclude that blick means “blue object” BIBREF0. Factors that guide a learner to choose one generalization over another are called inductive biases. What properties of a learner cause it to have a particular inductive bias? We investigate this question with respect to sequence-to-sequence neural networks BIBREF1, BIBREF2. As a test case for studying differences in how models generalize, we use the syntactic task of English question formation, such as transforming SECREF1 into SECREF1: . Ṫhe zebra ForestGreendoes chuckle. ForestGreenDoes the zebra chuckle? Following BIBREF3's (BIBREF3) empirical claims about children's linguistic input, we constrain our training set to be consistent with two possible rules illustrated in Figure FIGREF1: move-main (a rule based on hierarchical syntactic structure) and move-first (a rule based on linear order). We then evaluate each trained model on examples where the rules make different predictions, such as SECREF1: given SECREF1, move-main would generate SECREF1 while move-first would generate SECREF1: . Ẏour zebras that bluedon't dance ForestGreendo chuckle. ForestGreenDo your zebras that bluedon't dance chuckle? blueDon't your zebras that dance ForestGreendo chuckle? Since no such examples appear in the training set, a model's behavior on them reveals which rule the model is biased toward. This task allows us to study a particular bias, namely a bias for hierarchical generalization, which is important for models of language because it has been argued to underlie human language acquisition BIBREF4. To test which models have a hierarchical bias, we use the question formation task and a second task: tense reinflection. For both tasks, our training set is ambiguous between a hierarchical generalization and a linear generalization. If a model chooses the hierarchical generalization for only one task, this preference is likely due to task-specific factors rather than a general hierarchical bias. On the other hand, a consistent preference for hierarchical generalizations across tasks would provide converging evidence that a model has a hierarchical bias. We find that all the factors we tested can qualitatively affect how a model generalizes on the question formation task. These factors are the type of recurrent unit, the type of attention, and the choice of sequential vs. tree-based model structure. Even though all these factors affected the model's decision between move-main and move-first, only the use of a tree-based model can be said to impart a hierarchical bias, since this was the only model type that chose a hierarchical generalization across both of our tasks. Specific findings that support these general conclusions include: Generalization behavior is profoundly affected by the type of recurrent unit and the type of attention, and also by the interactions between these factors. LSTMs and GRUs have qualitatively different inductive biases. 
The difference appears at least partly due to the fact that the values in GRU hidden states are bounded within a particular interval BIBREF5. Only a model built around the correct tree structure displayed a robust hierarchical bias across tasks. Sequentially-structured models failed to generalize hierarchically even when the input contained explicit marking of each sentence's hierarchical structure. Overall, we conclude that many factors can qualitatively affect a model's inductive biases, but human-like syntactic generalization may require specific types of high-level structure, at least when learning from text alone. The question formation task ::: Background The classic discussion of the acquisition of English question formation begins with two empirical claims: (i) disambiguating examples such as SECREF1 rarely occur in a child's linguistic input, but (ii) all learners of English nevertheless acquire move-main rather than move-first. chomsky1965,chomsky1980 uses these points to argue that humans must have an innate bias toward learning syntactic rules that are based on hierarchy rather than linear order (this argument is known as the argument from the poverty of the stimulus). There has been a long debate about this line of argument. Though some have discussed the validity of Chomsky's empirical claims BIBREF6, BIBREF7, BIBREF8, BIBREF9, most of the debate has been about which mechanisms could explain the preference for move-main. These mechanisms include an assumption of substitutability BIBREF10, a bias for simplicity BIBREF11, exploitation of statistical patterns BIBREF12, BIBREF13, and semantic knowledge BIBREF14; see clark2010linguistic for in-depth discussion. These past works focus on the content of the bias that favors move-main (i.e., which types of generalizations the bias supports), but we instead focus on the source of this bias (i.e., which factors of the learner give rise to the bias). In the book Rethinking Innateness, elman1998rethinking argue that innate biases in humans must arise from architectural constraints on the neural connections in the brain rather than from constraints stated at the symbolic level, under the assumption that symbolic constraints are unlikely to be specified in the genome. Here we use artificial neural networks to investigate whether syntactic inductive biases can emerge from architectural constraints. The question formation task ::: Framing of the task Following frank2007 and mccoy2018revisiting, we train models to take a declarative sentence as input and to either output the same sentence unchanged, or transform that sentence into a question. The sentences were generated from a context-free grammar containing only the sentence types shown in Figure FIGREF5 and using a 75-word vocabulary; the full grammar is at the project website.fn:website The different types of sentences vary in the linear position of the main auxiliary, such that a model cannot identify the main auxiliary with a simple positional heuristic. The task to be performed is indicated by the final input token, as in SECREF7 and SECREF7: . Input:your zebra does read . declXX Output: your zebra does read . declXX . Input:your zebra does read . questX Output:does your zebra read ? questX During training, all question formation examples are consistent with both move-first and move-main, such that there is no direct evidence favoring one rule over the other (see Figure FIGREF5). 
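To make the two candidate rules concrete, here is a toy Python sketch of how move-first and move-main transform a declarative sentence from this fragment of English. The auxiliary inventory and the way the main auxiliary's position is supplied are our own simplifications (a real implementation of move-main would read that position off a parse); this is not code from the paper.

```python
AUXILIARIES = {"do", "does", "don't", "doesn't"}

def move_first(tokens):
    """Linear rule: front the linearly first auxiliary in the sentence."""
    i = next(idx for idx, tok in enumerate(tokens) if tok in AUXILIARIES)
    return [tokens[i].capitalize()] + tokens[:i] + tokens[i + 1:]

def move_main(tokens, main_aux_index):
    """Hierarchical rule: front the auxiliary of the main clause.
    The index would normally come from a parse; here it is supplied directly."""
    return ([tokens[main_aux_index].capitalize()]
            + tokens[:main_aux_index] + tokens[main_aux_index + 1:])

sentence = "your zebras that don't dance do chuckle".split()
print(" ".join(move_first(sentence)) + " ?")    # Don't your zebras that dance do chuckle ?
print(" ".join(move_main(sentence, 5)) + " ?")  # Do your zebras that don't dance chuckle ?
```

On training sentences whose subject has no relative clause, the two functions produce identical questions, which is exactly the ambiguity the training set is designed to preserve.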
To assess how models generalize, we evaluate them on a generalization set consisting of examples where move-main and move-first make different predictions due to the presence of a relative clause on the subject (see sentence SECREF1). The question formation task ::: Evaluation metrics We focus on two metrics. The first is full-sentence accuracy on the test set. That is, for examples drawn from the same distribution as the training set, does the model get the output exactly right? For testing generalization to the withheld example type, a natural metric would be full-sentence accuracy on the generalization set. However, in preliminary experiments we found that most models rarely produced the exact output predicted by either move-main or move-first, as they tend to truncate the output, confuse similar words, and make other extraneous errors. To abstract away from such errors, we use first-word accuracy on the generalization set. With both move-first and move-main, the first word of the question is the auxiliary that has been moved from within the sentence. If the auxiliaries in the relative and main clauses are distinct, this word alone is sufficient to differentiate the two rules. For example, in the bottom right cell of Figure FIGREF5, move-main predicts having do at the start, while move-first predicts don't. Models almost always produced either the main auxiliary or the first auxiliary as the first word of the output (over 98% of the time for most models), so a low first-word accuracy can be interpreted as high consistency with move-first. The question formation task ::: Architecture We used the sequence-to-sequence architecture in Figure FIGREF12 BIBREF2. This model consists of two neural networks: the encoder and the decoder. The encoder is fed the input sentence one word at a time; after each word, the encoder updates its hidden state, a vector representation of the information encountered so far. After the encoder has been fed the entire input, its final hidden state ($E_6$ in Figure FIGREF12) is fed to the decoder, which generates an output sequence one word at a time based on its own hidden state, which is updated after each output word. The weights that the encoder and decoder use to update their hidden states and generate outputs are learned via gradient descent; for more details, see Appendix SECREF10. The question formation task ::: Overview of experiments Holding the task constant, we first varied two aspects of the architecture that have no clear connection to question formation, namely the recurrent unit and the type of attention; both of these aspects have been central to major advances in natural language processing BIBREF15, BIBREF16, so we investigate them here to see whether their contributions might be partially explained by linguistically-relevant inductive biases that they impart. We also tested a more clearly task-relevant modification of the architecture, namely the use of tree-based models rather than the sequential structure in Figure FIGREF12. Recurrent unit and attention ::: Recurrent unit The recurrent unit is the component that updates the hidden state after each word for the encoder and decoder. We used three types of recurrent units: simple recurrent networks (SRNs; BIBREF17), gated recurrent units (GRUs; BIBREF18), and long short-term memory (LSTM) units BIBREF19. In SRNs and GRUs, the hidden state is represented by a single vector, while LSTMs use two vectors (the hidden state and the cell state). 
In addition, GRUs and LSTMs both use gates, which control what information is retained across time steps, while SRNs do not; GRUs and LSTMs differ from each other in the number and types of gates they use. Recurrent unit and attention ::: Attention In the basic model in Figure FIGREF12, the final hidden state of the encoder is the decoder's only source of information about the input. To avoid having such a bottleneck, many contemporary sequence-to-sequence models use attention BIBREF16, a feature that enables the decoder to consider all encoder hidden states ($E_0$ through $E_6$ in Figure FIGREF12) when generating hidden state $D_i$. A model without attention has the only inputs to $D_i$ being $D_{i-1}$ and $y_{i-1}$ (the previous output); attention adds a third input, $c_i = \sum _j \alpha _i[j] E_j$, which is a weighted sum of the encoder's hidden states ($E_0$ through $E_n$) using a weight vector $\alpha _i$ whose $j^{th}$ element is denoted by $\alpha _i[j]$. Implementations of attention vary in how the weights $\alpha _i[j]$ are derived BIBREF20, BIBREF21, BIBREF22. Attention can be solely location-based, where each $\alpha _i$ is determined solely from $D_{i-1}$ (and potentially also $y_{i-1}$), so that the model chooses where to attend without first checking what it is attending to. Alternately, attention could be content-based, in which case each $\alpha _i[j]$ is determined from both $D_{i-1}$ and $E_j$, such that the model does consider what it might attend to before attending to it. We test both location-based and content-based attention, and we also test models without attention. Recurrent unit and attention ::: Results We trained models with all nine possible combinations of recurrent unit and attention type, using the hyperparameters and training procedure described in Appendix SECREF10. The results are in Figure FIGREF13. The SRN without attention failed on the test set, mainly because it often confused words that had the same part of speech, a known weakness of SRNs BIBREF23. Therefore, its generalization set behavior is uninformative. The other architectures performed strongly on the test set ($>$ 50% full-sentence accuracy), so we now consider their generalization set performance. The GRU with location-based attention and the SRN with content-based attention both preferred move-main, while the remaining architectures preferred move-first. These results suggest that both the recurrent unit and the type of attention can qualitatively affect a model's inductive biases. Moreover, the interactions of these factors can have drastic effects: with SRNs, content-based attention led to behavior consistent with move-main while location-based attention led to behavior consistent with move-first; these types of attention had opposite effects with GRUs. Recurrent unit and attention ::: Differences between LSTMs and GRUs One striking result in Figure FIGREF13 is that LSTMs and GRUs display qualitative differences, even though the two architectures are often viewed as interchangeable and achieve similar performance in applied tasks BIBREF24. One difference between LSTMs and GRUs is that a squashing function is applied to the hidden state of a GRU to keep its values within the range $(-1,1)$, while the cell state of an LSTM is not bounded. weiss2018 demonstrate that such squashing leads to a qualitative difference in how well these models generalize counting behavior. 
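Before continuing with the squashing discussion, the contrast drawn above between location-based and content-based attention can be made concrete. The sketch below computes the context vector $c_i$ for a single decoding step in PyTorch; the scoring functions are our own simplifications, and the exact parameterisations used in the paper's experiments (following BIBREF20, BIBREF21, BIBREF22) may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocationAttention(nn.Module):
    """The weights alpha_i depend only on the decoder state D_{i-1},
    not on the encoder states being attended to."""
    def __init__(self, hidden_size, max_input_len):
        super().__init__()
        self.score = nn.Linear(hidden_size, max_input_len)

    def forward(self, dec_state, enc_states):  # dec_state: (hidden,), enc_states: (T, hidden)
        T = enc_states.size(0)
        alpha = F.softmax(self.score(dec_state)[:T], dim=-1)
        return alpha @ enc_states               # context vector c_i

class ContentAttention(nn.Module):
    """Each weight alpha_i[j] depends on both D_{i-1} and E_j."""
    def __init__(self, hidden_size):
        super().__init__()
        self.score = nn.Linear(2 * hidden_size, 1)

    def forward(self, dec_state, enc_states):
        T = enc_states.size(0)
        pairs = torch.cat([dec_state.expand(T, -1), enc_states], dim=-1)
        alpha = F.softmax(self.score(pairs).squeeze(-1), dim=-1)
        return alpha @ enc_states
```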
Such squashing may also explain the qualitative differences that we observe: counting the input elements is equivalent to keeping track of their linear positions, so we might expect that a tendency to count would make the linear generalization more accessible. To test whether squashing increases a model's preference for move-main, we created a modified LSTM that included squashing in the calculation of its cell state, and a modified GRU that did not have the squashing usually present in GRUs. See Appendix SECREF11 for more details. Using the same training setup as before, we trained models with these modified recurrent units and with location-based attention. LSTMs and GRUs with squashing chose move-main more often than the corresponding models without squashing (Figure FIGREF20), suggesting that such squashing is one factor that causes GRUs to behave differently than LSTMs. Recurrent unit and attention ::: Hyperparameters and random seed In addition to variation across architectures, we also observed considerable variation across multiple instances of the same architecture that differed only in random seed; the random seeds determined both the initial weights of each model and the order in which training examples were sampled. For example, the generalization set first-word accuracy for SRNs with content-based attention ranged from 0.17 to 0.90. Based on our exploration of hyperparameters, it also appears that the learning rate and hidden size can qualitatively affect generalization. The effects of these details are difficult to interpret systematically, and we leave the characterization of their effects for future work. Results for all individual re-runs are at the project website.fn:website Tree models So far we have tested whether properties that are not interpretably related to hierarchical structure nevertheless affect how a model generalizes on a syntactic task. We now turn to a related but opposite question: when a model's design is meant to give it a hierarchical inductive bias, does this design succeed at giving the model this bias? Tree models ::: Tree model that learns implicit structure The first hierarchical model that we test is the Ordered Neurons LSTM (ON-LSTM; BIBREF25). This model is not given the tree structure of each sentence as part of its input. Instead, its processing is structured in a way that leads to the implicit construction of a soft parse tree. This implicit tree structure is created by imposing a stack-like constraint on the updates to the values in the cell state of an LSTM: the degree to which the $i^{\textrm {th}}$ value is updated must always be less than or equal to the degree to which the $j^{\textrm {th}}$ value is updated for all $j \le i$. This hierarchy of cell-state values adds an implicit tree structure to the model, where each level in the tree is defined by a soft depth in the cell state to which that level extends. We re-implemented the ON-LSTM and trained 100 instances of it using the hyperparameters specified in Appendix SECREF10. This model achieved a test set full-sentence accuracy of 0.93 but a generalization set first-word accuracy of 0.05, showing a strong preference for move-first over move-main, contrary to what one would expect from a model with a hierarchical inductive bias. 
This lack of hierarchical behavior might be explained by BIBREF26's (BIBREF26) finding that ON-LSTMs do not perform much better than standard LSTMs at implicitly recovering hierarchical structure, even though ON-LSTMs (but not standard LSTMs) were designed in a way intended to impart a hierarchical bias. According to BIBREF26, the ON-LSTM's apparent success reported in shen2018ordered was largely due to the method used to analyze the model rather than the model itself. Tree models ::: Tree models given explicit structure The ON-LSTM results show that hierarchically structured processing alone is not sufficient to induce a bias for move-main, suggesting that constraints on which trees are used may also be necessary. We therefore tested a second type of hierarchical model, namely Tree-RNNs, that were explicitly fed the correct parse tree. Parse trees can be used to guide the encoder, the decoder, or both; Figure FIGREF28 shows a model where both the encoder and decoder are tree-based. For the tree-based encoder, we use the Tree-GRU from chen2017improved. This model composes the vector representations for a pair of sister nodes to generate a vector representing their parent. It performs this composition bottom-up, starting with the word embeddings at the leaves and ending with a single vector representing the root ($E_4$ in Figure FIGREF28); this vector acts as the encoding of the input. For the tree-based decoder, we use a model based on the Tree-LSTM decoder from BIBREF27, but using a GRU instead of an LSTM, for consistency with the tree encoder. This tree decoder is the mirror image of the tree encoder: starting with the vector representation of the root node ($D_0$ in Figure FIGREF28), it takes the vector representation of a parent node and outputs two vectors, one for the left child and one for the right child, until it reaches a leaf node, where it outputs a word. We test models with a tree-based encoder and sequential decoder, a sequential encoder and tree-based decoder, or a tree-based encoder and tree-based decoder, all without attention; we investigate these variations to determine whether hierarchical generalization is determined by the encoder, the decoder, or both. The results for these models are in Figure FIGREF31, along with the previous results of the fully sequential GRU (sequential encoder + sequential decoder) without attention for comparison. The model with a tree-based encoder and sequential decoder preferred move-first, like the fully sequential model. Only the models with a tree-based decoder preferred move-main, consistent with the finding of mccoy2018rnns that it is the decoder that determines an encoder-decoder model's representations. However, the model with a sequential encoder and a tree decoder failed on the test set, so the only model that both succeeded on the test set and showed a bias toward a move-main generalization was the fully tree-based model (Tree/Tree). The behavior of this Tree/Tree model was striking in another way as well: Its generalization set full-sentence accuracy was 69%, while all other models—even those that achieved high first-word accuracy on the generalization set—had close to 0% generalization set full-sentence accuracy. The ON-LSTM and Tree-GRU results show that an architecture designed to have a certain inductive bias might, but will not necessarily, display the intended bias. Tense reinflection We have shown that several models reliably preferred move-main over move-first. 
However, this behavior alone does not necessarily mean that these models have a hierarchical bias, because a preference for move-main might arise not from a hierarchical bias but rather from some task-specific factors such as the prevalence of certain n-grams BIBREF28, BIBREF29. A true hierarchical bias would lead a model to adopt hierarchical generalizations across training tasks; by contrast, we hypothesize that other factors (such as a bias for focusing on n-gram statistics) will be more sensitive to details of the task and will thus be unlikely to consistently produce hierarchical preferences. To test the robustness of the hierarchical preferences of our models, then, we introduce a second task, tense reinflection. Tense reinflection ::: Reinflection task The reinflection task uses English subject-verb agreement to illuminate a model's syntactic generalizations BIBREF30. The model is fed a past-tense English sentence as input. It must then output that sentence either unchanged or transformed to the present tense, with the final word of the input indicating the task to be performed: my yak swam . past $\rightarrow$ my yak swam . versus my yak swam . present $\rightarrow$ my yak swims . Because the past tense in English does not inflect for number (e.g., the past tense of swim is swam whether the subject is singular or plural), the model must determine from context whether each verb being turned to present tense should be singular or plural. Example SECREF32 is consistent with two salient rules for determining which aspects of the context are relevant: agree-subject: Each verb should agree with its hierarchically-determined subject. agree-recent: Each verb should agree with the linearly most recent noun. Though these rules make the same prediction for SECREF32, they make different predictions for other examples, such as SECREF32: for my zebra by the yaks swam . present, agree-subject predicts my zebra by the yaks swims . while agree-recent predicts my zebra by the yaks swim . Similar to the setup for the question formation experiments, we trained models on examples for which agree-subject and agree-recent made the same predictions and evaluated the trained models on examples where the rules make different predictions. We ran this experiment with all 9 sequential models ([SRN, GRU, LSTM] x [no attention, location-based attention, content-based attention]), the ON-LSTM, and the model with a tree-based encoder and tree-based decoder that were provided the correct parse trees, using the hyperparameters in Appendix SECREF10. The example sentences were generated using the same context-free grammar used for the question formation task, except with inflected verbs instead of auxiliary/verb bigrams (e.g., reads instead of does read). We evaluated these models on the full-sentence accuracy on the test set and also main-verb accuracy for the generalization set—that is, the proportion of generalization set examples for which the main verb was correctly predicted, such as when swims rather than swim was chosen in the output for SECREF32. Models usually chose the correct lemma for the main verb (at least 87% of the time for all tense reinflection models), with most main verb errors involving the correct verb but with incorrect inflection (i.e., being singular instead of plural, or vice versa). Thus, a low main-verb accuracy can be interpreted as consistency with agree-recent.
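Analogously to the question formation sketch earlier, the two reinflection rules can be written as one toy function whose output depends only on which noun is taken to control agreement. The tiny verb table and plural list are placeholders of our own, not the paper's grammar.

```python
PRESENT_FORMS = {"swam": ("swims", "swim"), "read": ("reads", "read")}
PLURAL_NOUNS = {"zebras", "yaks", "walruses"}

def reinflect(tokens, controller_index):
    """Rewrite the past-tense verb in the present tense, agreeing with the noun
    at controller_index: the subject under agree-subject, or the linearly most
    recent noun under agree-recent."""
    plural = tokens[controller_index] in PLURAL_NOUNS
    return [PRESENT_FORMS[t][1 if plural else 0] if t in PRESENT_FORMS else t
            for t in tokens]

sentence = "my zebra by the yaks swam .".split()
print(" ".join(reinflect(sentence, controller_index=1)))  # agree-subject: ... swims .
print(" ".join(reinflect(sentence, controller_index=4)))  # agree-recent:  ... swim .
```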
All sequential models, even the ones that generalized hierarchically with question formation, overwhelmingly chose agree-recent for this reinflection task (Figure FIGREF33), consistent with the results of a similar experiment done by ravfogel2019studying. The ON-LSTM also preferred agree-recent. By contrast, the fully tree-based model preferred the hierarchical generalization agree-subject. Thus, although the question formation experiments showed qualitative differences in sequential models' inductive biases, this experiment shows that those differences cannot be explained by positing that there is a general hierarchical bias in some of our sequential models. What the relevant bias for these models is remains unclear; we only claim to show that it is not a hierarchical bias. Overall, the model with both a tree-based encoder and a tree-based decoder is the only model we tested that plausibly has a generic hierarchical bias, as it is the only one that behaved consistently with such a bias across both tasks. Are tree models constrained to generalize hierarchically? It may seem that the tree-based models are constrained by their structure to make only hierarchical generalizations, rendering their hierarchical generalization trivial. In this section, we test whether they are in fact constrained in this way, and similarly whether sequential models are constrained to make only linear generalizations. Earlier, the training sets for our two tasks were ambiguous between two generalizations, but we now used training sets that unambiguously supported either a linear transformation or a hierarchical transformation. For example, we used a move-main training set that included some examples like SECREF6, while the move-first training set included some examples like SECREF6: . ṁy yaks that do read don't giggle . quest $\rightarrow $ don't my yaks that do read giggle ? my yaks that do read don't giggle . quest $\rightarrow $ do my yaks that read don't giggle ? Similarly, for the tense reinflection task, we created an agree-subject training set and an agree-recent training set. For each of these four training sets, we trained 100 sequential GRUs and 100 Tree/Tree GRUs, all without attention. Each model learned to perform linear and hierarchical transformations with similar accuracy: On the move-main and move-first datasets, both the sequential and tree-based models achieved 100% first-word accuracy. On both the agree-subject and agree-recent datasets, the sequential model achieved 91% main-verb accuracy and the tree-based model achieved 99% main-verb accuracy. Thus, the fact that the tree-based model preferred hierarchical generalizations when the training set was ambiguous arose not from any constraint imposed by the tree structure but rather from the model's inductive biases—biases that can be overridden given appropriate training data. Tree structure vs. tree information Our sequential and tree-based models differ not only in structure but also in the information they have been provided: the tree-based models have been given correct parse trees for their input and output sentences, while the sequential models have not been given parse information. Therefore, it is unclear whether the hierarchical generalization displayed by the tree-based models arose from the tree-based model structure, from the parse information provided to the models, or both. To disentangle these factors, we ran two further experiments. 
First, we retrained the Tree/Tree GRU but using uniformly right-branching trees (as in (11b)) instead of correct parses (as in (11a)). Thus, these models make use of tree structure but not the kind of parse structure that captures linguistic information. Second, we retrained the sequential GRU without attention but modified the input and output by adding brackets that indicate each sentence's parse; for example, SECREF7 would be changed to SECREF7. Thus, these models are provided with parse information in the input but such structure does not guide the neural network computation as it does with tree RNNs. . a. [ [ [ my yak ] [ does giggle ] ] $.$ ] b. [ my [ yak [ does [ giggle $.$ ] ] ] ] . ṁy yak does giggle . quest $\rightarrow $ does my yak giggle ? [ [ [ my yak ] [ does giggle ] . ] quest ] $\rightarrow $ [ [ does [ [ my yak ] giggle ] ] ? ] We ran 100 instances of each experiment using different random seeds. For the experiment with bracketed input, the brackets significantly increased the lengths of the sentences, making the learning task harder; we therefore found it necessary to use a patience of 6 instead of the patience of 3 we used elsewhere, but all other hyperparameters remained as described in Appendix SECREF10. For both tasks, neither the sequential GRU that was given brackets in its input nor the Tree/Tree model that was given right-branching trees displayed a hierarchical bias (Figure FIGREF36). The lack of hierarchical bias in the sequential GRU with bracketed input indicates that simply providing parse information in the input and target output is insufficient to induce a model to favor hierarchical generalization; it appears that such parse information must be integrated into the model's structure to be effective. On the other hand, the lack of a hierarchical bias in the Tree/Tree model using right-branching trees shows that simply having tree structure is also insufficient; it is necessary to have the correct tree structure. Will models generalize across transformations? Each experiment discussed so far involved a single linguistic transformation. By contrast, humans acquiring language are not exposed to phenomena in isolation but rather to a complete language encompassing many phenomena. This fact has been pointed to as a possible way to explain hierarchical generalization in humans without needing to postulate any innate preference for hierarchical structure. While one phenomenon, such as question formation, might be ambiguous in the input, there might be enough direct evidence among other phenomena to conclude that the language as a whole is hierarchical, a fact which learners can then extend to the ambiguous phenomenon BIBREF8, BIBREF11, under the non-trivial assumption that the learner will choose to treat the disparate phenomena in a unified fashion. While our training sets are ambiguous with respect to whether the phenomenon underlying the mapping is structurally driven, they do contain other cues that the language is more generally governed by hierarchical regularities. First, certain structural units are reused across positions in a sentence; for example, prepositional phrases can appear next to subjects or objects. Such reuse of structure can be represented more efficiently with a hierarchical grammar than a linear one. 
Second, in the question formation task, subject-verb agreement can also act as a cue to hierarchical structure: e.g., in the sentence my walrus by the yaks does read, the inflection of does depends on the verb's hierarchically-determined subject (walrus) rather than the linearly closest noun (yaks). For the sequential RNNs we have investigated, it appears that these indirect cues to hierarchical structure were not sufficient to guide the models towards hierarchical generalizations. However, perhaps the inclusion of some more direct evidence for hierarchy would be more successful. To take a first step toward investigating this possibility, we use a multi-task learning setup, where we train a single model to perform both question formation and tense reinflection. We set up the training set such that one task was unambiguously hierarchical while the other was ambiguous between the hierarchical generalization and the linear generalization. This gave two settings: One where question formation was ambiguous, and one where tense reinflection was ambiguous. We trained 100 instances of a GRU without attention on each setting and assessed how each model generalized for the task that was ambiguous. For both cases, generalization behavior in the multi-task setting differed only minimally from the single-task setting (Figure FIGREF41). One potential explanation for the lack of transfer across tasks is that the two tasks operated over different sentence structures: the question formation sentences always contained overt auxiliaries on their verbs (e.g., my walrus does giggle), while the tense reinflection sentences did not (e.g., my walrus giggles). To test this possibility, we reran the multi-task experiments but with overt auxiliaries added to the tense reinflection sentences (Figure FIGREF41, “Multi-task + auxiliaries” row). In this setting, the model still generalized linearly when it was question formation that was ambiguous. However, when it was tense reinflection that was ambiguous, the model generalized hierarchically. We hypothesize that the directionality of this transfer is due to the fact that the question formation training set includes unambiguous long-distance subject-verb agreement as in SECREF8, which might help the model on generalization-set examples for tense reinflection such as SECREF8: . my zebras by the yak do read . decl $\rightarrow $ my zebras by the yak do read . . my zebras by the yak did read . present $\rightarrow $ my zebras by the yak do read . By contrast, the tense reinflection training set does not contain any outputs of the type withheld from the question formation training set. If this explanation is correct, it would mean that the improvement on the tense reinflection task derived not from the question formation transformation but rather from the subject-verb agreement incidentally present in the question formation dataset. Therefore, even the single potential case of generalization across transformations is likely spurious. Recent NLP work has also found that neural networks do not readily transfer knowledge across tasks; e.g., pretrained models often perform worse than non-pretrained models BIBREF31. This lack of generalization across tasks might be due to the tendency of multi-task neural networks to create largely independent representations for different tasks even when a shared representation could be used BIBREF32. 
Therefore, to make cross-phenomenon generalizations, neural networks may need to be given an explicit bias for sharing processing across phenomena. Discussion We have found that all factors we tested can qualitatively affect a model's inductive biases but that a hierarchical bias—which has been argued to underlie children's acquisition of syntax—only arose in a model whose inputs and computations were governed by syntactic structure. Discussion ::: Relation to Rethinking Innateness Our experiments were motivated in part by the book Rethinking Innateness BIBREF33 which argued that humans' inductive biases must arise from constraints on the wiring patterns of the brain. Our results support two conclusions from this book. First, those authors argued that “Dramatic effects can be produced by small changes” (p. 359). This claim is supported by our observation that low-level factors, such as the size of the hidden state, qualitatively affect how models generalize (Section SECREF26). Second, they argued that “[w]hat appear to be single events or behaviors may have a multiplicity of underlying causes” (p. 359); in our case, we found that a model's generalization behavior results from some combination of factors that interact in hard-to-interpret ways; e.g., changing the type of attention had different effects in SRNs than in GRUs. The dramatic effects of these low-level factors offer some support for the claim that humans' inductive biases can arise from fine-grained architectural constraints in the brain. However, this support is only partial. Our only model that robustly displayed the kind of preference for hierarchical generalization that is necessary for language learning did not derive such a preference from low-level architectural properties but rather from the explicit encoding of linguistic structure. Discussion ::: Relation to human language acquisition Our experiments showed that some tree-based models displayed a hierarchical bias, while non-tree-based models never displayed such a bias, even when provided with strong cues to hierarchical structure in their input (through bracketing or multi-task learning). These findings suggest that the hierarchical preference displayed by humans when acquiring English requires making explicit reference to hierachical structure, and cannot be argued to emerge from more general biases applied to input containing cues to hierarchical structure. Moreover, since the only successful hierarchical model was one that took the correct parse trees as input, our results suggest that a child's set of biases includes biases governing which specific trees will be learned. Such biases could involve innate knowledge of likely tree structures, but they do not need to; they might instead involve innate tendencies to bootstrap parse trees from other sources, such as prosody BIBREF34 or semantics BIBREF35. With such information, children might learn their language's basic syntax before beginning to acquire question formation, and this knowledge might then guide their acquisition of question formation. There are three important caveats for extending our conclusions to humans. First, humans may have a stronger bias to share processing across phenomena than neural networks do, in which case multi-task learning would be a viable explanation for the biases displayed by humans even though it had little effect on our models. 
Indeed, this sort of cross-phenomenon consistency is similar in spirit to the principle of systematicity, and it has long been argued that humans have a strong bias for systematicity while neural networks do not BIBREF36, BIBREF37. Second, some have argued that children's input actually does contain utterances unambiguously supporting a hierarchical transformation BIBREF8, whereas we have assumed a complete lack of such examples. Finally, our training data omit many cues to hierarchical structure that are available to children, including prosody and real-world grounding. It is possible that, with data closer to a child's input, more general inductive biases might succeed. However, there is still significant value in studying what can be learned from strings alone, because we are unlikely to understand how the multiple components of a child's input interact without a better understanding of each component. Furthermore, during the acquisition of abstract aspects of language, real-world grounding is not always useful in the absence of linguistic biases BIBREF38. More generally, it is easily possible for learning to be harder when there is more information available than when there is less information available BIBREF39. Thus, our restricted experimental setup may actually make learning easier than in the more informationally-rich scenario faced by children. Discussion ::: Practical takeaways Our results leave room for three possible approaches to imparting a model with a hierarchical bias. First, one could search the space of hyperparameters and random seeds to find a setting that leads to the desired generalization. However, this may be ineffective: At least in our limited exploration of these factors, we did not find a hyperparameter setting that led to hierarchical generalization across tasks for any non-tree-based model. A second option is to add a pre-training task or use multi-task learning BIBREF40, BIBREF41, BIBREF42, where the additional task is designed to highlight hierarchical structure. Most of our multi-task experiments only achieved modest improvements over the single-task setting, suggesting that this approach is also not very viable. However, it is possible that further secondary tasks would bring further gains, making this approach more effective. A final option is to use more interpretable architectures with explicit hierachical structure. Our results suggest that this approach is the most viable, as it yielded models that reliably generalized hierarchically. However, this approach only worked when the architectural bias was augmented with rich assumptions about the input to the learner, namely that it provided correct hierarchical parses for all sentences. We leave for future work an investigation of how to effectively use tree-based models without providing correct parses. Acknowledgments For helpful comments we thank Joe Pater, Paul Smolensky, the JHU Computation and Psycholinguistics lab, the JHU Neurosymbolic Computation lab, the Computational Linguistics at Yale (CLAY) lab, the anonymous reviewers, and audiences at the University of Pavia Center for Neurocognition, Epistemology, and Theoretical Syntax, the Penn State Dept. of Computer Science and Engineering, and the MIT Dept. of Brain and Cognitive Sciences. Any errors are our own. This material is based upon work supported by the NSF Graduate Research Fellowship Program under Grant No. 1746891, and by NSF Grant Nos. BCS-1920924 and BCS-1919321. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. Our experiments were conducted with resources from the Maryland Advanced Research Computing Center (MARCC). Architecture and training details We used a word embedding size of 256 (with word embeddings learned from scratch), a hidden size of 256, a learning rate of 0.001, and a batch size of 5. Models were evaluated on a validation set after every 1000 training batches, and we halted training if the model had been trained for at least 30,000 batches and had shown no improvement over 3 consecutive evaluations on the validation set (the number 3 in this context is called the patience). The training set contained 100,000 examples, while the validation, test, and generalization sets contained 10,000 examples each. The datasets were held constant across experiments, but models sampled from the training set in different orders across experiments. During training, we used teacher forcing on 50% of examples. Equations for squashing experiments The equations governing a standard LSTM are: To create a new LSTM whose cell state exhibits squashing, like the hidden state of the GRU, we modified the LSTM cell state update in (DISPLAY_FORM45) to (DISPLAY_FORM47), where the new coefficients now add to 1: The equations governing a standard GRU are: The GRU's hidden state is squashed because its update gate $z$ merges the functions of the input and forget gates ($i$ and $f$) of the LSTM (cf. equations DISPLAY_FORM45 and DISPLAY_FORM48). As a result, the input and forget weights are tied in the GRU but not the LSTM. To create a non-squashed GRU, we added an input gate $i$ and changed the hidden state update (Equation DISPLAY_FORM48) to Equation DISPLAY_FORM49 to make $z$ act solely as a forget gate:
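The display equations referenced in this appendix (DISPLAY_FORM45 through DISPLAY_FORM49) are not rendered in this text. The following is a hedged reconstruction: the unmodified updates are the textbook LSTM and GRU formulations, and the two modified updates follow the verbal descriptions above (cell-state coefficients summing to one for the squashed LSTM; a separate input gate $i_t$ so that $z_t$ acts only as a forget gate for the non-squashed GRU). The paper's exact parameterisation may differ in detail. Standard LSTM cell-state update (cf. DISPLAY_FORM45): $c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$, with $h_t = o_t \odot \tanh (c_t)$. Squashed LSTM (cf. DISPLAY_FORM47): $c_t = f_t \odot c_{t-1} + (1 - f_t) \odot \tilde{c}_t$. Standard GRU hidden-state update (cf. DISPLAY_FORM48): $h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t$. Non-squashed GRU with an added input gate (cf. DISPLAY_FORM49): $h_t = z_t \odot h_{t-1} + i_t \odot \tilde{h}_t$.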
type of recurrent unit, type of attention, choice of sequential vs. tree-based model structure
f197e0f61f7980c64a76a3a9657762f1f0edb65b
f197e0f61f7980c64a76a3a9657762f1f0edb65b_0
Q: Any other bias may be detected? Text: Introduction Recent work has shown evidence of substantial bias in machine learning systems, which is typically a result of bias in the training data. This includes both supervised BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 and unsupervised natural language processing systems BIBREF4 , BIBREF5 , BIBREF6 . Machine learning models are currently being deployed in the field to detect hate speech and abusive language on social media platforms including Facebook, Instagram, and Youtube. The aim of these models is to identify abusive language that directly targets certain individuals or groups, particularly people belonging to protected categories BIBREF7 . Bias may reduce the accuracy of these models, and at worst, will mean that the models actively discriminate against the same groups they are designed to protect. Our study focuses on racial bias in hate speech and abusive language detection datasets BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , all of which use data collected from Twitter. We train classifiers using each of the datasets and use a corpus of tweets with demographic information to compare how each classifier performs on tweets written in African-American English (AAE) versus Standard American English (SAE) BIBREF13 . We use bootstrap sampling BIBREF14 to estimate the proportion of tweets in each group that each classifier assigns to each class. We find evidence of systematic racial biases across all of the classifiers, with AAE tweets predicted as belonging to negative classes like hate speech or harassment significantly more frequently than SAE tweets. In most cases the bias decreases in magnitude when we condition on particular keywords which may indicate membership in negative classes, yet it still persists. We expect that these biases will result in racial discrimination if classifiers trained on any of these datasets are deployed in the field. Related works Scholars and practitioners have recently been devoting more attention to bias in machine learning models, particularly as these models are becoming involved in more and more consequential decisions BIBREF15 . Bias often derives from the data used to train these models. For example, BIBREF16 show how facial recognition technologies perform worse for darker-skinned people, particularly darker-skinned women, due to the disproportionate presence of white, male faces in the training data. Natural language processing systems also inherit biases from the data they were trained on. For example, in unsupervised learning, word embeddings often contain biases BIBREF4 , BIBREF5 , BIBREF6 which persist even after attempts to remove them BIBREF17 . There are many examples of bias in supervised learning contexts: YouTube's captioning models make more errors when transcribing women BIBREF1 , AAE is more likely to be misclassified as non-English by widely used language classifiers BIBREF0 , numerous gender and racial biases exist in sentiment classification systems BIBREF2 , and errors in both co-reference resolution systems and occupational classification models reflect gendered occupational patterns BIBREF18 , BIBREF3 . While hate speech and abusive language detection has become an important area for natural language processing research BIBREF19 , BIBREF7 , BIBREF20 , there has been little work addressing the potential for these systems to be biased. 
The danger posed by bias in such systems is, however, particularly acute, since it could result in negative impacts on the same populations the systems are designed to protect. For example, if we mistakenly consider speech by a targeted minority group as abusive we might unfairly penalize the victim, but if we fail to identify abuse against them we will be unable to take action against the perpetrator. Although no model can perfectly avoid such problems, we should be particularly concerned about the potential for such models to be systematically biased against certain social groups, particularly protected classes. A number of studies have shown that false positive cases of hate speech are associated with the presence of terms related to race, gender, and sexuality BIBREF21 , BIBREF22 , BIBREF10 . While not directly measuring bias, prior work has explored how annotation schemes BIBREF10 and the identity of the annotators BIBREF8 might be manipulated to help to avoid bias. BIBREF23 directly measured biases in the Google Perspective API classifier, trained on data from Wikipedia talk comments, finding that it tended to give high toxicity scores to innocuous statements like “I am a gay man”. They called this “false positive bias”, caused by the model overgeneralizing from the training data, in this case from examples where “gay” was used pejoratively. They find that a number of such “identity terms” are disproportionately represented in the examples labeled as toxic. BIBREF24 build upon this study, using templates to study gender differences in performance across two hate speech and abusive language detection datasets. They find that classifiers trained on these data tend to perform worse when female identity terms used, indicating gender bias in performance. We build upon this work by auditing a series of abusive language and hate speech detection datasets for racial biases. We evaluate how classification models trained on these datasets perform in the field, comparing their predictions for tweets written in language used by whites or African-Americans. Hate speech and abusive language datasets We focus on Twitter, the most widely used data source in abusive language research. We use all available datasets where tweets are labeled as various types of abuse and are written in English. We now briefly describe each of these datasets in chronological order. BIBREF9 collected 130k tweets containing one of seventeen different terms or phrases they considered to be hateful. They then annotated a sample of these tweets themselves, using guidelines inspired by critical race theory. These annotators were then reviewed by “a 25 year old woman studying gender studies and a nonactivist feminist” to check for bias. This dataset consists of 16,849 tweets labeled as either racism, sexism, or neither. Most of the tweets categorized as sexist relate to debates over an Australian TV show and most of those considered as racist are anti-Muslim. To account for potential bias in the previous dataset, BIBREF8 relabeled 2876 tweets in the dataset, along with a new sample from the tweets originally collected. The tweets were annotated by “feminist and anti-racism activists”, based upon the assumption that they are domain-experts. A fourth category, racism and sexism was also added to account for the presence of tweets which exhibit both types of abuse. The dataset contains 6,909 tweets. 
BIBREF10 collected tweets containing terms from the Hatebase, a crowdsourced hate speech lexicon, then had a sample coded by crowdworkers located in the United States. To avoid false positives that occurred in prior work which considered all uses of particular terms as hate speech, crowdworkers were instructed not to make their decisions based upon any words or phrases in particular, no matter how offensive, but on the overall tweet and the inferred context. The dataset consists of 24,783 tweets annotated as hate speech, offensive language, or neither. BIBREF11 selected tweets using ten keywords and phrases related to anti-black racism, Islamophobia, homophobia, anti-semitism, and sexism. The authors developed a coding scheme to distinguish between potentially offensive content and serious harassment, such as threats or hate speech. After an initial round of coding, where tweets were assigned to a number of different categories, they simplified their analysis to include a binary harassment or non-harassment label for each tweet. The dataset consists of 20,360 tweets, each hand-labeled by the authors. BIBREF12 constructed a dataset intended to better approximate a real-world setting where abuse is relatively rare. They began with a random sample of tweets then augmented it by adding tweets containing one or more terms from the Hatebase lexicon and that had negative sentiment. They criticized prior work for defining labels in an ad hoc manner. To develop a more comprehensive annotation scheme they initially labeled a sample of tweets, allowing each tweet to belong to multiple classes. After analyzing the overlap between different classes they settled on a coding scheme with four distinct classes: abusive, hateful, spam, and normal. We use a dataset they published containing 91,951 tweets coded into these categories by crowdworkers. Training classifiers For each dataset we train a classifier to predict the class of unseen tweets. We use regularized logistic regression with bag-of-words features, a commonly used approach in the field. While we expect that we could improve predictive performance by using more sophisticated classifiers, we expect that any bias is likely a function of the training data itself rather than the classifier. Moreover, although features like word embeddings can work well for this task BIBREF25 we wanted to avoid inducing any bias in our models by using pre-trained embeddings BIBREF24 . We pre-process each tweet by removing excess white-space and replacing URLs and mentions with placeholders. We then tokenize them, stem each token, and construct n-grams with a maximum length of three. Next we transform each dataset into a TF-IDF matrix, with a maximum of 10,000 features. We use 80% of each dataset to train models and hold out the remainder for validation. Each model is trained using stratified 5-fold cross-validation. We conduct a grid-search over different regularization strength parameters to identify the best performing model. Finally, for each dataset we identify the model with the best average F1 score and retrain it using all of the training data. The performance of these models on the 20% held-out validation data is reported in Table 1 . Overall we see varying performance across the classifiers, with some performing much better out-of-sample than others. In particular, we see that hate speech and harassment are particularly difficult to detect. 
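The training setup just described maps closely onto a standard scikit-learn pipeline. The sketch below is illustrative rather than the authors' code: the stemming and URL/mention placeholder preprocessing are omitted, the regularization grid and F1 averaging are guesses, and tweets/labels stand in for one of the annotated datasets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split
from sklearn.pipeline import Pipeline

def train_classifier(tweets, labels):
    # 80/20 split, with 20% held out for validation as in Table 1.
    X_train, X_val, y_train, y_val = train_test_split(
        tweets, labels, test_size=0.2, stratify=labels, random_state=0)
    pipeline = Pipeline([
        # Bag-of-words n-grams up to length 3, capped at 10,000 TF-IDF features.
        ("tfidf", TfidfVectorizer(ngram_range=(1, 3), max_features=10000)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    # Stratified 5-fold CV with a grid search over regularization strength,
    # keeping the model with the best average F1.
    search = GridSearchCV(pipeline, {"clf__C": [0.01, 0.1, 1, 10]},
                          scoring="f1_macro", cv=StratifiedKFold(n_splits=5))
    search.fit(X_train, y_train)
    return search.best_estimator_, search.score(X_val, y_val)
```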
Since we are primarily interested in within classifier, between corpora performance, any variation between classifiers should not impact our results. Race dataset We use a dataset of tweets labeled by race from BIBREF13 to measure racial biases in these classifiers. They collected geo-located tweets in the U.S. and matched them with demographic data from the Census on the population of non-Hispanic whites, non-Hispanic blacks, Hispanics, and Asians in the block group where the tweets originated. They then identified words associated with particular demographics and trained a probabilistic mixed-membership language model. This model learns demographically-aligned language models for each of the four demographic categories and is used to calculate the posterior proportion of language from each category in each tweet. Their validation analyses indicate that tweets with a high posterior proportion of non-Hispanic black language exhibit lexical, phonological, and syntactic variation consistent with prior research on AAE. Their publicly-available dataset contains 59.2 million tweets. We define a user as likely non-Hispanic black if the average posterior proportion across all of their tweets for the non-Hispanic black language model is $\ge 0.80$ (and $\le 0.10$ Hispanic and Asian combined) and as non-Hispanic white using the same formula but for the white language model. This allows us to restrict our analysis to tweets written by users who predominantly use one of the language models. Due to space constraints we discard users who predominantly use either the Hispanic or the Asian language model. This results in a set of 1.1m tweets written by people who generally use non-Hispanic black language and 14.5m tweets written by users who tend to use non-Hispanic white language. Following BIBREF0 , we call these datasets black-aligned and white-aligned tweets, reflecting the fact that they contain language associated with either demographic category but which may not all be produced by members of these categories. We now describe how we use these data in our experiments.
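As a sketch of how the demographic alignment above and the bootstrap comparison described in the introduction could be implemented, the snippet below filters users by their average posterior proportions and then bootstrap-resamples a group's tweets to estimate how often a trained classifier assigns a given class. The column names and classifier interface are placeholders of our own, not the original code or data schema.

```python
import numpy as np
import pandas as pd

def aligned_tweets(df):
    """df: one row per tweet with columns user_id, text, and posterior
    proportions p_black, p_white, p_hispanic, p_asian (placeholder names)."""
    means = df.groupby("user_id")[["p_black", "p_white", "p_hispanic", "p_asian"]].mean()
    other = means["p_hispanic"] + means["p_asian"]
    black_users = means[(means["p_black"] >= 0.80) & (other <= 0.10)].index
    white_users = means[(means["p_white"] >= 0.80) & (other <= 0.10)].index
    return df[df.user_id.isin(black_users)], df[df.user_id.isin(white_users)]

def bootstrap_class_rate(model, texts, target_class, n_boot=1000, seed=0):
    """Bootstrap estimate of the proportion of tweets predicted as target_class."""
    rng = np.random.default_rng(seed)
    hits = np.asarray(model.predict(texts)) == target_class
    rates = [hits[rng.integers(0, len(hits), len(hits))].mean()
             for _ in range(n_boot)]
    return float(np.mean(rates)), np.percentile(rates, [2.5, 97.5])
```

Comparing the two groups' estimated rates for classes such as hate speech or harassment is what reveals the systematic differences reported in the paper.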
Unanswerable
b5484a0f03d63d091398d3ce4f841a45062438a7
b5484a0f03d63d091398d3ce4f841a45062438a7_0
Q: What is the introduced meta-embedding method introduced in this paper? Text: Introduction Representing the meanings of words is a fundamental task in Natural Language Processing (NLP). One popular approach to represent the meaning of a word is to embed it in some fixed-dimensional vector space (). In contrast to sparse and high-dimensional counting-based distributional word representation methods that use co-occurring contexts of a word as its representation (), dense and low-dimensional prediction-based distributed word representations have obtained impressive performances in numerous NLP tasks such as sentiment classification (), and machine translation (). Several distributed word embedding learning methods based on different learning strategies have been proposed (;;;;). Previous works studying the differences in word embedding learning methods (;) have shown that word embeddings learnt using different methods and from different resources have significant variation in quality and characteristics of the semantics captured. For example, Hill:NIPS:2014,Hill:ICLR:2015 showed that the word embeddings trained from monolingual vs. bilingual corpora capture different local neighbourhoods. Bansal:ACL:2014 showed that an ensemble of different word representations improves the accuracy of dependency parsing, implying the complementarity of the different word embeddings. This suggests the importance of meta-embedding – creating a new embedding by combining different existing embeddings. We refer to the input word embeddings to the meta-embedding process as the source embeddings. Yin:ACL:2016 showed that by meta-embedding five different pre-trained word embeddings, we can overcome the out-of-vocabulary problem, and improve the accuracy of cross-domain part-of-speech (POS) tagging. Encouraged by the above-mentioned prior results, we expect an ensemble containing multiple word embeddings to produce better performances than the constituent individual embeddings in NLP tasks. There are three main challenges a meta-embedding learning method must overcome. First, the vocabularies covered by the source embeddings might be different because they have been trained on different text corpora. Therefore, not all words will be equally represented by all the source embeddings. Even in situations where the implementations of the word embedding learning methods are publicly available, it might not be possible to retrain those embeddings because the text corpora on which those methods were originally trained might not be publicly available. Moreover, it is desirable if the meta-embedding method does not require the original resources upon which they were trained such as corpora or lexicons, and can directly work with the pre-trained word embeddings. This is particularly attractive from a computational point of view because re-training source embedding methods on large corpora might require significant processing times and resources. Second, the vector spaces and their dimensionalities of the source embeddings might be different. In most prediction-based word embedding learning methods the word vectors are randomly initialised. Therefore, there is no obvious correspondence between the dimensions in two word embeddings learnt even from two different runs of the same method, let alone from different methods (). Moreover, the pre-trained word embeddings might have different dimensionalities, which is often a hyperparameter set experimentally. 
This becomes a challenging task when incorporating multiple source embeddings to learn a single meta-embedding because the alignment between the dimensionalities of the source embeddings is unknown. Third, the local neighbourhoods of a particular word under different word embeddings show a significant diversity. For example, as the nearest neighbours of the word bank, GloVe (), a word sense insensitive embedding, lists credit, financial, cash, whereas word sense sensitive embeddings created by Huang:ACL:2012 lists river, valley, marsh when trained on the same corpus. We see that the nearest neighbours for the different senses of the word bank (i.e. financial institution vs. river bank) are captured by the different word embeddings. Meta-embedding learning methods that learn a single global projection over the entire vocabulary are insensitive to such local variations in the neighbourhoods (). To overcome the above-mentioned challenges, we propose a locally-linear meta-embedding learning method that (a) requires only the words in the vocabulary of each source embedding, without having to predict embeddings for missing words, (b) can meta-embed source embeddings with different dimensionalities, (c) is sensitive to the diversity of the neighbourhoods of the source embeddings. Our proposed method comprises of two steps: a neighbourhood reconstruction step (Section "Nearest Neighbour Reconstruction" ), and a projection step (Section "Projection to Meta-Embedding Space" ). In the reconstruction step, we represent the embedding of a word by the linearly weighted combination of the embeddings of its nearest neighbours in each source embedding space. Although the number of words in the vocabulary of a particular source embedding can be potentially large, the consideration of nearest neighbours enables us to limit the representation to a handful of parameters per each word, not exceeding the neighbourhood size. The weights we learn are shared across different source embeddings, thereby incorporating the information from different source embeddings in the meta-embedding. Interestingly, vector concatenation, which has found to be an accurate meta-embedding method, can be derived as a special case of this reconstruction step. Next, the projection step computes the meta-embedding of each word such that the nearest neighbours in the source embedding spaces are embedded closely to each other in the meta-embedding space. The reconstruction weights can be efficiently computed using stochastic gradient descent, whereas the projection can be efficiently computed using a truncated eigensolver. It is noteworthy that we do not directly compare different source embeddings for the same word in the reconstruction step nor in the projection step. This is important because the dimensions in source word embeddings learnt using different word embedding learning methods are not aligned. Moreover, a particular word might not be represented by all source embeddings. This property of the proposed method is attractive because it obviates the need to align source embeddings, or predict missing source word embeddings prior to meta-embedding. Therefore, all three challenges described above are solved by the proposed method. The above-mentioned properties of the proposed method enables us to compute meta-embeddings for five different source embeddings covering 2.7 million unique words. 
We evaluate the meta-embeddings learnt by the proposed method on semantic similarity prediction, analogy detection, relation classification, and short-text classification tasks. The proposed method significantly outperforms several competitive baselines and previously proposed meta-embedding learning methods () on multiple benchmark datasets. Related Work Yin:ACL:2016 proposed a meta-embedding learning method (1TON) that projects a meta-embedding of a word into the source embeddings using separate projection matrices. The projection matrices are learnt by minimising the sum of squared Euclidean distances between the projected source embeddings and the corresponding original source embeddings for all the words in the vocabulary. They propose an extension (1TON+) to their meta-embedding learning method that first predicts the source word embeddings for out-of-vocabulary words in a particular source embedding, using the known word embeddings. Next, the 1TON method is applied to learn the meta-embeddings for the union of the vocabularies covered by all of the source embeddings. Experimental results in semantic similarity prediction, word analogy detection, and cross-domain POS tagging tasks show the effectiveness of both 1TON and 1TON+. In contrast to our proposed method, which learns locally-linear projections that are sensitive to the variations in the local neighbourhoods in the source embeddings, 1TON and 1TON+ can be seen as globally linear projections between meta and source embedding spaces. As we see later in Section "Meta-Embedding Results", our proposed method outperforms both of those methods consistently in all benchmark tasks, demonstrating the importance of neighbourhood information when learning meta-embeddings. Moreover, our proposed meta-embedding method does not directly compare different source embeddings, thereby obviating the need to predict source embeddings for out-of-vocabulary words. Locally-linear embeddings are attractive from a computational point of view as well because during optimisation we require information from only the local neighbourhood of each word. Although they do not learn any meta-embeddings, several prior works have shown that incorporating multiple word embeddings learnt using different methods improves performance in various NLP tasks. For example, tsuboi:2014:EMNLP2014 showed that using both word2vec and GloVe embeddings together in a POS tagging task improves the tagging accuracy over using only one of those embeddings. Similarly, Turian:ACL:2010 collectively used Brown clusters, CW and HLBL embeddings to improve the performance of named entity recognition and chunking tasks. Luo:AAAI:2014 proposed a multi-view word embedding learning method that uses a two-sided neural network. They adapt pre-trained CBOW () embeddings from Wikipedia and click-through data from a search engine. Their problem setting is different from ours because their source embeddings are trained using the same word embedding learning method but on different resources, whereas we consider source embeddings trained using different word embedding learning methods and resources. Although their method could be potentially extended to meta-embed different source embeddings, the unavailability of their implementation prevented us from exploring this possibility. AAAI:2016:Goikoetxea showed that concatenating word embeddings learnt separately from a corpus and from WordNet produces superior word embeddings. 
Moreover, performing Principal Component Analysis (PCA) on the concatenated embeddings slightly improved the performance on word similarity tasks. In Section "Baselines", we discuss the relationship between the proposed method and vector concatenation. Problem Settings To explain the proposed meta-embedding learning method, let us consider two source word embeddings, denoted by $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$. Although we limit our discussion here to two source embeddings for the simplicity of the description, the proposed meta-embedding learning method can be applied to any number of source embeddings. Indeed in our experiments we consider five different source embeddings. Moreover, the proposed method is not limited to meta-embedding unigrams, and can be used for $n$-grams of any length $n$, provided that we have source embeddings for those $n$-grams. We denote the dimensionalities of $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ respectively by $d_{1}$ and $d_{2}$ (in general, $d_{1} \ne d_{2}$). The sets of words covered by each source embedding (i.e. vocabulary) are denoted by $\mathcal{V}_{1}$ and $\mathcal{V}_{2}$. The source embedding of a word $v \in \mathcal{V}_{1}$ is represented by a vector $\vec{v}^{(1)} \in \mathbb{R}^{d_{1}}$, whereas the same for a word $v \in \mathcal{V}_{2}$ by a vector $\vec{v}^{(2)} \in \mathbb{R}^{d_{2}}$. Let the set union of $\mathcal{V}_{1}$ and $\mathcal{V}_{2}$ be $\mathcal{V}$, containing $N$ words. In particular, note that our proposed method does not require a word $v \in \mathcal{V}$ to be represented by all source embeddings, and can operate on the union of the vocabularies of the source embeddings. The meta-embedding learning problem is then to learn an embedding $\vec{v}^{(\mathcal{M})} \in \mathbb{R}^{d_{\mathcal{M}}}$ in a meta-embedding space $\mathcal{M}$ with dimensionality $d_{\mathcal{M}}$ for each word $v \in \mathcal{V}$. For a word $v$, we denote its $k$-nearest neighbour sets in embedding spaces $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ respectively by $\mathcal{N}_{1}(v)$ and $\mathcal{N}_{2}(v)$ (in general, $|\mathcal{N}_{1}(v)| \ne |\mathcal{N}_{2}(v)|$). As discussed already in Section "Problem Settings", different word embedding methods encode different aspects of lexical semantics, and are likely to have different local neighbourhoods. Therefore, by requiring the meta-embedding to consider different neighbourhood constraints in the source embedding spaces we hope to exploit the complementarity in the source embeddings. Nearest Neighbour Reconstruction The first step in learning a locally linear meta-embedding is to reconstruct each source word embedding using a linearly weighted combination of its $k$-nearest neighbours. Specifically, we construct each word $v \in \mathcal{V}$ separately from its $k$-nearest neighbours $\mathcal{N}_{1}(v)$ and $\mathcal{N}_{2}(v)$. The reconstruction weight $w_{vu}$ assigned to a neighbour $u \in \mathcal{N}_{1}(v) \cup \mathcal{N}_{2}(v)$ is found by minimising the reconstruction error $\Phi({W})$ defined by ( "Nearest Neighbour Reconstruction" ), which is the sum of local distortions in the two source embedding spaces: $$\Phi({W}) = \sum_{i=1}^{2} \sum_{v \in \mathcal{V}} \left\Vert \vec{v}^{(i)} - \sum_{u \in \mathcal{N}_{i}(v)} w_{vu} \vec{u}^{(i)} \right\Vert_{2}^{2}$$ Words that are not $k$-nearest neighbours of $v$ in either of the source embedding spaces will have their weights set to zero (i.e. $w_{vu} = 0$ for $u \notin \mathcal{N}_{1}(v) \cup \mathcal{N}_{2}(v)$). Moreover, we require the sum of reconstruction weights for each $v \in \mathcal{V}$ to be equal to one (i.e. $\sum_{u \in \mathcal{V}} w_{vu} = 1$). To compute the weights $w_{vu}$ that minimise ( "Nearest Neighbour Reconstruction" ), we compute its error gradient $\frac{\partial \Phi({W})}{\partial w_{vu}}$ as follows: $$\frac{\partial \Phi({W})}{\partial w_{vu}} = -2 \sum_{i=1}^{2} \left( \vec{v}^{(i)} - \sum_{x \in \mathcal{N}_{i}(v)} w_{vx} \vec{x}^{(i)} \right)^{\top} \vec{u}^{(i)}\, \mathbb{I}[u \in \mathcal{N}_{i}(v)]$$ Here, the indicator function, $\mathbb{I}[x]$, returns 1 if $x$ is true and 0 otherwise. 
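A minimal NumPy sketch of one gradient update implied by this expression is given below; all names are illustrative, plain SGD is used for brevity (the optimisation details actually used, including AdaGrad and the final normalisation, are described next), and the per-step renormalisation is a simplification of that procedure.

```python
import numpy as np

def reconstruction_step(v, neighbours, weights, sources, lr=0.01):
    """One gradient step on the reconstruction error Phi(W) for a single word v.

    sources    : list of dicts, one per source embedding, mapping word -> vector
    neighbours : list of sets, neighbours[i] = k-nearest neighbours of v in source i
    weights    : dict mapping neighbour u -> shared weight w_vu
    """
    grads = {u: 0.0 for u in weights}
    for src, nbrs in zip(sources, neighbours):
        if v not in src:
            continue
        # residual of reconstructing v from its neighbours in this source space
        residual = src[v] - sum(weights[u] * src[u] for u in nbrs if u in weights)
        for u in nbrs:
            if u in weights:
                grads[u] += -2.0 * float(residual @ src[u])
    for u in weights:
        weights[u] -= lr * grads[u]
    # renormalise so the weights for v sum to one (done once after training in the text)
    total = sum(weights.values())
    if total != 0:
        for u in weights:
            weights[u] /= total
    return weights
```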
We uniformly randomly initialise the weights $w_{vu}$ for each neighbour $u$ of $v$, and use stochastic gradient descent (SGD) with the learning rate scheduled by AdaGrad () to compute the optimal values of the weights. The initial learning rate is set to $0.01$ and the maximum number of iterations to 100 in our experiments. Empirically, we found these settings to be adequate for convergence. Finally, we normalise the weights $w_{vu}$ for each $v$ such that they sum to 1 (i.e. $\sum_{u} w_{vu} = 1$). Exact computation of $k$ nearest neighbours for a given data point in a set of $n$ points requires all pairwise similarity computations. Because we must repeat this process for each data point in the set, this operation would require a time complexity of $\mathcal{O}(n^{3})$. This is prohibitively large for the vocabularies we consider in NLP where typically $n>10^{3}$. Therefore, we resort to approximate methods for computing $k$ nearest neighbours. Specifically, we use the BallTree algorithm () to efficiently compute the approximate $k$-nearest neighbours, for which the time complexity of tree construction is $\mathcal{O}(n \log n)$ for $n$ data points. The solution to the least squares problem given by ( "Nearest Neighbour Reconstruction" ) subject to the summation constraints can be found by solving a set of linear equations. The time complexity of this step is $\mathcal{O}(N (d_{1} |\mathcal{N}_{1}|^{3} + d_{2} |\mathcal{N}_{2}|^{3}))$, which is cubic in the neighbourhood size and linear in both the dimensionalities of the embeddings and the vocabulary size. However, we found the iterative estimation process using SGD described above to be more efficient in practice. Because $k$ is significantly smaller than the number of words in the vocabulary, and often the word being reconstructed is contained in the neighbourhood, the reconstruction weight computation converges after a small number (less than 5 in our experiments) of iterations. Projection to Meta-Embedding Space In the second step of the proposed method, we compute the meta-embeddings $\vec{v}^{(\mathcal{M})}, \vec{u}^{(\mathcal{M})} \in \mathbb{R}^{d_{\mathcal{M}}}$ for words $v, u \in \mathcal{V}$ using the reconstruction weights $w_{vu}$ we computed in Section "Nearest Neighbour Reconstruction". Specifically, the meta-embeddings must minimise the projection cost, $\Psi(\mathcal{M})$, defined by (4). $$\Psi(\mathcal{M}) = \sum_{v \in \mathcal{V}} \left\Vert \vec{v}^{(\mathcal{M})} - \sum_{i=1}^{2}\sum_{u \in \mathcal{N}_{i}(v)} w_{vu}\vec{u}^{(\mathcal{M})} \right\Vert_{2}^{2}$$ (Eq. 4) By finding a meta-embedding space $\mathcal{M}$ that minimises (4), we hope to preserve the rich neighbourhood diversity in all source embeddings within the meta-embedding. The two summations in (4) over $\mathcal{N}_{1}(v)$ and $\mathcal{N}_{2}(v)$ can be combined to re-write (4) as follows: $$\Psi(\mathcal{M}) = \sum_{v \in \mathcal{V}} \left\Vert \vec{v}^{(\mathcal{M})} - \sum_{u \in \mathcal{N}_{1}(v) \cup \mathcal{N}_{2}(v)} w^{\prime}_{vu} \vec{u}^{(\mathcal{M})} \right\Vert_{2}^{2}$$ (Eq. 5) Here, $w^{\prime}_{vu}$ is computed using (6). $$w^{\prime}_{vu} = w_{vu}\sum_{i=1}^{2} \mathbb{I}[u \in \mathcal{N}_{i}(v)]$$ (Eq. 6) The $d_{\mathcal{M}}$-dimensional meta-embeddings are given by the eigenvectors corresponding to the smallest $(d_{\mathcal{M}} + 1)$ eigenvalues of the matrix ${M}$ given by (7). $${M} = ({I} - {W}^{\prime})^{\top}({I} - {W}^{\prime})$$ (Eq. 7) Here, ${W}^{\prime}$ is a matrix with the $(v,u)$ element set to $w^{\prime}_{vu}$. The smallest eigenvalue of ${M}$ is zero and the corresponding eigenvector is discarded from the projection. The eigenvectors corresponding to the next smallest $d_{\mathcal{M}}$ eigenvalues of the symmetric matrix ${M}$ can be found without performing a full matrix diagonalisation (). 
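A minimal sketch of this projection step follows, assuming SciPy's sparse eigensolver; the matrix construction mirrors (7), while the shift-invert setting is an implementation choice not taken from the paper (a small negative sigma may be needed if factorising the singular ${M}$ fails).

```python
import numpy as np
from scipy.sparse import identity, csr_matrix
from scipy.sparse.linalg import eigsh

def project_to_meta(W_prime, d_meta=300):
    """Compute meta-embeddings from the sparse weight matrix W' of shape (N, N).

    Returns an (N, d_meta) array whose rows are the meta-embeddings; the
    eigenvector belonging to the zero eigenvalue is discarded, as in the text.
    """
    N = W_prime.shape[0]
    A = identity(N, format="csr") - csr_matrix(W_prime)
    M = A.T @ A                                   # M = (I - W')^T (I - W'), kept sparse
    # Smallest (d_meta + 1) eigenpairs via shift-invert around zero.
    vals, vecs = eigsh(M, k=d_meta + 1, sigma=0.0, which="LM")
    order = np.argsort(vals)
    vecs = vecs[:, order]
    return vecs[:, 1:d_meta + 1]                  # drop the eigenvector for eigenvalue ~0
```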
Operations involving ${M}$, such as the left multiplication by ${M}$ required by most sparse eigensolvers, can exploit the fact that ${M}$ is expressed in (7) as the product between two sparse matrices. Moreover, truncated randomised methods () can be used to find the eigenvectors corresponding to the smallest eigenvalues, without performing full eigen decompositions. In our experiments, we set the neighbourhood sizes for all words in all source embeddings equal to $n$ (i.e. $|\mathcal{N}_{1}(v)| = |\mathcal{N}_{2}(v)| = n$ for all $v$), and project to a $d_{\mathcal{M}}$-dimensional meta-embedding space. Source Word Embeddings We use five previously proposed pre-trained word embedding sets as the source embeddings in our experiments: (a) HLBL – hierarchical log-bilinear () embeddings released by Turian:ACL:2010 (246,122 word embeddings, 100 dimensions, trained on the Reuters Newswire (RCV1) corpus), (b) Huang – Huang:ACL:2012 used global contexts to train multi-prototype word embeddings that are sensitive to word senses (100,232 word embeddings, 50 dimensions, trained on an April 2010 snapshot of Wikipedia), (c) GloVe – Pennington:EMNLP:2014 used global co-occurrences of words over a corpus to learn word embeddings (1,193,514 word embeddings, 300 dimensions, trained on a 42 billion token corpus of web-crawled texts), (d) CW – Collobert:ICML:2008 learnt word embeddings following a multitask learning approach covering multiple NLP tasks (we used the version released by () trained on the same corpus as HLBL, containing 268,810 word embeddings, 200 dimensions), (e) CBOW – Mikolov:NIPS:2013 proposed the continuous bag-of-words method to train word embeddings (we discarded phrase embeddings and selected 929,922 word embeddings, 300 dimensions, trained on the Google News corpus containing ca. 100 billion words). The intersection of the five vocabularies is 35,965 words, whereas their union is 2,788,636. Although any word embedding can be used as a source, we select the above-mentioned word embeddings because (a) our goal in this paper is not to compare the differences in performance of the source embeddings, and (b) by using the same source embeddings as in prior work (), we can perform a fair evaluation. In particular, we could use word embeddings trained by the same algorithm but on different resources, or different algorithms on the same resources, as the source embeddings. We defer such evaluations to an extended version of this conference submission. Evaluation Tasks The standard protocol for evaluating word embeddings is to use the embeddings in some NLP task and to measure the relative increase (or decrease) in performance in that task. We use four such extrinsic evaluation tasks. Semantic similarity prediction: We measure the similarity between two words as the cosine similarity between the corresponding embeddings, and measure the Spearman correlation coefficient against the human similarity ratings. We use Rubenstein and Goodenough's dataset () (RG, 65 word-pairs), the rare words dataset (RW, 2034 word-pairs) (), Stanford's contextual word similarities (SCWS, 2023 word-pairs) (), the MEN dataset (3000 word-pairs) (), and the SimLex dataset () (SL, 999 word-pairs). In addition, we use Miller and Charles' dataset () (MC, 30 word-pairs) as a validation dataset to tune various hyperparameters such as the neighbourhood size and the dimensionality of the meta-embeddings for the proposed method and baselines. Word analogy detection: Using the CosAdd method, we solve word-analogy questions in the Google dataset (GL) () (19544 questions), and in the SemEval (SE) dataset (). 
Specifically, for three given words $a$, $b$ and $c$, we find a fourth word $d$ that correctly answers the question “$a$ to $b$ is $c$ to what?” such that the cosine similarity between the two vectors $(\vec{b} - \vec{a} + \vec{c})$ and $\vec{d}$ is maximised. Relation classification: We use the DiffVec (DV) () dataset containing 12,458 triples of the form $(\textrm{relation}, \textrm{word}_{1}, \textrm{word}_{2})$ covering 15 relation types. We train a 1-nearest neighbour classifier where, for each target tuple, we measure the cosine similarity between the vector offset of its two word embeddings and those of the remaining tuples in the dataset. If the top-ranked tuple has the same relation as the target tuple, then it is considered to be a correct match. We compute the (micro-averaged) classification accuracy over the entire dataset as the evaluation measure. Short-text classification: We use two binary short-text classification datasets: the Stanford sentiment treebank (TR) (903 positive test instances and 903 negative test instances), and the movie reviews dataset (MR) () (5331 positive instances and 5331 negative instances). Each review is represented as a bag-of-words and we compute the centroid of the embeddings of the words in each bag to represent that review. Next, we train a binary logistic regression classifier with a cross-validated $\ell_{2}$ regulariser using the train portion of each dataset, and evaluate the classification accuracy using the test portion of the dataset. Baselines A simple baseline method for combining pre-trained word embeddings is to concatenate the embedding vectors for a word $w$ to produce a meta-embedding for $w$. Each source embedding of $w$ is $\ell_{2}$ normalised prior to concatenation such that each source embedding contributes equally (a value in $[-1,1]$) when measuring the word similarity using the dot product. As also observed by Yin:ACL:2016, we found that CONC performs poorly without emphasising GloVe and CBOW by a constant factor (which is set to 8 using MC as a validation dataset) when used in conjunction with HLBL, Huang, and CW source embeddings. Interestingly, concatenation can be seen as a special case of the reconstruction step described in Section "Nearest Neighbour Reconstruction". To see this, let us denote the concatenation of column vectors $\vec{v}^{(1)}$ and $\vec{v}^{(2)}$ by $\vec{x} = (\vec{v}^{(1)}; \vec{v}^{(2)})$, and $\vec{u}^{(1)}$ and $\vec{u}^{(2)}$ by $\vec{y} = (\vec{u}^{(1)}; \vec{u}^{(2)})$, where $\vec{x}, \vec{y} \in \mathbb{R}^{d_{1} + d_{2}}$. Then, the reconstruction error defined by ( "Nearest Neighbour Reconstruction" ) can be written as follows: $$\Phi({W}) = \sum_{v \in \mathcal{V}} \left\Vert \vec{x} - \sum_{u \in \mathcal{N}(v)} w_{vu}\vec{y} \right\Vert_{2}^{2}$$ (Eq. 23) Here, the vocabulary $\mathcal{V}$ is constrained to the intersection $\mathcal{V}_{1} \cap \mathcal{V}_{2}$ because concatenation is not defined for missing words in a source embedding. Alternatively, one could use zero vectors for missing words or (better) predict the word embeddings for missing words prior to concatenation. However, we consider such extensions to be beyond the simple concatenation baseline we consider here. On the other hand, the common neighbourhood $\mathcal{N}(v)$ in (23) can be obtained either by limiting $\mathcal{N}(v)$ to $\mathcal{N}_{1}(v) \cap \mathcal{N}_{2}(v)$ or by extending the neighbourhoods to the entire vocabulary ($\mathcal{N}(v) = \mathcal{V}$). (23) shows that under those neighbourhood constraints, the first step in our proposed method can be seen as reconstructing the neighbourhood of the concatenated space. 
The second step would then find meta-embeddings that preserve the locally linear structure in the concatenated space. One drawback of concatenation is that it increases the dimensionality of the meta-embeddings compared to the source embeddings, which might be problematic when storing or processing the meta-embeddings (for example, for the five source embeddings we use here $d = 100 + 50 + 300 + 200 + 300 = 950$). We create an $N \times 950$ matrix ${C}$ by arranging the CONC vectors for the union of all source embedding vocabularies. For words that are missing in a particular source embedding, we assign zero vectors of that source embedding's dimensionality. Next, we perform SVD on ${C} = {U}{D}{V}^{\top}$, where ${U}$ and ${V}$ are unitary matrices and the diagonal matrix ${D}$ contains the singular values of ${C}$. We then select the $d$ largest left singular vectors from ${U}$ to create $d$-dimensional embeddings for the $N$ words, with $d = 300$; the corresponding rows of ${U}$ are used as the SVD meta-embeddings. Meta-Embedding Results Using the MC dataset, we find the best values for the neighbourhood size $n = 1200$ and dimensionality $d_{\mathcal{M}} = 300$ for the Proposed method. We plan to publicly release our meta-embeddings on acceptance of the paper. We summarise the experimental results for different methods on different tasks/datasets in Table 1. In Table 1, rows 1-5 show the performance of the individual source embeddings. Next, we perform ablation tests (rows 6-20) where we hold out one source embedding and use the other four with each meta-embedding method. We evaluate statistical significance against the best-performing individual source embedding on each dataset. For the semantic similarity benchmarks we use the Fisher transformation to compute $p < 0.05$ confidence intervals for Spearman correlation coefficients. In all other (classification) datasets, we used Clopper-Pearson binomial exact confidence intervals at $p < 0.05$. Among the individual source embeddings, we see that GloVe and CBOW stand out as the two best embeddings. This observation is further confirmed by the ablation results, where the removal of GloVe or CBOW often results in a decrease in performance. Performing SVD (rows 11-15) after concatenating does not always result in an improvement. SVD is a global projection that reduces the dimensionality of the meta-embeddings created via concatenation. This result indicates that different source embeddings might require different levels of dimensionality reduction, and applying a single global projection does not always guarantee improvements. Ensemble methods that use all five source embeddings are shown in rows 21-25. 1TON and 1TON+ are proposed by Yin:ACL:2016, and were detailed in Section "Related Work". Because they did not evaluate on all tasks that we do here, to conduct a fair and consistent evaluation we used their publicly available meta-embeddings without retraining them ourselves. Overall, from Table 1, we see that the Proposed method (row 25) obtains the best performance in all tasks/datasets. In 6 out of 12 benchmarks, this improvement is statistically significant over the best single source embedding. Moreover, in the MEN dataset (the largest among the semantic similarity benchmarks compared in Table 1 with 3000 word-pairs), and the Google dataset, the improvements of the Proposed method over the previously proposed 1TON and 1TON+ are statistically significant. 
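As an aside on the significance testing just mentioned, here is a minimal SciPy sketch of both interval types; the function names and the two-sided 0.05 level are illustrative, and the Fisher-transform interval is the plain version without any Spearman-specific correction.

```python
import numpy as np
from scipy import stats

def fisher_ci(rho, n, alpha=0.05):
    """Fisher-transformation confidence interval for a correlation coefficient
    computed from n pairs (used here for Spearman's rho)."""
    z = np.arctanh(rho)
    half_width = stats.norm.ppf(1 - alpha / 2) / np.sqrt(n - 3)
    return np.tanh(z - half_width), np.tanh(z + half_width)

def clopper_pearson_ci(correct, total, alpha=0.05):
    """Clopper-Pearson exact confidence interval for a classification accuracy."""
    lower = stats.beta.ppf(alpha / 2, correct, total - correct + 1) if correct > 0 else 0.0
    upper = stats.beta.ppf(1 - alpha / 2, correct + 1, total - correct) if correct < total else 1.0
    return lower, upper
```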
The ablation results for the Proposed method show that, although different source embeddings are important to different degrees, by using all source embeddings we can obtain the best results. Different source embeddings are trained from different resources and by optimising different objectives. Therefore, for different words, the local neighbours predicted by different source embeddings will be complementary. Unlike the other methods, the Proposed method never compares different source embeddings' vectors directly, but only via the neighbourhood reconstruction weights. Consequently, the Proposed method is unaffected by the relative weighting of source embeddings. In contrast, CONC is highly sensitive to the weighting. In fact, we confirmed that the performance scores of the CONC method decreased by 3–10 points when we did not perform the weight tuning described in Section "Evaluation Tasks". Not requiring weight tuning is thus a clear advantage of the Proposed method. To investigate the effect of the dimensionality $d_{\mathcal{M}}$ on the meta-embeddings learnt by the proposed method, in fig:k, we fix the neighbourhood size $n = 1200$ and measure the performance on semantic similarity measurement tasks when varying $d_{\mathcal{M}}$. Overall, we see that the performance peaks around $d_{\mathcal{M}} = 300$. Such behaviour can be explained by the fact that smaller $d_{\mathcal{M}}$ dimensions are unable to preserve the information contained in the source embeddings, whereas increasing $d_{\mathcal{M}}$ beyond the rank of the weight matrix ${W}$ is likely to generate noisy eigenvectors. In fig:n, we study the effect of increasing the neighbourhood size $n$ equally for all words in all source embeddings, while fixing the dimensionality of the meta-embedding at $d_{\mathcal{M}} = 300$. Initially, performance increases with the neighbourhood size and then saturates. This implies that in practice a small local neighbourhood is adequate to capture the differences in source embeddings. Complementarity of Resources We have shown empirically in Section "Meta-Embedding Results" that using the proposed method it is possible to obtain superior meta-embeddings from a diverse set of source embeddings. One important scenario where meta-embedding could be potentially useful is when the source embeddings are trained on different complementary resources, where each resource shares little common vocabulary. For example, one source embedding might have been trained on Wikipedia whereas a second source embedding might have been trained on tweets. To evaluate the effectiveness of the proposed meta-embedding learning method under such settings, we design the following experiment. We select the MEN dataset, the largest among all semantic similarity benchmarks, which contains 751 unique words in 3000 human-rated word-pairs for semantic similarity. Next, we randomly split the set of words into two sets with different overlap ratios. We then select sentences from the January 2017 dump of Wikipedia that contain words from only one of the two sets. We create two corpora with roughly equal numbers of sentences via this procedure for different overlap ratios. We train skip-gram with negative sampling (SGNS) () on one corpus to create source embedding $S_{1}$ and GloVe () on the other corpus to create source embedding $S_{2}$. Finally, we use the proposed method to meta-embed $S_{1}$ and $S_{2}$. 
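Both this experiment and the similarity benchmarks above use the same evaluation protocol, sketched minimally below: Spearman's rho between cosine similarities and human ratings over the word pairs covered by an embedding (all names are illustrative).

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(embeddings, pairs_with_ratings):
    """Score an embedding on a word-similarity benchmark such as MEN.

    embeddings         : dict mapping word -> vector
    pairs_with_ratings : iterable of (word1, word2, human_rating)
    Returns Spearman's rho between cosine similarities and human ratings,
    computed over the pairs covered by the embedding.
    """
    predicted, gold = [], []
    for w1, w2, rating in pairs_with_ratings:
        if w1 in embeddings and w2 in embeddings:
            v1, v2 = embeddings[w1], embeddings[w2]
            cos = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
            predicted.append(cos)
            gold.append(rating)
    rho, _ = spearmanr(predicted, gold)
    return rho
```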
Figure 2 shows the Spearman correlation between the human similarity ratings and cosine similarities computed using the word embeddings on the MEN dataset for $S_{1}$, $S_{2}$ and their meta-embeddings created using the proposed method (Meta) and the concatenation baseline (CONC). From Figure 2, we see that the meta-embeddings obtain the best performance across all overlap ratios. The improvements are larger when the overlap between the corpora is smaller, and diminish when the two corpora become identical. This result shows that our proposed meta-embedding learning method captures the complementary information available in different source embeddings to create more accurate word embeddings. Moreover, it shows that by considering the local neighbourhoods in each of the source embeddings separately, we can obviate the need to predict embeddings for missing words in a particular source embedding, which was a limitation in the method proposed by Yin:ACL:2016. Conclusion We proposed an unsupervised locally linear method for learning meta-embeddings from a given set of pre-trained source embeddings. Experiments on several NLP tasks show the accuracy of the proposed method, which outperforms previously proposed meta-embedding learning methods on multiple benchmark datasets. In future work, we plan to extend the proposed method to learn cross-lingual meta-embeddings by incorporating both cross-lingual as well as monolingual information.
proposed method comprises two steps: a neighbourhood reconstruction step (Section "Nearest Neighbour Reconstruction" ), and a projection step (Section "Projection to Meta-Embedding Space" ). In the reconstruction step, we represent the embedding of a word by the linearly weighted combination of the embeddings of its nearest neighbours in each source embedding space.
18d8b52b4409c718bf1cc90ce9e013206034bbd9
18d8b52b4409c718bf1cc90ce9e013206034bbd9_0
Q: How long are dialogue recordings used for evaluation? Text: Introduction Our main goal is to develop a monaural conversation transcription system that can not only perform automatic speech recognition (ASR) of multiple talkers but also determine who spoke the utterance when, known as speaker diarization BIBREF0, BIBREF1. For both ASR and speaker diarization, the main difficulty comes from speaker overlaps. For example, a speaker-overlap ratio of about 15% was reported in real meeting recordings BIBREF2. For such overlapped speech, neither conventional ASR nor speaker diarization provides a result with sufficient accuracy. It is known that mixing two speech significantly degrades ASR accuracy BIBREF3, BIBREF4, BIBREF5. In addition, no speaker overlaps are assumed with most conventional speaker diarization techniques, such as clustering of speech partitions (e.g. BIBREF0, BIBREF6, BIBREF7, BIBREF8, BIBREF9), which works only if there are no speaker overlaps. Due to these difficulties, it is still very challenging to perform ASR and speaker diarization for monaural recordings of conversation. One solution to the speaker-overlap problem is applying a speech-separation method such as deep clustering BIBREF10 or deep attractor network BIBREF11. However, a major drawback of such a method is that the training criteria for speech separation do not necessarily maximize the accuracy of the final target tasks. For example, if the goal is ASR, it will be better to use training criteria that directly maximize ASR accuracy. In one line of research using ASR-based training criteria, multi-speaker ASR based on permutation invariant training (PIT) has been proposed BIBREF3, BIBREF12, BIBREF13, BIBREF14, BIBREF15. With PIT, the label-permutation problem is solved by considering all possible permutations when calculating the loss function BIBREF16. PIT was first proposed for speech separation BIBREF16 and soon extended to ASR loss with promising results BIBREF3, BIBREF12, BIBREF13, BIBREF14, BIBREF15. However, a PIT-ASR model produces transcriptions for each utterance of speakers in an unordered manner, and it is no longer straightforward to solve speaker permutations across utterances. To make things worse, a PIT model trained with ASR-based loss normally does not produce separated speech waveforms, which makes speaker tracing more difficult. In another line of research, target-speaker (TS) ASR, which automatically extracts and transcribes only the target speaker's utterances given a short sample of that speaker's speech, has been proposed BIBREF17, BIBREF4. Žmolíková et al. proposed a target-speaker neural beamformer that extracts a target speaker's utterances given a short sample of that speaker's speech BIBREF17. This model was recently extended to handle ASR-based loss to maximize ASR accuracy with promising results BIBREF4. TS-ASR can naturally solve the speaker-permutation problem across utterances. Importantly, if we can execute TS-ASR for each speaker correctly, speaker diarization is solved at the same time just by extracting the start and end time information of the TS-ASR result. However, one obvious drawback of TS-ASR is that it cannot be applied when the speakers in the recordings are unknown because it requires a sample of the target speakers in advance of decoding. Based on this background, we propose a speech recognition and speaker diarization method that is based on TS-ASR but can be applied without knowing the speaker information in advance. 
To remove the limitation of TS-ASR, we propose an iterative method, in which (i) the estimation of target-speaker embeddings and (ii) TS-ASR based on the estimated embeddings are alternately executed. As an initial trial, we evaluated the proposed method by using real dialogue recordings in the Corpus of Spontaneous Japanese (CSJ). Although it contains the speech of only two speakers, the speaker-overlap ratio of the dialogue speech is very high: 20.1%. Thus, this is very challenging even for state-of-the-art ASR and speaker diarization. We show that the proposed method effectively reduced both word error rate (WER) and diarization error rate (DER). Simultaneous ASR and Speaker Diarization In this section, we first explain the problem we targeted and then the proposed method with reference to Figure FIGREF1. Simultaneous ASR and Speaker Diarization ::: Problem statement The overview of the problem is shown in Figure FIGREF1 (left). We assume a sequence of observations $\mathcal {X}=\lbrace {\bf X}_1,...,{\bf X}_U\rbrace $, where $U$ is the number of observations, and ${\bf X}_u$ is the $u$-th observation consisting of a sequence of acoustic features. Such a sequence is naturally generated when we separate a long recording into small segments based on voice activity detection, which is a basic preprocessing step for ASR, so as not to generate overly large lattices. We also assume a tuple of word hypotheses ${\bf W}_u=(W_{1,u},...,W_{J,u})$ for an observation ${\bf X}_u$, where $J$ is the number of speakers, and $W_{j,u}$ represents the speech-recognition hypothesis of the $j$-th speaker given observation ${\bf X}_u$. We assume $W_{j,u}$ contains not only word sequences but also their corresponding frame-level time alignments of phonemes and silences. Finally, we assume a tuple of speaker embeddings $\mathcal {E}=(e_1, ..., e_J)$, where $e_j\in \mathbb {R}^d$ represents the $d$-dim speaker embedding of the $j$-th speaker. Then, our objective is to find the best possible $\mathcal {W}=\lbrace {\bf W}_1,...,{\bf W}_U\rbrace $ given a sequence of observations $\mathcal {X}$ as follows. Here, the starting point is the conventional maximum a posteriori-based decoding given $\mathcal {X}$ but for multiple speakers. We then introduce the speaker embeddings $\mathcal {E}$ as a hidden variable (Eq. ). Finally, we approximate the summation by using a max operation (Eq. ). Our motivation to introduce $\mathcal {E}$, which is constant across all observation indices $u$, is to explicitly enforce the order of speakers in $\mathcal {W}$ to be constant over indices $u$. It should be emphasized that if we can solve the problem, speaker diarization is solved at the same time just by extracting the start and end time information of each hypothesis in $\mathcal {W}$. Also note that there are $J!$ possible solutions by swapping the order of speakers in $\mathcal {E}$, and it is sufficient to find just one such solution. Simultaneous ASR and Speaker Diarization ::: Iterative maximization It is not easy to directly solve $P(\mathcal {W},\mathcal {E}|\mathcal {X})$, so we propose to alternately maximize $\mathcal {W}$ and $\mathcal {E}$. Namely, we first fix $\underline{\mathcal {W}}$ and find $\mathcal {E}$ that maximizes $P(\underline{\mathcal {W}},\mathcal {E}|\mathcal {X})$. We then fix $\underline{\mathcal {E}}$ and find $\mathcal {W}$ that maximizes $P(\mathcal {W},\underline{\mathcal {E}}|\mathcal {X})$. By iterating this procedure, $P(\mathcal {W},\mathcal {E}|\mathcal {X})$ can be increased monotonically. 
Note that it can be said by a simple application of the chain rule that finding $\mathcal {E}$ that maximizes $P(\underline{\mathcal {W}},\mathcal {E}|\mathcal {X})$ with a fixed $\underline{\mathcal {W}}$ is equivalent to finding $\mathcal {E}$ that maximizes $P(\mathcal {E}|\underline{\mathcal {W}},\mathcal {X})$. The same thing can be said for the estimation of $\mathcal {W}$ with a fixed $\underline{\mathcal {E}}$. For the $(i)$-th iteration of the maximization ($i\in \mathbb {Z}^{\ge 0}$), we first find the most plausible estimation of $\mathcal {E}$ given the $(i-1)$-th speech-recognition hypothesis $\tilde{\mathcal {W}}^{(i-1)}$ as follows. Here, the estimation of $\tilde{\mathcal {E}}^{(i)}$ is dependent on $\tilde{\mathcal {W}}^{(i-1)}$ for $i \ge 1$. Assuming that the overlapped speech corresponds to a “third person” who is different from any person in the recording, Eq. DISPLAY_FORM5 can be solved by estimating the speaker embeddings only from non-overlapped regions (upper part of Figure FIGREF1 (right)). In this study, we used i-vector BIBREF18 as the representation of speaker embeddings, and estimated the i-vector based only on the non-overlapped regions given $\tilde{\mathcal {W}}^{(i-1)}$ for each speaker. Note that, since we do not have an estimation of $\mathcal {W}$ for the first iteration, $\tilde{\mathcal {E}}^{(0)}$ is initialized only by $\mathcal {X}$. In this study, we estimated the i-vector for each speaker given the speech region that was estimated by the clustering-based speaker diarization method. More precisely, we estimated the i-vector for each ${\bf X}_u$ and then applied $J$-cluster K-means clustering. The center of each cluster was used for the initial speaker embeddings $\tilde{\mathcal {E}}^{(0)}$. We then update $\mathcal {W}$ given speaker embeddings $\tilde{\mathcal {E}}^{(i)}$. Here, we estimate the most plausible hypotheses $\mathcal {W}$ given estimated embeddings $\tilde{\mathcal {E}}^{(i)}$ and observation $\mathcal {X}$ (Eq. DISPLAY_FORM8). We then assume the conditional independence of ${\bf W}_u$ given ${\bf X}_u$ for each segment $u$ (Eq. ). Finally, we further assume the conditional independence of $W_{j,u}$ given $\tilde{e}_j^{(i)}$ for each speaker $j$ (Eq. ). The final equation can be solved by applying TS-ASR for each segment $u$ for each speaker $j$ (lower part of Figure FIGREF1 (right)). We will review the details of TS-ASR in the next section. TS-ASR: Review ::: Overview of TS-ASR TS-ASR is a technique to extract and recognize only the speech of a target speaker given a short sample utterance of that speaker BIBREF17, BIBREF21, BIBREF4. Originally, the sample utterance was fed into a special neural network that outputs an averaged embedding to control the weighting of speaker-dependent blocks of the acoustic model (AM). However, to make the problem simpler, we assume that a $d$-dimensional speaker embedding $e_{\rm tgt}\in \mathbb {R}^d$ is extracted from the sample utterance. In this context, TS-ASR can be expressed as the problem of finding the best hypothesis $W_{\rm tgt}$ given observation ${\bf X}$ and speaker embedding $e_{\rm tgt}$ as follows. If we have a well-trained TS-ASR, Eq. can be solved by simply applying the TS-ASR for each segment $u$ for each speaker $j$. TS-ASR: Review ::: TS-AM with auxiliary output network ::: Overview Although any speech recognition architecture can be used for TS-ASR, we adopted a variant of the TS-AM that was recently proposed and has promising accuracy BIBREF5. 
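Before turning to the TS-AM architecture, here is a minimal sketch of the iterative procedure just described; the i-vector extractor, the TS-ASR decoder, and the routine that gathers each speaker's non-overlapped frames are all assumed external components passed in as callables (none of them are defined here), and the fixed iteration count is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def iterative_decode(segments, extract_ivector, ts_asr_decode, gather_frames,
                     n_speakers=2, n_iters=3):
    """Alternating estimation of speaker embeddings E and hypotheses W.

    All of the following are assumed external components, passed in as callables:
      extract_ivector(features)            -> fixed-dimensional speaker embedding
      ts_asr_decode(features, embedding)   -> hypothesis with words and time marks
      gather_frames(segments, hyps, j)     -> frames where only speaker j is active
    """
    # Iteration 0: initialise E by clustering per-segment i-vectors into J speakers
    seg_ivectors = np.stack([extract_ivector(x) for x in segments])
    centres = KMeans(n_clusters=n_speakers, n_init=10).fit(seg_ivectors).cluster_centers_
    embeddings = list(centres)

    hypotheses = None
    for _ in range(n_iters):
        # Update W: TS-ASR for each segment u and each speaker j given current E
        hypotheses = [[ts_asr_decode(x, e) for e in embeddings] for x in segments]
        # Update E: re-estimate each speaker's embedding from non-overlapped regions
        embeddings = [extract_ivector(gather_frames(segments, hypotheses, j))
                      for j in range(n_speakers)]
    return hypotheses, embeddings
```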
Figure FIGREF13 describes the TS-AM that we applied for this study. This model has two input branches. One branch accepts acoustic features ${\bf X}$ as a normal AM while the other branch accepts an embedding $e_{\rm tgt}$ that represents the characteristics of the target speaker. In this study, we used a log Mel-filterbank (FBANK) and i-vector BIBREF18, BIBREF22 for the acoustic features and target-speaker embedding, respectively. A unique component of the model is in its output branch. The model has multiple output branches that produce outputs ${\bf Y}^{\rm tgt}$ and ${\bf Y}^{\rm int}$ for the loss functions for the target and interference speakers, respectively. The loss for the target speaker is defined to maximize the target-speaker ASR accuracy, while the loss for interference speakers is defined to maximize the interference-speaker ASR accuracy. We used lattice-free maximum mutual information (LF-MMI) BIBREF23 for both criteria. The original motivation of the output branch for interference speakers was the improvement of TS-ASR by achieving a better representation for speaker separation in the shared layers. However, it was also shown that the output branch for interference speakers can be used for the secondary ASR for interference speakers given the embedding of the target speaker BIBREF5. In this paper, we found out that the latter property worked very well for the ASR for dialogue recordings, which will be explained in the evaluation section. The network is trained with a mixture of multi-speaker speech given their transcriptions. We assume that, for each training sample, (a) transcriptions of at least two speakers are given, (b) the transcription for the target speaker is marked so that we can identify the target speaker's transcription, and (c) a sample for the target speaker can be used to extract speaker embeddings. These assumptions can be easily satisfied by artificially generating training data by mixing the speech of multiple speakers. TS-ASR: Review ::: TS-AM with auxiliary output network ::: Loss function The main loss function for the target speaker is defined as where $u$ corresponds to the index of training samples in this case. The term $\mathcal {G}^{\rm tgt}_u$ indicates a numerator (or reference) graph that represents a set of possible correct state sequences for the utterance of the target speaker of the $u$-th training sample, ${\bf S}$ denotes a hypothesis state sequence for the $u$-th training sample, and $\mathcal {G}^{D}$ denotes a denominator graph, which represents a possible hypothesis space and normally consists of a 4-gram phone language model in LF-MMI training BIBREF23. The auxiliary interference speaker loss is then defined to maximize the interference-speaker ASR accuracy, which we expect to enhance the speaker separation ability of the neural network. This loss is defined as where $\mathcal {G}^{\rm int}_u$ denotes a numerator (or reference) graph that represents a set of possible correct state sequences for the utterance of the interference speaker of the $u$-th training sample. Finally, the loss function $\mathcal {F}^{\rm comb}$ for training is defined as the combination of the target and interference losses, where $\alpha $ is the scaling factor for the auxiliary loss. In our evaluation, we set $\alpha =1.0$. Setting $\alpha =0.0$, however, corresponds to normal TS-ASR. 
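As a minimal illustration of how the two output branches are combined, the sketch below uses cross-entropy purely as a stand-in criterion, since LF-MMI itself is toolkit-specific (Kaldi in this work); only the additive combination with the scale $\alpha$ on the auxiliary term is taken from the text.

```python
import torch

def combined_loss(target_logits, interference_logits,
                  target_labels, interference_labels, alpha=1.0):
    """Combine target- and interference-speaker losses with scale alpha.

    Cross-entropy stands in for the LF-MMI criteria used in the paper.
    Setting alpha=0.0 corresponds to plain TS-ASR training.
    """
    criterion = torch.nn.CrossEntropyLoss()
    loss_target = criterion(target_logits, target_labels)
    loss_interference = criterion(interference_logits, interference_labels)
    return loss_target + alpha * loss_interference
```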
Evaluation ::: Experimental settings ::: Main evaluation data: real dialogue recordings We conducted our experiments on the CSJ BIBREF25, which is one of the most widely used evaluation sets for Japanese speech recognition. The CSJ consists of more than 600 hrs of Japanese recordings. While most of the content is lecture recordings by a single speaker, CSJ also contains 11.5 hrs of 54 dialogue recordings (average 12.8 min per recording) with two speakers, which were the main target of ASR and speaker diarization in this study. During the dialogue recordings, two speakers sat in two adjacent sound proof chambers divided by a glass window. They could talk with each other over voice connection through a headset for each speaker. Therefore, speech was recorded separately for each speaker, and we generated mixed monaural recordings by mixing the corresponding speeches of two speakers. When mixing two recordings, we did not apply any normalization of speech volume. Due to this recording procedure, we were able to use non-overlapped speech to evaluate the oracle WERs. It should be noted that, although the dialogue consisted of only two speakers, the speaker overlap ratio of the recordings was very high due to many backchannels and natural turn-taking. Among all recordings, 16.7% of the region was overlapped speech while 66.4% was spoken by a single speaker. The remaining 16.9% was silence. Therefore, 20.1% (=16.7/(16.7+66.4)) of speech regions was speaker overlap. From the viewpoint of ASR, 33.5% (= (16.7*2)/(16.7*2+66.4)) of the total duration to be recognized was overlapped. These values were even higher than those reported for meetings with more than two speakers BIBREF26, BIBREF2. Therefore, these dialogue recordings are very challenging for both ASR and speaker diarization. We observed significantly high WER and DER, which is discussed in the next section. Evaluation ::: Experimental settings ::: Sub evaluation data: simulated 2-speaker mixture To evaluate TS-ASR, we also used the simulated 2-speaker-mixed data by mixing the three official single-speaker evaluation sets of CSJ, i.e., E1, E2, and E3 BIBREF27. Each set includes different groups of 10 lectures (5.6 hrs, 30 lectures in total). The E1 set consists of 10 lectures of 10 male speakers, and E2 and E3 each consists of 10 lectures of 5 female and 5 male speakers. We generate two-speaker mixed speech by adding randomly selected speech (= interference-speaker speech) to the original speech (= target-speaker speech) with the constraint that the target and interference speakers were different, and each interference speaker was selected only once from the dataset. When we mixed the two speeches, we configured them to have the same power level, and shorter speech was mixed with the longer speech from a random starting point selected to ensure the end point of the shorter one did not exceed that of the longer one. Evaluation ::: Experimental settings ::: Training data and training settings The rest of the 571 hrs of 3,207 lecture recordings (excluding the same speaker's lectures in the evaluation sets) were used for AM and language model (LM) training. We generated two-speaker mixed speech for training data in accordance with the following protocol. Prepare a list of speech samples (= main list). Shuffle the main list to create a second list under the constraint that the same speaker does not appear in the same line in the main and second lists. 
Mix the audio in the main and second lists one-by-one with a specific signal-to-interference ratio (SIR). For training data, we randomly sampled an SIR as follows. With 1/3 probability, sample the SIR from a uniform distribution between -10 and 10 dB. With 1/3 probability, sample the SIR from a uniform distribution between 10 and 60 dB. The transcription of the interference speaker was set to null. With 1/3 probability, sample the SIR from a uniform distribution between -60 and -10 dB. The transcription of the target speaker was set to null. The volume of each mixed speech was randomly changed to enhance robustness against volume difference. A speech sample for extracting a speaker embedding was also randomly selected for each speech mixture from the main list. Note that the random perturbation of volume was applied only for the training data, not for evaluation data. We trained a TS-AM consisting of a convolutional neural network (CNN), time-delay NN (TDNN) BIBREF28, and long short-term memory (LSTM) BIBREF29, as shown in fig:ts-am. The input acoustic feature for the network was a 40-dimensional FBANK without normalization. A 100-dimensional i-vector was also extracted and used for the target-speaker embedding to indicate the target speaker. For extracting this i-vector, we randomly selected an utterance of the same speaker. We conducted 8 epochs of training on the basis of LF-MMI, where the initial learning rate was set to 0.001 and exponentially decayed to 0.0001 by the end of the training. We applied $\ell_2$-regularization and CE-regularization BIBREF23 with scales of 0.00005 and 0.1, respectively. The leaky hidden Markov model coefficient was set to 0.1. A backstitch technique BIBREF30 with a backstitch scale of 1.0 and backstitch interval of 4 was also used. For comparison, we trained another TS-AM without the auxiliary loss. We also trained a “clean AM” using clean, non-speaker-mixed speech. For this clean model, we used a model architecture without the auxiliary output branch, and an i-vector was extracted every 100 msec for online speaker/environment adaptation. In decoding, we used a 4-gram LM trained using the transcription of the training data. All our experiments were conducted on the basis of the Kaldi toolkit BIBREF31. Evaluation ::: Preliminary experiment with simulated 2-speaker mixture ::: Evaluation of TS-ASR We first evaluated the TS-AM with two-speaker mixtures of the E1, E2, and E3 evaluation sets. For each test utterance, a sample of the target speaker was randomly selected from the other utterances in the test set. We used the same random seed over all experiments, so that they could be conducted under the same conditions. The results are listed in Table TABREF32. Although the clean AM produced a WER of 7.90% for the original clean dataset, the WER severely degraded to 88.03% when two speakers were mixed. The TS-AM then significantly recovered the WER to 20.78% ($\alpha =0.0$). Although the improvement was not so significant compared with that reported in BIBREF5, the auxiliary loss further improved the WER to 20.53% ($\alpha =1.0$). Note that E1 contains only male speakers while E2 and E3 contain both female and male speakers. Because of this, E1 showed larger degradation of WER when 2 speakers were mixed.
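Returning to the training-data generation protocol described above, here is a minimal sketch of the SIR sampling and mixing; the volume-perturbation range and the equal-length assumption are illustrative choices not specified in the text.

```python
import numpy as np

def sample_sir_and_mix(target, interference, rng=None):
    """Mix a target and an interference waveform at a randomly sampled SIR.

    Follows the three-way sampling scheme above; the transcription handling
    (setting one side to null) is indicated only by the returned flags.
    Waveforms are 1-D float arrays assumed to have the same length here.
    """
    rng = np.random.default_rng() if rng is None else rng
    branch = rng.integers(3)
    if branch == 0:
        sir_db = rng.uniform(-10, 10)      # both speakers transcribed
        keep_target, keep_interference = True, True
    elif branch == 1:
        sir_db = rng.uniform(10, 60)       # interference transcription set to null
        keep_target, keep_interference = True, False
    else:
        sir_db = rng.uniform(-60, -10)     # target transcription set to null
        keep_target, keep_interference = False, True

    # Scale the interference signal so the mixture reaches the sampled SIR
    p_target = np.mean(target ** 2)
    p_interference = np.mean(interference ** 2) + 1e-12
    gain = np.sqrt(p_target / (p_interference * 10 ** (sir_db / 10)))
    mixture = target + gain * interference

    # Random overall volume perturbation (training data only); range is illustrative
    mixture *= rng.uniform(0.5, 1.0)
    return mixture, sir_db, keep_target, keep_interference
```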
average 12.8 min per recording
43d8057ff0d3f0c745a7164aed7ed146674630e0
43d8057ff0d3f0c745a7164aed7ed146674630e0_0
Q: What do the models that they compare predict? Text: Syntactic Variation Around the World This paper combines grammar induction (Dunn, 2018a, 2018b, 2019) and text classification (Joachims, 1998) to model syntactic variation across national varieties of English. This classification-based approach is situated within the task of dialect identification (Section 2) and evaluated against other baselines for the task (Sections 7 and 8). But the focus is modelling syntactic variation on a global-scale using corpus data. On the one hand, the problem is to use a model of syntactic preferences to predict an author's dialect membership (Dunn, 2018c). On the other hand, the problem is to take a spatially-generic grammar of English that is itself learned from raw text (c.f., Zeman, et al., 2017; Zeman, et al., 2018) and adapt that grammar using dialect identification as an optimization task: which constructions are more likely to occur in a specific regional variety? Because we want a complete global-scale model, we first have to ask: how many national varieties of English are there? This question, considered in Sections 3 and 4, is essential for determining the inventory of regional varieties that need to be included in the dialect identification task. This paper uses data-driven language mapping to find out where English is consistently used, given web data and Twitter data, in order to avoid the arbitrary selection of dialect areas. This is important for ensuring that each construction in the grammar receives the best regional weighting. What syntactic features are needed to represent variation in English? As discussed in Section 6, this paper uses grammar induction on a large background corpus to provide a replicable and dynamic feature space in order to avoid arbitrary limitations (e.g., lists of function words). The other side of this problem is to optimize grammar induction for regional dialects by using an identification task to learn regional weights for each part of the grammar: how much does a single generic grammar of English vary across dialects? To what degree does it represent a single dominant dialect? Finally, a corpus-based approach to variation is restricted to the specific domains or registers that are present in the corpus. To what degree is such a model of variation limited to a specific register? This paper uses both web-crawled corpora and social media corpora to explore the robustness of dialect models across domains (Section 8). Along these same lines, how robust is a model of syntactic variation to the presence of a few highly predictive features? This paper uses unmasking, a method from authorship verification (Koppel, et al., 2007), to evaluate the stability of dialect models over rounds of feature pruning (Section 9). Previous Work Because of its long history as a colonial language (Kachru, 1990), English is now used around the world by diverse national communities. In spite of the global character of English, dialectology and sociolinguistics continue to focus largely on sub-national dialects of English within so-called inner-circle varieties (for example, Labov, et al., 2016; Strelluf, 2016; Schreier, 2016; Clark & Watson, 2016). This paper joins recent work in taking a global approach by using geo-referenced texts to represent national varieties (e.g., Dunn, 2018c; Tamaredo, 2018; Calle-Martin & Romero-Barranco, 2017; Szmrecsanyi, et al., 2016; Sanders, 2010, 2007; c.f., Davies & Fuchs, 2015). 
For example, this study of dialect classification contains inner-circle (Australia, Canada, United Kingdom, Ireland, New Zealand, United States), outer-circle (India, Malaysia, Nigeria, Philippines, Pakistan, South Africa), and expanding-circle (Switzerland, Portugal) varieties together in a single model. The problem is that these more recent approaches, while they consider more varieties of English, have arbitrarily limited the scope of variation by focusing on a relatively small number of features (Grafmiller & Szmrecsanyi, 2018; Kruger & van Rooy, 2018; Schilk & Schaub, 2016; Collins, 2012). In practical terms, such work uses a smaller range of syntactic representations than comparable work in authorship analysis (c.f., Grieve, 2007; Hirst & Feiguina, 2007; Argamon & Koppel, 2013). From a different perspective, we could view the modelling of dialectal variation as a classification task with the goal of predicting which dialect a sample belongs to. Previous work has drawn on many representations that either directly or indirectly capture syntactic patterns (Gamallo, et al., 2016; Barbaresi, 2018; Kreutz & Daelemans, 2018; Kroon, et al., 2018). Given a search for the highest-performing approach, other work has shown that methods and features without a direct linguistic explanation can still achieve impressive accuracies (McNamee, 2016; Ionescu & Popescu, 2016; Belinkov & Glass, 2016; Ali, 2018). On the other hand, there is a conceptual clash between potentially topic-based methods for dialect identification and other tasks that explicitly model place-specific language use. For example, text-based geo-location can use place-based topics to identify where a document is from (c.f., Wing & Baldridge, 2014; Hulden, et al., 2015; Lourentzou, et al., 2017). And, at the same time, place-based topics can be used for both characterizing the functions of a location (c.f., Adams & McKenzie, 2018; Adams, 2015) and disambiguating gazetteers (c.f., Ju, et al., 2016). This raises an important conceptual problem: when does predictive accuracy reflect dialects as opposed to either place-references or place-based content? While geo-referenced corpora capture both types of information, syntactic representations focus specifically on linguistic variation while place-references and place-based topics are part of document content rather than linguistic structure. Where Is English Used? The goal of this paper is to model syntactic variation across all major or robust varieties of English. But how do we know which varieties should be included? Rather than select some set of varieties based on convenience, we take a data-driven approach by collecting global web-crawled data and social media data to determine where English is used. This approach is biased towards developed countries with access to digital technologies. As shown in Table 1, however, enough global language data is available from both sources to determine where national varieties of English exist. Data comes from two sources of digital texts: web pages from the Common Crawl and social media from Twitter. Both types of data have been used previously to study dialectal and spatial variation in language. 
More commonly, geo-referenced Twitter data has been taken to represent language-use in specific places (e.g., Eisenstein, et al., 2010; Roller, et al., 2012; Kondor, et al., 2013; Mocanu, et al., 2013; Eisenstein, et al., 2014; Graham, et al., 2014; Donoso & Sanchez, 2017); regional variation in Twitter usage was also the subject of a shared task at PAN-17 (Rangel, et al., 2017). Web-crawled data has also been curated and prepared for the purpose of studying spatial variation (Goldhahn, et al., 2012; Davies & Fuchs, 2015), including the use of country-level domains for geo-referencing (Cook & Brinton, 2017). This paper builds on such previous work by systematically collecting geo-referenced data from both sources on a global scale. The full web corpus is available for download. For the Common Crawl data (abbreviated as CC), language samples are geo-located using country-specific top-level domains. The assumption is that a language sample from a web-site under the .ca domain originated from Canada (c.f., Cook & Brinton, 2017). This approach to regionalization does not assume that whoever produced that language sample was born in Canada or represents a traditional Canadian dialect group; rather, the assumption is only that the sample represents someone in Canada who is producing language data. Some countries are not available because their top-level domains are used for other purposes (i.e., .ai, .fm, .io, .ly, .ag, .tv). Domains that do not contain geographic information are also removed from consideration (e.g., .com sites). The Common Crawl dataset covers 2014 through the end of 2017, totalling 81.5 billion web pages. As shown in Table 1, after processing this produces a corpus of 16.65 billion words. The basic procedure for processing the Common Crawl data is to look at text within paragraph tags: any document with at least 40 words within paragraph tags from a country-level domain is processed. Noise like navigational items, boilerplate text, and error messages is removed using heuristic searches and also using deduplication: any text that occurs multiple times on the same site or multiple times within the same month is removed. A second round of deduplication is used over the entire dataset to remove texts in the same language that occur in the same country. Its limited scope makes this final deduplication stage possible. For reproducibility, the code used for collecting and processing the Common Crawl data is also made available. The use of country-level domains for geo-referencing raises two questions: First, are there many domains that are not available because they are not used or are used for non-geographic purposes? After removing irrelevant domains like .tv, the CC dataset covers 166 countries (30 of which are not included in the Twitter corpus) while the Twitter corpus covers 169 countries (33 of which are not included in the CC corpus). Thus, while the use of domains does remove some countries from consideration, the effect is limited. Second, does the amount of data for each country domain reflect the actual number of web pages from that country? In other words, some countries like the United States are less likely to use their top-level codes. However, the United States is still well-represented in the model. The bigger worry is that regional varieties from Africa or East Asia, both of which are under-represented in these datasets, might be missing from the model. For the Twitter corpus, a spatial search is used to collect Tweets from within a 50km radius of 10k cities. 
Such a search avoids biasing the selection by using language-specific keywords or hashtags. The Twitter data covers the period from May of 2017 until early 2019. This creates a corpus containing 1,066,038,000 Tweets. The language identification component, however, only provides reliable predictions for samples containing at least 50 characters. Thus, the corpus is pruned to include only those Tweets above that length threshold. As shown in Table 1, this produces a corpus containing 4.14 billion words with a global distribution. Language identification (LID) is important here because a failure to identify some regional varieties of English will ultimately bias the model. The LID system used is available for testing. But given that the focus is a major language, English, the performance of LID is not a significant factor in the overall model of syntactic variation. The datasets summarized in Table 1 include many languages other than English. The purpose is to provide background information about where robust varieties of English are found: where is English discovered when the search is not biased by looking only for English? On the one hand, some regions may be under-represented in these datasets; if national varieties are missing from a region, it could be (i) that there is no national variety of English or (ii) that there is not enough data available from that region. On the other hand, Table 1 shows that each region is relatively well-represented, providing confidence that we are not missing other important varieties. How Many Varieties of English? We take a simple threshold-based approach to the question of which regional varieties to include: any national variety that has at least 15 million words in both the Common Crawl and Twitter datasets is included in the attempt to model all global varieties of English. This threshold is chosen in order to ensure that sufficient training/testing/development samples are available for each variety. The inventory of national varieties in Table 2 is entirely data-driven and does not depend on distinctions like dialects vs. varieties, inner-circle vs. outer-circle, or native vs. non-native. Instead, the selection is empirical: any area with a large amount of observed English usage is assumed to represent a regional variety. Since the regions here are based on national boundaries, we call these national varieties. We could just as easily call them national dialects. Nevertheless, the inventory (sorted by region) contains within it some important combinations. There are two African varieties, two south Asian varieties, two southeast Asian varieties, two native-speaker European varieties and two non-native-speaker European varieties. Taken together, these pairings provide a rich ground for experimentation. Are geographically closer varieties more linguistically similar? Is there an empirical reality to the distinction between inner-circle and outer-circle varieties (e.g., American English vs. Malaysian English)? The importance of this language-mapping approach is that it does not assume the inventory of regions. Data Preparation and Division The goal of this paper is to model syntactic variation using geo-referenced documents taken from web-crawled and social media corpora. Such geo-referenced documents represent language use in a particular place but, unlike traditional dialect surveys, there is no assurance that individual authors are native speakers from that place. 
We have to assume that most language samples from a given country represent the native English variety of that country. For example, many non-local residents live in Australia; we only have to assume that most speakers observed in Australia are locals. In order to average out the influence of out-of-place samples, we use random aggregation to create samples of exactly 1,000 words in both corpora. For example, in the Twitter corpus this means that an average of 59 individual Tweets from a place are combined into a single sample. First, this has the effect of providing more constructions per sample, making the modeling task more approachable. Second and more importantly, individual out-of-place Tweets are reduced in importance because they are aggregated with other Tweets presumably produced by local speakers. The datasets are formed into training, testing, and development sets as follows: First, 2k samples are used for development purposes regardless of the amount of data from a given regional variety. Depending on the size of each variety, at least 12k training and 2.5k testing samples are available. Because some varieties are represented by much larger corpora (i.e., Tweets from American English), a maximum of 25k training samples and 5k testing samples are allowed per variety per register. This creates a corpus with 327,500 training and 66,500 testing samples (CC) and a corpus with 308,000 training and 64,000 testing samples (TW). As summarized in Table 3, these datasets contain significantly more observations than have been used in previous work (c.f., Dunn, 2018c). Learning the Syntactic Feature Space Past approaches to syntactic representation for this kind of task used part-of-speech n-grams (c.f., Hirst & Feiguina, 2007) or lists of function words (c.f., Argamon & Koppel, 2013) to indirectly represent grammatical patterns. Recent work (Dunn, 2018c), however, has introduced the use of a full-scale syntactic representations based on grammar induction (Dunn, 2017, 2018a, 2019) within the Construction Grammar paradigm (CxG: Langacker, 2008; Goldberg, 2006). The idea is that this provides a replicable syntactic representation. A CxG, in particular, is useful for text classification tasks because it is organized around complex constructions that can be quantified using frequency. For example, the ditransitive construction in (1) is represented using a sequence of slot-constraints. Some of these slots have syntactic fillers (i.e., noun) and some have joint syntactic-semantic fillers (i.e., V:transfer). Any utterance, as in (2) or (3), that satisfies these slot-constraints counts as an example or instance of the construction. This provides a straight-forward quantification of a grammar as a one-hot encoding of construction frequencies. (1) [noun – V:transfer – N:animate – noun] (2) “He mailed Mary a letter." (3) “She gave me a hand." This paper compares two learned CxGs: first, the same grammar used in previous work (Dunn, 2018c); second, a new grammar learned with an added association-based transition extraction algorithm (Dunn, 2019). These are referred to as CxG-1 (the frequency-based grammar in Dunn, 2019) and CxG-2 (the association-based grammar), respectively. Both are learned from web-crawled corpora separate from the corpora used for modeling regional varieties (from Baroni, et al., 2009; Majli̧s & Žabokrtský, 2012; Benko, 2014; and the data provided for the CoNLL 2017 Shared Task: Ginter, et al., 2017). The exact datasets used are available. 
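To make the quantification step concrete, the sketch below shows how a sample could be encoded as a vector of construction frequencies under a toy grammar of slot-constraints, in the spirit of examples (1)-(3). The two constructions, the tag inventory, and the assumption that the sample has already been reduced to tagged phrase heads are all illustrative; they are not part of the CxG induction algorithm itself, which learns constructions from a background corpus.

from collections import Counter

# Toy grammar: each construction is a sequence of slot constraints.
# A slot constraint is satisfied if its tag is among a token's tags.
GRAMMAR = [
    ("ditransitive", ["noun", "V:transfer", "N:animate", "noun"]),
    ("prep_phrase", ["prep", "det", "noun"]),
]

def matches(tokens, start, slots):
    """Check whether the span starting at `start` satisfies every slot constraint."""
    if start + len(slots) > len(tokens):
        return False
    return all(slot in tokens[start + i][1] for i, slot in enumerate(slots))

def construction_vector(tokens):
    """Frequency of each construction in one sample: the sample's feature vector."""
    counts = Counter()
    for name, slots in GRAMMAR:
        counts[name] = sum(matches(tokens, i, slots) for i in range(len(tokens)))
    return [counts[name] for name, _ in GRAMMAR]

# "He mailed Mary a letter", reduced to phrase heads with hand-assigned tags.
sample = [("He", {"noun", "N:animate"}), ("mailed", {"verb", "V:transfer"}),
          ("Mary", {"noun", "N:animate"}), ("letter", {"noun"})]
print(construction_vector(sample))   # -> [1, 0]

Stacking such vectors for every 1,000-word sample yields the frequency matrix over which the classification experiments below operate.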
In both cases a large background corpus is used to represent syntactic constructions that are then quantified in samples from regional varieties. The grammar induction algorithm itself operates in folds, optimizing grammars against individual test sets and then aggregating these fold-specific grammars at the end. This creates, in effect, one large umbrella-grammar that potentially over-represents a regional dialect. From the perspective of the grammar, we can think of false positives (the umbrella-grammar contains constructions that a regional dialect does not use) and false negatives (the umbrella-grammar is missing constructions that are important to a regional dialect). For dialect identification as a task, only missing constructions will reduce prediction performance. How well do CxG-1 and CxG-2 represent the corpora from each regional variety? While prediction accuracies are the ultimate evaluation, we can also look at the average frequency across all constructions for each national dialect. Because the samples are fixed in length, we would expect the same frequencies across all dialects. On the other hand, false positive constructions (which are contained in the umbrella-grammar but do not occur frequently in a national dialect) will reduce the overall feature density for that dialect. Because the classification results do not directly evaluate false positive constructions, we investigate this in Table 4 using the average feature density: the total average frequency per sample, representing how many syntactic constructions from the umbrella-grammar are present in each regional dialect. This is adjusted to show differences from the average for each grammar (i.e., CxG-1 and CxG-2 are each calculated independently). First, CxG-1 has a smaller range of feature densities, with the lowest variety (Portugal English) being only 10.41% different from the highest variety (UK English). This range is much higher for CxG-2, with a 36.01% difference between the lowest variety (Philippines English) and the highest variety (Irish English). One potential explanation for the difference is that CxG-2 is a better fit for the inner-circle dominated training data. This is a question for future work. For now, both grammars pattern together in a general sense: the highest feature density is found in UK English and varieties more similar to UK English (Ireland, Australia). The lowest density is found in under-represented varieties such as Portugal English or Philippines English. Any grammar-adaptation based on dialect identification will struggle to add unknown constructions from these varieties. Modeling National Varieties The main set of experiments uses a Linear Support Vector Machine (Joachims, 1998) to classify dialects using CxG features. Parameters are tuned using the development data. Given the general robust performance of SVMs in the literature relative to other similar classifiers on variation tasks (c.f., Dunn, et al., 2016), we forego a systematic evaluation of classifiers. We start, in Table 5, with an evaluation of baselines by feature type and dataset. We have two general types of features: purely syntactic representations (CxG-1, CxG-2, Function words) and potentially topic-based features (unigrams, bigrams, trigrams). The highest performing feature on both datasets is simple lexical unigrams, at 30k dimensions. 
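A minimal sketch of the classification setup just described, assuming scikit-learn's LinearSVC; the randomly generated feature vectors stand in for the real CxG construction frequencies, and the hyperparameter value shown is illustrative (in the paper, parameters are tuned on the development data).

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

# X_*: construction-frequency vectors (one row per 1,000-word sample),
# y_*: national-variety labels. Random placeholders here.
rng = np.random.default_rng(0)
X_train, y_train = rng.poisson(2.0, size=(200, 50)), rng.choice(["US", "UK", "NZ"], 200)
X_test, y_test = rng.poisson(2.0, size=(50, 50)), rng.choice(["US", "UK", "NZ"], 50)

clf = LinearSVC(C=1.0, max_iter=10000)   # C would be tuned on the development set
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print("F1 (weighted):", round(f1_score(y_test, pred, average="weighted"), 3))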
We use a hashing vectorizer to avoid a region-specific bias: the vectorizer does not need to be trained or initialized against a specific dataset so there is no chance that one of the varieties will be over-represented in determining which n-grams are included. But this has the side-effect of preventing the inspection of individual features. Vectors for all experiments are available, along with the trained models that depend on these vectors. As n increases, n-grams tend to represent structural rather than topical information. In this case, performance decreases as n increases. We suggest that this decrease provides an indication that the performance of unigrams is based on location-specific content (e.g., “Chicago" vs. “Singapore") rather than on purely linguistic lexical variation (e.g., “jeans" vs. “denim"). How do we differentiate between predictions based on place-names, those based on place-specific content, and those based on dialectal variation? That is a question for future work. For example, is it possible to identify and remove location-specific content terms? Here we focus instead on using syntactic representations that are not subject to such interference. Within syntactic features, function words perform the worst on both datasets with F1s of 0.65 and 0.55. This is not surprising because function words in English do not represent syntactic structures directly; they are instead markers of the types of structures being used. CxG-1 comes next with F1s of 0.80 and 0.76, a significant improvement over the function-word baseline but not approaching unigrams. Note that the experiments using this same grammar in previous work (Dunn, 2018c) were applied to samples of 2k words each. Finally, CxG-2 performs the best, with F1s of 0.96 and 0.92, falling behind unigrams but rivaling bigrams and surpassing trigrams. Because of this, the more detailed experiments below focus only on the CxG-2 grammar. A closer look at both datasets by region for CxG-2 is given in Table 6. The two datasets (web-crawled and social media) present some interesting divergences. For example, Australian English is among the better performing varieties on the CC dataset (F1 = 0.97) but among the worst performing varieties on Twitter (F1 = 0.83). This is the case even though the variety we would assume would be most-often confused with Australian English (New Zealand English) has a stable F1 across domains (both are 0.91). An examination of the confusion matrix (not shown), reveals that errors between New Zealand and Australia are similar between datasets but that the performance of Australian English on Twitter data is reduced by confusion between Australian and Canadian English. In Table 4 we saw that the umbrella-grammar (here, CxG-2) better represents inner-circle varieties, specifically UK English and more closely related varieties. This is probably an indication of the relative representation of the different varieties used to train the umbrella-grammar: grammar induction will implicitly model the variety it is exposed to. It is interesting, then, that less typical varieties like Pakistan English and Philippines English (which had lower feature densities) have higher F1s in the dialect identification task. On the one hand, the syntactic differences between these varieties and inner-circle varieties means that the umbrella-grammar misses some of their unique constructions. 
On the other hand, their greater syntactic difference makes these varieties easier to identify: they are more distinct in syntactic terms even though they are less well represented. Which varieties are the most similar syntactically given this model? One way to quantify similarity is using errors: which varieties are the most frequently confused? American and Canadian English have 221 misclassified samples (CC), while Canadian and UK English are only confused 36 times. This reflects an intuition that Canadian English is much more similar to American English than it is to UK English. New Zealand and Australian English have 101 misclassifications (again, on CC); but New Zealand and South African English have 266. This indicates that New Zealand English is more syntactically similar to South African English than to Australian English. However, more work on dialect similarity is needed to confirm these findings across different datasets. Varieties on the Web and Social Media How robust are models of syntactic variation across domains: in other words, does web-crawled data provide the same patterns as social media data? We conduct two types of experiments to evaluate this: First, we take dialect as a cross-domain phenomenon and train/test models on both datasets together, ignoring the difference between registers. Second, we evaluate models trained entirely on web-crawled data against testing data from social media (and vice-versa), evaluating a single model across registers. The point is to evaluate the impact of registers on syntactic variation: does Australian English have the same profile on both the web and on Twitter? Starting with the register-agnostic experiments, Table 8 shows the classification performance if we lump all the samples into a single dataset (however, the same training and testing data division is still maintained). The overall F1 is the same as the Twitter-only results in Table 6. On the other hand, varieties like Australian English that performed poorly in Twitter perform somewhat better under these conditions. Furthermore, the observation made above that outer-circle varieties are more distinct remains true: the highest performing varieties are the least proto-typical (i.e., Indian English and Philippines English). But a single model does not perform well across the two datasets, as shown in Table 7. The model trained on Twitter data does perform somewhat better than its counterpart, but in both cases there is a significant drop in performance. On the one hand, this is not surprising given differences in the two registers: we expect some reduction in classification performance across domains like this. For example, the unigram baseline suffers a similar reduction to F1s of 0.49 (trained on CC) and 0.55 (trained on Twitter). On the other hand, we would have more confidence in this model of syntactic variation if there was a smaller drop in accuracy. How can we better estimate grammars and variations in grammars across these different registers? Is it a problem of sampling different populations or is there a single population that is showing different linguistic behaviours? These are questions for future work. Unmasking Dialects How robust are classification-based dialect models to a small number of highly predictive features? A high predictive accuracy may disguise a reliance on just a few syntactic variants. 
Within authorship verification, unmasking has been used as a meta-classification technique to measure the depth of the difference between two text types (Koppel, et al., 2007). The technique uses a linear classifier to distinguish between two texts using chunks of the texts as samples. Here we distinguish between dialects with individual samples as chunks. After each round of classification, the most predictive features are removed. In this case, the highest positive and negative features for each regional dialect are removed for the next classification round. Figure 1 shows the unmasking curve over 100 rounds using the F1 score. Given that there are 14 regional dialects in the model, Figure 1 represents the removal of approximately 2,800 features. For both datasets, the unigram baseline degrades less quickly than the syntactic model. On the one hand, it has significantly more features in total, so that there are more features to support the classification. On the other hand, given that the most predictive features are being removed, this shows that the lexical model has a deeper range of differences available to support classification than the syntactic model. Within the syntactic models, the classifier trained on web-crawled data degrades less quickly than the Twitter model and maintains a higher performance throughout. This unmasking curve is simply a method for visualizing the robustness of a classification model. The syntactic model is less robust to unmasking than the lexical model. At the same time, we know that the syntactic model does not rely on place-names and place-based content and thus represents a more traditional linguistic approach to variation. Discussion This paper has used data-driven language mapping to select national dialects of English to be included in a global dialect identification model. The main experiments have focused on a dynamic syntactic feature set, showing that it is possible to predict dialect membership within-domain with only a small loss of performance against lexical models. This work raises two remaining problems: First, we know that location-specific content (i.e., place names, place references, national events) can be used for geo-location and text-based models of place. To what degree does a lexical approach capture linguistic variation (i.e., “pop" vs. “soda") and to what degree is it capturing non-linguistic information (i.e., “Melbourne" vs. “London")? This is an essential problem for dialect identification models. A purely syntactic model does not perform as well as a lexical model, but it does come with more guarantees. Second, we have seen that inner-circle varieties have higher feature densities given the grammars used here. This implies that there are syntactic constructions in varieties like Philippines English that have not been modeled by the grammar induction component. While dialect identification can be used to optimize regional weights for known constructions, how can such missing constructions be adapted? This remains a challenge. While the less proto-typical dialects have higher F1s (i.e., Pakistan English), they also have lower feature densities. This indicates that some of their constructions are missing from the grammar. Nevertheless, this paper has shown that a broader syntactic feature space can be used to model the difference between many national varieties of English.
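The unmasking procedure described above can be written down compactly. The sketch below is a minimal version assuming scikit-learn, with random placeholder data; it "removes" the strongest positive and negative feature per class by zeroing out the corresponding columns each round, which is one possible reading of the removal step rather than the authors' exact implementation.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def unmasking_curve(X_train, y_train, X_test, y_test, rounds=10):
    """Repeatedly train, record F1, then mask the strongest positive and negative
    feature for every class (cf. Koppel et al., 2007). Returns the F1 per round."""
    X_train, X_test = X_train.astype(float), X_test.astype(float)
    scores = []
    for _ in range(rounds):
        clf = LinearSVC(max_iter=10000).fit(X_train, y_train)
        scores.append(f1_score(y_test, clf.predict(X_test), average="weighted"))
        # One weight vector per class in the one-vs-rest setting.
        for w in np.atleast_2d(clf.coef_):
            for idx in (np.argmax(w), np.argmin(w)):
                X_train[:, idx] = 0.0   # remove the feature by zeroing it out
                X_test[:, idx] = 0.0
    return scores

rng = np.random.default_rng(1)
X, y = rng.poisson(2.0, size=(300, 80)).astype(float), rng.choice(list("ABC"), 300)
print(unmasking_curve(X[:200], y[:200], X[200:], y[200:]))

Plotting the returned scores against the round number gives an unmasking curve of the kind shown in Figure 1.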
national dialects of English
ebb7313eee2ea447abc83cb08b658b57c7eaa600
ebb7313eee2ea447abc83cb08b658b57c7eaa600_0
Q: What SMT models did they look at? Text: Introduction The availability of cross-language parallel corpora is one of the basis of current Statistical and Neural Machine Translation systems (e.g. SMT and NMT). Acquiring a high-quality parallel corpus that is large enough to train MT systems, specially NMT ones, is not a trivial task, since it usually demands human curating and correct alignment. In light of that, the automated creation of parallel corpora from freely available resources is extremely important in Natural Language Processing (NLP), enabling the development of accurate MT solutions. Many parallel corpora are already available, some with bilingual alignment, while others are multilingually aligned, with 3 or more languages, such as Europarl BIBREF0 , from the European Parliament, JRC-Acquis BIBREF1 , from the European Commission, OpenSubtitles BIBREF2 , from movies subtitles. The extraction of parallel sentences from scientific writing can be a valuable language resource for MT and other NLP tasks. The development of parallel corpora from scientific texts has been researched by several authors, aiming at translation of biomedical articles BIBREF3 , BIBREF4 , or named entity recognition of biomedical concepts BIBREF5 . Regarding Portuguese/English and English/Spanish language pairs, the FAPESP corpus BIBREF6 , from the Brazilian magazine revista pesquisa FAPESP, contains more than 150,000 aligned sentences per language pair, constituting an important language resource. In Brazil, the governmental body responsible for overseeing post-graduate programs across the country, called CAPES, tracks every enrolled student and scientific production. In addition, CAPES maintains a freely accessible database of theses and dissertations produced by the graduate students (i.e. Theses and Dissertations Catalog - TDC) since 1987, with abstracts available since 2013. Under recent governmental efforts in data sharing, CAPES made TDC available in CSV format, making it easily accessible for data mining tasks. Recent data files, from 2013 to 2016, contain valuable information for NLP purposes, such as abstracts in Portuguese and English, scientific categories, and keywords. Thus, TDC can be an important source of parallel Portuguese/English scientific abstracts. In this work, we developed a sentence aligned parallel corpus gathered from CAPES TDC comprised of abstracts in English and Portuguese spanning the years from 2013 to 2016. In addition, we included metadata regarding the respective theses and dissertations. Material and Methods In this section, we detail the information retrieved from CAPES website, the filtering process, the sentence alignment, and the evaluation experiments. An overview of the steps employed in this article is shown in Figure FIGREF1 . Document retrieval and parsing The TDC datasets are available in the CAPES open data website divided by years, from 2013 to 2016 in CSV and XLSX formats. We downloaded all CSV files from the respective website and loaded them into an SQL database for better manipulation. The database was then filtered to remove documents without both Portuguese and English abstracts, and additional metadata selected. After the initial filtering, the resulting documents were processed for language checking to make sure that there was no misplacing of English abstracts in the Portuguese field, or the other way around, removing the documents that presented such inconsistency. 
We also performed a case folding to lower case letters, since the TDC datasets present all fields with uppercase letters. In addition, we also removed newline/carriage return characters (i.e \n and \r), as they would interfere with the sentence alignment tool. Sentence alignment For sentence alignment, we used the LF aligner tool, a wrapper around the Hunalign tool BIBREF7 , which provides an easy to use and complete solution for sentence alignment, including pre-loaded dictionaries for several languages. Hunalign uses Gale-Church sentence-length information to first automatically build a dictionary based on this alignment. Once the dictionary is built, the algorithm realigns the input text in a second iteration, this time combining sentence-length information with the dictionary. When a dictionary is supplied to the algorithm, the first step is skipped. A drawback of Hunalign is that it is not designed to handle large corpora (above 10 thousand sentences), causing large memory consumption. In these cases, the algorithm cuts the large corpus in smaller manageable chunks, which may affect dictionary building. The parallel abstracts were supplied to the aligner, which performed sentence segmentation followed by sentence alignment. A small modification in the sentence segmentation algorithm was performed to handle the fact that all words are in lowercase letters, which originally prevented segmentation. After sentence alignment, the following post-processing steps were performed: (i) removal of all non-aligned sentences; (ii) removal of all sentences with fewer than three characters, since they are likely to be noise. Machine translation evaluation To evaluate the usefulness of our corpus for SMT purposes, we used it to train an automatic translator with Moses BIBREF8 . We also trained an NMT model using the OpenNMT system BIBREF9 , and used the Google Translate Toolkit to produce state-of-the-art comparison results. The produced translations were evaluated according to the BLEU score BIBREF10 . Manual evaluation Although the Hunalign tool usually presents a good alignment between sentences, we also conducted a manual validation to evaluate the quality of the aligned sentences. We randomly selected 400 pairs of sentences. If the pair was fully aligned, we marked it as "correct"; if the pair was incompletely aligned, due to segmentation errors, for instance, we marked it as "partial"; otherwise, when the pair was incorrectly aligned, we marked it as "no alignment". Results and Discussion In this section, we present the corpus' statistics and quality evaluation regarding SMT and NMT systems, as well as the manual evaluation of sentence alignment. Corpus statistics Table TABREF12 shows the statistics (i.e. number of documents and sentences) for the aligned corpus according to the 9 main knowledge areas defined by CAPES. The dataset is available in TMX format BIBREF11 , since it is the standard format for translation memories. We also made available the aligned corpus in an SQLite database in order to facilitate future stratification according to knowledge area, for instance. In this database, we included the following metadata information: year, university, title in Portuguese, type of document (i.e. theses or dissertation), keywords in both languages, knowledge areas and subareas according to CAPES, and URL for the full-text PDF in Portuguese. 
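A minimal sketch of the post-alignment filtering described above, dropping non-aligned pairs and sentences shorter than three characters; the example pairs and the exact shape of the Hunalign output are assumptions made for illustration.

def clean_aligned_pairs(pairs, min_chars=3):
    """Filter aligner output: drop pairs where either side is missing (non-aligned)
    or shorter than `min_chars` characters, which are likely to be noise."""
    cleaned = []
    for pt, en in pairs:
        pt, en = (pt or "").strip(), (en or "").strip()
        if len(pt) >= min_chars and len(en) >= min_chars:
            cleaned.append((pt, en))
    return cleaned

raw = [("este trabalho apresenta um estudo", "this work presents a study"),
       ("resumo", ""),   # non-aligned: no English counterpart
       ("a)", "a)")]     # fewer than three characters: likely noise
print(clean_aligned_pairs(raw))   # keeps only the first pair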
An excerpt of the corpus is shown in Table TABREF13. Translation experiments Prior to the MT experiments, sentences were randomly split into three disjoint datasets: training, development, and test. Approximately 13,000 sentences were allocated to the development and test sets, while the remainder was used for training. For the SMT experiment, we followed the instructions of the Moses baseline system. For the NMT experiment, we used the Torch implementation to train a 2-layer LSTM model with 500 hidden units in both encoder and decoder, with 12 epochs. During translation, the option to replace UNK words by the word in the input language was used, since this is also the default in Moses. Table TABREF17 presents the BLEU scores for both translation directions with English and Portuguese on the development and test partitions for the Moses and OpenNMT models. We also included the scores for Google Translate (GT) as a benchmark of a state-of-the-art system which is widely used. The NMT model achieved better performance than the SMT one for the EN INLINEFORM0 PT direction, scoring approximately 2.17 percentage points (pp) higher, while presenting almost the same score for PT INLINEFORM1 EN. When comparing our models to GT, both of them presented better BLEU scores, especially for the EN INLINEFORM2 PT direction, with values ranging from 1.27 pp to 4.30 pp higher than GT. We highlight that these results may be due to two main factors: corpus size and domain. Our corpus is fairly large for both SMT and NMT approaches, comprised of almost 1.3M sentences, which enables the development of robust models. Regarding domain, GT is a generic tool not trained for a specific domain, thus it may produce lower results than a domain-specific model such as ours. Scientific writing usually has a strict writing style, with less variation than novels or speeches, for instance, favoring the development of tailored MT systems. Below, we demonstrate some sentences translated by Moses and OpenNMT compared to the suggested human translation. One can notice that the NMT model does indeed tend to produce more fluent results, especially regarding verbal regency. Human translation: this paper presents a study of efficiency and power management in a packaging industry and plastic films. OpenNMT: this work presents a study of efficiency and electricity management in a packaging industry and plastic films. Moses: in this work presents a study of efficiency and power management in a packaging industry and plastic films. GT: this paper presents a study of the efficiency and management of electric power in a packaging and plastic film industry. Human translation: this fact corroborates the difficulty in modeling human behavior. OpenNMT: this fact corroborates the difficulty in modeling human behavior. Moses: this fact corroborated the difficulty in model the human behavior. GT: this fact corroborates the difficulty in modeling human behavior. Sentence alignment quality We manually validated the alignment quality for 400 sentences randomly selected from the parsed corpus and assigned quality labels according to Section SECREF9. Of all the evaluated sentences, 82.30% were correctly aligned, while 13.33% were partially aligned, and 4.35% presented no alignment. The small percentage of no alignment is probably due to the use of the Hunalign tool with the provided EN/PT dictionary. Regarding the partial alignment, most of the problems result from segmentation issues prior to the alignment, which wrongly split the sentences.
Since all words were case folded to lowercase letters, the segmenter lost an important source of information for correct segmentation, generating malformed sentences. Some examples of partial alignment errors are shown in Table TABREF19, where most sentences were truncated at the wrong point. Conclusion and future work We developed a parallel corpus of theses and dissertation abstracts in Portuguese and English. Our corpus is based on the CAPES TDC dataset, which contains information regarding all theses and dissertations presented in Brazil from 2013 to 2016, including abstracts and other metadata. Our corpus was evaluated through SMT and NMT experiments with the Moses and OpenNMT systems, achieving higher BLEU scores than Google Translate. The NMT model also presented superior results to the SMT one for the EN INLINEFORM0 PT translation direction. We also manually evaluated sentences regarding alignment quality, with 82.30% of sentences correctly aligned on average. For future work, we foresee the use of the presented corpus in mono- and cross-language text mining tasks, such as text classification and clustering. As we included several types of metadata, these tasks can be facilitated. Other machine translation approaches can also be tested, including the concatenation of this corpus with other multi-domain ones.
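For readers who want to reproduce the kind of BLEU comparison reported in the translation experiments above, the sketch below computes corpus-level BLEU with NLTK over a toy set of (abridged) example sentences from the paper. This is only an illustration of the metric: the reported scores were computed over the full test partitions, and the paper does not state which BLEU implementation was used.

from nltk.translate.bleu_score import corpus_bleu

# One reference per sentence; hypotheses must be in the same sentence order.
references = [["this fact corroborates the difficulty in modeling human behavior".split()],
              ["this paper presents a study of efficiency and power management".split()]]
moses =   ["this fact corroborated the difficulty in model the human behavior".split(),
           "in this work presents a study of efficiency and power management".split()]
opennmt = ["this fact corroborates the difficulty in modeling human behavior".split(),
           "this work presents a study of efficiency and electricity management".split()]

for name, hyps in [("Moses", moses), ("OpenNMT", opennmt)]:
    print(name, round(corpus_bleu(references, hyps), 3))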
automatic translator with Moses
df934aa1db09c14b3bf4bc617491264e2192390b
df934aa1db09c14b3bf4bc617491264e2192390b_0
Q: Which NMT models did they experiment with? Text: Introduction The availability of cross-language parallel corpora is one of the basis of current Statistical and Neural Machine Translation systems (e.g. SMT and NMT). Acquiring a high-quality parallel corpus that is large enough to train MT systems, specially NMT ones, is not a trivial task, since it usually demands human curating and correct alignment. In light of that, the automated creation of parallel corpora from freely available resources is extremely important in Natural Language Processing (NLP), enabling the development of accurate MT solutions. Many parallel corpora are already available, some with bilingual alignment, while others are multilingually aligned, with 3 or more languages, such as Europarl BIBREF0 , from the European Parliament, JRC-Acquis BIBREF1 , from the European Commission, OpenSubtitles BIBREF2 , from movies subtitles. The extraction of parallel sentences from scientific writing can be a valuable language resource for MT and other NLP tasks. The development of parallel corpora from scientific texts has been researched by several authors, aiming at translation of biomedical articles BIBREF3 , BIBREF4 , or named entity recognition of biomedical concepts BIBREF5 . Regarding Portuguese/English and English/Spanish language pairs, the FAPESP corpus BIBREF6 , from the Brazilian magazine revista pesquisa FAPESP, contains more than 150,000 aligned sentences per language pair, constituting an important language resource. In Brazil, the governmental body responsible for overseeing post-graduate programs across the country, called CAPES, tracks every enrolled student and scientific production. In addition, CAPES maintains a freely accessible database of theses and dissertations produced by the graduate students (i.e. Theses and Dissertations Catalog - TDC) since 1987, with abstracts available since 2013. Under recent governmental efforts in data sharing, CAPES made TDC available in CSV format, making it easily accessible for data mining tasks. Recent data files, from 2013 to 2016, contain valuable information for NLP purposes, such as abstracts in Portuguese and English, scientific categories, and keywords. Thus, TDC can be an important source of parallel Portuguese/English scientific abstracts. In this work, we developed a sentence aligned parallel corpus gathered from CAPES TDC comprised of abstracts in English and Portuguese spanning the years from 2013 to 2016. In addition, we included metadata regarding the respective theses and dissertations. Material and Methods In this section, we detail the information retrieved from CAPES website, the filtering process, the sentence alignment, and the evaluation experiments. An overview of the steps employed in this article is shown in Figure FIGREF1 . Document retrieval and parsing The TDC datasets are available in the CAPES open data website divided by years, from 2013 to 2016 in CSV and XLSX formats. We downloaded all CSV files from the respective website and loaded them into an SQL database for better manipulation. The database was then filtered to remove documents without both Portuguese and English abstracts, and additional metadata selected. After the initial filtering, the resulting documents were processed for language checking to make sure that there was no misplacing of English abstracts in the Portuguese field, or the other way around, removing the documents that presented such inconsistency. 
We also performed a case folding to lower case letters, since the TDC datasets present all fields with uppercase letters. In addition, we also removed newline/carriage return characters (i.e \n and \r), as they would interfere with the sentence alignment tool. Sentence alignment For sentence alignment, we used the LF aligner tool, a wrapper around the Hunalign tool BIBREF7 , which provides an easy to use and complete solution for sentence alignment, including pre-loaded dictionaries for several languages. Hunalign uses Gale-Church sentence-length information to first automatically build a dictionary based on this alignment. Once the dictionary is built, the algorithm realigns the input text in a second iteration, this time combining sentence-length information with the dictionary. When a dictionary is supplied to the algorithm, the first step is skipped. A drawback of Hunalign is that it is not designed to handle large corpora (above 10 thousand sentences), causing large memory consumption. In these cases, the algorithm cuts the large corpus in smaller manageable chunks, which may affect dictionary building. The parallel abstracts were supplied to the aligner, which performed sentence segmentation followed by sentence alignment. A small modification in the sentence segmentation algorithm was performed to handle the fact that all words are in lowercase letters, which originally prevented segmentation. After sentence alignment, the following post-processing steps were performed: (i) removal of all non-aligned sentences; (ii) removal of all sentences with fewer than three characters, since they are likely to be noise. Machine translation evaluation To evaluate the usefulness of our corpus for SMT purposes, we used it to train an automatic translator with Moses BIBREF8 . We also trained an NMT model using the OpenNMT system BIBREF9 , and used the Google Translate Toolkit to produce state-of-the-art comparison results. The produced translations were evaluated according to the BLEU score BIBREF10 . Manual evaluation Although the Hunalign tool usually presents a good alignment between sentences, we also conducted a manual validation to evaluate the quality of the aligned sentences. We randomly selected 400 pairs of sentences. If the pair was fully aligned, we marked it as "correct"; if the pair was incompletely aligned, due to segmentation errors, for instance, we marked it as "partial"; otherwise, when the pair was incorrectly aligned, we marked it as "no alignment". Results and Discussion In this section, we present the corpus' statistics and quality evaluation regarding SMT and NMT systems, as well as the manual evaluation of sentence alignment. Corpus statistics Table TABREF12 shows the statistics (i.e. number of documents and sentences) for the aligned corpus according to the 9 main knowledge areas defined by CAPES. The dataset is available in TMX format BIBREF11 , since it is the standard format for translation memories. We also made available the aligned corpus in an SQLite database in order to facilitate future stratification according to knowledge area, for instance. In this database, we included the following metadata information: year, university, title in Portuguese, type of document (i.e. theses or dissertation), keywords in both languages, knowledge areas and subareas according to CAPES, and URL for the full-text PDF in Portuguese. 
An excerpt of the corpus is shown in Table TABREF13. Translation experiments Prior to the MT experiments, sentences were randomly split into three disjoint datasets: training, development, and test. Approximately 13,000 sentences were allocated to the development and test sets, while the remainder was used for training. For the SMT experiment, we followed the instructions of the Moses baseline system. For the NMT experiment, we used the Torch implementation to train a 2-layer LSTM model with 500 hidden units in both encoder and decoder, with 12 epochs. During translation, the option to replace UNK words by the word in the input language was used, since this is also the default in Moses. Table TABREF17 presents the BLEU scores for both translation directions with English and Portuguese on the development and test partitions for the Moses and OpenNMT models. We also included the scores for Google Translate (GT) as a benchmark of a state-of-the-art system which is widely used. The NMT model achieved better performance than the SMT one for the EN INLINEFORM0 PT direction, scoring approximately 2.17 percentage points (pp) higher, while presenting almost the same score for PT INLINEFORM1 EN. When comparing our models to GT, both of them presented better BLEU scores, especially for the EN INLINEFORM2 PT direction, with values ranging from 1.27 pp to 4.30 pp higher than GT. We highlight that these results may be due to two main factors: corpus size and domain. Our corpus is fairly large for both SMT and NMT approaches, comprised of almost 1.3M sentences, which enables the development of robust models. Regarding domain, GT is a generic tool not trained for a specific domain, thus it may produce lower results than a domain-specific model such as ours. Scientific writing usually has a strict writing style, with less variation than novels or speeches, for instance, favoring the development of tailored MT systems. Below, we demonstrate some sentences translated by Moses and OpenNMT compared to the suggested human translation. One can notice that the NMT model does indeed tend to produce more fluent results, especially regarding verbal regency. Human translation: this paper presents a study of efficiency and power management in a packaging industry and plastic films. OpenNMT: this work presents a study of efficiency and electricity management in a packaging industry and plastic films. Moses: in this work presents a study of efficiency and power management in a packaging industry and plastic films. GT: this paper presents a study of the efficiency and management of electric power in a packaging and plastic film industry. Human translation: this fact corroborates the difficulty in modeling human behavior. OpenNMT: this fact corroborates the difficulty in modeling human behavior. Moses: this fact corroborated the difficulty in model the human behavior. GT: this fact corroborates the difficulty in modeling human behavior. Sentence alignment quality We manually validated the alignment quality for 400 sentences randomly selected from the parsed corpus and assigned quality labels according to Section SECREF9. Of all the evaluated sentences, 82.30% were correctly aligned, while 13.33% were partially aligned, and 4.35% presented no alignment. The small percentage of no alignment is probably due to the use of the Hunalign tool with the provided EN/PT dictionary. Regarding the partial alignment, most of the problems result from segmentation issues prior to the alignment, which wrongly split the sentences.
Since all words were case folded to lowercase letters, the segmenter lost an important source of information for correct segmentation, generating malformed sentences. Some examples of partial alignment errors are shown in Table TABREF19, where most sentences were truncated at the wrong point. Conclusion and future work We developed a parallel corpus of theses and dissertation abstracts in Portuguese and English. Our corpus is based on the CAPES TDC dataset, which contains information regarding all theses and dissertations presented in Brazil from 2013 to 2016, including abstracts and other metadata. Our corpus was evaluated through SMT and NMT experiments with the Moses and OpenNMT systems, achieving higher BLEU scores than Google Translate. The NMT model also presented superior results to the SMT one for the EN INLINEFORM0 PT translation direction. We also manually evaluated sentences regarding alignment quality, with 82.30% of sentences correctly aligned on average. For future work, we foresee the use of the presented corpus in mono- and cross-language text mining tasks, such as text classification and clustering. As we included several types of metadata, these tasks can be facilitated. Other machine translation approaches can also be tested, including the concatenation of this corpus with other multi-domain ones.
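As an illustrative stand-in for the OpenNMT configuration described in the translation experiments (not the authors' actual Torch code), the following PyTorch sketch defines a 2-layer LSTM encoder-decoder with 500 hidden units per layer; the vocabulary sizes, batch dimensions, and the absence of attention and training loop are simplifications.

import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Bare-bones 2-layer LSTM encoder-decoder with 500 hidden units per layer."""
    def __init__(self, src_vocab, tgt_vocab, emb=500, hidden=500, layers=2):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, num_layers=layers, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        _, state = self.encoder(self.src_emb(src))    # encode the source sentence
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.out(dec_out)                      # logits over the target vocabulary

model = Seq2Seq(src_vocab=32000, tgt_vocab=32000)
src = torch.randint(0, 32000, (8, 20))   # batch of 8 source sentences, 20 tokens each
tgt = torch.randint(0, 32000, (8, 22))
print(model(src, tgt).shape)             # torch.Size([8, 22, 32000])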
2-layer LSTM model with 500 hidden units in both encoder and decoder
346f10ddb34503dfba72b0e49afcdf6a08ecacfa
346f10ddb34503dfba72b0e49afcdf6a08ecacfa_0
Q: How big PIE datasets are obtained from dictionaries? Text: Introduction Idiomatic expressions pose a major challenge for a wide range of applications in natural language processing BIBREF0. These include machine translation BIBREF1, BIBREF2, semantic parsing BIBREF3, sentiment analysis BIBREF4, and word sense disambiguation BIBREF5. Idioms show significant syntactic and morphological variability (e.g. beans being spilled for spill the beans), which makes them hard to find automatically. Moreover, their non-compositional nature makes idioms really hard to interpret, because their meaning is often very different from the meanings of the words that make them up. Hence, successful systems need not only be able to recognise idiomatic expressions in text or dialogue, but they also need to give a proper interpretation to them. As a matter of fact, current language technology performs badly on idiom understanding, a phenomenon that perhaps has not received enough attention. Nearly all current language technology used in NLP applications is based on supervised machine learning. This requires large amounts of labelled data. In the case of idiom interpretation, however, only small datasets are available. These contain just a couple of thousand idiom instances, covering only about fifty different types of idiomatic expressions. In fact, existing annotated corpora tend to cover only a small set of idiom types, comprising just a few syntactic patterns (e.g., verb-object combinations), of which a limited number of instances are extracted from a large corpus. This is not surprising as preparing and compiling such corpora involves a large amount of manual extraction work, especially if one wants to allow for form variation in the idiomatic expressions (for example, extracting cooking all the books for cook the books). This work involves both the crafting of syntactic patterns to match potential idiomatic expressions and the filtering of false extractions (non-instances of the target expression e.g. due to wrong parses), and increases with the amount of idiom types included in the corpus (which, in the worst case, means an exponential increase in false extractions). Thus, building a large corpus of idioms, especially one that covers many types in many syntactic constructions, is costly. If a high-precision, high-recall system can be developed for the task of extracting the annotation candidates, this cost will be greatly reduced, making the construction of a large corpus much more feasible. The variability of idioms has been a significant topic of interest among researchers of idioms. For example, BIBREF6 investigates the internal and external modification of a set of idioms in a large English corpus, whereas BIBREF7, quantifies and classifies the variation of a set of idioms in a large corpus of Dutch, setting up a useful taxonomy of variation types. Both find that, although idiomatic expressions mainly occur in their dictionary form, there is a significant minority of idiom instances that occur in non-dictionary variants. Additionally, BIBREF8 show that idiom variants retain their idiomatic meaning more often and are processed more easily than previously assumed. This emphasises the need for corpora covering idiomatic expressions to include these variants, and for tools to be robust in dealing with them. 
As such, the aim of this article is to describe methods and provide tools for constructing larger corpora annotated with a wider range of idiom types than currently in existence due to the reduced amount of manual labour required. In this way we hope to stimulate further research in this area. In contrast to previous approaches, we want to catch as many idiomatic expressions as possible, and we achieve this by casting a wide net, that is, we consider the widest range of possible idiom variants first and then filter out any bycatch in a way that requires the least manual effort. We expect research will benefit from having larger corpora by improving evaluation quality, by allowing for the training of better supervised systems, and by providing additional linguistic insight into idiomatic expressions. A reliable method for extracting idiomatic expressions is not only needed for building an annotated corpus, but can also be used as part of an automatic idiom processing pipeline. In such a pipeline, extracting potentially idiomatic expressions can be seen as a first step before idiom disambiguation, and the combination of the two modules then functions as an complete idiom extraction system. The main research question that we aim to answer in this article is whether dictionary-based extraction of potentially idiomatic expressions is robust and reliable enough to facilitate the creation of wide-coverage sense-annotated idiom corpora. By answering this question we make several contributions to research on multiword expressions, in particular that of idiom extraction. Firstly, we provide an overview of existing research on annotating idiomatic expressions in corpora, showing that current corpora cover only small sets of idiomatic types (Section SECREF3). Secondly, we quantify the coverage and reliability of a set of idiom dictionaries, demonstrating that there is little overlap between resources (Section SECREF4). Thirdly, we develop and release an evaluation corpus for extracting potentially idiomatic expressions from text (Section SECREF5). Finally, various extraction systems and combinations thereof are implemented, made available to the research community, and evaluated empirically (Section SECREF6). New Terminology: Potentially Idiomatic Expression (PIE) The ambiguity of phrases like wake up and smell the coffee poses a terminological problem. Usually, these phrases are called idiomatic expressions, which is suitable when they are used in an idiomatic sense, but not so much when they are used in a literal sense. Therefore, we propose a new term: potentially idiomatic expressions, or PIEs for short. The term potentially idiomatic expression refers to those expressions which can have an idiomatic meaning, regardless of whether they actually have that meaning in a given context. So, see the light is a PIE in both `After another explanation, I finally saw the light' and `I saw the light of the sun through the trees', while it is an idiomatic expression in the first context, and a literal phrase in the latter context. The processing of PIEs involves three main challenges: the discovery of (new) PIE types, the extraction of instances of known PIE types in text, and the disambiguation of PIE instances in context. Here, we propose calling the discovery task simply PIE discovery, the extraction task simply PIE extraction, and the disambiguation task PIE disambiguation. Note that these terms contrast with the terms used in existing research. 
There, the discovery task is called type-based idiom detection and the disambiguation task is called token-based idiom detection (cf. BIBREF10, BIBREF11), although this usage is not always consistent. Because these terms are very similar, they are potentially confusing, and that is why we propose novel terminology. Other terminology comes from literature on multiword expressions (MWEs) more generally, i.e. not specific to idioms. Here, the task of finding new MWE types is called MWE discovery and finding instances of known MWE types is called MWE identification BIBREF12. Note, however, that MWE identification generally consists of finding only the idiomatic usages of these types (e.g. BIBREF13). This means that MWE identification consists of both the extraction and disambiguation tasks, performed jointly. In this work, we propose to split this into two separate tasks, and we are concerned only with the PIE extraction part, leaving PIE disambiguation as a separate problem. Related Work This section is structured so as to reflect the dual contribution of the present work. First, we discuss existing resources annotated for idiomatic expressions. Second, we discuss existing approaches to the automatic extraction of idioms. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms There are four sizeable sense-annotated PIE corpora for English: the VNC-Tokens Dataset BIBREF9, the Gigaword dataset BIBREF14, the IDIX Corpus BIBREF10, and the SemEval-2013 Task 5 dataset BIBREF15. An overview of these corpora is presented in Table TABREF7. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: VNC-Tokens The VNC-Tokens dataset contains 53 different PIE types. BIBREF9 extract up to 100 instances from the British National Corpus for each type, for a total of 2,984 instances. These types are based on a pre-existing list of verb-noun combinations and were filtered for frequency and whether two idiom dictionaries both listed them. Instances were extracted automatically, by parsing the corpus and selecting all sentences with the right verb and noun in a direct-object relation. It is unclear whether the extracted sentences were manually checked, but no false extractions are mentioned in the paper or present in the dataset. All extracted PIE instances were annotated for sense as either idiomatic, literal or unclear. This is a self-explanatory annotation scheme, but BIBREF9 note that senses are not binary, but can form a continuum. For example, the idiomaticity of have a word in `You have my word' is different from both the literal sense in `The French have a word for this' and the figurative sense in `My manager asked to have a word'. They instructed annotators to choose idiomatic or literal even in ambiguous middle-of-the-continuum cases, and restrict the unclear label only to cases where there is not enough context to disambiguate the meaning of the PIE. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: Gigaword BIBREF14 present a corpus of 17 PIE types, for which they extracted all instances from the Gigaword corpus BIBREF18, yielding a total of 3,964 instances. BIBREF14 extracted these instances semi-automatically by manually defining all inflectional variants of the verb in the PIE and matching these in the corpus. They did not allow for inflectional variations in non-verb words, nor did they allow intervening words. They annotated these potential idioms as either literal or figurative, excluding ambiguous and unclear instances from the dataset. 
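To make the extraction methods described above more concrete, here is a minimal sketch of parse-based extraction in the style of the VNC-Tokens dataset (selecting sentences where the PIE's noun is the direct object of its verb), assuming spaCy with the en_core_web_sm model; the PIE inventory and the reliance on the "dobj"/"obj" relations are illustrative. Note that the matcher deliberately returns literal uses as well (the third example sentence), since disambiguation is treated as a separate step.

import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

# A few verb-noun PIE types, stored as (verb lemma, noun lemma) pairs.
PIE_TYPES = [("spill", "bean"), ("kick", "bucket"), ("see", "light")]

def extract_pie_candidates(sentences):
    """Return sentences in which a PIE's noun occurs as the direct object of its verb."""
    hits = []
    for doc in nlp.pipe(sentences):
        for tok in doc:
            if tok.dep_ in ("dobj", "obj") and tok.head.pos_ == "VERB":
                pair = (tok.head.lemma_.lower(), tok.lemma_.lower())
                if pair in PIE_TYPES:
                    hits.append((pair, doc.text))
    return hits

sents = ["She accidentally spilled the beans about the party.",
         "After another explanation, I finally saw the light.",
         "He spilled coffee beans all over the floor."]   # literal use, also extracted
print(extract_pie_candidates(sents))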
Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: IDIX BIBREF10 build on the methodology of BIBREF14, but annotate a larger set of idioms (52 types) and extract all occurrences from the BNC rather than the Gigaword corpus, for a total of 4,022 instances including false extractions. BIBREF10 use a more complex semi-automatic extraction method, which involves parsing the corpus, manually defining the dependency patterns that match the PIE, and extracting all sentences containing those patterns from the corpus. This allows for larger form variations, including intervening words and inflectional variation of all words. In some cases, this yields many non-PIE extractions, as for recharge one's batteries in Example SECREF10. These were not filtered out before annotation, but rather filtered out as part of the annotation process, by having false extraction as an additional annotation label. For sense annotation, they use an extensive tagset, distinguishing literal, non-literal, both, meta-linguistic, embedded, and undecided labels. Here, the both label (Example SECREF10) is used for cases where both senses are present, often as a form of deliberate word play. The meta-linguistic label (Example SECREF10) applies to cases where the PIE instance is used as a linguistic item to discuss, not as part of a sentence. The embedded label (Example SECREF10) applies to cases where the PIE is embedded in a larger figurative context, which makes it impossible to say whether a literal or figurative sense is more applicable. The undecided label is used for unclear and undecidable cases. They take into account the fact that a PIE can have multiple figurative senses, and enumerate these separately as part of the annotation. . These high-performance, rugged tools are claimed to offer the best value for money on the market for the enthusiastic d-i-yer and tradesman, and for the first time offer the possibility of a battery recharging time of just a quarter of an hour. (from IDIX corpus, ID #314) . Left holding the baby, single mothers find it hard to fend for themselves. (from BIBREF10, p.642) . It has long been recognised that expressions such as to pull someone's leg, to have a bee in one's bonnet, to kick the bucket, to cook someone's goose, to be off one's rocker, round the bend, up the creek, etc. are semantically peculiar. (from BIBREF10, p.642) . You're like a restless bird in a cage. When you get out of the cage, you'll fly very high. (from BIBREF10, p.642) The both, meta-linguistic, and embedded labels are useful and linguistically interesting distinctions, although they occur very rarely (0.69%, 0.15%, and an unknown %, respectively). As such, we include these cases in our tagset (see Section SECREF5), but group them under a single label, other, to reduce annotation complexity. We also follow BIBREF10 in that we combine both the PIE/non-PIE annotation and the sense annotation in a single task. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: SemEval-2013 Task 5b BIBREF15 created a dataset for SemEval-2013 Task 5b, a task on detecting semantic compositionality in context. They selected 65 PIE types from Wiktionary, and extracted instances from the ukWaC corpus BIBREF17, for a total of 4,350 instances. It is unclear how they extracted the instances, and how much variation was allowed for, although there is some inflectional variation in the dataset. An unspecified amount of manual filtering was done on the extracted instances. 
The extracted PIE instances were labelled as literal, idiomatic, both, or undecidable. Interestingly, they crowdsourced the sense annotations using CrowdFlower, with high agreement (90%–94% pairwise). Undecidable cases and instances on which annotators disagreed were removed from the dataset. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: General Multiword Expression Corpora In addition to the aforementioned idiom corpora, there are also corpora focused on multiword expressions (MWEs) in a more general sense. As idioms are a subcategory of MWEs, these corpora also include some idioms. The most important of these are the PARSEME corpus BIBREF19 and the DiMSUM corpus BIBREF20. DiMSUM provides annotations of over 5,000 MWEs in approximately 90K tokens of English text, consisting of reviews, tweets and TED talks. However, they do not categorise the MWEs into specific types, meaning we cannot easily quantify the number of idioms in the corpus. In contrast to the corpus-specific sense labels seen in other corpora, DiMSUM annotates MWEs with WordNet supersenses, which provide a broad category of meaning for each MWE. Similarly, the PARSEME corpus consists of over 62K MWEs in almost 275K tokens of text across 18 different languages (with the notable exception of English). The main differences with DiMSUM, except for scale and multilingualism, are that it only includes verbal MWEs, and that subcategorisation is performed, including a specific category for idioms. Idioms make up almost a quarter of all verbal MWEs in the corpus, although the proportion varies wildly between languages. In both corpora, MWE annotation was done in an unrestricted manner, i.e. there was not a predefined set of expressions to which annotation was restricted. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: Overview In sum, there is large variation in corpus creation methods, regarding PIE definition, extraction method, annotation schemes, base corpus, and PIE type inventory. Depending on the goal of the corpus, the amount of deviation that is allowed from the PIE's dictionary form to the instances can be very little BIBREF14, to quite a lot BIBREF10. The number of PIE types covered by each corpus is limited, ranging from 17 to 65 types, often limited to one or more syntactic patterns. The extraction of PIE instances is usually done in a semi-automatic manner, by manually defining patterns in a text or parse tree, and doing some manual filtering afterwards. This works well, but an extension to a large number of PIE types (e.g. several hundreds) would also require a large increase in the amount of manual effort involved. Considering the sense annotations done on the PIE corpora, there is significant variation, with BIBREF9 using only three tags, whereas BIBREF10 use six. Outside of PIE-specific corpora there are MWE corpora, which provide a different perspective. A major difference there is that annotation is not restricted to a pre-specified set of expressions, which has not been done for PIEs specifically. Related Work ::: Extracting Idioms from Corpora There are two main approaches to idiom extraction. The first approach aims to distinguish idioms from other multiword phrases, where the main purpose is to expand idiom inventories with rare or novel expressions BIBREF21, BIBREF22, BIBREF23, BIBREF24. The second approach aims to extract all occurrences of a known idiomatic expression in a text. In this paper, we focus on the latter approach. 
We rely on idiom dictionaries to provide a list of PIE types, and build a system that extracts all instances of those PIE types from a corpus. High-quality idiom dictionaries exist for most well-resourced languages, but their reliability and coverage are not known. As such, we quantify the coverage of dictionaries in Section SECREF4. There is, to the best of our knowledge, no existing work that focuses on dictionary-based PIE extraction. However, there is closely-related work by BIBREF25, who present a system for the dictionary-based extraction of verb-noun combinations (VNCs) in English and Spanish. In their case, the VNCs can be any kind of multiword expression, which they subdivide into literal expressions, collocations, light verb constructions, metaphoric expressions, and idioms. They extract 173 English VNCs and 150 Spanish VNCs and annotate these with both their lexico-semantic MWE type and the amount of morphosyntactic variation they exhibit. BIBREF25 then compare a word sequence-based method, a chunking-based method, and a parse-based method for VNC extraction. Each method relies on morpho-syntactic information in order to limit false extractions. Precision is evaluated manually on a sample of the extracted VNCs, and recall is estimated by calculating the overlap between the output of the three methods. Evaluation shows that the methods are highly complementary both in recall, since they extract different VNCs, and in precision, since combining the extractors yields fewer false extractions. Whereas BIBREF25 focus on both idiomatic and literal uses of the set of expressions, as in this paper, BIBREF26 tackle only half of that task, namely extracting only literal uses of a given set of VMWEs in Polish. This complicates the task, since it combines extracting all occurrences of the VMWEs and then distinguishing literal from idiomatic uses. Interestingly, they also experiment with models of varying complexity, i.e. just words, part-of-speech tags, and syntactic structures. Their results are hard to put into perspective, however, since literal VMWEs are very rare in their corpus, whereas corpora containing PIEs tend to show a more balanced distribution. Other work similar to ours also focuses on MWEs more generally, or on different subtypes of MWEs. In addition, these tend to combine both extraction and disambiguation in that they aim to extract only idiomatically used instances of the MWE, without extracting literally used instances or non-instances. Within this line of work, BIBREF27 focuses on verb-particle constructions, BIBREF28 on verbal MWEs (including idioms), and BIBREF29 on verbal MWEs (especially non-canonical variants). Both BIBREF28 and BIBREF29 rely on a pre-defined set of expressions, whereas BIBREF27 also extracts unseen expressions, although based on a pre-defined set of particles and within the very narrow syntactic frame of verb-particle constructions. The work of BIBREF27 is most similar to ours in that it builds an unsupervised system using existing NLP tools (PoS taggers, chunkers, parsers) and finds that a combination of systems using those tools performs best, as we find in Section SECREF69. BIBREF28 and BIBREF29, by contrast, use supervised classifiers which require training data, not just for the task in general, but specific to the set of expressions used in the task.
Although our approach is similar to that of BIBREF25, both in the range of methods used and in the goal of extracting certain multiword expressions regardless of morphosyntactic variation, there are two main differences. First, we use dictionaries, but extract entries automatically and do not manually annotate their type and variability. As a result, our methods rely only on the surface form of the expression taken from the dictionary. Second, we evaluate precision and recall in a more rigorous way, by using an evaluation corpus exhaustively annotated for PIEs. In addition, we do not put any restriction on the syntactic type of the expressions to be extracted, which BIBREF27, BIBREF28, BIBREF25, and BIBREF29 all do. Coverage of Idiom Inventories ::: Background Since our goal is developing a dictionary-based system for extracting potentially idiomatic expressions, we need to devise a proper method for evaluating such a system. This is not straightforward, even though the final goal of such a system is simple: it should extract all potentially idiomatic expressions from a corpus and nothing else, regardless of their sense and the form they are used in. The type of system proposed here hence has two aspects that can be evaluated: the dictionary that it is using as a resource for idiomatic expressions, and the extractor component that finds idioms in a corpus. The difficulty here is that there is no undisputed and unambiguous definition of what counts as an idiom BIBREF30, as is the case with multiword expressions in general BIBREF12. Of course, a complete set of idiomatic expressions for English (or any other language) is impossible to get due to the broad and ever-changing nature of language. This incompleteness is exacerbated by the ambiguity problem: if we had a clear definition of idiom we could make an attempt at evaluating idiom dictionaries on their accuracy, but it is practically impossible to come up with a definition of idiom that leaves no room for ambiguity. This ambiguity, among other factors, creates a large grey area between clearly non-idiomatic phrases on the one hand (e.g. buy a house), and clear potentially idiomatic phrases on the other hand (e.g. buy the farm). As a consequence, we cannot empirically evaluate the coverage of the dictionaries. Instead, in this work, we will quantify the divergence between various idiom dictionaries and corpora, with regard to their idiom inventories. If they show large discrepancies, we take that to mean that either there is little agreement on definitions of idiom or the category is so broad that a single resource can only cover a small proportion. Conversely, if there is large agreement, we assume that idiom resources are largely reliable, and that there is consensus around what is, and what is not, an idiomatic expression. We use different idiom resources and assume that the combined set of resources yields an approximation of the true set of idioms in English. A large divergence between the idiom inventories of these resources would then suggest a low recall for a single resource, since many other idioms are present in the other resources. Conversely, if the idiom inventories largely overlap, that indicates that a single resource can already yield decent coverage of idioms in the English language. The results of the dictionary comparisons are in Section SECREF36.
Coverage of Idiom Inventories ::: Selected Idiom Resources (Data and Method) We evaluate the quality of three idiom dictionaries by comparing them to each other and to three idiom corpora. Before we report on the comparison we first describe why we select and how we prepare these resources. We investigate the following six idiom resources: Wiktionary; the Oxford Dictionary of English Idioms (ODEI, BIBREF31); UsingEnglish.com (UE); the Sporleder corpus BIBREF10; the VNC dataset BIBREF9; and the SemEval-2013 Task 5 dataset BIBREF15. These dictionaries were selected because they are available in digital format. Wiktionary and UsingEnglish have the added benefit of being freely available. However, they are both crowdsourced, which means they lack professional editing. In contrast, ODEI is a traditional dictionary, created and edited by lexicographers, but it has the downside of not being freely available. For Wiktionary, we extracted all idioms from the category `English Idioms' from the English version of Wiktionary. We took the titles of all pages containing a dictionary entry and considered these idioms. Since we focus on multiword idiomatic expressions, we filtered out all single-word entries in this category. More specifically, since Wiktionary is a constantly changing resource, we used the 8,482 idioms retrieved on 10-03-2017, 15:30. We used a similar extraction method for UE, a web page containing freely available resources for ESL learners, including a list of idioms. We extracted all idioms which have publicly available definitions, which numbered 3,727 on 10-03-2017, 15:30. Again, single-word entries and duplicates were filtered out. Concerning ODEI, all idioms from the e-book version were extracted, amounting to 5,911 idioms scraped on 13-03-2017, 10:34. Here we performed an extra processing step to expand idioms containing content in parentheses, such as a tough (or hard) nut (to crack). Using a set of simple expansion rules and some hand-crafted exceptions, we automatically generated all variants for this idiom, with good, but not perfect accuracy. For the example above, the generated variants are: {a tough nut, a tough nut to crack, a hard nut, a hard nut to crack}. The idioms in the VNC dataset are in the form verb_noun, e.g. blow_top, so they were manually expanded to a regular dictionary form, e.g. blow one's top before comparison. Coverage of Idiom Inventories ::: Method In many cases, using simple string-match to check overlap in idioms does not work, as exact comparison of idioms misses equivalent idioms that differ only slightly in dictionary form. Differences between resources are caused by, for example: inflectional variation (crossing the Rubicon — cross the Rubicon); variation in scope (as easy as ABC — easy as ABC); determiner variation (put the damper on — put a damper on); spelling variation (mind your p's and q's — mind your ps and qs); order variation (call off the dogs — call the dogs off); and different conventions for placeholder words (recharge your batteries — recharge one's batteries), where both your and one's can generalise to any possessive personal pronoun. These minor variations do not fundamentally change the nature of the idiom, and we should count these types of variation as belonging to the same idiom (see also BIBREF32, who devise a measure to quantify different types of variation allowed by specific MWEs). 
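To make the kinds of abstraction needed here concrete, the sketch below shows a minimal, purely illustrative normalisation step for comparing two dictionary forms; it only handles a few of the variation types listed above (determiners, hyphens, possessive placeholders), and it is not the flexible matching procedure used in this work.

```python
import re

DETERMINERS = {"a", "an", "the"}
# Hypothetical normalisation: collapse possessive placeholders and pronouns to one token.
POSSESSIVES = {"one's", "someone's", "my", "your", "his", "her", "its", "our", "their"}

def normalise(idiom: str) -> tuple:
    """Reduce a dictionary form to a crude canonical shape for comparison."""
    idiom = idiom.lower().replace("-", " ")      # nuts-and-bolts -> nuts and bolts
    idiom = re.sub(r"[^\w\s']", " ", idiom)      # drop stray punctuation
    tokens = []
    for tok in idiom.split():
        if tok in DETERMINERS:
            continue                             # put the damper on ~ put a damper on
        if tok in POSSESSIVES:
            tok = "one's"                        # recharge your batteries ~ recharge one's batteries
        tokens.append(tok)
    return tuple(tokens)

def same_idiom(a: str, b: str) -> bool:
    """Very rough equivalence test over normalised forms."""
    return normalise(a) == normalise(b)

print(same_idiom("put the damper on", "put a damper on"))                 # True
print(same_idiom("recharge your batteries", "recharge one's batteries"))  # True
print(same_idiom("mind your p's and q's", "mind your ps and qs"))         # False: spelling variation not handled
```

Spelling, inflectional, scope, and word-order variation are deliberately left out of this sketch; accounting for those is exactly what the flexible matching approach described next is for.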
So, to get a good estimate of the true overlap between idiom resources, these variations need to be accounted for, which we do in our flexible matching approach. There is one other case of variation not listed above, namely lexical variation (e.g. rub someone up the wrong way - stroke someone the wrong way). We do not abstract over this, since we consider lexical variation to be a more fundamental change to the nature of the idiom. That is, a lexical variant is an indicator of the coverage of the dictionary, whereas the other variations are due to different stylistic conventions and do not indicate actual coverage. In addition, it is easy to abstract over the other types of variation in an NLP application, but this is not the case for lexical variation. The overlap counts are estimated by abstracting over all variations except lexical variation in a semi-automatic manner, using heuristics and manual checking. Potentially overlapping idioms are selected using the following set of heuristics: whether an idiom from one resource is a substring (including gaps) of an idiom in the other resource, whether the words of an idiom form a subset of the words of an idiom in the other resource, and whether there is an idiom in the other resource which has a Levenshtein ratio of over 0.8. The Levenshtein ratio is an indicator of the Levenshtein distance between the two idioms relative to their length. These potential matches are then judged manually on whether they are really forms of the same idiom or not. Coverage of Idiom Inventories ::: Results The results of using exact string matching to quantify the overlap between the dictionaries are illustrated in Figure FIGREF37. Overlap between the three dictionaries is low. A possible explanation for this lies with the different nature of the dictionaries. Oxford is a traditional dictionary, created and edited by professional lexicographers, whereas Wiktionary is a crowdsourced dictionary open to everyone, and UsingEnglish is similar, but focused on ESL-learners. It is likely that these different origins result in different idiom inventories. Similarly, we would expect that the overlap between a pair of traditional dictionaries, such as the ODEI and the Penguin Dictionary of English Idioms BIBREF33, would be significantly higher. It should also be noted, however, that comparisons between more similar dictionaries also found relatively little overlap (BIBREF34; BIBREF35). A counterpoint is provided by BIBREF36, who quantifies coverage of verb-particle constructions in three different dictionaries and finds large overlap – perhaps because verb-particle constructions are a more restricted class. As noted previously, using exact string matching is a very limited approach to calculating overlap. Therefore, we used heuristics and manual checking to get more precise numbers, as shown in Table TABREF39, which also includes the three corpora in addition to the three dictionaries. As the manual checking only involved judging similar idioms found in pairs of resources, we cannot calculate three-way overlap as in Figure FIGREF37. The counts of the pair-wise overlap between dictionaries differ significantly between the two methods, which serves to illustrate the limitations of using only exact string matching and the necessity of using more advanced methods and manual effort. Several insights can be gained from the data in Table TABREF39. The relation between Wiktionary and the SemEval corpus is obvious (cf. Section SECREF12), given the 96.92% coverage.
For the other dictionary-corpus pairs, the coverage increases proportionally with the size of the dictionary, except in the case of UsingEnglish and the Sporleder corpus. The proportional increase indicates no clear qualitative differences between the dictionaries, i.e. one does not have a significantly higher percentage of non-idioms than the other, when compared to the corpora. Generally, overlap between dictionaries and corpora is low: the two biggest, ODEI and Wiktionary, have only around 30% overlap, while the dictionaries also cover no more than approximately 70% of the idioms used in the various corpora. Overlap between the three corpora is also extremely low, at below 5%. This is unsurprising, since a new dataset is more interesting and useful when it covers a different set of idioms than used in an existing dataset, and thus is likely constructed with this goal in mind. Corpus Annotation In order to evaluate the PIE extraction methods developed in this work (Section SECREF6), we exhaustively annotate an evaluation corpus with all instances of a pre-defined set of PIEs. As part of this, we come up with a workable definition of PIEs, and measure the reliability of PIE annotation by inter-annotator agreement. Assuming that we have a set of idioms, the main problem of defining what is and what is not a potentially idiomatic expression is caused by variation. In principle, a potentially idiomatic expression is an instance of a phrase that, when seen without context, could have either an idiomatic or a literal meaning. This is clearest for the dictionary form of the idiom, as in Example SECREF5. Literal uses generally allow all kinds of variation, but not all of these variations allow a figurative interpretation, e.g. Example SECREF5. However, how much variation an idiom can undergo while retaining its figurative interpretation is different for each expression, and judgements of this might vary from one speaker to another. An example of this is spill the bean, a variant of spill the beans, in Example SECREF5, judged by BIBREF21 as being highly questionable. However, even here a corpus example can be found containing the same variant used in a figurative sense (Example SECREF5). As such, we assume that we cannot know a priori which variants of an expression allow a figurative reading, and are thus potentially idiomatic expressions. Therefore we consider every possible morpho-syntactic variation of an idiom a PIE, regardless of whether it actually allows a figurative reading. We believe the boundaries of this variation can only be determined based on corpus evidence, and even then they are likely variable. Note that a similar question is tackled by BIBREF26, when they establish the boundary between a `literal reading of a VMWE' and a `coincidental co-occurrence'. BIBREF26's answer is similar to ours, in that they count something as a literal reading of a VMWE if `the same or equivalent dependencies hold between [the expression]'s components as in its canonical form'. . John kicked the bucket last night. . * The bucket, John kicked last night. . ?? Azin spilled the bean. (from BIBREF21) .
Alba reveals Fantastic Four 2 details The Invisible Woman actress spills the bean on super sequel (from ukWaC) Corpus Annotation ::: Evaluating the Extraction Methods Evaluating the extraction methods is easier than evaluating dictionary coverage, since the goal of the extraction component is more clearly delimited: given a set of PIEs from one or more dictionaries, extract all occurrences of those PIEs from a corpus. Thus, rather than dealing with the undefined set of all PIEs, we can work with a clearly defined and finite set of PIEs from a dictionary. Because we have a clearly defined set of PIEs, we can exhaustively annotate a corpus for PIEs, and use that annotated corpus for automatic evaluation of extraction methods using recall and precision. This allows us to facilitate and speed up annotation by pre-extracting sentences possibly containing a PIE. After the corpus is annotated, the precision and recall can be easily estimated by comparing the extracted PIE instances to those marked in the corpus. The details of the corpus selection, dictionary selection, extraction heuristic and annotation procedure are presented in Section SECREF46, and the details and results of the various extraction methods are presented in Section SECREF6. Corpus Annotation ::: Base Corpus and Idiom Selection As a base corpus, we use the XML version of the British National Corpus BIBREF37, because of its size, variety, and wide availability. The BNC is pre-segmented into s-units, which we take to be sentences, w-units, which we take to be words, and c-units, punctuation. We then extract the text of all w-units and c-units. We keep the sentence segmentation, resulting in a set of plain text sentences. All sentences are included, except for sentences containing <gap> elements, which are filtered out. These <gap> elements indicate places where material from the original has been left out, e.g. for anonymisation purposes. Since this can result in incomplete sentences that cannot be parsed correctly, we filter out sentences containing these gaps. We use only the written part of the BNC. From this, we extract a set of documents with the aim of having as much genre variation as possible. To achieve this, we select the first document in each genre, as defined by the classCode attribute (e.g. nonAc, commerce, letters). The resulting set of 46 documents makes up our base corpus. Note that these documents vary greatly in size, which means the resulting corpus is varied, but not balanced in terms of size (Table TABREF43). The documents are split across a development and test set, as specified at the end of Section SECREF46. We exclude documents with IDs starting with A0 from all annotation and evaluation procedures, as these were used during development of the extraction tool and annotation guidelines. As for the set of potentially idiomatic expressions, we use the intersection of the three dictionaries, Wiktionary, Oxford, and UsingEnglish. Based on the assumption that, if all three resources include a certain idiom, it must unquestionably be an idiom, we choose the intersection (also see Figure FIGREF37). This serves to exclude questionable entries, like at all, which is in Wiktionary. The final set of idioms used for these experiments consists of 591 different multiword expressions. Although we aim for wide coverage, this is a necessary trade-off to ensure quality. At the same time, it leaves us with a set of idiom types that is approximately ten times larger than present in existing corpora. 
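As an illustration of this preprocessing, the sketch below extracts plain-text sentences from one BNC-XML document while skipping sentences that contain <gap> elements. It assumes the element names mentioned above (s, w, c, gap) and a file path chosen only for illustration; the actual BNC files may require additional handling (e.g. of nested mark-up), so this is a simplified sketch rather than the exact script used in this work.

```python
import xml.etree.ElementTree as ET

def extract_sentences(bnc_xml_path):
    """Yield plain-text sentences from one BNC-XML document,
    skipping any sentence that contains a <gap> element."""
    root = ET.parse(bnc_xml_path).getroot()
    for s in root.iter("s"):                      # s-units = sentences
        if s.find(".//gap") is not None:          # elided material -> possibly incomplete sentence
            continue
        tokens = [el.text.strip() for el in s.iter()
                  if el.tag in ("w", "c") and el.text]  # w-units = words, c-units = punctuation
        if tokens:
            yield " ".join(tokens)

# Hypothetical usage (the path is illustrative):
# for sentence in extract_sentences("BNC/Texts/A/A00.xml"):
#     print(sentence)
```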
The set of 591 idioms includes idioms with a large variety of syntactic patterns, of which the most frequent ones are shown in Table TABREF44. The statistics show that the types most prevalent in existing corpora, verb-noun and preposition-noun combinations, are indeed the most frequent ones, but that there is a sizeable minority of types that do not fall into those categories, including coordinated adjectives, coordinated nouns, and nouns with prepositional phrases. This serves to emphasise the necessity of not restricting corpora to a small set of syntactic patterns. Corpus Annotation ::: Extraction of PIE Candidates To annotate the corpus completely manually would require annotators to read the whole corpus, and cross-reference each sentence to a list of almost 600 PIEs, to check whether one of those PIEs occurs in a sentence. We do not consider this a feasible annotation setting, due to both the difficulty of recognising literal usages of idioms and the time cost needed to find enough PIEs, given their low overall frequency. As such, we use a pre-extraction step to present candidates for annotation to the human annotators. Given the corpus and the set of PIEs, we heuristically extract the PIE candidates as follows: given an idiomatic expression, extract every sentence which contains all the defining words of the idiom, in any form. This ensures that all possibly matching sentences get extracted, while greatly pruning the number of sentences for annotators to look at. In addition, it allows us to present the heuristically matched PIE type and corresponding words to the annotators, which makes it much easier to judge whether something is a PIE or not. This also means that annotators never have to go through the full list of PIEs during the annotation process. Initially, the heuristic simply extracted any sentence containing all the required words, where a word is any of the inflectional variants of the words in the PIE, except for determiners and punctuation. This method produced large amounts of noise, that is, a set of PIE candidates with only a very low percentage of actual PIEs. This was caused by the presence of some highly frequent PIEs with very little defining lexical content, such as on the make, and in the running. For example, with the original method, every sentence containing the preposition on, and any inflectional form of the verb make was extracted, resulting in a huge number of non-PIE candidates. To limit the amount of noise, two restrictions were imposed. The first restriction disallows word order variation for PIEs which do not contain a verb. The rationale behind this is that word order variation is only possible with PIEs like spill the beans (e.g. the beans were spilled), and not with PIEs like in the running (*the running in??). The second restriction is that we limit the number of words that can be inserted between the words of a PIE, but only for PIEs like on the make, and in the running, i.e. PIEs which only contain prepositions, determiners and a single noun. The number of intervening words was limited to three tokens, allowing for some variation, as in Example SECREF45, but preventing sentences like Example SECREF45 from being extracted. This restriction could result in the loss of some PIE candidates with a large number of intervening words. However, the savings in annotation time clearly outweigh the small loss in recall in this situation. . Either at New Year or before July you can anticipate a change in the everyday running of your life.
(in the running - BNC - document CBC - sentence 458) . [..] if [he] hung around near the goal or in the box for that matter instead of running all over the show [..] (in the running - BNC - document J1C - sentence 1341) Corpus Annotation ::: Annotation Procedure The manual annotation procedure consists of three different phases (pilot, double annotation, single annotation), followed by an adjudication step to resolve conflicting annotations. Two things are annotated: whether something is a PIE or not, and if it is a PIE, which sense the PIE is used in. In the first phase (0-100-*), we randomly select one hundred of the 2,239 PIE candidates, which are then annotated by three annotators. All annotators have a good command of English, are computational linguists, and are familiar with the subject. The annotators include the first and last author of this paper. The annotators were provided with a short set of guidelines, of which the main rule-of-thumb for labelling a phrase as a PIE is as follows: any phrase is a PIE when it contains all the words, with the same part-of-speech, and in the same grammatical relations as in the dictionary form of the PIE, ignoring determiners. For sense annotation, annotators were to mark a PIE as idiomatic if it had a sense listed in one of the idiom dictionaries, and as literal if it had a meaning that is a regular composition of its component words. For cases which were undecidable due to lack of context, the ?-label was used. The other-label was used as a container label for all cases in which neither the literal nor the idiomatic sense was correct (e.g. meta-linguistic uses and embeddings in metaphorical frames, see also Section SECREF10). The first phase of annotation serves to bring to light any inconsistencies between annotators and fill in any gaps in the annotation guidelines. The resulting annotations already show a reasonably high agreement of 0.74 Fleiss' Kappa. Table TABREF48 shows annotation details and agreement statistics for all three phases. The annotation tasks suffixed by -PIE indicate agreement on PIE/non-PIE annotation and the tasks suffixed by -sense indicate agreement on sense annotation for PIEs. In the second phase of annotation (100-600-* & 600-1100-*), another 1000 of the 2239 PIE candidates are selected to be annotated by two pairs of annotators. This phase shows very high agreement (see Table TABREF48), probably due to the improvement in guidelines and the discussion following the pilot round of annotation. The exception is the 600-1100-sense annotation task, which shows somewhat lower scores. Adjudication revealed that this is due almost exclusively to a different interpretation of the literal and idiomatic senses of a single PIE type: on the ground. Excluding this PIE type, Fleiss' Kappa increases from 0.63 to 0.77. Because of the high agreement on PIE annotation, we deem it sufficient for the remainder (1108 candidates) to be annotated by only the primary annotator in the third phase of annotation (1100-2239-*). The reliability of the single annotation can be checked by comparing the distribution of labels to the multi-annotated parts. This shows that it falls clearly within the ranges of the other parts, both in the proportion of PIEs and idiomatic senses (see Table TABREF49). The single-annotated part has 49.0% PIEs, which is only 4 percentage points above the 44.7% PIEs in the multi-annotated parts. The proportion of idioms is just 2 percentage points higher, with 55.9% versus 53.9%.
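For reference, Fleiss' Kappa can be computed from a table that records, for each candidate, how many annotators assigned each label. The sketch below is a minimal illustration with a made-up toy table; the agreement figures reported above were of course computed on the actual annotation data.

```python
def fleiss_kappa(counts):
    """counts: one row per item; each row gives the number of annotators
    that assigned each label to that item (row sums are all equal)."""
    n_items = len(counts)
    n_raters = sum(counts[0])
    n_labels = len(counts[0])

    # Observed agreement per item, averaged over items.
    p_obs = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_items

    # Chance agreement from the overall label distribution.
    totals = [sum(row[j] for row in counts) for j in range(n_labels)]
    proportions = [t / (n_items * n_raters) for t in totals]
    p_exp = sum(p * p for p in proportions)

    return (p_obs - p_exp) / (1 - p_exp)

# Toy example with 3 annotators and two labels (PIE, non-PIE):
items = [[3, 0], [2, 1], [3, 0], [0, 3], [1, 2]]
print(round(fleiss_kappa(items), 2))  # ~0.44 for this toy table
```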
Although inter-annotator agreement was high, there was still a significant number of cases in the triple and double annotated PIE candidate sets where not all annotators agreed. These cases were adjudicated through discussion by all annotators, until they were in agreement. In addition, all PIE candidates which initially received the ?-label (unclear or undecidable) for sense or PIE were resolved in the same manner. In the adjudication procedure, annotators were provided with additional context on each side of the idiom, in contrast to the single sentence provided during the initial annotation. The main reason to do adjudication, rather than simply discarding all candidates for which there was disagreement, was that we expected exactly those cases for which there are conflicting annotations to be the most interesting ones, since having non-standard properties would cause the annotations to diverge. Examples of such interesting non-standard cases are at sea as part of a larger satirical frame in Example SECREF46 and cut the mustard in Example SECREF46 where it is used in a headline as wordplay on a Cluedo character. . The bovine heroine has connections with Cowpeace International, and deals with a huge treacle slick at sea. (at sea - BNC - document CBC - sentence 13550) . Why not cut the Mustard? [..] WADDINGTON Games's proposal to axe Reverend Green from the board game Cluedo is a bad one. (cut the mustard - BNC - document CBC - sentence 14548) We split the corpus at the document level. The corpus consists of 45 documents from the BNC, and we split it in such a way that the development set has 1,112 candidates across 22 documents and the test set has 1,127 candidates from 23 documents. Note that this means that the development and test set contain different genres. This ensures that we do not optimise our systems on genre-specific aspects of the data. Dictionary-based PIE Extraction We propose and implement four different extraction methods, of differing complexities: exact string match, fuzzy string match, inflectional string match, and parser-based extraction. Because of the absence of existing work on this task, we compare these methods to each other, where the more basic methods function as baselines. More complex methods serve to shine light on the difficulty of the PIE extraction task; if simple methods already work sufficiently well, the task is not as hard as expected, and vice versa. Below, each of the extraction methods is presented and discussed in detail. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Exact String Match This is, very simply, extracting all instances of the exact dictionary form of the PIE, from the tokenized text of the corpus. Word boundaries are taken into account, so at sea does not match `that seawater'. As a result, all inflectional and other variants of the PIE are ignored. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Fuzzy String Match Fuzzy string match is a rough way of dealing with morphological inflection of the words in a PIE. We match all words in the PIE, taking into account word boundaries, and allow for up to 3 additional letters at the end of each word. These 3 additional characters serve to cover inflectional suffixes. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Inflectional String Match In inflectional string match, we aim to match all inflected variations of a PIE. 
This is done by generating all morphological variants of the words in a PIE, generating all combinations of those words, and then using exact string match as described earlier. Generating morphological variations consists of three steps: part-of-speech tagging, morphological analysis, and morphological reinflection. Since inflectional variation only applies to verbs and nouns, we use the Spacy part-of-speech tagger to detect the verbs and nouns. Then, we apply the morphological analyser morpha to get the base, uninflected form of the word, and then use the morphological generation tool morphg to get all possible inflections of the word. Both tools are part of the Morph morphological processing suite BIBREF38. Note that the Morph tools depend on the part-of-speech tag in the input, so that a wrong PoS may lead to an incorrect set of morphological variants. For a PIE like spill the beans, this results in the following set of variants: {spill the bean, spills the bean, spilled the bean, spilling the bean, spill the beans, spills the beans, spilled the beans, spilling the beans}. Since we generate up to 2 variants for each noun, and up to 4 variants for each verb, the number of variants for PIEs containing multiple verbs and nouns can get quite large. On average, 8 additional variants are generated for each potentially idiomatic expression. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Additional Steps For all string match-based methods, ways to improve performance are implemented, to make them as competitive as possible. Rather than doing exact string matching, we also allow words to be separated by something other than spaces, e.g. nuts-and-bolts for nuts and bolts. Additionally, there is an option to take into account case distinctions. With the case-sensitive option, case is preserved in the idiom lists, e.g. coals to Newcastle, and the string matching is done in a case-sensitive manner. This increases precision, e.g. by avoiding PIEs as part of proper names, but also comes at a cost of recall, e.g. for sentence-initial PIEs. Thirdly, there is the option to allow for a certain number of intervening words between each pair of words in the PIE. This should improve recall, at the cost of precision. For example, this would yield the true positive make a huge mountain out of a molehill for make a mountain out of a molehill, but also false positives like have a smoke and go for have a go. A final shared property of the string-based methods is the processing of placeholders in PIEs. PIEs containing possessive pronoun placeholders, such as one's and someone's, are expanded. That is, we remove the original PIE, and add copies of the PIE where the placeholder is replaced by one of the possessive personal pronouns. For example, a thorn in someone's side is replaced by a thorn in {my, your, his, ...} side. In the case of someone's, we also add a wildcard for any possessively used word, i.e. a thorn in —'s side, to match e.g. a thorn in Google's side. Similarly, we make sure that PIE entries containing —, such as the mother of all —, will match any word for — during extraction. We do the same for someone, for which we substitute objective pronouns. For one, this is not possible, since it is too hard to distinguish from the one used as a number.
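To illustrate the string-based methods, the sketch below builds a regular expression in the spirit of the fuzzy matcher: each word of the dictionary form must start at a word boundary and may carry up to three extra trailing letters, words may be separated by spaces or hyphens, and a configurable number of intervening words is allowed. It is a simplified sketch (case handling and placeholder expansion are omitted), not the exact implementation used in this work.

```python
import re

def fuzzy_pattern(pie, max_intervening=0):
    """Build a regex for a PIE's dictionary form: every word may carry up to
    three extra trailing letters (crude inflection handling), separators may be
    spaces or hyphens, and up to `max_intervening` extra words may intervene."""
    words = pie.lower().split()
    gap = r"[\s-]+(?:\w+[\s-]+){0,%d}" % max_intervening
    body = gap.join(r"\b" + re.escape(w) + r"[a-z]{0,3}\b" for w in words)
    return re.compile(body, re.IGNORECASE)

spill = fuzzy_pattern("spill the beans", max_intervening=1)
print(bool(spill.search("He spilled the beans yesterday.")))      # True
print(bool(spill.search("Don't spill all the beans at once.")))   # True (one intervening word)
print(bool(spill.search("The beans were spilled.")))              # False: no word-order variation

at_sea = fuzzy_pattern("at sea")
print(bool(at_sea.search("swimming in that seawater")))           # False: word boundaries respected
```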
Dictionary-based PIE Extraction ::: Parser-Based Extraction Methods Parser-based extraction is potentially the widest-coverage extraction method, with the capacity to extract both morphological and syntactic variants of the PIE. This should be robust against the most common modifications of the PIE, e.g. word insertions (spill all the beans) and passivisation (the beans were spilled), and it should abstract over articles (spill beans). In this method, PIEs are extracted using the assumption that any sentence which contains the lemmata of the words in the PIE, in the same dependency relations as in the PIE, contains an instance of the PIE type in question. More concretely, this means that the parse of the sentence should contain the parse tree of the PIE as a subtree. This is illustrated in Figure FIGREF57, which shows the parse tree for the PIE lose the plot, parsed without context. Note that this is a subtree of the parse tree for the sentence `you might just lose the plot completely', which is shown in Figure FIGREF58. Since the sentence parse contains the parse of the PIE, we can conclude that the sentence contains an instance of that PIE and extract the span of the PIE instance. All PIEs are parsed in isolation, based on the assumption that all PIEs can be parsed, since they are almost always well-formed phrases. However, not all PIEs will be parsed correctly, especially since there is no context to resolve ambiguity. Errors tend to occur at the part-of-speech level, where, for example, verb-object combinations like jump ship and touch wood are erroneously tagged as noun-noun compounds. An analysis of the impact of parser error on PIE extraction performance is presented in Section SECREF73. Initially, we use the Spacy parser for parsing both the PIEs and the sentences. Next, the sentence is parsed, and the lemma of the top node of the parsed PIE is matched against the lemmata of the sentence parse. If a match is found, the parse tree of the PIE is matched against the subtree of the matching sentence parse node. If the whole PIE parse tree matches, the span ranging from the first PIE token to the last is extracted. This span can thus include words that are not directly part of the PIE's dictionary form, in order to account for insertions like ships were jumped for jump ship, or have a big heart for have a heart. During the matching, articles (a/an/the) are ignored, and passivisation is accounted for with a special rule. In addition, a number of special cases are dealt with. These are PIEs containing someone('s), something('s), one's, or —. These words are used in PIEs as placeholders for a generic possessor (someone's/something's/one's), generic object (someone/something), or any word of the right PoS (—). For someone's and something's, we match any possessive pronoun, or (proper) noun + possessive marker. For one's, only possessive pronouns are matched, since this is a placeholder for reflexive possessors. For someone and something, any non-possessive pronoun or (proper) noun is matched. For — wildcards, any word can be matched, as long as it has the right relation to the right head. An additional challenge with these wildcards is that PIEs containing them cannot be parsed, e.g. too — for words is not parseable. This is dealt with by substituting the — by a PoS-ambiguous word, such as fine or back.
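The core of this procedure can be sketched as a recursive match of the PIE's dependency tree against subtrees of the sentence parse. The snippet below is a simplified illustration using Spacy; it omits the special handling of articles, passivisation, and placeholders described above, and whether the example prints the expected span depends on the model producing parses like those in Figures FIGREF57 and FIGREF58.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # any English dependency model will do

def match(pie_tok, sent_tok):
    """Return the sentence tokens aligned with the PIE subtree rooted at
    pie_tok (same lemmas, same dependency labels), or None if it does not fit."""
    if pie_tok.lemma_ != sent_tok.lemma_:
        return None
    matched = [sent_tok]
    for p_child in pie_tok.children:
        for s_child in sent_tok.children:
            if p_child.dep_ == s_child.dep_:
                sub = match(p_child, s_child)
                if sub is not None:
                    matched.extend(sub)
                    break
        else:                      # no sentence child could host this PIE child
            return None
    return matched

def extract(pie, sentence):
    """Return the text span of a PIE instance in the sentence, or None."""
    pie_root = next(tok for tok in nlp(pie) if tok.head == tok)  # root of the PIE parse
    sent_doc = nlp(sentence)
    for tok in sent_doc:
        matched = match(pie_root, tok)
        if matched:
            first, last = min(t.i for t in matched), max(t.i for t in matched)
            return sent_doc[first:last + 1].text  # span from first to last matched token
    return None

print(extract("lose the plot", "You might just lose the plot completely."))
# expected: "lose the plot" (if the two parses agree, cf. Figures FIGREF57 and FIGREF58)
```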
Two optional features are added to the parser-based method with the goal of making it more robust to parser errors: generalising over dependency relation labels, and generalising over dependency relation direction. We expect this to increase recall at the cost of precision. In the first no labels setting, we match parts of the parse tree which have the same head lemma and the same dependent lemma, regardless of the relation label. An example of this is Figure FIGREF60, which has the wrong relation label between up and ante. If labels are ignored, however, we can still extract the PIE instance in Figure FIGREF61, which has the correct label. In the no directionality setting, relation labels are also ignored, and in addition the directionality of the relation is ignored, that is, we allow for the reversal of heads and dependents. This benefits performance in a case like Figure FIGREF62, which has stock as the head of laughing in a compound relation, whereas the parse of the PIE (Figure FIGREF63) has laughing as the head of stock in a dobj relation. Note that similar settings were implemented by BIBREF26, who detect literal uses of VMWEs using a parser-based method with either full labelled dependencies, unlabelled dependencies, or directionless unlabelled dependencies (which they call BagOfDeps). They find that recall increases when fewer restrictions on the dependencies are used, but that this does not hurt precision, as we would expect. However, we cannot draw too many conclusions from these results due to the small size of their evaluation set, which consists of just 72 literal VMWEs in total. Dictionary-based PIE Extraction ::: Parser-Based Extraction Methods ::: In-Context Parsing Since the parser-based method parses PIEs without any context, it often finds an incorrect parse, as for jump ship in Figure FIGREF65. As such, we add an option to the method that aims to increase the number of correct parses by parsing the PIE within context, that is, within a sentence. This can greatly help to disambiguate the parse, as in Figure FIGREF66. If the number of correct parses goes up, the recall of the extraction method should also increase. Naturally, it can also be the case that a PIE is parsed correctly without context, and incorrectly with context. However, we expect the gains to outweigh the losses. The challenge here is thus to collect example sentences containing the PIE. Since the whole point of this work is to extract PIEs from raw text, this provides a catch-22-like situation: we need to extract a sentence containing a PIE in order to extract sentences containing a PIE. The workaround for this problem is to use the exact string matching method with the dictionary form of the PIE and a very large plain text corpus to gather example sentences. By only considering the exact dictionary form we both simplify the finding of example sentences and the extraction of the PIE's parse from the sentence parse. In case multiple example sentences are found, the shortest sentence is selected, since we assume it is easiest to parse. This is also the reason we make use of very large corpora, to increase the likelihood of finding a short, simple sentence. The example sentence extraction method is modified in such a way that sentences where the PIE is used meta-linguistically in quotes, e.g. "the well-known English idiom `to spill the beans' has no equivalents in other languages", are excluded, since they do not provide a natural context for parsing.
When no example sentence can be found in the corpus, we back-off to parsing the PIE without context. After a parse has been found for each PIE (i.e. with or without context), the method proceeds identically to the regular parser-based method. We make use of the combination of two large corpora for the extraction of example sentences: the English Wikipedia, and ukWaC BIBREF17. For the Wikipedia corpus, we use a dump (13-01-2016) of the English-language Wikipedia, and remove all Wikipedia markup. This is done using WikiExtractor. The resulting files still contain some mark-up, which is removed heuristically. The resulting corpus contains mostly clean, raw, untokenized text, numbering approximately 1.78 billion tokens. As for ukWaC, all XML-markup was removed, and the corpus is converted to a one-sentence-per-line format. UkWaC is tokenized, which makes it difficult for a simple string match method to find PIEs containing punctuation, for example day in, day out. Therefore, all spaces before commas, apostrophes, and sentence-final punctuation are removed. The resulting corpus contains approximately 2.05 billion tokens, making for a total of 3.83 billion tokens in the combined ukWaC and Wikipedia corpus. Dictionary-based PIE Extraction ::: Results In order to determine which of the methods described previously produces the highest quality extraction of potentially idiomatic expressions, we evaluate them, in various settings, on the corpus described in Section SECREF5. For parser-based extraction, systems with and without in-context parsing, ignoring labels, and ignoring directionality are tested. For the three string-based extraction methods, varying numbers of intervening words and case sensitivity are evaluated. Evaluation is done using the development set, consisting of 22 documents and 1112 PIE candidates, and the test set, which consists of 23 documents and 1127 PIE candidates. For each method the best set of parameters and/or options is determined using the development set, after which the best variant by F1-score of each method is evaluated on the test set. Since these documents in the corpus are exhaustively annotated for PIEs (see Section SECREF40), we can calculate true and false positives, and false negatives, and thus precision, recall and F1-score. The exact spans are ignored, because the spans annotated in the evaluation corpus are not completely reliable. These were automatically generated during candidate extraction, as described in Section SECREF45. Rather, we count an extraction as a true positive if it finds the correct PIE type in the correct sentence. Note that we judge the system with the highest F1-score to be the best-performing system, since it is a clear and objective criterion. However, when using the system in practice, the best performance depends on the goal. When used as a preprocessing step for PIE disambiguation, the system with the highest F1-score is perhaps the most suitable, but as a corpus building tool, one might want to sacrifice some precision for an increase in recall. This helps to get the most comprehensive annotation of PIEs possible, without overloading the annotators with false extractions (i.e. non-PIEs), by maintaining high precision. The results for each system on the development set are presented in Tables TABREF70 and TABREF71. Generally, results are in line with expectations: (the best) parse-based methods are better than (the best) string-based methods, and within string-based methods, inflectional matching works best. 
The same goes for the different settings: case-sensitivity increases precision at the cost of recall, allowing intervening words increases recall at the cost of precision, and the same goes for the no labels and no directionality options for parser-based extraction. Overall, in-context parser-based extraction works best, with an F1 of 88.54%, whereas fuzzy matching does very poorly. Within string-based methods, exact matching has the highest precision, but low recall. Fuzzy matching increases recall at a disproportionately large precision cost, whereas inflectional matching combines the best of both worlds and has high recall at a small loss in precision. For the parser-based system, it is notable that parsing idioms within context yields a clear overall improvement by greatly improving recall at a small cost in precision. We evaluate the best variant of each system, as determined by F1-score, on the test set. This gives us an indication of whether the system is robust enough, or was overfitted on the development data. Results on the test set are shown in Table TABREF72. On average, the results are lower than the results on the development set. The string-based methods perform clearly worse, with drops of about 4% F1-score for exact and inflectional match, and a large drop of almost 9% F1-score for fuzzy matching. The parser-based method, on the other hand, is more robust, with a small 0.59% increase in F1-score on the test set. Dictionary-based PIE Extraction ::: Analysis Broadly speaking, the PIE extraction systems presented above perform in line with expectations. It is nevertheless useful to see where the best-performing system misses out, and where improvements like in-context parsing help performance. We analyse the shortcomings of the in-context parser-based system by looking at the false positives and false negatives on the development set. We consider the output of the system with the best overall performance, since it will provide the clearest picture. The system extracts 529 PIEs in total, of which 54 are false extractions (false positives), and it misses 69 annotated PIE instances (false negatives). Most false positives stem from the system's failure to capture nuances of PIE annotation. This includes cases where PIEs contain, or are part of, proper nouns (Example SECREF73), PIEs that are part of coordination constructions (Example SECREF73), and incorrect attachments (Example SECREF73). Among these errors, sentences containing proper nouns are an especially frequent problem. . Drama series include [..] airline security thrills in Cleared For Takeoff and Head Over Heels [..] (in the clear - BNC - document CBC - sentence 5177) . They prefer silk, satin or lace underwear in tasteful black or ivory. (in the black - BNC - document CBC - sentence 14673) . [..] `I saw this chap make something out of an ordinary piece of wood — he fashioned it into an exquisite work of art.' (out of the woods - BNC - document ABV - sentence 1300) False negatives are mainly caused by errors made by the parser. In order to correctly extract a PIE from a sentence, both the PIE and the sentence have to be parsed correctly, or at least parsed in the same way. This means a missed extraction can be caused by a wrong parse for the PIE or a wrong parse for the sentence. These two error types form the largest class of false negatives. Since some PIE types are rather frequent, a wrong parse for a single PIE type can potentially lead to a large number of missed extractions.
It is not surprising that the parser makes many mistakes, since idioms often have unusual syntactic constructions (e.g. come a cropper) and contain words where default part-of-speech tags lead to the wrong interpretation (e.g. round is a preposition in round the bend, not a noun or adjective). This is especially true when idioms are parsed without context, and hence this is where in-context parsing provides the largest benefit: the number of PIEs which are parsed incorrectly drops, which leads to F1-scores on those types going from 0% to almost 100% (e.g. in light of and ring a bell). Since parser errors are the main contributor to false negatives, hurting recall, we can observe that parsing idioms in context serves to benefit only recall, by 7 percentage points, at only a small loss in precision. We find that adding context mainly helps for parsing expressions which are structurally relatively simple, but still ambiguous, such as rub shoulders, laughing stock, and round the bend. Compare, for example, the parse trees for laughing stock in isolation and within the extracted context sentence in Figures FIGREF74 and FIGREF75. When parsed in isolation, the relation between the two words is incorrectly labelled as a compound relation, whereas in context it is correctly labelled as a direct object relation. Note, however, that for the most difficult PIEs, embedding them in a context does not solve the parsing problem: a syntactically odd phrase is hard to parse (e.g. for the time being), and a syntactically odd phrase in a sentence makes for a syntactically odd sentence that is still hard to parse (e.g. `London for the time being had been abandoned.'). Finding example sentences turned out not to be a problem, since appropriate sentences were found for 559 of 591 PIE types. An alternative method for reducing parser error is to use a different, better parser. The Spacy parser was mainly chosen for implementation convenience and speed, and there are parsers which have better performance, as measured on established parsing benchmarks. To investigate the effectiveness of this method, we used the Stanford Neural Dependency Parser BIBREF39 to extract PIEs in the regular parsing, in-context parsing and the no labels settings. In all cases, using the Stanford parser yielded worse extraction performance than the Spacy parser. A possible explanation for why a supposedly better parser performs worse here is that parsers are optimised and trained to do well on established benchmarks, which consist of complete sentences, often from news texts. This does not necessarily correlate with parsing performance on short (sentences containing) idiomatic phrases. As such, we cannot assume that better overall parsing performance implies better PIE extraction performance. It should be noted that, when assessing the quality of PIE extraction performance, the parser-based methods are sensitive to specific PIE types. That is, if a single PIE type is parsed incorrectly, then it is highly probable that all instances of that type are missed. If this type is also highly frequent, this means that a small change in actual performance yields a large change in evaluation scores. Our goal is to have a PIE extraction system that is robust across all PIE types, and thus the current evaluation setting does not align exactly with our aim. Splitting out performance per PIE type reveals whether there is indeed a large variance in performance across types.
Table TABREF76 shows the 25 most frequent PIE types in the corpus, and the performance of the in-context-parsing-based system on each. Except for two cases (in the black and round the bend), we see that the performance is in the 80–100% range, even showing perfect performance on the majority of types. For none of the types do we see low precision paired with high recall, which indicates that the parser never matches a highly frequent non-PIE phrase. For the system with the no labels and no-directionality options (per-type numbers not shown here), however, this does occur. For example, ignoring the labels for the parse of the PIE have a go leads to the erroneous matching of many sentences containing a form of have to go, which is highly frequent, thus leading to a large drop in precision. Although performance is stable across the most frequent types, among the less frequent types it is more spotty. This hurts overall performance, and there are potential gains in mitigating the poor performance on these types, such as for the time being. At the same time, the string matching methods show much more stable performance across types, and some of them do so with very high precision. As such, a combination of two such methods could boost performance significantly. If we use a high-precision string match-based method, such as the exact string match variant with a precision of 97.35%, recall could be improved for the wrongly parsed PIE types, without a significant loss of precision. We experiment with two such combinations, by simply taking the union of the sets of extracted idioms of both systems, and filtering out duplicates. Results are shown in Table TABREF77. Both combinations show the expected effect: a clear gain in recall at a minimal loss in precision. Compared to the in-context-parsing-based system, the combination with exact string matching yields a gain in recall of over 6%, and the combination with inflectional string matching yields an even bigger gain of almost 8%, at precision losses of 0.6% and 0.8%, respectively. This indicates that the systems are very much complementary in the PIEs they extract. It also means that, when used in practice, combining inflectional string matching and parse-based extraction is the most reliable configuration. Conclusions and Outlook We present an in-depth study on the automatic extraction of potentially idiomatic expressions based on dictionaries. The purpose of automatic dictionary-based extraction is, on the one hand, to function as a pre-extraction step in the building of a large idiom-annotated corpus. On the other hand, it can function as part of an idiom extraction system when combined with a disambiguation component. In both cases, the ultimate goal is to improve the processing of idiomatic expressions within NLP. This work consists of three parts: a comparative evaluation of the coverage of idiom dictionaries, the annotation of a PIE corpus, and the development and evaluation of several dictionary-based PIE extraction methods. In the first part, we present a study of idiom dictionary coverage, which serves to answer the question of whether a single idiom dictionary, or a combination of dictionaries, can provide good coverage of the set of all English idioms. Based on the comparison of dictionaries to each other, we estimate that the overlap between them is limited, varying from 20% to 55%, which indicates a large divergence between the dictionaries.
This can be explained by the fact that idioms vary widely by register, genre, language variety, and time period. In our case, it is also likely that the divergence is caused partly by the gap between crowdsourced dictionaries on the one hand, and a dictionary compiled by professional lexicographers on the other. Given these factors, we can conclude that a single dictionary cannot provide even close to complete coverage of English idioms, but that by combining dictionaries from various sources, significant gains can be made. Since `English idioms' are a diffuse and constantly changing set, we have no gold standard to compare to. As such, we conclude that multiple dictionaries should be used when possible, but that we cannot say anything definitive on the coverage of dictionaries with regard to the complete set of English idioms (which can only be approximated in the first place). A more comprehensive comparison of idiom resources could be made in the future by using more advanced automatic methods for matching, for example by using BIBREF32's method for measuring expression variability. This would make it easier to evaluate a larger number of dictionaries, since no manual effort would be required. In the second part, we experiment with the exhaustive annotation of PIEs in a corpus of documents from the BNC. Using a set of 591 PIE types, much larger and more varied than in existing resources, we show that it is very much possible to establish a working definition of PIE that allows for a large amount of variation, while still being useful for reliable annotation. This resulted in high inter-annotator agreement, ranging from 0.74 to 0.91 Fleiss' Kappa. This means that we can build a resource to evaluate a wide-coverage idiom extraction system with relatively little effort. The final corpus of PIEs with sense annotations is publicly available; it consists of 2,239 PIE candidates, of which 1,050 are actual PIE instances, and contains 278 different PIE types. Finally, several methods for the automatic extraction of PIE instances were developed and evaluated on the annotated PIE corpus. We tested methods of differing complexity, from simple string matching to dependency parse-based extraction. Comparison of these methods revealed that the most computationally complex method, parser-based extraction, works best. Parser-based extraction is especially effective in capturing a larger amount of variation, but is less precise than string-based methods, mostly because of parser error. The best overall setting of this method, which parses idioms within context, yielded an F1-score of 89.13% on the test set. Parser error can be partly compensated for by combining the parse-based method and the inflectional string match method, which yields an F1-score of 92.01% (on the development set). This aligns well with the findings of BIBREF27, who found that combining simpler and more complex methods improves over just using a simple method in the case of extracting verb-particle constructions. This level of performance means that we can use the tool in corpus building. This greatly reduces the amount of manual extraction effort involved, while still maintaining a high level of recall. We make the source code for the different systems publicly available. Note that, although used here in the context of PIE extraction, our methods are equally applicable to other phrase extraction tasks, for example the extraction of light-verb constructions, metaphoric constructions, collocations, or any other type of multiword expression (cf.
BIBREF27, BIBREF25, BIBREF26). Similarly, our method can be conceived as a blueprint and extended to languages other than English. For this to be possible, for any given new language one would need a list of target expressions and, in the case of the parser-based method, a reliable syntactic parser. If this is not the case, the inflectional matching method can be used, which requires only a morphological analyser and generator. Obviously, for languages that are morphologically richer than English, one would need to develop strategies aimed at controlling non-exact matches, so as to enhance recall without sacrificing precision. Previous work on Italian, for example, has shown the feasibility of achieving such balance through controlled pattern matching BIBREF40. Languages that are typologically very different from English would obviously require a dedicated approach for the matching of PIEs in corpora, but the overall principles of extraction, using language-specific tools, could stay the same. Currently, no corpora containing annotation of PIEs exist for languages other than English. However, the PARSEME corpus BIBREF19 already contains idioms (only idiomatic readings) for many languages and would only need annotation of literal usages of idioms to make up a set of PIEs. Paired with the Universal Dependencies project BIBREF41, which increasingly provides annotated data as well as processing tools for an ever growing number of languages, this seems an excellent starting point for creating PIE resources in multiple languages.
46 documents make up our base corpus
Q: What complementary PIE extraction methods are used to increase reliability further? Text: Introduction Idiomatic expressions pose a major challenge for a wide range of applications in natural language processing BIBREF0. These include machine translation BIBREF1, BIBREF2, semantic parsing BIBREF3, sentiment analysis BIBREF4, and word sense disambiguation BIBREF5. Idioms show significant syntactic and morphological variability (e.g. beans being spilled for spill the beans), which makes them hard to find automatically. Moreover, their non-compositional nature makes idioms particularly hard to interpret, because their meaning is often very different from the meanings of the words that make them up. Hence, successful systems need not only be able to recognise idiomatic expressions in text or dialogue, but they also need to give a proper interpretation to them. As a matter of fact, current language technology performs badly on idiom understanding, a phenomenon that perhaps has not received enough attention. Nearly all current language technology used in NLP applications is based on supervised machine learning. This requires large amounts of labelled data. In the case of idiom interpretation, however, only small datasets are available. These contain just a couple of thousand idiom instances, covering only about fifty different types of idiomatic expressions. In fact, existing annotated corpora tend to cover only a small set of idiom types, comprising just a few syntactic patterns (e.g., verb-object combinations), of which a limited number of instances are extracted from a large corpus. This is not surprising, as preparing and compiling such corpora involves a large amount of manual extraction work, especially if one wants to allow for form variation in the idiomatic expressions (for example, extracting cooking all the books for cook the books). This work involves both the crafting of syntactic patterns to match potential idiomatic expressions and the filtering of false extractions (non-instances of the target expression, e.g. due to wrong parses), and it increases with the number of idiom types included in the corpus (which, in the worst case, means an exponential increase in false extractions). Thus, building a large corpus of idioms, especially one that covers many types in many syntactic constructions, is costly. If a high-precision, high-recall system can be developed for the task of extracting the annotation candidates, this cost will be greatly reduced, making the construction of a large corpus much more feasible. The variability of idioms has been a significant topic of interest among researchers of idioms. For example, BIBREF6 investigates the internal and external modification of a set of idioms in a large English corpus, whereas BIBREF7 quantifies and classifies the variation of a set of idioms in a large corpus of Dutch, setting up a useful taxonomy of variation types. Both find that, although idiomatic expressions mainly occur in their dictionary form, there is a significant minority of idiom instances that occur in non-dictionary variants. Additionally, BIBREF8 show that idiom variants retain their idiomatic meaning more often and are processed more easily than previously assumed. This emphasises the need for corpora covering idiomatic expressions to include these variants, and for tools to be robust in dealing with them.
As such, the aim of this article is to describe methods and provide tools for constructing larger corpora annotated with a wider range of idiom types than currently in existence, thanks to the reduced amount of manual labour required. In this way we hope to stimulate further research in this area. In contrast to previous approaches, we want to catch as many idiomatic expressions as possible, and we achieve this by casting a wide net, that is, we consider the widest range of possible idiom variants first and then filter out any bycatch in a way that requires the least manual effort. We expect research will benefit from having larger corpora by improving evaluation quality, by allowing for the training of better supervised systems, and by providing additional linguistic insight into idiomatic expressions. A reliable method for extracting idiomatic expressions is not only needed for building an annotated corpus, but can also be used as part of an automatic idiom processing pipeline. In such a pipeline, extracting potentially idiomatic expressions can be seen as a first step before idiom disambiguation, and the combination of the two modules then functions as a complete idiom extraction system. The main research question that we aim to answer in this article is whether dictionary-based extraction of potentially idiomatic expressions is robust and reliable enough to facilitate the creation of wide-coverage sense-annotated idiom corpora. By answering this question we make several contributions to research on multiword expressions, in particular that of idiom extraction. Firstly, we provide an overview of existing research on annotating idiomatic expressions in corpora, showing that current corpora cover only small sets of idiomatic types (Section SECREF3). Secondly, we quantify the coverage and reliability of a set of idiom dictionaries, demonstrating that there is little overlap between resources (Section SECREF4). Thirdly, we develop and release an evaluation corpus for extracting potentially idiomatic expressions from text (Section SECREF5). Finally, various extraction systems and combinations thereof are implemented, made available to the research community, and evaluated empirically (Section SECREF6). New Terminology: Potentially Idiomatic Expression (PIE) The ambiguity of phrases like wake up and smell the coffee poses a terminological problem. Usually, these phrases are called idiomatic expressions, which is suitable when they are used in an idiomatic sense, but not so much when they are used in a literal sense. Therefore, we propose a new term: potentially idiomatic expressions, or PIEs for short. The term potentially idiomatic expression refers to those expressions which can have an idiomatic meaning, regardless of whether they actually have that meaning in a given context. So, see the light is a PIE in both `After another explanation, I finally saw the light' and `I saw the light of the sun through the trees', while it is an idiomatic expression in the first context, and a literal phrase in the latter context. The processing of PIEs involves three main challenges: the discovery of (new) PIE types, the extraction of instances of known PIE types in text, and the disambiguation of PIE instances in context. Here, we propose calling the discovery task simply PIE discovery, the extraction task simply PIE extraction, and the disambiguation task PIE disambiguation. Note that these terms contrast with the terms used in existing research.
There, the discovery task is called type-based idiom detection and the disambiguation task is called token-based idiom detection (cf. BIBREF10, BIBREF11), although this usage is not always consistent. Because these terms are very similar, they are potentially confusing, and that is why we propose novel terminology. Other terminology comes from literature on multiword expressions (MWEs) more generally, i.e. not specific to idioms. Here, the task of finding new MWE types is called MWE discovery and finding instances of known MWE types is called MWE identification BIBREF12. Note, however, that MWE identification generally consists of finding only the idiomatic usages of these types (e.g. BIBREF13). This means that MWE identification consists of both the extraction and disambiguation tasks, performed jointly. In this work, we propose to split this into two separate tasks, and we are concerned only with the PIE extraction part, leaving PIE disambiguation as a separate problem. Related Work This section is structured so as to reflect the dual contribution of the present work. First, we discuss existing resources annotated for idiomatic expressions. Second, we discuss existing approaches to the automatic extraction of idioms. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms There are four sizeable sense-annotated PIE corpora for English: the VNC-Tokens Dataset BIBREF9, the Gigaword dataset BIBREF14, the IDIX Corpus BIBREF10, and the SemEval-2013 Task 5 dataset BIBREF15. An overview of these corpora is presented in Table TABREF7. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: VNC-Tokens The VNC-Tokens dataset contains 53 different PIE types. BIBREF9 extract up to 100 instances from the British National Corpus for each type, for a total of 2,984 instances. These types are based on a pre-existing list of verb-noun combinations and were filtered for frequency and whether two idiom dictionaries both listed them. Instances were extracted automatically, by parsing the corpus and selecting all sentences with the right verb and noun in a direct-object relation. It is unclear whether the extracted sentences were manually checked, but no false extractions are mentioned in the paper or present in the dataset. All extracted PIE instances were annotated for sense as either idiomatic, literal or unclear. This is a self-explanatory annotation scheme, but BIBREF9 note that senses are not binary, but can form a continuum. For example, the idiomaticity of have a word in `You have my word' is different from both the literal sense in `The French have a word for this' and the figurative sense in `My manager asked to have a word'. They instructed annotators to choose idiomatic or literal even in ambiguous middle-of-the-continuum cases, and restrict the unclear label only to cases where there is not enough context to disambiguate the meaning of the PIE. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: Gigaword BIBREF14 present a corpus of 17 PIE types, for which they extracted all instances from the Gigaword corpus BIBREF18, yielding a total of 3,964 instances. BIBREF14 extracted these instances semi-automatically by manually defining all inflectional variants of the verb in the PIE and matching these in the corpus. They did not allow for inflectional variations in non-verb words, nor did they allow intervening words. They annotated these potential idioms as either literal or figurative, excluding ambiguous and unclear instances from the dataset. 
Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: IDIX BIBREF10 build on the methodology of BIBREF14, but annotate a larger set of idioms (52 types) and extract all occurrences from the BNC rather than the Gigaword corpus, for a total of 4,022 instances including false extractions. BIBREF10 use a more complex semi-automatic extraction method, which involves parsing the corpus, manually defining the dependency patterns that match the PIE, and extracting all sentences containing those patterns from the corpus. This allows for larger form variations, including intervening words and inflectional variation of all words. In some cases, this yields many non-PIE extractions, as for recharge one's batteries in Example SECREF10. These were not filtered out before annotation, but rather filtered out as part of the annotation process, by having false extraction as an additional annotation label. For sense annotation, they use an extensive tagset, distinguishing literal, non-literal, both, meta-linguistic, embedded, and undecided labels. Here, the both label (Example SECREF10) is used for cases where both senses are present, often as a form of deliberate word play. The meta-linguistic label (Example SECREF10) applies to cases where the PIE instance is used as a linguistic item to discuss, not as part of a sentence. The embedded label (Example SECREF10) applies to cases where the PIE is embedded in a larger figurative context, which makes it impossible to say whether a literal or figurative sense is more applicable. The undecided label is used for unclear and undecidable cases. They take into account the fact that a PIE can have multiple figurative senses, and enumerate these separately as part of the annotation. . These high-performance, rugged tools are claimed to offer the best value for money on the market for the enthusiastic d-i-yer and tradesman, and for the first time offer the possibility of a battery recharging time of just a quarter of an hour. (from IDIX corpus, ID #314) . Left holding the baby, single mothers find it hard to fend for themselves. (from BIBREF10, p.642) . It has long been recognised that expressions such as to pull someone's leg, to have a bee in one's bonnet, to kick the bucket, to cook someone's goose, to be off one's rocker, round the bend, up the creek, etc. are semantically peculiar. (from BIBREF10, p.642) . You're like a restless bird in a cage. When you get out of the cage, you'll fly very high. (from BIBREF10, p.642) The both, meta-linguistic, and embedded labels are useful and linguistically interesting distinctions, although they occur very rarely (0.69%, 0.15%, and an unknown %, respectively). As such, we include these cases in our tagset (see Section SECREF5), but group them under a single label, other, to reduce annotation complexity. We also follow BIBREF10 in that we combine both the PIE/non-PIE annotation and the sense annotation in a single task. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: SemEval-2013 Task 5b BIBREF15 created a dataset for SemEval-2013 Task 5b, a task on detecting semantic compositionality in context. They selected 65 PIE types from Wiktionary, and extracted instances from the ukWaC corpus BIBREF17, for a total of 4,350 instances. It is unclear how they extracted the instances, and how much variation was allowed for, although there is some inflectional variation in the dataset. An unspecified amount of manual filtering was done on the extracted instances. 
The extracted PIE instances were labelled as literal, idiomatic, both, or undecidable. Interestingly, they crowdsourced the sense annotations using CrowdFlower, with high agreement (90%–94% pairwise). Undecidable cases and instances on which annotators disagreed were removed from the dataset. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: General Multiword Expression Corpora In addition to the aforementioned idiom corpora, there are also corpora focused on multiword expressions (MWEs) in a more general sense. As idioms are a subcategory of MWEs, these corpora also include some idioms. The most important of these are the PARSEME corpus BIBREF19 and the DiMSUM corpus BIBREF20. DiMSUM provides annotations of over 5,000 MWEs in approximately 90K tokens of English text, consisting of reviews, tweets and TED talks. However, they do not categorise the MWEs into specific types, meaning we cannot easily quantify the number of idioms in the corpus. In contrast to the corpus-specific sense labels seen in other corpora, DiMSUM annotates MWEs with WordNet supersenses, which provide a broad category of meaning for each MWE. Similarly, the PARSEME corpus consists of over 62K MWEs in almost 275K tokens of text across 18 different languages (with the notable exception of English). The main differences with DiMSUM, apart from scale and multilingualism, are that it only includes verbal MWEs, and that subcategorisation is performed, including a specific category for idioms. Idioms make up almost a quarter of all verbal MWEs in the corpus, although the proportion varies wildly between languages. In both corpora, MWE annotation was done in an unrestricted manner, i.e. there was not a predefined set of expressions to which annotation was restricted. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: Overview In sum, there is large variation in corpus creation methods, regarding PIE definition, extraction method, annotation schemes, base corpus, and PIE type inventory. Depending on the goal of the corpus, the amount of deviation from the PIE's dictionary form that is allowed in the instances can range from very little BIBREF14 to quite a lot BIBREF10. The number of PIE types covered by each corpus is limited, ranging from 17 to 65 types, often restricted to one or more syntactic patterns. The extraction of PIE instances is usually done in a semi-automatic manner, by manually defining patterns in a text or parse tree, and doing some manual filtering afterwards. This works well, but an extension to a large number of PIE types (e.g. several hundreds) would also require a large increase in the amount of manual effort involved. Considering the sense annotations done on the PIE corpora, there is significant variation, with BIBREF9 using only three tags, whereas BIBREF10 use six. Outside of PIE-specific corpora there are MWE corpora, which provide a different perspective. A major difference there is that annotation is not restricted to a pre-specified set of expressions, which has not been done for PIEs specifically. Related Work ::: Extracting Idioms from Corpora There are two main approaches to idiom extraction. The first approach aims to distinguish idioms from other multiword phrases, where the main purpose is to expand idiom inventories with rare or novel expressions BIBREF21, BIBREF22, BIBREF23, BIBREF24. The second approach aims to extract all occurrences of a known idiomatic expression in a text. In this paper, we focus on the latter approach.
We rely on idiom dictionaries to provide a list of PIE types, and build a system that extracts all instances of those PIE types from a corpus. High-quality idiom dictionaries exist for most well-resourced languages, but their reliability and coverage are not known. As such, we quantify the coverage of dictionaries in Section SECREF4. There is, to the best of our knowledge, no existing work that focuses on dictionary-based PIE extraction. However, there is closely-related work by BIBREF25, who present a system for the dictionary-based extraction of verb-noun combinations (VNCs) in English and Spanish. In their case, the VNCs can be any kind of multiword expression, which they subdivide into literal expressions, collocations, light verb constructions, metaphoric expressions, and idioms. They extract 173 English VNCs and 150 Spanish VNCs and annotate these with both their lexico-semantic MWE type and the amount of morphosyntactic variation they exhibit. BIBREF25 then compare a word sequence-based method, a chunking-based method, and a parse-based method for VNC extraction. Each method relies on this morpho-syntactic information in order to limit false extractions. Precision is evaluated manually on a sample of the extracted VNCs, and recall is estimated by calculating the overlap between the output of the three methods. Evaluation shows that the methods are highly complementary both in recall, since they extract different VNCs, and in precision, since combining the extractors yields fewer false extractions. Whereas BIBREF25 focus on both idiomatic and literal uses of the set of expressions, as in this paper, BIBREF26 tackle only half of that task, namely extracting only literal uses of a given set of VMWEs in Polish. This complicates the task, since it combines extracting all occurrences of the VMWEs and then distinguishing literal from idiomatic uses. Interestingly, they also experiment with models of varying complexity, i.e. just words, part-of-speech tags, and syntactic structures. Their results are hard to put into perspective, however, since literal VMWEs are very rare in their corpus, whereas corpora containing PIEs tend to show a more balanced distribution. Other similar work to ours also focuses on MWEs more generally, or on different subtypes of MWEs. In addition, these tend to combine both extraction and disambiguation in that they aim to extract only idiomatically used instances of the MWE, without extracting literally used instances or non-instances. Within this line of work, BIBREF27 focuses on verb-particle constructions, BIBREF28 on verbal MWEs (including idioms), and BIBREF29 on verbal MWEs (especially non-canonical variants). Both BIBREF28 and BIBREF29 rely on a pre-defined set of expressions, whereas BIBREF27 also extracts unseen expressions, although based on a pre-defined set of particles and within the very narrow syntactic frame of verb-particle constructions. The work of BIBREF27 is most similar to ours in that it builds an unsupervised system using existing NLP tools (PoS taggers, chunkers, parsers) and finds that a combination of systems using those tools performs best, as we find in Section SECREF69. BIBREF28 and BIBREF29, by contrast, use supervised classifiers which require training data, not just for the task in general, but specific to the set of expressions used in the task.
Although our approach is similar to that of BIBREF25, both in the range of methods used and in the goal of extracting certain multiword expressions regardless of morphosyntactic variation, there are two main differences. First, we use dictionaries, but extract entries automatically and do not manually annotate their type and variability. As a result, our methods rely only on the surface form of the expression taken from the dictionary. Second, we evaluate precision and recall in a more rigorous way, by using an evaluation corpus exhaustively annotated for PIEs. In addition, we do not put any restriction on the syntactic type of the expressions to be extracted, which BIBREF27, BIBREF28, BIBREF25, and BIBREF29 all do. Coverage of Idiom Inventories ::: Background Since our goal is developing a dictionary-based system for extracting potentially idiomatic expressions, we need to devise a proper method for evaluating such a system. This is not straightforward, even though the final goal of such a system is simple: it should extract all potentially idiomatic expressions from a corpus and nothing else, regardless of their sense and the form they are used in. The type of system proposed here hence has two aspects that can be evaluated: the dictionary that it is using as a resource for idiomatic expression, and the extractor component that finds idioms in a corpus. The difficulty here is that there is no undisputed and unambiguous definition of what counts as an idiom BIBREF30, as is the case with multiword expressions in general BIBREF12. Of course, a complete set of idiomatic expressions for English (or any other language) is impossible to get due to the broad and ever-changing nature of language. This incompleteness is exacerbated by the ambiguity problem: if we had a clear definition of idiom we could make an attempt of evaluating idiom dictionaries on their accuracy, but it is practically impossible to come up with a definition of idiom that leaves no room for ambiguity. This ambiguity, among others, creates a large grey area between clearly non-idiomatic phrases on the one hand (e.g. buy a house), and clear potentially idiomatic phrases on the other hand (e.g. buy the farm). As a consequence, we cannot empirically evaluate the coverage of the dictionaries. Instead, in this work, we will quantify the divergence between various idiom dictionaries and corpora, with regard to their idiom inventories. If they show large discrepancies, we take that to mean that either there is little agreement on definitions of idiom or the category is so broad that a single resource can only cover a small proportion. Conversely, if there is large agreement, we assume that idiom resources are largely reliable, and that there is consensus around what is, and what is not, an idiomatic expression. We use different idiom resources and assume that the combined set of resources yields an approximation of the true set of idioms in English. A large divergence between the idiom inventories of these resources would then suggest a low recall for a single resource, since many other idioms are present in the other resources. Conversely, if the idiom inventories largely overlap, that indicates that a single resource can already yield decent coverage of idioms in the English language. The results of the dictionary comparisons are in Section SECREF36. 
Coverage of Idiom Inventories ::: Selected Idiom Resources (Data and Method) We evaluate the quality of three idiom dictionaries by comparing them to each other and to three idiom corpora. Before we report on the comparison we first describe why we select and how we prepare these resources. We investigate the following six idiom resources: Wiktionary; the Oxford Dictionary of English Idioms (ODEI, BIBREF31); UsingEnglish.com (UE); the Sporleder corpus BIBREF10; the VNC dataset BIBREF9; and the SemEval-2013 Task 5 dataset BIBREF15. These dictionaries were selected because they are available in digital format. Wiktionary and UsingEnglish have the added benefit of being freely available. However, they are both crowdsourced, which means they lack professional editing. In contrast, ODEI is a traditional dictionary, created and edited by lexicographers, but it has the downside of not being freely available. For Wiktionary, we extracted all idioms from the category `English Idioms' from the English version of Wiktionary. We took the titles of all pages containing a dictionary entry and considered these idioms. Since we focus on multiword idiomatic expressions, we filtered out all single-word entries in this category. More specifically, since Wiktionary is a constantly changing resource, we used the 8,482 idioms retrieved on 10-03-2017, 15:30. We used a similar extraction method for UE, a web page containing freely available resources for ESL learners, including a list of idioms. We extracted all idioms which have publicly available definitions, which numbered 3,727 on 10-03-2017, 15:30. Again, single-word entries and duplicates were filtered out. Concerning ODEI, all idioms from the e-book version were extracted, amounting to 5,911 idioms scraped on 13-03-2017, 10:34. Here we performed an extra processing step to expand idioms containing content in parentheses, such as a tough (or hard) nut (to crack). Using a set of simple expansion rules and some hand-crafted exceptions, we automatically generated all variants for this idiom, with good, but not perfect accuracy. For the example above, the generated variants are: {a tough nut, a tough nut to crack, a hard nut, a hard nut to crack}. The idioms in the VNC dataset are in the form verb_noun, e.g. blow_top, so they were manually expanded to a regular dictionary form, e.g. blow one's top before comparison. Coverage of Idiom Inventories ::: Method In many cases, using simple string-match to check overlap in idioms does not work, as exact comparison of idioms misses equivalent idioms that differ only slightly in dictionary form. Differences between resources are caused by, for example: inflectional variation (crossing the Rubicon — cross the Rubicon); variation in scope (as easy as ABC — easy as ABC); determiner variation (put the damper on — put a damper on); spelling variation (mind your p's and q's — mind your ps and qs); order variation (call off the dogs — call the dogs off); and different conventions for placeholder words (recharge your batteries — recharge one's batteries), where both your and one's can generalise to any possessive personal pronoun. These minor variations do not fundamentally change the nature of the idiom, and we should count these types of variation as belonging to the same idiom (see also BIBREF32, who devise a measure to quantify different types of variation allowed by specific MWEs). 
So, to get a good estimate of the true overlap between idiom resources, these variations need to be accounted for, which we do in our flexible matching approach. There is one other case of variation not listed above, namely lexical variation (e.g. rub someone up the wrong way - stroke someone the wrong way). We do not abstract over this, since we consider lexical variation to be a more fundamental change to the nature of the idiom. That is, a lexical variant is an indicator of the coverage of the dictionary, whereas the other variations are due to different stylistic conventions and do not indicate actual coverage. In addition, it is easy to abstract over the other types of variation in an NLP application, but this is not the case for lexical variation. The overlap counts are estimated by abstracting over all variations except lexical variation in a semi-automatic manner, using heuristics and manual checking. Potentially overlapping idioms are selected using the following set of heuristics: whether an idiom from one resource is a substring (including gaps) of an idiom in the other resource, whether the words of an idiom form a subset of the words of an idiom in the other resource, and whether there is an idiom in the other resource which has a Levenshtein ratio of over 0.8. The Levenshtein ratio is an indicator of the Levenshtein distance between the two idioms relative to their length. These potential matches are then judged manually on whether they are really forms of the same idiom or not (a simplified code sketch of these heuristics is given further below). Coverage of Idiom Inventories ::: Results The results of using exact string matching to quantify the overlap between the dictionaries are illustrated in Figure FIGREF37. Overlap between the three dictionaries is low. A possible explanation for this lies with the different nature of the dictionaries. Oxford is a traditional dictionary, created and edited by professional lexicographers, whereas Wiktionary is a crowdsourced dictionary open to everyone, and UsingEnglish is similar, but focused on ESL-learners. It is likely that these different origins result in different idiom inventories. Similarly, we would expect that the overlap between a pair of traditional dictionaries, such as the ODEI and the Penguin Dictionary of English Idioms BIBREF33, would be significantly higher. It should also be noted, however, that comparisons between more similar dictionaries also found relatively little overlap (BIBREF34; BIBREF35). A counterpoint is provided by BIBREF36, who quantifies coverage of verb-particle constructions in three different dictionaries and finds large overlap – perhaps because verb-particle constructions are a more restricted class. As noted previously, using exact string matching is a very limited approach to calculating overlap. Therefore, we used heuristics and manual checking to get more precise numbers, as shown in Table TABREF39, which also includes the three corpora in addition to the three dictionaries. As the manual checking only involved judging similar idioms found in pairs of resources, we cannot calculate three-way overlap as in Figure FIGREF37. The counts of the pair-wise overlap between dictionaries differ significantly between the two methods, which serves to illustrate the limitations of using only exact string matching and the necessity of using more advanced methods and manual effort. Several insights can be gained from the data in Table TABREF39. The relation between Wiktionary and the SemEval corpus is obvious (cf. Section SECREF12), given the 96.92% coverage.
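As referenced above, the candidate-selection heuristics can be sketched as follows. This is an illustrative reimplementation, not the original comparison script: the Levenshtein ratio is normalised here by the length of the longer string, which may differ slightly from the normalisation used originally, and the final same-idiom decision remains a manual step.

```python
# Sketch of the heuristics used to propose potentially overlapping idiom
# pairs between two resources; the final judgement is made manually.

def levenshtein(a, b):
    """Standard edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def levenshtein_ratio(a, b):
    # Distance relative to the length of the longer string, as a similarity;
    # one common normalisation, assumed here for illustration.
    if not a and not b:
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b))

def is_subsequence(short, long):
    """True if all words of `short` appear in `long` in order (gaps allowed)."""
    it = iter(long)
    return all(word in it for word in short)

def candidate_match(idiom_a, idiom_b, threshold=0.8):
    words_a, words_b = idiom_a.split(), idiom_b.split()
    return (is_subsequence(words_a, words_b) or is_subsequence(words_b, words_a)
            or set(words_a) <= set(words_b) or set(words_b) <= set(words_a)
            or levenshtein_ratio(idiom_a, idiom_b) > threshold)

print(candidate_match("as easy as ABC", "easy as ABC"))       # True
print(candidate_match("spill the beans", "kick the bucket"))  # False
```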
For the other dictionary-corpus pairs, the coverage increases proportionally with the size of the dictionary, except in the case of UsingEnglish and the Sporleder corpus. The proportional increase indicates no clear qualitative differences between the dictionaries, i.e. one does not have a significantly higher percentage of non-idioms than the other, when compared to the corpora. Generally, overlap between dictionaries and corpora is low: the two biggest, ODEI and Wiktionary, have only around 30% overlap, while the dictionaries also cover no more than approximately 70% of the idioms used in the various corpora. Overlap between the three corpora is also extremely low, at below 5%. This is unsurprising, since a new dataset is more interesting and useful when it covers a different set of idioms than used in an existing dataset, and thus is likely constructed with this goal in mind. Corpus Annotation In order to evaluate the PIE extraction methods developed in this work (Section SECREF6), we exhaustively annotate an evaluation corpus with all instances of a pre-defined set of PIEs. As part of this, we come up with a workable definition of PIEs, and measure the reliability of PIE annotation by inter-annotator agreement. Assuming that we have a set of idioms, the main problem of defining what is and what is not a potentially idiomatic expression is caused by variation. In principle, a potentially idiomatic expression is an instance of a phrase that, when seen without context, could have either an idiomatic or a literal meaning. This is clearest for the dictionary form of the idiom, as in Example SECREF5. Literal uses generally allow all kinds of variation, but not all of these variations allow a figurative interpretation, e.g. Example SECREF5. However, how much variation an idiom can undergo while retaining its figurative interpretation is different for each expression, and judgements of this might vary from one speaker to the other. An example of this is spill the bean, a variant of spill the beans, in Example SECREF5, judged by BIBREF21 as being highly questionable. However, even here a corpus example can be found containing the same variant used in a figurative sense (Example SECREF5). As such, we assume that we cannot know a priori which variants of an expression allow a figurative reading, and are thus a potentially idiomatic expression. Therefore we consider every possible morpho-syntactic variation of an idiom a PIE, regardless of whether it actually allows a figurative reading. We believe the boundaries of this variation can only be determined based on corpus evidence, and even then they are likely variable. Note that a similar question is tackled by BIBREF26, when they establish the boundary between a `literal reading of a VMWE' and a `coincidental co-occurrence'. BIBREF26's answer is similar to ours, in that they count something as a literal reading of a VMWE if `the same or equivalent dependencies hold between [the expression]'s components as in its canonical form'. . John kicked the bucket last night. . * The bucket, John kicked last night. . ?? Azin spilled the bean. (from BIBREF21) .
Alba reveals Fantastic Four 2 details The Invisible Woman actress spills the bean on super sequel (from ukWaC) Corpus Annotation ::: Evaluating the Extraction Methods Evaluating the extraction methods is easier than evaluating dictionary coverage, since the goal of the extraction component is more clearly delimited: given a set of PIEs from one or more dictionaries, extract all occurrences of those PIEs from a corpus. Thus, rather than dealing with the undefined set of all PIEs, we can work with a clearly defined and finite set of PIEs from a dictionary. Because we have a clearly defined set of PIEs, we can exhaustively annotate a corpus for PIEs, and use that annotated corpus for automatic evaluation of extraction methods using recall and precision. This allows us to facilitate and speed up annotation by pre-extracting sentences possibly containing a PIE. After the corpus is annotated, the precision and recall can be easily estimated by comparing the extracted PIE instances to those marked in the corpus. The details of the corpus selection, dictionary selection, extraction heuristic and annotation procedure are presented in Section SECREF46, and the details and results of the various extraction methods are presented in Section SECREF6. Corpus Annotation ::: Base Corpus and Idiom Selection As a base corpus, we use the XML version of the British National Corpus BIBREF37, because of its size, variety, and wide availability. The BNC is pre-segmented into s-units, which we take to be sentences, w-units, which we take to be words, and c-units, punctuation. We then extract the text of all w-units and c-units. We keep the sentence segmentation, resulting in a set of plain text sentences. All sentences are included, except for sentences containing <gap> elements, which are filtered out. These <gap> elements indicate places where material from the original has been left out, e.g. for anonymisation purposes. Since this can result in incomplete sentences that cannot be parsed correctly, we filter out sentences containing these gaps. We use only the written part of the BNC. From this, we extract a set of documents with the aim of having as much genre variation as possible. To achieve this, we select the first document in each genre, as defined by the classCode attribute (e.g. nonAc, commerce, letters). The resulting set of 46 documents makes up our base corpus. Note that these documents vary greatly in size, which means the resulting corpus is varied, but not balanced in terms of size (Table TABREF43). The documents are split across a development and test set, as specified at the end of Section SECREF46. We exclude documents with IDs starting with A0 from all annotation and evaluation procedures, as these were used during development of the extraction tool and annotation guidelines. As for the set of potentially idiomatic expressions, we use the intersection of the three dictionaries, Wiktionary, Oxford, and UsingEnglish. Based on the assumption that, if all three resources include a certain idiom, it must unquestionably be an idiom, we choose the intersection (also see Figure FIGREF37). This serves to exclude questionable entries, like at all, which is in Wiktionary. The final set of idioms used for these experiments consists of 591 different multiword expressions. Although we aim for wide coverage, this is a necessary trade-off to ensure quality. At the same time, it leaves us with a set of idiom types that is approximately ten times larger than present in existing corpora. 
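As a concrete illustration of the corpus preparation step described above, the sketch below extracts plain-text sentences from a BNC-XML document while skipping sentences containing <gap> elements. It assumes the s/w/c/gap element names are directly accessible without namespace handling, and the file path is a placeholder; this is not the exact preprocessing script used here.

```python
# Sketch: extract plain-text sentences from a BNC-XML document, skipping
# sentences that contain <gap> elements. Element names follow the BNC XML
# edition (<s>, <w>, <c>, <gap>); adjust if the files declare a namespace.
import xml.etree.ElementTree as ET

def sentences_from_bnc(xml_path):
    tree = ET.parse(xml_path)
    sentences = []
    for s in tree.iter("s"):                      # s-units = sentences
        if s.find(".//gap") is not None:          # skip anonymised/omitted material
            continue
        tokens = [el.text.strip() for el in s.iter()
                  if el.tag in ("w", "c") and el.text and el.text.strip()]
        sentences.append(" ".join(tokens))        # w-units and c-units only
    return sentences

# Usage (the path is hypothetical):
# for sent in sentences_from_bnc("BNC/Texts/A/A00.xml"):
#     print(sent)
```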
The set of 591 idioms includes idioms with a large variety of syntactic patterns, of which the most frequent ones are shown in Table TABREF44. The statistics show that the types most prevalent in existing corpora, verb-noun and preposition-noun combinations, are indeed the most frequent ones, but that there is a sizeable minority of types that do not fall into those categories, including coordinated adjectives, coordinated nouns, and nouns with prepositional phrases. This serves to emphasise the necessity of not restricting corpora to a small set of syntactic patterns. Corpus Annotation ::: Extraction of PIE Candidates To annotate the corpus completely manually would require annotators to read the whole corpus, and cross-reference each sentence to a list of almost 600 PIEs, to check whether one of those PIEs occurs in a sentence. We do not consider this a feasible annotation setting, due to both the difficulty of recognising literal usages of idioms and the time cost needed to find enough PIEs, given their low overall frequency. As such, we use a pre-extraction step to present candidates for annotation to the human annotators. Given the corpus and the set of PIEs, we heuristically extract the PIE candidates as follows: given an idiomatic expression, extract every sentence which contains all the defining words of the idiom, in any form. This ensures that all possibly matching sentences get extracted, while greatly pruning the number of sentences for annotators to look at. In addition, it allows us to present the heuristically matched PIE type and corresponding words to the annotators, which makes it much easier to judge whether something is a PIE or not. This also means that annotators never have to go through the full list of PIEs during the annotation process. Initially, the heuristic simply extracted any sentence containing all the required words, where a word is any of the inflectional variants of the words in the PIE, except for determiners and punctuation. This method produced large amounts of noise, that is, a set of PIE candidates with only a very low percentage of actual PIEs. This was caused by the presence of some highly frequent PIEs with very little defining lexical content, such as on the make, and in the running. For example, with the original method, every sentence containing the preposition on, and any inflectional form of the verb make was extracted, resulting in a huge number of non-PIE candidates. To limit the amount of noise, two restrictions were imposed. The first restriction disallows word order variation for PIEs which do not contain a verb. The rationale behind this is that word order variation is only possible with PIEs like spill the beans (e.g. the beans were spilled), and not with PIEs like in the running (*the running in??). The second restriction is that we limit the number of words that can be inserted between the words of a PIE, but only for PIEs like on the make, and in the running, i.e. PIEs which only contain prepositions, determiners and a single noun. The number of intervening words was limited to three tokens, allowing for some variation, as in Example SECREF45, but preventing sentences like Example SECREF45 from being extracted. This restriction could result in the loss of some PIE candidates with a large number of intervening words. However, the savings in annotation time clearly outweigh the small loss in recall in this situation. . Either at New Year or before July you can anticipate a change in the everyday running of your life.
(in the running - BNC - document CBC - sentence 458) . [..] if [he] hung around near the goal or in the box for that matter instead of running all over the show [..] (in the running - BNC - document J1C - sentence 1341) Corpus Annotation ::: Annotation Procedure The manual annotation procedure consists of three different phases (pilot, double annotation, single annotation), followed by an adjudication step to resolve conflicting annotations. Two things are annotated: whether something is a PIE or not, and if it is a PIE, which sense the PIE is used in. In the first phase (0-100-*), we randomly select a hundred of the 2,239 PIE candidates, which are then annotated by three annotators. All annotators have a good command of English, are computational linguists, and are familiar with the subject. The annotators include the first and last author of this paper. The annotators were provided with a short set of guidelines, of which the main rule-of-thumb for labelling a phrase as a PIE is as follows: any phrase is a PIE when it contains all the words, with the same part-of-speech, and in the same grammatical relations as in the dictionary form of the PIE, ignoring determiners. For sense annotation, annotators were to mark a PIE as idiomatic if it had a sense listed in one of the idiom dictionaries, and as literal if it had a meaning that is a regular composition of its component words. For cases which were undecidable due to lack of context, the ?-label was used. The other-label was used as a container label for all cases in which neither the literal nor the idiomatic sense was correct (e.g. meta-linguistic uses and embeddings in metaphorical frames, see also Section SECREF10). The first phase of annotation serves to bring to light any inconsistencies between annotators and fill in any gaps in the annotation guidelines. The resulting annotations already show a reasonably high agreement of 0.74 Fleiss' Kappa. Table TABREF48 shows annotation details and agreement statistics for all three phases. The annotation tasks suffixed by -PIE indicate agreement on PIE/non-PIE annotation and the tasks suffixed by -sense indicate agreement on sense annotation for PIEs. In the second phase of annotation (100-600-* & 600-1100-*), another 1,000 of the 2,239 PIE candidates are selected to be annotated by two pairs of annotators. This shows very high agreement, as shown in Table TABREF48. This is probably due to the improvement in guidelines and the discussion following the pilot round of annotation. The exception to this are the somewhat lower scores for the 600-1100-sense annotation task. Adjudication revealed that this is due almost exclusively to a different interpretation of the literal and idiomatic senses of a single PIE type: on the ground. Excluding this PIE type, Fleiss' Kappa increases from 0.63 to 0.77. Because of the high agreement on PIE annotation, we deem it sufficient for the remainder (1108 candidates) to be annotated by only the primary annotator in the third phase of annotation (1100-2239-*). The reliability of the single annotation can be checked by comparing the distribution of labels to the multi-annotated parts. This shows that it falls clearly within the ranges of the other parts, both in the proportion of PIEs and idiomatic senses (see Table TABREF49). The single-annotated part has 49.0% PIEs, which is only 4 percentage points above the 44.7% PIEs in the multi-annotated parts. The proportion of idioms is just 2 percentage points higher, with 55.9% versus 53.9%.
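For reference, the Fleiss' Kappa values reported above can be computed as in the following sketch. This is a generic implementation for illustration, not the script used for the reported scores, and the toy ratings are invented.

```python
def fleiss_kappa(ratings):
    """ratings: one dict per item, mapping a label to the number of
    annotators who chose it; every item must have the same total number
    of annotators."""
    n_items = len(ratings)
    n_raters = sum(ratings[0].values())
    categories = {c for item in ratings for c in item}
    # Mean observed agreement per item
    p_bar = sum((sum(v * v for v in item.values()) - n_raters)
                / (n_raters * (n_raters - 1)) for item in ratings) / n_items
    # Expected agreement from the marginal label proportions
    p_e = sum((sum(item.get(c, 0) for item in ratings) / (n_items * n_raters)) ** 2
              for c in categories)
    return (p_bar - p_e) / (1 - p_e)

# Toy example: 3 annotators labelling 4 PIE candidates
ratings = [{"idiomatic": 3},
           {"literal": 2, "idiomatic": 1},
           {"literal": 3},
           {"idiomatic": 2, "other": 1}]
print(round(fleiss_kappa(ratings), 2))  # 0.41 for this toy data
```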
Although inter-annotator agreement was high, there was still a significant number of cases in the triple and double annotated PIE candidate sets where not all annotators agreed. These cases were adjudicated through discussion by all annotators, until they were in agreement. In addition, all PIE candidates which initially received the ?-label (unclear or undecidable) for sense or PIE were resolved in the same manner. In the adjudication procedure, annotators were provided with additional context on each side of the idiom, in contrast to the single sentence provided during the initial annotation. The main reason to do adjudication, rather than simply discarding all candidates for which there was disagreement, was that we expected exactly those cases for which there are conflicting annotations to be the most interesting ones, since having non-standard properties would cause the annotations to diverge. Examples of such interesting non-standard cases are at sea as part of a larger satirical frame in Example SECREF46 and cut the mustard in Example SECREF46 where it is used in a headline as wordplay on a Cluedo character. . The bovine heroine has connections with Cowpeace International, and deals with a huge treacle slick at sea. (at sea - BNC - document CBC - sentence 13550) . Why not cut the Mustard? [..] WADDINGTON Games's proposal to axe Reverend Green from the board game Cluedo is a bad one. (cut the mustard - BNC - document CBC - sentence 14548) We split the corpus at the document level. The corpus consists of 45 documents from the BNC, and we split it in such a way that the development set has 1,112 candidates across 22 documents and the test set has 1,127 candidates from 23 documents. Note that this means that the development and test set contain different genres. This ensures that we do not optimise our systems on genre-specific aspects of the data. Dictionary-based PIE Extraction We propose and implement four different extraction methods, of differing complexities: exact string match, fuzzy string match, inflectional string match, and parser-based extraction. Because of the absence of existing work on this task, we compare these methods to each other, where the more basic methods function as baselines. More complex methods serve to shine light on the difficulty of the PIE extraction task; if simple methods already work sufficiently well, the task is not as hard as expected, and vice versa. Below, each of the extraction methods is presented and discussed in detail. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Exact String Match This is, very simply, extracting all instances of the exact dictionary form of the PIE, from the tokenized text of the corpus. Word boundaries are taken into account, so at sea does not match `that seawater'. As a result, all inflectional and other variants of the PIE are ignored. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Fuzzy String Match Fuzzy string match is a rough way of dealing with morphological inflection of the words in a PIE. We match all words in the PIE, taking into account word boundaries, and allow for up to 3 additional letters at the end of each word. These 3 additional characters serve to cover inflectional suffixes. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Inflectional String Match In inflectional string match, we aim to match all inflected variations of a PIE. 
This is done by generating all morphological variants of the words in a PIE, generating all combinations of those words, and then using exact string match as described earlier. Generating morphological variations consists of three steps: part-of-speech tagging, morphological analysis, and morphological reinflection. Since inflectional variation only applies to verbs and nouns, we use the Spacy part-of-speech tagger to detect the verbs and nouns. Then, we apply the morphological analyser morpha to get the base, uninflected form of the word, and then use the morphological generation tool morphg to get all possible inflections of the word. Both tools are part of the Morph morphological processing suite BIBREF38. Note that the Morph tools depend on the part-of-speech tag in the input, so that a wrong PoS may lead to an incorrect set of morphological variants. For a PIE like spill the beans, this results in the following set of variants: $\lbrace $spill the bean, spills the bean, spilled the bean, spilling the bean, spill the beans, spills the beans, spilled the beans, spilling the beans$\rbrace $. Since we generate up to 2 variants for each noun, and up to 4 variants for each verb, the number of variants for PIEs containing multiple verbs and nouns can get quite large. On average, 8 additional variants are generated for each potentially idiomatic expression. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Additional Steps For all string match-based methods, ways to improve performance are implemented, to make them as competitive as possible. Rather than doing exact string matching, we also allow words to be separated by something other than spaces, e.g. nuts-and-bolts for nuts and bolts. Additionally, there is an option to take into account case distinctions. With the case-sensitive option, case is preserved in the idiom lists, e.g. coals to Newcastle, and the string matching is done in a case-sensitive manner. This increases precision, e.g. by avoiding PIEs as part of proper names, but also comes at a cost of recall, e.g. for sentence-initial PIEs. Thirdly, there is the option to allow for a certain number of intervening words between each pair of words in the PIE. This should improve recall, at the cost of precision. For example, this would yield the true positive make a huge mountain out of a molehill for make a mountain out of a molehill, but also false positives like have a smoke and go for have a go. A third shared property of the string-based methods is the processing of placeholders in PIEs. PIEs containing possessive pronoun placeholders, such as one's and someone's are expanded. That is, we remove the original PIE, and add copies of the PIE where the placeholder is replaced by one of the possessive personal pronouns. For example, a thorn in someone's side is replaced by a thorn in {my, your, his, ...} side. In the case of someone's, we also add a wildcard for any possessively used word, i.e. a thorn in —'s side, to match e.g. a thorn in Google's side. Similarly, we make sure that PIE entries containing —, such as the mother of all —, will match any word for — during extraction. We do the same for someone, for which we substitute objective pronouns. For one, this is not possible, since it is too hard to distinguish from the one used as a number. 
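To make the string-based matching more concrete, here is a minimal sketch of the fuzzy matching described above (word boundaries, up to three extra trailing letters per word, an optional number of intervening words), combined with a simplified expansion of the one's placeholder. The regex construction and the possessive list are our own illustration, not the original code; case handling and the other placeholder types are omitted.

```python
# Sketch: fuzzy string matching for a PIE, plus a simplified expansion of
# the one's placeholder. This is an illustrative approximation of the
# options described above, not the implementation evaluated in the paper.
import re

POSSESSIVES = ["my", "your", "his", "her", "its", "our", "their"]

def expand_placeholders(pie):
    """Replace the placeholder one's by each possessive pronoun."""
    if "one's" not in pie:
        return [pie]
    return [pie.replace("one's", p) for p in POSSESSIVES]

def fuzzy_pattern(pie, max_gap=0):
    # Each word may carry up to 3 extra trailing letters; up to `max_gap`
    # intervening words are allowed between consecutive PIE words.
    gap = r"(?:\s+\S+){0,%d}" % max_gap if max_gap else ""
    words = [re.escape(w) + r"[a-z]{0,3}" for w in pie.lower().split()]
    return re.compile(r"\b" + (r"\b" + gap + r"\s+\b").join(words) + r"\b")

def fuzzy_match(pie, sentence, max_gap=0):
    return any(fuzzy_pattern(v, max_gap).search(sentence.lower())
               for v in expand_placeholders(pie))

print(fuzzy_match("spill the beans", "He spilled the beans yesterday."))  # True
print(fuzzy_match("blow one's top", "She always blows her top."))         # True
```

Note that purely regex-based fuzzing cannot capture irregular inflection (e.g. blew for blow), which is exactly what the inflectional string match via morpha/morphg is meant to handle.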
Dictionary-based PIE Extraction ::: Parser-Based Extraction Methods Parser-based extraction is potentially the widest-coverage extraction method, with the capacity to extract both morphological and syntactic variants of the PIE. This should be robust against the most common modifications of the PIE, e.g. through word insertions (spill all the beans), passivisation (the beans were spilled), and abstraction over articles (spill beans). In this method, PIEs are extracted using the assumption that any sentence which contains the lemmata of the words in the PIE, in the same dependency relations as in the PIE, contains an instance of the PIE type in question. More concretely, this means that the parse of the sentence should contain the parse tree of the PIE as a subtree. This is illustrated in Figure FIGREF57, which shows the parse tree for the PIE lose the plot, parsed without context. Note that this is a subtree of the parse tree for the sentence `you might just lose the plot completely', which is shown in Figure FIGREF58. Since the sentence parse contains the parse of the PIE, we can conclude that the sentence contains an instance of that PIE and extract the span of the PIE instance. All PIEs are parsed in isolation, based on the assumption that all PIEs can be parsed, since they are almost always well-formed phrases. However, not all PIEs will be parsed correctly, especially since there is no context to resolve ambiguity. Errors tend to occur at the part-of-speech level, where, for example, verb-object combinations like jump ship and touch wood are erroneously tagged as noun-noun compounds. An analysis of the impact of parser error on PIE extraction performance is presented in Section SECREF73. We use the Spacy parser for parsing both the PIEs and the sentences. To extract PIE instances, the sentence is parsed, and the lemma of the top node of the parsed PIE is matched against the lemmata of the sentence parse. If a match is found, the parse tree of the PIE is matched against the subtree of the matching sentence parse node. If the whole PIE parse tree matches, the span ranging from the first PIE token to the last is extracted. This span can thus include words that are not directly part of the PIE's dictionary form, in order to account for insertions like ships were jumped for jump ship, or have a big heart for have a heart. During the matching, articles (a/an/the) are ignored, and passivisation is accounted for with a special rule. In addition, a number of special cases are dealt with. These are PIEs containing someone('s), something('s), one's, or —. These words are used in PIEs as placeholders for a generic possessor (someone's/something's/one's), generic object (someone/something), or any word of the right PoS (—). For someone's and something's, we match any possessive pronoun, or (proper) noun + possessive marker. For one's, only possessive pronouns are matched, since this is a placeholder for reflexive possessors. For someone and something, any non-possessive pronoun or (proper) noun is matched. For — wildcards, any word can be matched, as long as it has the right relation to the right head. An additional challenge with these wildcards is that PIEs containing them cannot be parsed, e.g. too — for words is not parseable. This is dealt with by substituting the — with a PoS-ambiguous word, such as fine or back.
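The core subtree matching procedure can be sketched as follows, using the Spacy API. This is a simplified illustration rather than the system evaluated here: it assumes the en_core_web_sm model, ignores articles, and omits the passivisation rule and placeholder handling described above; the exact output also depends on the parses Spacy produces.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed English model

def match_subtree(pie_tok, sent_tok):
    """Return the matched sentence tokens if the PIE parse rooted at pie_tok
    is contained in the sentence parse rooted at sent_tok, else None."""
    if pie_tok.lemma_ != sent_tok.lemma_:
        return None
    matched = [sent_tok]
    for pie_child in pie_tok.children:
        if pie_child.lower_ in ("a", "an", "the"):   # articles are ignored
            continue
        for sent_child in sent_tok.children:
            if sent_child.dep_ == pie_child.dep_:
                sub = match_subtree(pie_child, sent_child)
                if sub is not None:
                    matched.extend(sub)
                    break
        else:
            return None   # no sentence node matches this part of the PIE parse
    return matched

def extract_pie(pie, sentence):
    """Extract the span of a PIE instance from a sentence, if present."""
    pie_root = [t for t in nlp(pie) if t.head == t][0]   # root of the PIE parse
    doc = nlp(sentence)
    for tok in doc:
        matched = match_subtree(pie_root, tok)
        if matched:
            start, end = min(t.i for t in matched), max(t.i for t in matched) + 1
            return doc[start:end].text   # span from first to last matched token
    return None

# Expected to return something like 'lose the whole plot' (parser-dependent):
print(extract_pie("lose the plot", "You might just lose the whole plot completely."))
```

Note how the returned span includes inserted material (whole), in line with the span definition given above.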
Two optional features are added to the parser-based method with the goal of making it more robust to parser errors: generalising over dependency relation labels, and generalising over dependency relation direction. We expect this to increase recall at the cost of precision. In the first no labels setting, we match parts of the parse tree which have the same head lemma and the same dependent lemma, regardless of the relation label. An example of this is Figure FIGREF60, which has the wrong relation label between up and ante. If labels are ignored, however, we can still extract the PIE instance in Figure FIGREF61, which has the correct label. In the no directionality setting, relation labels are also ignored, and in addition the directionality of the relation is ignored, that is, we allow for the reversal of heads and dependents. This benefits performance in a case like Figure FIGREF62, which has stock as the head of laughing in a compound relation, whereas the parse of the PIE (Figure FIGREF63) has laughing as the head of stock in a dobj relation. Note that similar settings were implemented by BIBREF26, who detect literal uses of VMWEs using a parser-based method with either full labelled dependencies, unlabelled dependencies, or directionless unlabelled dependencies (which they call BagOfDeps). They find that recall increases when fewer restrictions on the dependencies are used, but that, contrary to what we would expect, this does not hurt precision. However, we cannot draw too many conclusions from these results due to the small size of their evaluation set, which consists of just 72 literal VMWEs in total. Dictionary-based PIE Extraction ::: Parser-Based Extraction Methods ::: In-Context Parsing Since the parser-based method parses PIEs without any context, it often finds an incorrect parse, as for jump ship in Figure FIGREF65. As such, we add an option to the method that aims to increase the number of correct parses by parsing the PIE within context, that is, within a sentence. This can greatly help to disambiguate the parse, as in Figure FIGREF66. If the number of correct parses goes up, the recall of the extraction method should also increase. Naturally, it can also be the case that a PIE is parsed correctly without context, and incorrectly with context. However, we expect the gains to outweigh the losses. The challenge here is thus to collect example sentences containing the PIE. Since the whole point of this work is to extract PIEs from raw text, this creates a catch-22-like situation: we need to extract a sentence containing a PIE in order to extract sentences containing a PIE. The workaround for this problem is to use the exact string matching method with the dictionary form of the PIE and a very large plain text corpus to gather example sentences. By considering only the exact dictionary form, we simplify both the finding of example sentences and the extraction of the PIE's parse from the sentence parse. In case multiple example sentences are found, the shortest sentence is selected, since we assume it is easiest to parse. This is also the reason we make use of very large corpora, to increase the likelihood of finding a short, simple sentence. The example sentence extraction method is modified in such a way that sentences where the PIE is used meta-linguistically in quotes, e.g. “the well-known English idiom `to spill the beans' has no equivalents in other languages”, are excluded, since they do not provide a natural context for parsing.
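A minimal sketch of this example sentence selection step is given below. It assumes the corpus is available as an iterable of plain-text sentences, that the PIE is supplied in lower case, and that a simple quote check is a sufficient proxy for meta-linguistic use; the actual implementation may differ.

```python
import re

def find_example_sentence(pie, sentences):
    """Select the shortest sentence containing the exact dictionary form of
    the PIE (assumed lower-cased), skipping likely meta-linguistic uses
    (PIE wrapped in quote marks). Returns None when no example is found,
    which triggers the back-off to parsing the PIE without context."""
    pattern = re.compile(r"\b" + re.escape(pie) + r"\b", re.IGNORECASE)
    best = None
    for sent in sentences:
        if not pattern.search(sent):
            continue
        low = sent.lower()
        # rough heuristic for meta-linguistic use: PIE enclosed in quotes
        if f"'{pie}'" in low or f'"{pie}"' in low or f"`{pie}'" in low:
            continue
        if best is None or len(sent) < len(best):
            best = sent
    return best
```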
When no example sentence can be found in the corpus, we back-off to parsing the PIE without context. After a parse has been found for each PIE (i.e. with or without context), the method proceeds identically to the regular parser-based method. We make use of the combination of two large corpora for the extraction of example sentences: the English Wikipedia, and ukWaC BIBREF17. For the Wikipedia corpus, we use a dump (13-01-2016) of the English-language Wikipedia, and remove all Wikipedia markup. This is done using WikiExtractor. The resulting files still contain some mark-up, which is removed heuristically. The resulting corpus contains mostly clean, raw, untokenized text, numbering approximately 1.78 billion tokens. As for ukWaC, all XML-markup was removed, and the corpus is converted to a one-sentence-per-line format. UkWaC is tokenized, which makes it difficult for a simple string match method to find PIEs containing punctuation, for example day in, day out. Therefore, all spaces before commas, apostrophes, and sentence-final punctuation are removed. The resulting corpus contains approximately 2.05 billion tokens, making for a total of 3.83 billion tokens in the combined ukWaC and Wikipedia corpus. Dictionary-based PIE Extraction ::: Results In order to determine which of the methods described previously produces the highest quality extraction of potentially idiomatic expressions, we evaluate them, in various settings, on the corpus described in Section SECREF5. For parser-based extraction, systems with and without in-context parsing, ignoring labels, and ignoring directionality are tested. For the three string-based extraction methods, varying numbers of intervening words and case sensitivity are evaluated. Evaluation is done using the development set, consisting of 22 documents and 1112 PIE candidates, and the test set, which consists of 23 documents and 1127 PIE candidates. For each method the best set of parameters and/or options is determined using the development set, after which the best variant by F1-score of each method is evaluated on the test set. Since these documents in the corpus are exhaustively annotated for PIEs (see Section SECREF40), we can calculate true and false positives, and false negatives, and thus precision, recall and F1-score. The exact spans are ignored, because the spans annotated in the evaluation corpus are not completely reliable. These were automatically generated during candidate extraction, as described in Section SECREF45. Rather, we count an extraction as a true positive if it finds the correct PIE type in the correct sentence. Note that we judge the system with the highest F1-score to be the best-performing system, since it is a clear and objective criterion. However, when using the system in practice, the best performance depends on the goal. When used as a preprocessing step for PIE disambiguation, the system with the highest F1-score is perhaps the most suitable, but as a corpus building tool, one might want to sacrifice some precision for an increase in recall. This helps to get the most comprehensive annotation of PIEs possible, without overloading the annotators with false extractions (i.e. non-PIEs), by maintaining high precision. The results for each system on the development set are presented in Tables TABREF70 and TABREF71. Generally, results are in line with expectations: (the best) parse-based methods are better than (the best) string-based methods, and within string-based methods, inflectional matching works best. 
The same goes for the different settings: case-sensitivity increases precision at the cost of recall, allowing intervening words increases recall at the cost of precision, and likewise for the no labels and no directionality options for parser-based extraction. Overall, in-context parser-based extraction works best, with an F1 of 88.54%, whereas fuzzy matching does very poorly. Within string-based methods, exact matching has the highest precision, but low recall. Fuzzy matching increases recall at a disproportionately large precision cost, whereas inflectional matching combines the best of both worlds and has high recall at a small loss in precision. For the parser-based system, it is notable that parsing idioms within context yields a clear overall improvement, greatly increasing recall at a small cost in precision. We evaluate the best variant of each system, as determined by F1-score, on the test set. This gives us an indication of whether the system is robust enough, or was overfitted on the development data. Results on the test set are shown in Table TABREF72. On average, the results are lower than those on the development set. The string-based methods perform clearly worse, with drops of about 4% F1-score for exact and inflectional match, and a large drop of almost 9% F1-score for fuzzy matching. The parser-based method, on the other hand, is more robust, with a small 0.59% increase in F1-score on the test set. Dictionary-based PIE Extraction ::: Analysis Broadly speaking, the PIE extraction systems presented above perform in line with expectations. It is nevertheless useful to see where the best-performing system misses out, and where improvements like in-context parsing help performance. We analyse the shortcomings of the in-context parser-based system by looking at the false positives and false negatives on the development set. We consider the output of the system with the best overall performance, since it will provide the clearest picture. The system extracts 529 PIEs in total, of which 54 are false extractions (false positives), and it misses 69 annotated PIE instances (false negatives). Most false positives stem from the system's failure to capture nuances of PIE annotation. This includes cases where PIEs contain, or are part of, proper nouns (Example SECREF73), PIEs that are part of coordination constructions (Example SECREF73), and incorrect attachments (Example SECREF73). Among these errors, sentences containing proper nouns are an especially frequent problem. . Drama series include [..] airline security thrills in Cleared For Takeoff and Head Over Heels [..] (in the clear - BNC - document CBC - sentence 5177) . They prefer silk, satin or lace underwear in tasteful black or ivory. (in the black - BNC - document CBC - sentence 14673) . [..] `I saw this chap make something out of an ordinary piece of wood — he fashioned it into an exquisite work of art.' (out of the woods - BNC - document ABV - sentence 1300) Errors made by the parser are the main cause of false negatives. In order to correctly extract a PIE from a sentence, both the PIE and the sentence have to be parsed correctly, or at least parsed in the same way. This means a missed extraction can be caused by a wrong parse for the PIE or a wrong parse for the sentence. These two error types form the largest class of false negatives. Since some PIE types are rather frequent, a wrong parse for a single PIE type can potentially lead to a large number of missed extractions.
It is not surprising that the parser makes many mistakes, since idioms often have unusual syntactic constructions (e.g. come a cropper) and contain words where default part-of-speech tags lead to the wrong interpretation (e.g. round is a preposition in round the bend, not a noun or adjective). This is especially true when idioms are parsed without context, and hence, where in-context parsing provides the largest benefit: the number of PIEs which are parsed incorrectly drops, which leads to F1-scores on those types going from 0% to almost 100% (e.g. in light of and ring a bell). Since parser errors are the main contributor to false negatives, hurting recall, we can observe that parsing idioms in context serves to benefit only recall, by 7 percentage points, at only a small loss in precision. We find that adding context mainly helps for parsing expressions which are structurally relatively simple, but still ambiguous, such as rub shoulders, laughing stock, and round the bend. Compare, for example, the parse trees for laughing stock in isolation and within the extracted context sentence in Figures FIGREF74 and FIGREF75. When parsed in isolation, the relation between the two words is incorrectly labelled as a compound relation, whereas in context it is correctly labelled as a direct object relation. Note, however, that for the most difficult PIEs, embedding them in a context does not solve the parsing problem: a syntactically odd phrase is hard to parse (e.g. for the time being), and a syntactically odd phrase in a sentence makes for a syntactically odd sentence that is still hard to parse (e.g. `London for the time being had been abandoned.'). Finding example sentences turned out not to be a problem, since appropriate sentences were found for 559 of 591 PIE types. An alternative method for reducing parser error is to use a different, better parser. The Spacy parser was mainly chosen for implementation convenience and speed, and there are parsers which have better performance, as measured on established parsing benchmarks. To investigate the effectiveness of this method, we used the Stanford Neural Dependency Parser BIBREF39 to extract PIEs in the regular parsing, in-context parsing, and no labels settings. In all cases, using the Stanford parser yielded worse extraction performance than the Spacy parser. A possible explanation for why a supposedly better parser performs worse here is that parsers are optimised and trained to do well on established benchmarks, which consist of complete sentences, often from news texts. This does not necessarily correlate with parsing performance on short idiomatic phrases (or on sentences containing them). As such, we cannot assume that better overall parsing performance implies better PIE extraction performance. It should be noted that, when assessing the quality of PIE extraction performance, the parser-based methods are sensitive to specific PIE types. That is, if a single PIE type is parsed incorrectly, then it is highly probable that all instances of that type are missed. If this type is also highly frequent, this means that a small change in actual performance yields a large change in evaluation scores. Our goal is to have a PIE extraction system that is robust across all PIE types, and thus the current evaluation setting does not align exactly with our aim. Splitting out performance per PIE type reveals whether there is indeed a large variance in performance across types.
Table TABREF76 shows the 25 most frequent PIE types in the corpus, and the performance of the in-context-parsing-based system on each. In all but two cases (in the black and round the bend), performance is in the 80–100% range, with perfect performance on the majority of types. For none of the types do we see low precision paired with high recall, which indicates that the parser never matches a highly frequent non-PIE phrase. For the system with the no labels and no directionality options (per-type numbers not shown here), however, this does occur. For example, ignoring the labels for the parse of the PIE have a go leads to the erroneous matching of many sentences containing a form of have to go, which is highly frequent, thus leading to a large drop in precision. Although performance is stable across the most frequent types, among the less frequent types it is more spotty. This hurts overall performance, and there are potential gains in mitigating the poor performance on these types, such as for the time being. At the same time, the string matching methods show much more stable performance across types, and some of them do so with very high precision. As such, a combination of two such methods could boost performance significantly. If we use a high-precision string match-based method, such as the exact string match variant with a precision of 97.35%, recall could be improved for the wrongly parsed PIE types, without a significant loss of precision. We experiment with two such combinations, by simply taking the union of the sets of extracted idioms of both systems, and filtering out duplicates. Results are shown in Table TABREF77. Both combinations show the expected effect: a clear gain in recall at a minimal loss in precision. Compared to the in-context-parsing-based system, the combination with exact string matching yields a gain in recall of over 6%, and the combination with inflectional string matching yields an even bigger gain of almost 8%, at precision losses of 0.6% and 0.8%, respectively. This indicates that the systems are very much complementary in the PIEs they extract. It also means that, when used in practice, combining inflectional string matching and parse-based extraction is the most reliable configuration. Conclusions and Outlook We present an in-depth study on the automatic extraction of potentially idiomatic expressions based on dictionaries. The purpose of automatic dictionary-based extraction is, on the one hand, to function as a pre-extraction step in the building of a large idiom-annotated corpus. On the other hand, it can function as part of an idiom extraction system when combined with a disambiguation component. In both cases, the ultimate goal is to improve the processing of idiomatic expressions within NLP. This work consists of three parts: a comparative evaluation of the coverage of idiom dictionaries, the annotation of a PIE corpus, and the development and evaluation of several dictionary-based PIE extraction methods. In the first part, we present a study of idiom dictionary coverage, which serves to answer the question of whether a single idiom dictionary, or a combination of dictionaries, can provide good coverage of the set of all English idioms. Based on the comparison of dictionaries to each other, we estimate that the overlap between them is limited, varying from 20% to 55%, which indicates a large divergence between the dictionaries.
This can be explained by the fact that idioms vary widely by register, genre, language variety, and time period. In our case, it is also likely that the divergence is caused partly by the gap between crowdsourced dictionaries on the one hand, and a dictionary compiled by professional lexicographers on the other. Given these factors, we can conclude that a single dictionary cannot provide even close to complete coverage of English idioms, but that by combining dictionaries from various sources, significant gains can be made. Since `English idioms' are a diffuse and constantly changing set, we have no gold standard to compare to. As such, we conclude that multiple dictionaries should be used when possible, but that we cannot say anything definitive on the coverage of dictionaries with regard to the complete set of English idioms (which can only be approximated in the first place). A more comprehensive comparison of idiom resources could be made in the future by using more advanced automatic methods for matching, for example by using the method of BIBREF32 for measuring expression variability. This would make it easier to evaluate a larger number of dictionaries, since no manual effort would be required. In the second part, we experiment with the exhaustive annotation of PIEs in a corpus of documents from the BNC. Using a set of 591 PIE types, much larger and more varied than in existing resources, we show that it is very much possible to establish a working definition of PIE that allows for a large amount of variation, while still being useful for reliable annotation. This resulted in high inter-annotator agreement, ranging from 0.74 to 0.91 Fleiss' Kappa. This means that we can build a resource to evaluate a wide-coverage idiom extraction system with relatively little effort. The final corpus of PIEs with sense annotations is publicly available; it consists of 2,239 PIE candidates, of which 1,050 are actual PIE instances, and contains 278 different PIE types. Finally, several methods for the automatic extraction of PIE instances were developed and evaluated on the annotated PIE corpus. We tested methods of differing complexity, from simple string match to dependency parse-based extraction. Comparison of these methods revealed that the more computationally complex method, parser-based extraction, works best. Parser-based extraction is especially effective in capturing a larger amount of variation, but is less precise than string-based methods, mostly because of parser error. The best overall setting of this method, which parses idioms within context, yielded an F1-score of 89.13% on the test set. Parser error can be partly compensated for by combining the parse-based method and the inflectional string match method, which yields an F1-score of 92.01% (on the development set). This aligns well with the findings by BIBREF27, who found that combining simpler and more complex methods improves over just using a simple method in the case of extracting verb-particle constructions. This level of performance means that we can use the tool in corpus building. This greatly reduces the amount of manual extraction effort involved, while still maintaining a high level of recall. We make the source code for the different systems publicly available. Note that, although used here in the context of PIE extraction, our methods are equally applicable to other phrase extraction tasks, for example the extraction of light-verb constructions, metaphoric constructions, collocations, or any other type of multiword expression (cf.
BIBREF27, BIBREF25, BIBREF26). Similarly, our method can be conceived as a blueprint and extended to languages other than English. For this to be possible, for any given new language one would need a list of target expressions and, in the case of the parser-based method, a reliable syntactic parser. If this is not the case, the inflectional matching method can be used, which requires only a morphological analyser and generator. Obviously, for languages that are morphologically richer than English, one would need to develop strategies aimed at controlling non-exact matches, so as to enhance recall without sacrificing precision. Previous work on Italian, for example, has shown the feasibility of achieving such balance through controlled pattern matching BIBREF40. Languages that are typologically very different from English would obviously require a dedicated approach for the matching of PIEs in corpora, but the overall principles of extraction, using language-specific tools, could stay the same. Currently, no corpora containing annotation of PIEs exist for languages other than English. However, the PARSEME corpus BIBREF19 already contains idioms (only idiomatic readings) for many languages and would only need annotation of literal usages of idioms to make up a set of PIEs. Paired with the Universal Dependencies project BIBREF41, which increasingly provides annotated data as well as processing tools for an ever growing number of languages, this seems an excellent starting point for creating PIE resources in multiple languages.
exact string matching, inflectional string matching
0fec9da2bc80a12a7a6d6600b9ecf3e122732b60
0fec9da2bc80a12a7a6d6600b9ecf3e122732b60_0
Q: Are PIEs extracted automatically subjected to human evaluation? Text: Introduction Idiomatic expressions pose a major challenge for a wide range of applications in natural language processing BIBREF0. These include machine translation BIBREF1, BIBREF2, semantic parsing BIBREF3, sentiment analysis BIBREF4, and word sense disambiguation BIBREF5. Idioms show significant syntactic and morphological variability (e.g. beans being spilled for spill the beans), which makes them hard to find automatically. Moreover, their non-compositional nature makes idioms really hard to interpret, because their meaning is often very different from the meanings of the words that make them up. Hence, successful systems need not only be able to recognise idiomatic expressions in text or dialogue, but they also need to give a proper interpretation to them. As a matter of fact, current language technology performs badly on idiom understanding, a phenomenon that perhaps has not received enough attention. Nearly all current language technology used in NLP applications is based on supervised machine learning. This requires large amounts of labelled data. In the case of idiom interpretation, however, only small datasets are available. These contain just a couple of thousand idiom instances, covering only about fifty different types of idiomatic expressions. In fact, existing annotated corpora tend to cover only a small set of idiom types, comprising just a few syntactic patterns (e.g., verb-object combinations), of which a limited number of instances are extracted from a large corpus. This is not surprising as preparing and compiling such corpora involves a large amount of manual extraction work, especially if one wants to allow for form variation in the idiomatic expressions (for example, extracting cooking all the books for cook the books). This work involves both the crafting of syntactic patterns to match potential idiomatic expressions and the filtering of false extractions (non-instances of the target expression e.g. due to wrong parses), and increases with the amount of idiom types included in the corpus (which, in the worst case, means an exponential increase in false extractions). Thus, building a large corpus of idioms, especially one that covers many types in many syntactic constructions, is costly. If a high-precision, high-recall system can be developed for the task of extracting the annotation candidates, this cost will be greatly reduced, making the construction of a large corpus much more feasible. The variability of idioms has been a significant topic of interest among researchers of idioms. For example, BIBREF6 investigates the internal and external modification of a set of idioms in a large English corpus, whereas BIBREF7, quantifies and classifies the variation of a set of idioms in a large corpus of Dutch, setting up a useful taxonomy of variation types. Both find that, although idiomatic expressions mainly occur in their dictionary form, there is a significant minority of idiom instances that occur in non-dictionary variants. Additionally, BIBREF8 show that idiom variants retain their idiomatic meaning more often and are processed more easily than previously assumed. This emphasises the need for corpora covering idiomatic expressions to include these variants, and for tools to be robust in dealing with them. 
As such, the aim of this article is to describe methods and provide tools for constructing larger corpora, annotated with a wider range of idiom types than currently in existence, by reducing the amount of manual labour required. In this way we hope to stimulate further research in this area. In contrast to previous approaches, we want to catch as many idiomatic expressions as possible, and we achieve this by casting a wide net, that is, we consider the widest range of possible idiom variants first and then filter out any bycatch in a way that requires the least manual effort. We expect research will benefit from having larger corpora by improving evaluation quality, by allowing for the training of better supervised systems, and by providing additional linguistic insight into idiomatic expressions. A reliable method for extracting idiomatic expressions is not only needed for building an annotated corpus, but can also be used as part of an automatic idiom processing pipeline. In such a pipeline, extracting potentially idiomatic expressions can be seen as a first step before idiom disambiguation, and the combination of the two modules then functions as a complete idiom extraction system. The main research question that we aim to answer in this article is whether dictionary-based extraction of potentially idiomatic expressions is robust and reliable enough to facilitate the creation of wide-coverage sense-annotated idiom corpora. By answering this question we make several contributions to research on multiword expressions, in particular to idiom extraction. Firstly, we provide an overview of existing research on annotating idiomatic expressions in corpora, showing that current corpora cover only small sets of idiom types (Section SECREF3). Secondly, we quantify the coverage and reliability of a set of idiom dictionaries, demonstrating that there is little overlap between resources (Section SECREF4). Thirdly, we develop and release an evaluation corpus for extracting potentially idiomatic expressions from text (Section SECREF5). Finally, various extraction systems and combinations thereof are implemented, made available to the research community, and evaluated empirically (Section SECREF6). New Terminology: Potentially Idiomatic Expression (PIE) The ambiguity of phrases like wake up and smell the coffee poses a terminological problem. Usually, these phrases are called idiomatic expressions, which is suitable when they are used in an idiomatic sense, but not so much when they are used in a literal sense. Therefore, we propose a new term: potentially idiomatic expressions, or PIEs for short. The term potentially idiomatic expression refers to those expressions which can have an idiomatic meaning, regardless of whether they actually have that meaning in a given context. So, see the light is a PIE in both `After another explanation, I finally saw the light' and `I saw the light of the sun through the trees', while it is an idiomatic expression in the first context, and a literal phrase in the second. The processing of PIEs involves three main challenges: the discovery of (new) PIE types, the extraction of instances of known PIE types in text, and the disambiguation of PIE instances in context. Here, we propose calling the discovery task simply PIE discovery, the extraction task simply PIE extraction, and the disambiguation task PIE disambiguation. Note that these terms contrast with the terms used in existing research.
There, the discovery task is called type-based idiom detection and the disambiguation task is called token-based idiom detection (cf. BIBREF10, BIBREF11), although this usage is not always consistent. Because these terms are very similar, they are potentially confusing, and that is why we propose novel terminology. Other terminology comes from literature on multiword expressions (MWEs) more generally, i.e. not specific to idioms. Here, the task of finding new MWE types is called MWE discovery and finding instances of known MWE types is called MWE identification BIBREF12. Note, however, that MWE identification generally consists of finding only the idiomatic usages of these types (e.g. BIBREF13). This means that MWE identification consists of both the extraction and disambiguation tasks, performed jointly. In this work, we propose to split this into two separate tasks, and we are concerned only with the PIE extraction part, leaving PIE disambiguation as a separate problem. Related Work This section is structured so as to reflect the dual contribution of the present work. First, we discuss existing resources annotated for idiomatic expressions. Second, we discuss existing approaches to the automatic extraction of idioms. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms There are four sizeable sense-annotated PIE corpora for English: the VNC-Tokens Dataset BIBREF9, the Gigaword dataset BIBREF14, the IDIX Corpus BIBREF10, and the SemEval-2013 Task 5 dataset BIBREF15. An overview of these corpora is presented in Table TABREF7. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: VNC-Tokens The VNC-Tokens dataset contains 53 different PIE types. BIBREF9 extract up to 100 instances from the British National Corpus for each type, for a total of 2,984 instances. These types are based on a pre-existing list of verb-noun combinations and were filtered for frequency and whether two idiom dictionaries both listed them. Instances were extracted automatically, by parsing the corpus and selecting all sentences with the right verb and noun in a direct-object relation. It is unclear whether the extracted sentences were manually checked, but no false extractions are mentioned in the paper or present in the dataset. All extracted PIE instances were annotated for sense as either idiomatic, literal or unclear. This is a self-explanatory annotation scheme, but BIBREF9 note that senses are not binary, but can form a continuum. For example, the idiomaticity of have a word in `You have my word' is different from both the literal sense in `The French have a word for this' and the figurative sense in `My manager asked to have a word'. They instructed annotators to choose idiomatic or literal even in ambiguous middle-of-the-continuum cases, and restrict the unclear label only to cases where there is not enough context to disambiguate the meaning of the PIE. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: Gigaword BIBREF14 present a corpus of 17 PIE types, for which they extracted all instances from the Gigaword corpus BIBREF18, yielding a total of 3,964 instances. BIBREF14 extracted these instances semi-automatically by manually defining all inflectional variants of the verb in the PIE and matching these in the corpus. They did not allow for inflectional variations in non-verb words, nor did they allow intervening words. They annotated these potential idioms as either literal or figurative, excluding ambiguous and unclear instances from the dataset. 
Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: IDIX BIBREF10 build on the methodology of BIBREF14, but annotate a larger set of idioms (52 types) and extract all occurrences from the BNC rather than the Gigaword corpus, for a total of 4,022 instances including false extractions. BIBREF10 use a more complex semi-automatic extraction method, which involves parsing the corpus, manually defining the dependency patterns that match the PIE, and extracting all sentences containing those patterns from the corpus. This allows for larger form variations, including intervening words and inflectional variation of all words. In some cases, this yields many non-PIE extractions, as for recharge one's batteries in Example SECREF10. These were not filtered out before annotation, but rather filtered out as part of the annotation process, by having false extraction as an additional annotation label. For sense annotation, they use an extensive tagset, distinguishing literal, non-literal, both, meta-linguistic, embedded, and undecided labels. Here, the both label (Example SECREF10) is used for cases where both senses are present, often as a form of deliberate word play. The meta-linguistic label (Example SECREF10) applies to cases where the PIE instance is used as a linguistic item to discuss, not as part of a sentence. The embedded label (Example SECREF10) applies to cases where the PIE is embedded in a larger figurative context, which makes it impossible to say whether a literal or figurative sense is more applicable. The undecided label is used for unclear and undecidable cases. They take into account the fact that a PIE can have multiple figurative senses, and enumerate these separately as part of the annotation. . These high-performance, rugged tools are claimed to offer the best value for money on the market for the enthusiastic d-i-yer and tradesman, and for the first time offer the possibility of a battery recharging time of just a quarter of an hour. (from IDIX corpus, ID #314) . Left holding the baby, single mothers find it hard to fend for themselves. (from BIBREF10, p.642) . It has long been recognised that expressions such as to pull someone's leg, to have a bee in one's bonnet, to kick the bucket, to cook someone's goose, to be off one's rocker, round the bend, up the creek, etc. are semantically peculiar. (from BIBREF10, p.642) . You're like a restless bird in a cage. When you get out of the cage, you'll fly very high. (from BIBREF10, p.642) The both, meta-linguistic, and embedded labels are useful and linguistically interesting distinctions, although they occur very rarely (0.69%, 0.15%, and an unknown %, respectively). As such, we include these cases in our tagset (see Section SECREF5), but group them under a single label, other, to reduce annotation complexity. We also follow BIBREF10 in that we combine both the PIE/non-PIE annotation and the sense annotation in a single task. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: SemEval-2013 Task 5b BIBREF15 created a dataset for SemEval-2013 Task 5b, a task on detecting semantic compositionality in context. They selected 65 PIE types from Wiktionary, and extracted instances from the ukWaC corpus BIBREF17, for a total of 4,350 instances. It is unclear how they extracted the instances, and how much variation was allowed for, although there is some inflectional variation in the dataset. An unspecified amount of manual filtering was done on the extracted instances. 
The extracted PIE instances were labelled as literal, idiomatic, both, or undecidable. Interestingly, they crowdsourced the sense annotations using CrowdFlower, with high agreement (90%–94% pairwise). Undecidable cases and instances on which annotators disagreed were removed from the dataset. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: General Multiword Expression Corpora In addition to the aforementioned idiom corpora, there are also corpora focused on multiword expressions (MWEs) in a more general sense. As idioms are a subcategory of MWEs, these corpora also include some idioms. The most important of these are the PARSEME corpus BIBREF19 and the DiMSUM corpus BIBREF20. DiMSUM provides annotations of over 5,000 MWEs in approximately 90K tokens of English text, consisting of reviews, tweets and TED talks. However, they do not categorise the MWEs into specific types, meaning we cannot easily quantify the number of idioms in the corpus. In contrast to the corpus-specific sense labels seen in other corpora, DiMSUM annotates MWEs with WordNet supersenses, which provide a broad category of meaning for each MWE. Similarly, the PARSEME corpus consists of over 62K MWEs in almost 275K tokens of text across 18 different languages (with the notable exception of English). The main differences with DiMSUM, except for scale and multilingualism, are that it only includes verbal MWEs, and that subcategorisation is performed, including a specific category for idioms. Idioms make up almost a quarter of all verbal MWEs in the corpus, although the proportion varies wildly between languages. In both corpora, MWE annotation was done in an unrestricted manner, i.e. there was not a predefined set of expressions to which annotation was restricted. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: Overview In sum, there is large variation in corpus creation methods, regarding PIE definition, extraction method, annotation schemes, base corpus, and PIE type inventory. Depending on the goal of the corpus, the amount of deviation that is allowed from the PIE's dictionary form to the instances can be very little BIBREF14, to quite a lot BIBREF10. The number of PIE types covered by each corpus is limited, ranging from 17 to 65 types, often limited to one or more syntactic patterns. The extraction of PIE instances is usually done in a semi-automatic manner, by manually defining patterns in a text or parse tree, and doing some manual filtering afterwards. This works well, but an extension to a large number of PIE types (e.g. several hundreds) would also require a large increase in the amount of manual effort involved. Considering the sense annotations done on the PIE corpora, there is significant variation, with BIBREF9 using only three tags, whereas BIBREF10 use six. Outside of PIE-specific corpora there are MWE corpora, which provide a different perspective. A major difference there is that annotation is not restricted to a pre-specified set of expressions, which has not been done for PIEs specifically. Related Work ::: Extracting Idioms from Corpora There are two main approaches to idiom extraction. The first approach aims to distinguish idioms from other multiword phrases, where the main purpose is to expand idiom inventories with rare or novel expressions BIBREF21, BIBREF22, BIBREF23, BIBREF24. The second approach aims to extract all occurrences of a known idiomatic expression in a text. In this paper, we focus on the latter approach. 
We rely on idiom dictionaries to provide a list of PIE types, and build a system that extracts all instances of those PIE types from a corpus. High-quality idiom dictionaries exist for most well-resourced languages, but their reliability and coverage is not known. As such, we quantify the coverage of dictionaries in Section SECREF4. There is, to the best of our knowledge, no existing work that focuses on dictionary-based PIE extraction. However, there is closely-related work by BIBREF25, who present a system for the dictionary-based extraction of verb-noun combinations (VNCs) in English and Spanish. In their case, the VNCs can be any kind of multiword expression, which they subdivide into literal expressions, collocations, light verb constructions, metaphoric expressions, and idioms. They extract 173 English VNCs and 150 Spanish VNCs and annotate these with both their lexico-semantic MWE type and the amount of morphosyntactic variation they exhibit. BIBREF25 then compare a word sequence-based method, a chunking-based method, and a parse-based method for VNC extraction. Each method relies on the morpho-syntactic information in order to limit false extractions. Precision is evaluated manually on a sample of the extracted VNCs, and recall is estimated by calculating the overlap between the output of the three methods. Evaluation shows that the methods are highly complementary both in recall, since they extract different VNCs, and in precision, since combining the extractors yields fewer false extractions. Whereas BIBREF25 focus on both idiomatic and literal uses of the set of expressions, like in this paper, BIBREF26 tackle only half of that task, namely extracting only literal uses of a given set of VMWEs in Polish. This complicates the task, since it combines extracting all occurrences of the VMWEs and then distinguishing literal from idiomatic uses. Interestingly, they also experiment with models of varying complexity, i.e. just words, part-of-speech tags, and syntactic structures. Their results are hard to put into perspective however, since the frequency of literal VMWEs in their corpus is very rare, whereas corpora containing PIEs tend to show a more balanced distribution. Other similar work to ours also focuses on MWEs more generally, or on different subtypes of MWEs. In addition, these tend to combine both extraction and disambiguation in that they aim to extract only idiomatically used instances of the MWE, without extracting literally used instances or non-instances. Within this line of work, BIBREF27 focuses on verb-particle constructions, BIBREF28 on verbal MWEs (including idioms), and BIBREF29 on verbal MWEs (especially non-canonical variants). Both BIBREF28 and BIBREF29 rely on a pre-defined set of expressions, whereas BIBREF27 also extracts unseen expressions, although based on a pre-defined set of particles and within the vary narrow syntactic frame of verb-particle constructions. The work of BIBREF27 is most similar to ours in that it builds an unsupervised system using existing NLP tools (PoS taggers, chunkers, parsers) and finds that a combination of systems using those tools performs best, as we find in Section SECREF69. BIBREF28 and BIBREF29, by contrast, use supervised classifiers which require training data, not just for the task in general, but specific to the set of expressions used in the task. 
Although our approach is similar to that of BIBREF25, both in the range of methods used and in the goal of extracting certain multiword expressions regardless of morphosyntactic variation, there are two main differences. First, we use dictionaries, but extract entries automatically and do not manually annotate their type and variability. As a result, our methods rely only on the surface form of the expression taken from the dictionary. Second, we evaluate precision and recall in a more rigorous way, by using an evaluation corpus exhaustively annotated for PIEs. In addition, we do not put any restriction on the syntactic type of the expressions to be extracted, which BIBREF27, BIBREF28, BIBREF25, and BIBREF29 all do. Coverage of Idiom Inventories ::: Background Since our goal is developing a dictionary-based system for extracting potentially idiomatic expressions, we need to devise a proper method for evaluating such a system. This is not straightforward, even though the final goal of such a system is simple: it should extract all potentially idiomatic expressions from a corpus and nothing else, regardless of their sense and the form they are used in. The type of system proposed here hence has two aspects that can be evaluated: the dictionary that it is using as a resource for idiomatic expression, and the extractor component that finds idioms in a corpus. The difficulty here is that there is no undisputed and unambiguous definition of what counts as an idiom BIBREF30, as is the case with multiword expressions in general BIBREF12. Of course, a complete set of idiomatic expressions for English (or any other language) is impossible to get due to the broad and ever-changing nature of language. This incompleteness is exacerbated by the ambiguity problem: if we had a clear definition of idiom we could make an attempt of evaluating idiom dictionaries on their accuracy, but it is practically impossible to come up with a definition of idiom that leaves no room for ambiguity. This ambiguity, among others, creates a large grey area between clearly non-idiomatic phrases on the one hand (e.g. buy a house), and clear potentially idiomatic phrases on the other hand (e.g. buy the farm). As a consequence, we cannot empirically evaluate the coverage of the dictionaries. Instead, in this work, we will quantify the divergence between various idiom dictionaries and corpora, with regard to their idiom inventories. If they show large discrepancies, we take that to mean that either there is little agreement on definitions of idiom or the category is so broad that a single resource can only cover a small proportion. Conversely, if there is large agreement, we assume that idiom resources are largely reliable, and that there is consensus around what is, and what is not, an idiomatic expression. We use different idiom resources and assume that the combined set of resources yields an approximation of the true set of idioms in English. A large divergence between the idiom inventories of these resources would then suggest a low recall for a single resource, since many other idioms are present in the other resources. Conversely, if the idiom inventories largely overlap, that indicates that a single resource can already yield decent coverage of idioms in the English language. The results of the dictionary comparisons are in Section SECREF36. 
Coverage of Idiom Inventories ::: Selected Idiom Resources (Data and Method) We evaluate the quality of three idiom dictionaries by comparing them to each other and to three idiom corpora. Before we report on the comparison we first describe why we select and how we prepare these resources. We investigate the following six idiom resources: Wiktionary; the Oxford Dictionary of English Idioms (ODEI, BIBREF31); UsingEnglish.com (UE); the Sporleder corpus BIBREF10; the VNC dataset BIBREF9; and the SemEval-2013 Task 5 dataset BIBREF15. These dictionaries were selected because they are available in digital format. Wiktionary and UsingEnglish have the added benefit of being freely available. However, they are both crowdsourced, which means they lack professional editing. In contrast, ODEI is a traditional dictionary, created and edited by lexicographers, but it has the downside of not being freely available. For Wiktionary, we extracted all idioms from the category `English Idioms' from the English version of Wiktionary. We took the titles of all pages containing a dictionary entry and considered these idioms. Since we focus on multiword idiomatic expressions, we filtered out all single-word entries in this category. More specifically, since Wiktionary is a constantly changing resource, we used the 8,482 idioms retrieved on 10-03-2017, 15:30. We used a similar extraction method for UE, a web page containing freely available resources for ESL learners, including a list of idioms. We extracted all idioms which have publicly available definitions, which numbered 3,727 on 10-03-2017, 15:30. Again, single-word entries and duplicates were filtered out. Concerning ODEI, all idioms from the e-book version were extracted, amounting to 5,911 idioms scraped on 13-03-2017, 10:34. Here we performed an extra processing step to expand idioms containing content in parentheses, such as a tough (or hard) nut (to crack). Using a set of simple expansion rules and some hand-crafted exceptions, we automatically generated all variants for this idiom, with good, but not perfect accuracy. For the example above, the generated variants are: {a tough nut, a tough nut to crack, a hard nut, a hard nut to crack}. The idioms in the VNC dataset are in the form verb_noun, e.g. blow_top, so they were manually expanded to a regular dictionary form, e.g. blow one's top before comparison. Coverage of Idiom Inventories ::: Method In many cases, using simple string-match to check overlap in idioms does not work, as exact comparison of idioms misses equivalent idioms that differ only slightly in dictionary form. Differences between resources are caused by, for example: inflectional variation (crossing the Rubicon — cross the Rubicon); variation in scope (as easy as ABC — easy as ABC); determiner variation (put the damper on — put a damper on); spelling variation (mind your p's and q's — mind your ps and qs); order variation (call off the dogs — call the dogs off); and different conventions for placeholder words (recharge your batteries — recharge one's batteries), where both your and one's can generalise to any possessive personal pronoun. These minor variations do not fundamentally change the nature of the idiom, and we should count these types of variation as belonging to the same idiom (see also BIBREF32, who devise a measure to quantify different types of variation allowed by specific MWEs). 
So, to get a good estimate of the true overlap between idiom resources, these variations need to be accounted for, which we do in our flexible matching approach. There is one other case of variation not listed above, namely lexical variation (e.g. rub someone up the wrong way - stroke someone the wrong way). We do not abstract over this, since we consider lexical variation to be a more fundamental change to the nature of the idiom. That is, a lexical variant is an indicator of the coverage of the dictionary, where the other variations are due to different stylistic conventions and do not indicate actual coverage. In addition, it is easy to abstract over the other types of variation in an NLP application, but this is not the case for lexical variation. The overlap counts are estimated by abstracting over all variations except lexical variation in a semi-automatic manner, using heuristics and manual checking. Potentially overlapping idioms are selected using the following set of heuristics: whether an idiom from one resource is a substring (including gaps) of an idiom in the other resource, whether the words of an idiom form a subset of the words of an idiom in the other resource, and whether there is an idiom in the other resource which has a Levenshtein ratio of over 0.8. The Levenshtein ratio is an indicator of the Levenshtein distance between the two idioms relative to their length. These potential matches are then judged manually on whether they are really forms of the same idiom or not (a code sketch of these heuristics is given below). Coverage of Idiom Inventories ::: Results The results of using exact string matching to quantify the overlap between the dictionaries are illustrated in Figure FIGREF37. Overlap between the three dictionaries is low. A possible explanation for this lies with the different nature of the dictionaries. Oxford is a traditional dictionary, created and edited by professional lexicographers, whereas Wiktionary is a crowdsourced dictionary open to everyone, and UsingEnglish is similar, but focused on ESL-learners. It is likely that these different origins result in different idiom inventories. Similarly, we would expect that the overlap between a pair of traditional dictionaries, such as the ODEI and the Penguin Dictionary of English Idioms BIBREF33, would be significantly higher. It should also be noted, however, that comparisons between more similar dictionaries also found relatively little overlap (BIBREF34; BIBREF35). A counterpoint is provided by BIBREF36, who quantifies coverage of verb-particle constructions in three different dictionaries and finds large overlap – perhaps because verb-particle constructions are a more restricted class. As noted previously, using exact string matching is a very limited approach to calculating overlap. Therefore, we used heuristics and manual checking to get more precise numbers, as shown in Table TABREF39, which also includes the three corpora in addition to the three dictionaries. As the manual checking only involved judging similar idioms found in pairs of resources, we cannot calculate three-way overlap as in Figure FIGREF37. The counts of the pair-wise overlap between dictionaries differ significantly between the two methods, which serves to illustrate the limitations of using only exact string matching and the necessity of using more advanced methods and manual effort. Several insights can be gained from the data in Table TABREF39. The relation between Wiktionary and the SemEval corpus is obvious (cf. Section SECREF12), given the 96.92% coverage.
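The candidate selection heuristics referred to above can be sketched as follows. Note that this is only an illustration: difflib's SequenceMatcher ratio is used here as a stand-in for the Levenshtein ratio, and in the actual procedure every flagged pair is additionally judged manually.

```python
from difflib import SequenceMatcher

def is_gapped_substring(short, long):
    """True if all words of `short` occur in `long`, in the same order,
    possibly with gaps (membership tests consume the iterator left to right)."""
    it = iter(long.split())
    return all(word in it for word in short.split())

def candidate_match(idiom_a, idiom_b, threshold=0.8):
    """Flag a pair of idioms as a potential match for manual checking."""
    a, b = idiom_a.lower(), idiom_b.lower()
    if is_gapped_substring(a, b) or is_gapped_substring(b, a):
        return True
    if set(a.split()) <= set(b.split()) or set(b.split()) <= set(a.split()):
        return True
    # string similarity stand-in for the Levenshtein ratio
    return SequenceMatcher(None, a, b).ratio() >= threshold

print(candidate_match("crossing the Rubicon", "cross the Rubicon"))  # True
print(candidate_match("call off the dogs", "call the dogs off"))     # True
```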
For the other dictionary-corpus pairs, the coverage increases proportionally with the size of the dictionary, except in the case of UsingEnglish and the Sporleder corpus. The proportional increase indicates no clear qualitative differences between the dictionaries, i.e. one does not have a significantly higher percentage of non-idioms than the other, when compared to the corpora. Generally, overlap between dictionaries and corpora is low: the two biggest, ODEI and Wiktionary have only around 30% overlap, while the dictionaries also cover no more than approximately 70% of the idioms used in the various corpora. Overlap between the three corpora is also extremely low, at below 5%. This is unsurprising, since a new dataset is more interesting and useful when it covers a different set of idioms than used in an existing dataset, and thus is likely constructed with this goal in mind. Corpus Annotation In order to evaluate the PIE extraction methods developed in this work (Section SECREF6), we exhaustively annotate an evaluation corpus with all instances of a pre-defined set of PIEs. As part of this, we come up with a workable definition of PIEs, and measure the reliability of PIE annotation by inter-annotator agreement. Assuming that we have a set of idioms, the main problem of defining what is and what is not a potentially idiomatic expression is caused by variation. In principle, potentially idiomatic expression is an instance of a phrase that, when seen without context, could have either an idiomatic or a literal meaning. This is clearest for the dictionary form of the idiom, as in Example SECREF5. Literal uses generally allow all kinds of variation, but not all of these variations allow a figurative interpretation, e.g. Example SECREF5. However, how much variation an idiom can undergo while retaining its figurative interpretation is different for each expression, and judgements of this might vary from one speaker to the other. An example of this is spill the bean, a variant of spill the beans, in Example SECREF5 judged by BIBREF21 as being highly questionable. However, even here a corpus example can be found containing the same variant used in a figurative sense (Example SECREF5). As such, we assume that we cannot know a priori which variants of an expression allow a figurative reading, and are thus a potentially idiomatic expression. Therefore we consider every possible morpho-syntactic variation of an idiom a PIE, regardless of whether it actually allows a figurative reading. We believe the boundaries of this variation can only be determined based on corpus evidence, and even then they are likely variable. Note that a similar question is tackled by BIBREF26, when they establish the boundary between a `literal reading of a VMWE' and a `coincidental co-occurrence'. BIBREF26's answer is similar to ours, in that they count something as a literal reading of a VMWE if it `the same or equivalent dependencies hold between [the expression]'s components as in its canonical form'. . John kicked the bucket last night. . * The bucket, John kicked last night. . ?? Azin spilled the bean. (from BIBREF21) . 
Alba reveals Fantastic Four 2 details The Invisible Woman actress spills the bean on super sequel (from ukWaC) Corpus Annotation ::: Evaluating the Extraction Methods Evaluating the extraction methods is easier than evaluating dictionary coverage, since the goal of the extraction component is more clearly delimited: given a set of PIEs from one or more dictionaries, extract all occurrences of those PIEs from a corpus. Thus, rather than dealing with the undefined set of all PIEs, we can work with a clearly defined and finite set of PIEs from a dictionary. Because we have a clearly defined set of PIEs, we can exhaustively annotate a corpus for PIEs, and use that annotated corpus for automatic evaluation of extraction methods using recall and precision. This allows us to facilitate and speed up annotation by pre-extracting sentences possibly containing a PIE. After the corpus is annotated, the precision and recall can be easily estimated by comparing the extracted PIE instances to those marked in the corpus. The details of the corpus selection, dictionary selection, extraction heuristic and annotation procedure are presented in Section SECREF46, and the details and results of the various extraction methods are presented in Section SECREF6. Corpus Annotation ::: Base Corpus and Idiom Selection As a base corpus, we use the XML version of the British National Corpus BIBREF37, because of its size, variety, and wide availability. The BNC is pre-segmented into s-units, which we take to be sentences, w-units, which we take to be words, and c-units, punctuation. We then extract the text of all w-units and c-units. We keep the sentence segmentation, resulting in a set of plain text sentences. All sentences are included, except for sentences containing <gap> elements, which are filtered out. These <gap> elements indicate places where material from the original has been left out, e.g. for anonymisation purposes. Since this can result in incomplete sentences that cannot be parsed correctly, we filter out sentences containing these gaps. We use only the written part of the BNC. From this, we extract a set of documents with the aim of having as much genre variation as possible. To achieve this, we select the first document in each genre, as defined by the classCode attribute (e.g. nonAc, commerce, letters). The resulting set of 46 documents makes up our base corpus. Note that these documents vary greatly in size, which means the resulting corpus is varied, but not balanced in terms of size (Table TABREF43). The documents are split across a development and test set, as specified at the end of Section SECREF46. We exclude documents with IDs starting with A0 from all annotation and evaluation procedures, as these were used during development of the extraction tool and annotation guidelines. As for the set of potentially idiomatic expressions, we use the intersection of the three dictionaries, Wiktionary, Oxford, and UsingEnglish. Based on the assumption that, if all three resources include a certain idiom, it must unquestionably be an idiom, we choose the intersection (also see Figure FIGREF37). This serves to exclude questionable entries, like at all, which is in Wiktionary. The final set of idioms used for these experiments consists of 591 different multiword expressions. Although we aim for wide coverage, this is a necessary trade-off to ensure quality. At the same time, it leaves us with a set of idiom types that is approximately ten times larger than present in existing corpora. 
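As an illustration of the corpus preparation described above, the sketch below shows how the sentence extraction and genre selection could be implemented. It is a rough approximation, not the actual preprocessing code: element names are assumed to follow BNC-XML conventions (s-units as sentences, w- and c-elements as tokens, gap elements for omitted material), and the genre-selection helper takes a precomputed document-to-genre mapping rather than reading the classCode information from the document headers.

```python
from xml.etree import ElementTree as ET

def extract_sentences(bnc_xml_file):
    """Yield plain-text sentences from one BNC-XML document, skipping any
    s-unit (sentence) that contains a <gap> element."""
    tree = ET.parse(bnc_xml_file)
    for s_unit in tree.iter("s"):
        if s_unit.find(".//gap") is not None:   # omitted material -> skip sentence
            continue
        # w-elements are words, c-elements punctuation; in BNC-XML their text
        # typically carries its own trailing space, so concatenation suffices.
        tokens = [el.text for el in s_unit.iter()
                  if el.tag in ("w", "c") and el.text]
        yield "".join(tokens).strip()

def pick_one_per_genre(doc_genres):
    """Given a precomputed {document_id: genre_code} mapping (derived from the
    classCode information), keep the first document per genre."""
    chosen = {}
    for doc_id, genre in sorted(doc_genres.items()):
        chosen.setdefault(genre, doc_id)
    return sorted(chosen.values())
```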
The set of 591 idioms includes idioms with a large variety of syntactic patterns, of which the most frequent ones are shown in Table TABREF44. The statistics show that the types most prevalent in existing corpora, verb-noun and preposition-noun combinations, are indeed the most frequent ones, but that there is a sizeable minority of types that do not fall into those categories, including coordinated adjectives, coordinated nouns, and nouns with prepositional phrases. This serves to emphasise the necessity of not restricting corpora to a small set of syntactic patterns. Corpus Annotation ::: Extraction of PIE Candidates To annotate the corpus completely manually would require annotators to read the whole corpus, and cross-reference each sentence to a list of almost 600 PIEs, to check whether one of those PIEs occurs in a sentence. We do not consider this a feasible annotation settings, due to both the difficulty of recognising literal usages of idioms and the time cost needed to find enough PIEs, given their low overall frequency. As such, we use a pre-extraction step to present candidates for annotation to the human annotators. Given the corpus and the set of PIEs, we heuristically extract the PIE candidates as follows: given an idiomatic expression, extract every sentence which contains all the defining words of the idiom, in any form. This ensures that all possibly matching sentences get extracted, while greatly pruning the amount of sentences for annotators to look at. In addition, it allows us to present the heuristically matched PIE type and corresponding words to the annotators, which makes it much easier to judge whether something is a PIE or not. This also means that annotators never have to go through the full list of PIEs during the annotation process. Initially, the heuristic simply extracted any sentence containing all the required words, where a word is any of the inflectional variants of the words in the PIE, except for determiners and punctuation. This method produced large amounts of noise, that is, a set of PIE candidates with only a very low percentage of actual PIEs. This was caused by the presence of some highly frequent PIEs with very little defining lexical content, such as on the make, and in the running. For example, with the original method, every sentence containing the preposition on, and any inflectional form of the verb make was extracted, resulting in a huge number of non-PIE candidates. To limit the amount of noise, two restrictions were imposed. The first restrictions disallows word order variation for PIEs which do not contain a verb. The rationale behind this is that word order variation is only possible with PIEs like spill the beans (e.g. the beans were spilled), and not with PIEs like in the running (*the running in??). The second restriction is that we limit the number of words that can be inserted between the words of a PIE, but only for PIEs like on the make, and in the running, i.e. PIEs which only contain prepositions, determiners and a single noun. The number of intervening words was limited to three tokens, allowing for some variation, as in Example SECREF45, but preventing sentences like Example SECREF45 from being extracted. This restriction could result in the loss of some PIE candidates with a large number of intervening words. However, the savings in annotation time clearly outweigh the small loss in recall in this situation. . Either at New Year or before July you can anticipate a change in the everyday running of your life. 
(in the running - BNC - document CBC - sentence 458) . [..] if [he] hung around near the goal or in the box for that matter instead of running all over the show [..] (in the running - BNC - document J1C - sentence 1341) Corpus Annotation ::: Annotation Procedure The manual annotation procedure consists of three different phases (pilot, double annotation, single annotation), followed by an adjudication step to resolve conflicting annotations. Two things are annotated: whether something is a PIE or not, and if it is a PIE, which sense the PIE is used in. In the first phase (0-100-*), we randomly select hundred of the 2,239 PIE candidates which are then annotated by three annotators. All annotators have a good command of English, are computational linguists, and familiar with the subject. The annotators include the first and last author of this paper. The annotators were provided with a short set of guidelines, of which the main rule-of-thumb for labelling a phrase as a PIE is as follows: any phrase is a PIE when it contains all the words, with the same part-of-speech, and in the same grammatical relations as in the dictionary form of the PIE, ignoring determiners. For sense annotation, annotators were to mark a PIE as idiomatic if it had a sense listed in one of the idiom dictionaries, and as literal if it had a meaning that is a regular composition of its component words. For cases which were undecidable due to lack of context, the ?-label was used. The other-label was used as a container label for all cases in which neither the literal or idiomatic sense was correct (e.g. meta-linguistic uses and embeddings in metaphorical frames, see also Section SECREF10). The first phase of annotation serves to bring to light any inconsistencies between annotators and fill in any gaps in the annotation guidelines. The resulting annotations already show a reasonably high agreement of 0.74 Fleiss' Kappa. Table TABREF48 shows annotation details and agreement statistics for all three phases. The annotation tasks suffixed by -PIE indicate agreement on PIE/non-PIE annotation and the tasks suffixed by -sense indicate agreement on sense annotation for PIEs. In the second phase of annotation (100-600-* & 600-1100-*), another 1000 of the 2239 PIE candidates are selected to be annotated by two pairs of annotators. This shows very high agreement, as shown in Table TABREF48. This is probably due to the improvement in guidelines and the discussion following the pilot round of annotation. The exception to this are the somewhat lower scores for the 600-1100-sense annotation task. Adjudication revealed that this is due almost exclusively because of a different interpretation of the literal and idiomatic senses of a single PIE type: on the ground. Excluding this PIE type, Fleiss' Kappa increases from 0.63 to 0.77. Because of the high agreement on PIE annotation, we deem it sufficient for the remainder (1108 candidates) to be annotated by only the primary annotator in the third phase of annotation (1100-2239-*). The reliability of the single annotation can be checked by comparing the distribution of labels to the multi-annotated parts. This shows that it falls clearly within the ranges of the other parts, both in the proportion of PIEs and idiomatic senses (see Table TABREF49). The single-annotated part has 49.0% PIEs, which is only 4 percentage points above the 44.7% PIEs in the multi-annotated parts. The proportion of idioms is just 2 percentage points higher, with 55.9% versus 53.9.%. 
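For reference, the Fleiss' Kappa figures reported here can be computed with the standard formulation sketched below. This is generic agreement code, not the authors' implementation, and the toy ratings in the example are invented.

```python
from collections import Counter

def fleiss_kappa(ratings, categories):
    """ratings: one list of labels per item, one label per annotator.
    Assumes every item is labelled by the same number of annotators."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    category_totals = Counter()
    p_bar = 0.0
    for item in ratings:
        counts = Counter(item)
        category_totals.update(counts)
        # agreement for this item: proportion of agreeing annotator pairs
        p_bar += sum(c * (c - 1) for c in counts.values()) / (n_raters * (n_raters - 1))
    p_bar /= n_items
    # chance agreement from overall label proportions
    p_e = sum((category_totals[c] / (n_items * n_raters)) ** 2 for c in categories)
    return (p_bar - p_e) / (1 - p_e)

# Invented toy example: three annotators labelling four PIE candidates.
ratings = [["PIE", "PIE", "PIE"],
           ["non-PIE", "non-PIE", "non-PIE"],
           ["PIE", "PIE", "non-PIE"],
           ["non-PIE", "non-PIE", "non-PIE"]]
print(round(fleiss_kappa(ratings, ["PIE", "non-PIE"]), 2))  # 0.66 for this toy example
```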
Although inter-annotator agreement was high, there was still a significant number of cases in the triple and double annotated PIE candidate sets where not all annotators agreed. These cases were adjudicated through discussion by all annotators, until they were in agreement. In addition, all PIE candidates which initially received the ?-label (unclear or undecidable) for sense or PIE were resolved in the same manner. In the adjudication procedure, annotators were provided with additional context on each side of the idiom, in contrast to the single sentence provided during the initial annotation. The main reason to do adjudication, rather than simply discarding all candidates for which there was disagreement, was that we expected exactly those cases for which there are conflicting annotations to be the most interesting ones, since having non-standard properties would cause the annotations to diverge. Examples of such interesting non-standard cases are at sea as part of a larger satirical frame in Example SECREF46 and cut the mustard in Example SECREF46 where it is used in a headline as wordplay on a Cluedo character. . The bovine heroine has connections with Cowpeace International, and deals with a huge treacle slick at sea. (at sea - BNC - document CBC - sentence 13550) . Why not cut the Mustard? [..] WADDINGTON Games's proposal to axe Reverend Green from the board game Cluedo is a bad one. (cut the mustard - BNC - document CBC - sentence 14548) We split the corpus at the document level. The corpus consists of 45 documents from the BNC, and we split it in such a way that the development set has 1,112 candidates across 22 documents and the test set has 1,127 candidates from 23 documents. Note that this means that the development and test set contain different genres. This ensures that we do not optimise our systems on genre-specific aspects of the data. Dictionary-based PIE Extraction We propose and implement four different extraction methods, of differing complexities: exact string match, fuzzy string match, inflectional string match, and parser-based extraction. Because of the absence of existing work on this task, we compare these methods to each other, where the more basic methods function as baselines. More complex methods serve to shine light on the difficulty of the PIE extraction task; if simple methods already work sufficiently well, the task is not as hard as expected, and vice versa. Below, each of the extraction methods is presented and discussed in detail. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Exact String Match This is, very simply, extracting all instances of the exact dictionary form of the PIE, from the tokenized text of the corpus. Word boundaries are taken into account, so at sea does not match `that seawater'. As a result, all inflectional and other variants of the PIE are ignored. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Fuzzy String Match Fuzzy string match is a rough way of dealing with morphological inflection of the words in a PIE. We match all words in the PIE, taking into account word boundaries, and allow for up to 3 additional letters at the end of each word. These 3 additional characters serve to cover inflectional suffixes. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Inflectional String Match In inflectional string match, we aim to match all inflected variations of a PIE. 
This is done by generating all morphological variants of the words in a PIE, generating all combinations of those words, and then using exact string match as described earlier. Generating morphological variations consists of three steps: part-of-speech tagging, morphological analysis, and morphological reinflection. Since inflectional variation only applies to verbs and nouns, we use the Spacy part-of-speech tagger to detect the verbs and nouns. Then, we apply the morphological analyser morpha to get the base, uninflected form of the word, and then use the morphological generation tool morphg to get all possible inflections of the word. Both tools are part of the Morph morphological processing suite BIBREF38. Note that the Morph tools depend on the part-of-speech tag in the input, so that a wrong PoS may lead to an incorrect set of morphological variants. For a PIE like spill the beans, this results in the following set of variants: $\lbrace $spill the bean, spills the bean, spilled the bean, spilling the bean, spill the beans, spills the beans, spilled the beans, spilling the beans$\rbrace $. Since we generate up to 2 variants for each noun, and up to 4 variants for each verb, the number of variants for PIEs containing multiple verbs and nouns can get quite large. On average, 8 additional variants are generated for each potentially idiomatic expression. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Additional Steps For all string match-based methods, ways to improve performance are implemented, to make them as competitive as possible. Rather than doing exact string matching, we also allow words to be separated by something other than spaces, e.g. nuts-and-bolts for nuts and bolts. Additionally, there is an option to take into account case distinctions. With the case-sensitive option, case is preserved in the idiom lists, e.g. coals to Newcastle, and the string matching is done in a case-sensitive manner. This increases precision, e.g. by avoiding PIEs as part of proper names, but also comes at a cost of recall, e.g. for sentence-initial PIEs. Thirdly, there is the option to allow for a certain number of intervening words between each pair of words in the PIE. This should improve recall, at the cost of precision. For example, this would yield the true positive make a huge mountain out of a molehill for make a mountain out of a molehill, but also false positives like have a smoke and go for have a go. A third shared property of the string-based methods is the processing of placeholders in PIEs. PIEs containing possessive pronoun placeholders, such as one's and someone's are expanded. That is, we remove the original PIE, and add copies of the PIE where the placeholder is replaced by one of the possessive personal pronouns. For example, a thorn in someone's side is replaced by a thorn in {my, your, his, ...} side. In the case of someone's, we also add a wildcard for any possessively used word, i.e. a thorn in —'s side, to match e.g. a thorn in Google's side. Similarly, we make sure that PIE entries containing —, such as the mother of all —, will match any word for — during extraction. We do the same for someone, for which we substitute objective pronouns. For one, this is not possible, since it is too hard to distinguish from the one used as a number. 
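A condensed sketch of the exact and fuzzy string matchers and of the possessive-placeholder expansion is given below. The regular expressions are illustrative approximations of the rules described above (up to three trailing letters per word for fuzzy matching, a limited number of intervening words, hyphens as possible separators), not the exact implementation; the wildcard for arbitrary possessors and the case-sensitivity option are left out.

```python
import re

POSSESSIVES = ["my", "your", "his", "her", "its", "our", "their"]

def expand_placeholders(pie):
    """Expand one's/someone's placeholders into concrete possessive pronouns,
    e.g. "a thorn in someone's side" -> "a thorn in my side", etc."""
    if "one's" in pie:
        return [re.sub(r"(some)?one's", p, pie) for p in POSSESSIVES]
    return [pie]

def pie_pattern(pie, fuzzy=False, max_gap=0):
    """Build a regex for one PIE form. With fuzzy=True, each word may carry up
    to three extra trailing letters (a rough stand-in for inflection); max_gap
    allows a limited number of intervening words. Matching is lowercase only."""
    suffix = r"[a-z]{0,3}" if fuzzy else ""
    # between two PIE words: spaces or hyphens, plus up to max_gap extra words
    sep = r"[\s-]+(?:\w+[\s-]+){0,%d}" % max_gap
    words = [re.escape(w) + suffix for w in pie.lower().split()]
    return re.compile(r"\b" + sep.join(words) + r"\b")

# Exact match misses the inflected form, fuzzy match finds it:
sentence = "he spilled the beans about the merger"
print(bool(pie_pattern("spill the beans").search(sentence)))              # False
print(bool(pie_pattern("spill the beans", fuzzy=True).search(sentence)))  # True
```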
Dictionary-based PIE Extraction ::: Parser-Based Extraction Methods Parser-based extraction is potentially the widest-coverage extraction method, with the capacity to extract both morphological and syntactic variants of the PIE. This should be robust against the most common modifications of the PIE, e.g. through word insertions (spill all the beans), passivisation (the beans were spilled), and abstract over articles (spill beans). In this method, PIEs are extracted using the assumption that any sentence which contains the lemmata of the words in the PIE, in the same dependency relations as in the PIE, contains an instance of the PIE type in question. More concretely, this means that the parse of the sentence should contain the parse tree of the PIE as a subtree. This is illustrated in Figure FIGREF57, which shows the parse tree for the PIE lose the plot, parsed without context. Note that this is a subtree of the parse tree for the sentence `you might just lose the plot completely', which is shown in Figure FIGREF58. Since the sentence parse contains the parse of the PIE, we can conclude that the sentence contains an instance of that PIE and extract the span of the PIE instance. All PIEs are parsed in isolation, based on the assumption that all PIEs can be parsed, since they are almost always well-formed phrases. However, not all PIEs will be parsed correctly, especially since there is no context to resolve ambiguity. Errors tend to occur at the part-of-speech level, where, for example, verb-object combinations like jump ship and touch wood are erroneously tagged as noun-noun compounds. An analysis of the impact of parser error on PIE extraction performance is presented in Section SECREF73. Initially, we use the Spacy parser for parsing both the PIEs and the sentences. Next, the sentence is parsed, and the lemma of the top node of the parsed PIE is matched against the lemmata of the sentence parse. If a match is found, the parse tree of the PIE is matched against the subtree of the matching sentence parse node. If the whole PIE parse tree matches, the span ranging from the first PIE token to the last is extracted. This span can thus include words that are not directly part of the PIE's dictionary form, in order to account for insertions like ships were jumped for jump ship, or have a big heart for have a heart. During the matching, articles (a/an/the) are ignored, and passivisation is accounted for with a special rule. In addition, a number of special cases are dealt with. These are PIEs containing someone('s), something('s), one's, or —. These words are used in PIEs as placeholders for a generic possessor (someone's/something's/one's), generic object (someone/something), or any word of the right PoS (—). For someone's, and something's, we match any possessive pronoun, or (proper) noun + possessive marker. For one's, only possessive pronouns are matched, since this is a placeholder for reflexive possessors. For someone and something, any non-possessive pronoun or (proper) noun is matched. For — wildcards, any word can be matched, as long as it has the right relation to the right head. An additional challenge with these wildcards is that PIEs containing them cannot be parsed, e.g. too — for words is not parseable. This is dealt with by substituting the — by a PoS-ambiguous word, such as fine, or back. 
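The core subtree-matching step of the parser-based method can be sketched with Spacy as follows. This is a simplified illustration that matches lemmas and dependency labels only; the article-skipping, passivisation and placeholder rules described above are omitted, and the model name is simply the standard small English model.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def phrase_root(phrase):
    """Parse a PIE in isolation and return the root token of its parse."""
    doc = nlp(phrase)
    return next(tok for tok in doc if tok.head is tok)

def subtree_matches(pie_tok, sent_tok):
    """True if the PIE parse rooted at pie_tok is contained in the sentence
    parse rooted at sent_tok, matching lemmas and dependency labels."""
    if pie_tok.lemma_.lower() != sent_tok.lemma_.lower():
        return False
    for pie_child in pie_tok.children:
        if not any(pie_child.dep_ == s_child.dep_
                   and subtree_matches(pie_child, s_child)
                   for s_child in sent_tok.children):
            return False
    return True

def contains_pie(pie, sentence):
    """True if any node of the sentence parse matches the PIE parse."""
    root = phrase_root(pie)
    return any(subtree_matches(root, tok) for tok in nlp(sentence))

# Expected to print True, provided the parser analyses the PIE and the
# sentence in the same way (which is exactly what in-context parsing helps with).
print(contains_pie("lose the plot", "You might just lose the plot completely."))
```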
Two optional features are added to the parser-based method with the goal of making it more robust to parser errors: generalising over dependency relation labels, and generalising over dependency relation direction. We expect this to increase recall at the cost of precision. In the first no labels setting, we match parts of the parse tree which have the same head lemma and the same dependent lemma, regardless of the relation label. An example of this is Figure FIGREF60, which has the wrong relation label between up and ante. If labels are ignored, however, we can still extract the PIE instance in Figure FIGREF61, which has the correct label. In the no directionality setting, relation labels are also ignored, and in addition the directionality of the relation is ignored, that is, we allow for the reversal of heads and dependents. This benefits performance in a case like Figure FIGREF62, which has stock as the head of laughing in a compound relation, whereas the parse of the PIE (Figure FIGREF63) has laughing as the head of stock in a dobj relation. Note that similar settings were implemented by BIBREF26, who detect literal uses of VMWEs using a parser-based method with either full labelled dependencies, unlabelled dependencies, or directionless unlabelled dependencies (which they call BagOfDeps). They find that recall increases when less restrictions on the dependencies are used, but that this does not hurt precision, as we would expect. However, we cannot draw too many conclusions from these results due to the small size of their evaluation set, which consists of just 72 literal VMWEs in total. Dictionary-based PIE Extraction ::: Parser-Based Extraction Methods ::: In-Context Parsing Since the parser-based method parses PIEs without any context, it often finds an incorrect parse, as for jump ship in Figure FIGREF65. As such, we add an option to the method that aims to increase the number of correct parses by parsing the PIE within context, that is, within a sentence. This can greatly help to disambiguate the parse, as in Figure FIGREF66. If the number of correct parses goes up, the recall of the extraction method should also increase. Naturally, it can also be the case that a PIE is parsed correctly without context, and incorrectly with context. However, we expect the gains to outweigh the losses. The challenge here is thus to collect example sentences containing the PIE. Since the whole point of this work is to extract PIEs from raw text, this provides a catch-22-like situation: we need to extract a sentence containing a PIE in order to extract sentences containing a PIE. The workaround for this problem is to use the exact string matching method with the dictionary form of the PIE and a very large plain text corpus to gather example sentences. By only considering the exact dictionary form we both simplify the finding of example sentences and the extraction of the PIE's parse from the sentence parse. In case multiple example sentences are found, the shortest sentence is selected, since we assume it is easiest to parse. This is also the reason we make use of very large corpora, to increase the likelihood of finding a short, simple sentence. The example sentence extraction method is modified in such a way that sentences where the PIE is used meta-linguistically in quotes, e.g. “the well-known English idiom `to spill the beans' has no equivalents in other languages”, are excluded, since they do not provide a natural context for parsing. 
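The selection of example sentences for in-context parsing might look like the sketch below. The quote-based filter for meta-linguistic uses is a crude stand-in for the actual exclusion criterion, and the corpus is assumed to be available as an iterable of plain-text sentences.

```python
QUOTE_CHARS = "\"'`\u2018\u2019\u201c\u201d"

def find_example_sentence(pie, corpus_sentences):
    """Return the shortest corpus sentence containing the exact dictionary form
    of the PIE. Sentences where the PIE sits right next to a quotation mark are
    skipped as likely meta-linguistic uses. Returns None if nothing is found,
    in which case the PIE is parsed without context."""
    candidates = []
    for sent in corpus_sentences:
        lowered = sent.lower()
        idx = lowered.find(pie.lower())
        if idx == -1:
            continue
        window = lowered[max(0, idx - 3):idx + len(pie) + 3]
        if any(q in window for q in QUOTE_CHARS):
            continue
        candidates.append(sent)
    return min(candidates, key=len) if candidates else None
```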
When no example sentence can be found in the corpus, we back-off to parsing the PIE without context. After a parse has been found for each PIE (i.e. with or without context), the method proceeds identically to the regular parser-based method. We make use of the combination of two large corpora for the extraction of example sentences: the English Wikipedia, and ukWaC BIBREF17. For the Wikipedia corpus, we use a dump (13-01-2016) of the English-language Wikipedia, and remove all Wikipedia markup. This is done using WikiExtractor. The resulting files still contain some mark-up, which is removed heuristically. The resulting corpus contains mostly clean, raw, untokenized text, numbering approximately 1.78 billion tokens. As for ukWaC, all XML-markup was removed, and the corpus is converted to a one-sentence-per-line format. UkWaC is tokenized, which makes it difficult for a simple string match method to find PIEs containing punctuation, for example day in, day out. Therefore, all spaces before commas, apostrophes, and sentence-final punctuation are removed. The resulting corpus contains approximately 2.05 billion tokens, making for a total of 3.83 billion tokens in the combined ukWaC and Wikipedia corpus. Dictionary-based PIE Extraction ::: Results In order to determine which of the methods described previously produces the highest quality extraction of potentially idiomatic expressions, we evaluate them, in various settings, on the corpus described in Section SECREF5. For parser-based extraction, systems with and without in-context parsing, ignoring labels, and ignoring directionality are tested. For the three string-based extraction methods, varying numbers of intervening words and case sensitivity are evaluated. Evaluation is done using the development set, consisting of 22 documents and 1112 PIE candidates, and the test set, which consists of 23 documents and 1127 PIE candidates. For each method the best set of parameters and/or options is determined using the development set, after which the best variant by F1-score of each method is evaluated on the test set. Since these documents in the corpus are exhaustively annotated for PIEs (see Section SECREF40), we can calculate true and false positives, and false negatives, and thus precision, recall and F1-score. The exact spans are ignored, because the spans annotated in the evaluation corpus are not completely reliable. These were automatically generated during candidate extraction, as described in Section SECREF45. Rather, we count an extraction as a true positive if it finds the correct PIE type in the correct sentence. Note that we judge the system with the highest F1-score to be the best-performing system, since it is a clear and objective criterion. However, when using the system in practice, the best performance depends on the goal. When used as a preprocessing step for PIE disambiguation, the system with the highest F1-score is perhaps the most suitable, but as a corpus building tool, one might want to sacrifice some precision for an increase in recall. This helps to get the most comprehensive annotation of PIEs possible, without overloading the annotators with false extractions (i.e. non-PIEs), by maintaining high precision. The results for each system on the development set are presented in Tables TABREF70 and TABREF71. Generally, results are in line with expectations: (the best) parse-based methods are better than (the best) string-based methods, and within string-based methods, inflectional matching works best. 
The same goes for the different settings: case-sensitivity increases precision at the cost of recall, allowing intervening words increases recall at the cost of precision, and the same goes for the no labels and no directionality options for parser-based extraction. Overall, in-context parser-based extraction works best, with an F1 of 88.54%, whereas fuzzy matching does very poorly. Within string-based methods, exact matching has the highest precision, but low recall. Fuzzy matching increases recall at a disproportionately large precision cost, whereas inflectional matching combines the best of both worlds and has high recall at a small loss in precision. For the parser-based system, it is notable that parsing idioms within context yields a clear overall improvement by greatly improving recall at a small cost in precision. We evaluate the best variant of each system, as determined by F1-score, on the test set. This gives us an indication of whether the system is robust enough, or was overfitted on the development data. Results on the test set are shown in Table TABREF72. On average, the results are lower than the results on the development set. The string-based methods perform clearly worse, with drops of about 4% F1-score for exact and inflectional match, and a large drop of almost 9% F1-score for fuzzy matching. The parser-based method, on the other hand, is more robust, with a small 0.59% increase in F1-score on the test set. Dictionary-based PIE Extraction ::: Analysis Broadly speaking, the PIE extraction systems presented above perform in line with expectations. It is nevertheless useful to see where the best-performing system misses out, and where improvements like in-context parsing help performance. We analyse the shortcomings of the in-context parser-based system by looking at the false positives and false negatives on the development set. We consider the output of the system with best overall performance, since it will provide the clearest picture. The system extracts 529 PIEs in total, of which 54 are false extractions (false positives), and it misses 69 annotated PIE instances (false negatives). Most false positives stem from the system's failure to capture nuances of PIE annotation. This includes cases where PIEs contain, or are part of, proper nouns (Example SECREF73), PIEs that are part of coordination constructions (Example SECREF73), and incorrect attachments (Example SECREF73). Among these errors, sentences containing proper nouns are an especially frequent problem. . Drama series include [..] airline security thrills in Cleared For Takeoff and Head Over Heels [..] (in the clear - BNC - document CBC - sentence 5177) . They prefer silk, satin or lace underwear in tasteful black or ivory. (in the black - BNC - document CBC - sentence 14673) . [..] `I saw this chap make something out of an ordinary piece of wood — he fashioned it into an exquisite work of art.' (out of the woods - BNC - document ABV - sentence 1300) The main cause of false negatives are errors made by the parser. In order to correctly extract a PIE from a sentence, both the PIE and the sentence have to be parsed correctly, or at least parsed in the same way. This means a missed extraction can be caused by a wrong parse for the PIE or a wrong parse for the sentence. These two error types form the largest class of false negatives. Since some PIE types are rather frequent, a wrong parse for a single PIE type can potentially lead to a large number of missed extractions. 
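The evaluation criterion described in the previous subsection (an extraction counts as a true positive if it finds the correct PIE type in the correct sentence, ignoring exact spans) is simple to state in code. The sketch below assumes that gold annotations and system output are both represented as sets of (PIE type, sentence id) pairs; the example pairs are invented.

```python
def evaluate(gold_pairs, extracted_pairs):
    """Both arguments are sets of (pie_type, sentence_id) tuples."""
    tp = len(gold_pairs & extracted_pairs)
    fp = len(extracted_pairs - gold_pairs)
    fn = len(gold_pairs - extracted_pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# A system that finds one of two gold instances plus one false extraction:
print(evaluate({("spill the beans", 12), ("have a go", 40)},
               {("spill the beans", 12), ("in the black", 7)}))
# (0.5, 0.5, 0.5)
```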
It is not surprising that the parser makes many mistakes, since idioms often have unusual syntactic constructions (e.g. come a cropper) and contain words where default part-of-speech tags lead to the wrong interpretation (e.g. round is a preposition in round the bend, not a noun or adjective). This is especially true when idioms are parsed without context, and hence this is where in-context parsing provides the largest benefit: the number of PIEs which are parsed incorrectly drops, which leads to F1-scores on those types going from 0% to almost 100% (e.g. in light of and ring a bell). Since parser errors are the main contributor to false negatives, hurting recall, we can observe that parsing idioms in context serves to benefit only recall, by 7 percentage points, at only a small loss in precision. We find that adding context mainly helps for parsing expressions which are structurally relatively simple, but still ambiguous, such as rub shoulders, laughing stock, and round the bend. Compare, for example, the parse trees for laughing stock in isolation and within the extracted context sentence in Figures FIGREF74 and FIGREF75. When parsed in isolation, the relation between the two words is incorrectly labelled as a compound relation, whereas in context it is correctly labelled as a direct object relation. Note, however, that for the most difficult PIEs, embedding them in a context does not solve the parsing problem: a syntactically odd phrase is hard to parse (e.g. for the time being), and a syntactically odd phrase in a sentence makes for a syntactically odd sentence that is still hard to parse (e.g. `London for the time being had been abandoned.'). Finding example sentences turned out not to be a problem, since appropriate sentences were found for 559 of 591 PIE types. An alternative method for reducing parser error is to use a different, better parser. The Spacy parser was mainly chosen for implementation convenience and speed, and there are parsers which have better performance, as measured on established parsing benchmarks. To investigate the effectiveness of this alternative, we used the Stanford Neural Dependency Parser BIBREF39 to extract PIEs in the regular parsing, in-context parsing and no labels settings. In all cases, using the Stanford parser yielded worse extraction performance than the Spacy parser. A possible explanation for why a supposedly better parser performs worse here is that parsers are optimised and trained to do well on established benchmarks, which consist of complete sentences, often from news texts. This does not necessarily correlate with parsing performance on short idiomatic phrases (or the sentences containing them). As such, we cannot assume that better overall parsing performance implies better PIE extraction performance. It should be noted that, when assessing the quality of PIE extraction performance, the parser-based methods are sensitive to specific PIE types. That is, if a single PIE type is parsed incorrectly, then it is highly probable that all instances of that type are missed. If this type is also highly frequent, a small change in actual performance yields a large change in evaluation scores. Our goal is to have a PIE extraction system that is robust across all PIE types, and thus the current evaluation setting does not align exactly with our aim. Splitting out performance per PIE type reveals whether there is indeed a large variance in performance across types.
Table TABREF76 shows the 25 most frequent PIE types in the corpus, and the performance of the in-context-parsing-based system on each. Except two cases (in the black and round the bend), we see that the performance is in the 80–100% range, even showing perfect performance on the majority of types. For none of the types do we see low precision paired with high recall, which indicates that the parser never matches a highly frequent non-PIE phrase. For the system with the no labels and no-directionality options (per-type numbers not shown here), however, this does occur. For example, ignoring the labels for the parse of the PIE have a go leads to the erroneous matching of many sentences containing a form of have to go, which is highly frequent, thus leading to a large drop in precision. Although performance is stable across the most frequent types, among the less frequent types it is more spotty. This hurts overall performance, and there are potential gains in mitigating the poor performance on these types, such as for the time being. At the same time, the string matching methods show much more stable performance across types, and some of them do so with very high precision. As such, a combination of two such methods could boost performance significantly. If we use a high-precision string match-based method, such as the exact string match variant with a precision of 97.35%, recall could be improved for the wrongly parsed PIE types, without a significant loss of precision. We experiment with two such combinations, by simply taking the union of the sets of extracted idioms of both systems, and filtering out duplicates. Results are shown in Table TABREF77. Both combinations show the expected effect: a clear gain in recall at a minimal loss in precision. Compared to the in-context-parsing-based system, the combination with exact string matching yields a gain in recall of over 6%, and the combination with inflectional string matching yields an even bigger gain of almost 8%, at precision losses of 0.6% and 0.8%, respectively. This indicates that the systems are very much complementary in the PIEs they extract. It also means that, when used in practice, combining inflectional string matching and parse-based extraction is the most reliable configuration. Conclusions and Outlook We present an in-depth study on the automatic extraction of potentially idiomatic expressions based on dictionaries. The purpose of automatic dictionary-based extraction is, on the one hand, to function as a pre-extraction step in the building of a large idiom-annotated corpus. On the other hand, it can function as part of an idiom extraction system when combined with a disambiguation component. In both cases, the ultimate goal is to improve the processing of idiomatic expressions within NLP. This work consists of three parts: a comparative evaluation of the coverage of idiom dictionaries, the annotation of a PIE corpus, and the development and evaluation of several dictionary-based PIE extraction methods. In the first part, we present a study of idiom dictionary coverage, which serves to answer the question of whether a single idiom dictionary, or a combination of dictionaries, can provide good coverage of the set of all English idioms. Based on the comparison of dictionaries to each other, we estimate that the overlap between them is limited, varying from 20% to 55%, which indicates a large divergence between the dictionaries. 
This can be explained by the fact that idioms vary widely by register, genre, language variety, and time period. In our case, it is also likely that the divergence is caused partly by the gap between crowdsourced dictionaries on the one hand, and a dictionary compiled by professional lexicographers on the other. Given these factors, we can conclude that a single dictionary cannot provide even close to complete coverage of English idioms, but that by combining dictionaries from various sources, significant gains can be made. Since `English idioms' are a diffuse and constantly changing set, we have no gold standard to compare to. As such, we conclude that multiple dictionaries should be used when possible, but that we cannot say anything definitive about the coverage of dictionaries with regard to the complete set of English idioms (which can only be approximated in the first place). A more comprehensive comparison of idiom resources could be made in the future by using more advanced automatic methods for matching, for example by using BIBREF32's method for measuring expression variability. This would make it easier to evaluate a larger number of dictionaries, since no manual effort would be required. In the second part, we experiment with the exhaustive annotation of PIEs in a corpus of documents from the BNC. Using a set of 591 PIE types, much larger and more varied than in existing resources, we show that it is very much possible to establish a working definition of PIE that allows for a large amount of variation, while still being useful for reliable annotation. This resulted in high inter-annotator agreement, ranging from 0.74 to 0.91 Fleiss' Kappa. This means that we can build a resource to evaluate a wide-coverage idiom extraction system with relatively little effort. The final corpus of PIEs with sense annotations is publicly available and consists of 2,239 PIE candidates, of which 1,050 are actual PIE instances, covering 278 different PIE types. Finally, several methods for the automatic extraction of PIE instances were developed and evaluated on the annotated PIE corpus. We tested methods of differing complexity, from simple string match to dependency parse-based extraction. Comparison of these methods revealed that the more computationally complex method, parser-based extraction, works best. Parser-based extraction is especially effective in capturing a larger amount of variation, but is less precise than string-based methods, mostly because of parser error. The best overall setting of this method, which parses idioms within context, yielded an F1-score of 89.13% on the test set. Parser error can be partly compensated for by combining the parse-based method and the inflectional string match method, which yields an F1-score of 92.01% (on the development set). This aligns well with the findings by BIBREF27, who found that combining simpler and more complex methods improves over just using a simple method in the case of extracting verb-particle constructions. This level of performance means that we can use the tool in corpus building. It greatly reduces the amount of manual extraction effort involved, while still maintaining a high level of recall. We make the source code for the different systems publicly available. Note that, although used here in the context of PIE extraction, our methods are equally applicable to other phrase extraction tasks, for example the extraction of light-verb constructions, metaphoric constructions, collocations, or any other type of multiword expression (cf.
BIBREF27, BIBREF25, BIBREF26). Similarly, our method can be conceived as a blueprint and extended to languages other than English. For this to be possible, for any given new language one would need a list of target expressions and, in the case of the parser-based method, a reliable syntactic parser. If this is not the case, the inflectional matching method can be used, which requires only a morphological analyser and generator. Obviously, for languages that are morphologically richer than English, one would need to develop strategies aimed at controlling non-exact matches, so as to enhance recall without sacrificing precision. Previous work on Italian, for example, has shown the feasibility of achieving such balance through controlled pattern matching BIBREF40. Languages that are typologically very different from English would obviously require a dedicated approach for the matching of PIEs in corpora, but the overall principles of extraction, using language-specific tools, could stay the same. Currently, no corpora containing annotation of PIEs exist for languages other than English. However, the PARSEME corpus BIBREF19 already contains idioms (only idiomatic readings) for many languages and would only need annotation of literal usages of idioms to make up a set of PIEs. Paired with the Universal Dependencies project BIBREF41, which increasingly provides annotated data as well as processing tools for an ever growing number of languages, this seems an excellent starting point for creating PIE resources in multiple languages.
Q: What dictionaries are used for automatic extraction of PIEs? Text: Introduction Idiomatic expressions pose a major challenge for a wide range of applications in natural language processing BIBREF0. These include machine translation BIBREF1, BIBREF2, semantic parsing BIBREF3, sentiment analysis BIBREF4, and word sense disambiguation BIBREF5. Idioms show significant syntactic and morphological variability (e.g. beans being spilled for spill the beans), which makes them hard to find automatically. Moreover, their non-compositional nature makes idioms really hard to interpret, because their meaning is often very different from the meanings of the words that make them up. Hence, successful systems need not only be able to recognise idiomatic expressions in text or dialogue, but they also need to give a proper interpretation to them. As a matter of fact, current language technology performs badly on idiom understanding, a phenomenon that perhaps has not received enough attention. Nearly all current language technology used in NLP applications is based on supervised machine learning. This requires large amounts of labelled data. In the case of idiom interpretation, however, only small datasets are available. These contain just a couple of thousand idiom instances, covering only about fifty different types of idiomatic expressions. In fact, existing annotated corpora tend to cover only a small set of idiom types, comprising just a few syntactic patterns (e.g., verb-object combinations), of which a limited number of instances are extracted from a large corpus. This is not surprising as preparing and compiling such corpora involves a large amount of manual extraction work, especially if one wants to allow for form variation in the idiomatic expressions (for example, extracting cooking all the books for cook the books). This work involves both the crafting of syntactic patterns to match potential idiomatic expressions and the filtering of false extractions (non-instances of the target expression e.g. due to wrong parses), and increases with the amount of idiom types included in the corpus (which, in the worst case, means an exponential increase in false extractions). Thus, building a large corpus of idioms, especially one that covers many types in many syntactic constructions, is costly. If a high-precision, high-recall system can be developed for the task of extracting the annotation candidates, this cost will be greatly reduced, making the construction of a large corpus much more feasible. The variability of idioms has been a significant topic of interest among researchers of idioms. For example, BIBREF6 investigates the internal and external modification of a set of idioms in a large English corpus, whereas BIBREF7, quantifies and classifies the variation of a set of idioms in a large corpus of Dutch, setting up a useful taxonomy of variation types. Both find that, although idiomatic expressions mainly occur in their dictionary form, there is a significant minority of idiom instances that occur in non-dictionary variants. Additionally, BIBREF8 show that idiom variants retain their idiomatic meaning more often and are processed more easily than previously assumed. This emphasises the need for corpora covering idiomatic expressions to include these variants, and for tools to be robust in dealing with them. 
As such, the aim of this article is to describe methods and provide tools for constructing larger corpora annotated with a wider range of idiom types than currently in existence due to the reduced amount of manual labour required. In this way we hope to stimulate further research in this area. In contrast to previous approaches, we want to catch as many idiomatic expressions as possible, and we achieve this by casting a wide net, that is, we consider the widest range of possible idiom variants first and then filter out any bycatch in a way that requires the least manual effort. We expect research will benefit from having larger corpora by improving evaluation quality, by allowing for the training of better supervised systems, and by providing additional linguistic insight into idiomatic expressions. A reliable method for extracting idiomatic expressions is not only needed for building an annotated corpus, but can also be used as part of an automatic idiom processing pipeline. In such a pipeline, extracting potentially idiomatic expressions can be seen as a first step before idiom disambiguation, and the combination of the two modules then functions as an complete idiom extraction system. The main research question that we aim to answer in this article is whether dictionary-based extraction of potentially idiomatic expressions is robust and reliable enough to facilitate the creation of wide-coverage sense-annotated idiom corpora. By answering this question we make several contributions to research on multiword expressions, in particular that of idiom extraction. Firstly, we provide an overview of existing research on annotating idiomatic expressions in corpora, showing that current corpora cover only small sets of idiomatic types (Section SECREF3). Secondly, we quantify the coverage and reliability of a set of idiom dictionaries, demonstrating that there is little overlap between resources (Section SECREF4). Thirdly, we develop and release an evaluation corpus for extracting potentially idiomatic expressions from text (Section SECREF5). Finally, various extraction systems and combinations thereof are implemented, made available to the research community, and evaluated empirically (Section SECREF6). New Terminology: Potentially Idiomatic Expression (PIE) The ambiguity of phrases like wake up and smell the coffee poses a terminological problem. Usually, these phrases are called idiomatic expressions, which is suitable when they are used in an idiomatic sense, but not so much when they are used in a literal sense. Therefore, we propose a new term: potentially idiomatic expressions, or PIEs for short. The term potentially idiomatic expression refers to those expressions which can have an idiomatic meaning, regardless of whether they actually have that meaning in a given context. So, see the light is a PIE in both `After another explanation, I finally saw the light' and `I saw the light of the sun through the trees', while it is an idiomatic expression in the first context, and a literal phrase in the latter context. The processing of PIEs involves three main challenges: the discovery of (new) PIE types, the extraction of instances of known PIE types in text, and the disambiguation of PIE instances in context. Here, we propose calling the discovery task simply PIE discovery, the extraction task simply PIE extraction, and the disambiguation task PIE disambiguation. Note that these terms contrast with the terms used in existing research. 
There, the discovery task is called type-based idiom detection and the disambiguation task is called token-based idiom detection (cf. BIBREF10, BIBREF11), although this usage is not always consistent. Because these terms are very similar, they are potentially confusing, and that is why we propose novel terminology. Other terminology comes from literature on multiword expressions (MWEs) more generally, i.e. not specific to idioms. Here, the task of finding new MWE types is called MWE discovery and finding instances of known MWE types is called MWE identification BIBREF12. Note, however, that MWE identification generally consists of finding only the idiomatic usages of these types (e.g. BIBREF13). This means that MWE identification consists of both the extraction and disambiguation tasks, performed jointly. In this work, we propose to split this into two separate tasks, and we are concerned only with the PIE extraction part, leaving PIE disambiguation as a separate problem. Related Work This section is structured so as to reflect the dual contribution of the present work. First, we discuss existing resources annotated for idiomatic expressions. Second, we discuss existing approaches to the automatic extraction of idioms. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms There are four sizeable sense-annotated PIE corpora for English: the VNC-Tokens Dataset BIBREF9, the Gigaword dataset BIBREF14, the IDIX Corpus BIBREF10, and the SemEval-2013 Task 5 dataset BIBREF15. An overview of these corpora is presented in Table TABREF7. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: VNC-Tokens The VNC-Tokens dataset contains 53 different PIE types. BIBREF9 extract up to 100 instances from the British National Corpus for each type, for a total of 2,984 instances. These types are based on a pre-existing list of verb-noun combinations and were filtered for frequency and whether two idiom dictionaries both listed them. Instances were extracted automatically, by parsing the corpus and selecting all sentences with the right verb and noun in a direct-object relation. It is unclear whether the extracted sentences were manually checked, but no false extractions are mentioned in the paper or present in the dataset. All extracted PIE instances were annotated for sense as either idiomatic, literal or unclear. This is a self-explanatory annotation scheme, but BIBREF9 note that senses are not binary, but can form a continuum. For example, the idiomaticity of have a word in `You have my word' is different from both the literal sense in `The French have a word for this' and the figurative sense in `My manager asked to have a word'. They instructed annotators to choose idiomatic or literal even in ambiguous middle-of-the-continuum cases, and restrict the unclear label only to cases where there is not enough context to disambiguate the meaning of the PIE. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: Gigaword BIBREF14 present a corpus of 17 PIE types, for which they extracted all instances from the Gigaword corpus BIBREF18, yielding a total of 3,964 instances. BIBREF14 extracted these instances semi-automatically by manually defining all inflectional variants of the verb in the PIE and matching these in the corpus. They did not allow for inflectional variations in non-verb words, nor did they allow intervening words. They annotated these potential idioms as either literal or figurative, excluding ambiguous and unclear instances from the dataset. 
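For illustration, the verb-object extraction described for the VNC-Tokens dataset could be approximated as in the sketch below, here using Spacy as the parser. This is not the original authors' pipeline; it only checks for the direct-object relation between a given verb lemma and noun lemma, as in their selection criterion.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def has_verb_object_pair(sentence, verb_lemma, noun_lemma):
    """True if noun_lemma occurs as the direct object of verb_lemma,
    the extraction criterion described for the VNC-Tokens dataset."""
    return any(tok.dep_ == "dobj"
               and tok.lemma_.lower() == noun_lemma
               and tok.head.lemma_.lower() == verb_lemma
               for tok in nlp(sentence))

# Expected to print True, parser permitting.
print(has_verb_object_pair("He finally blew his top during the meeting.",
                           "blow", "top"))
```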
Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: IDIX BIBREF10 build on the methodology of BIBREF14, but annotate a larger set of idioms (52 types) and extract all occurrences from the BNC rather than the Gigaword corpus, for a total of 4,022 instances including false extractions. BIBREF10 use a more complex semi-automatic extraction method, which involves parsing the corpus, manually defining the dependency patterns that match the PIE, and extracting all sentences containing those patterns from the corpus. This allows for larger form variations, including intervening words and inflectional variation of all words. In some cases, this yields many non-PIE extractions, as for recharge one's batteries in Example SECREF10. These were not filtered out before annotation, but rather filtered out as part of the annotation process, by having false extraction as an additional annotation label. For sense annotation, they use an extensive tagset, distinguishing literal, non-literal, both, meta-linguistic, embedded, and undecided labels. Here, the both label (Example SECREF10) is used for cases where both senses are present, often as a form of deliberate word play. The meta-linguistic label (Example SECREF10) applies to cases where the PIE instance is used as a linguistic item to discuss, not as part of a sentence. The embedded label (Example SECREF10) applies to cases where the PIE is embedded in a larger figurative context, which makes it impossible to say whether a literal or figurative sense is more applicable. The undecided label is used for unclear and undecidable cases. They take into account the fact that a PIE can have multiple figurative senses, and enumerate these separately as part of the annotation. . These high-performance, rugged tools are claimed to offer the best value for money on the market for the enthusiastic d-i-yer and tradesman, and for the first time offer the possibility of a battery recharging time of just a quarter of an hour. (from IDIX corpus, ID #314) . Left holding the baby, single mothers find it hard to fend for themselves. (from BIBREF10, p.642) . It has long been recognised that expressions such as to pull someone's leg, to have a bee in one's bonnet, to kick the bucket, to cook someone's goose, to be off one's rocker, round the bend, up the creek, etc. are semantically peculiar. (from BIBREF10, p.642) . You're like a restless bird in a cage. When you get out of the cage, you'll fly very high. (from BIBREF10, p.642) The both, meta-linguistic, and embedded labels are useful and linguistically interesting distinctions, although they occur very rarely (0.69%, 0.15%, and an unknown %, respectively). As such, we include these cases in our tagset (see Section SECREF5), but group them under a single label, other, to reduce annotation complexity. We also follow BIBREF10 in that we combine both the PIE/non-PIE annotation and the sense annotation in a single task. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: SemEval-2013 Task 5b BIBREF15 created a dataset for SemEval-2013 Task 5b, a task on detecting semantic compositionality in context. They selected 65 PIE types from Wiktionary, and extracted instances from the ukWaC corpus BIBREF17, for a total of 4,350 instances. It is unclear how they extracted the instances, and how much variation was allowed for, although there is some inflectional variation in the dataset. An unspecified amount of manual filtering was done on the extracted instances. 
The extracted PIE instances were labelled as literal, idiomatic, both, or undecidable. Interestingly, they crowdsourced the sense annotations using CrowdFlower, with high agreement (90%–94% pairwise). Undecidable cases and instances on which annotators disagreed were removed from the dataset. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: General Multiword Expression Corpora In addition to the aforementioned idiom corpora, there are also corpora focused on multiword expressions (MWEs) in a more general sense. As idioms are a subcategory of MWEs, these corpora also include some idioms. The most important of these are the PARSEME corpus BIBREF19 and the DiMSUM corpus BIBREF20. DiMSUM provides annotations of over 5,000 MWEs in approximately 90K tokens of English text, consisting of reviews, tweets and TED talks. However, they do not categorise the MWEs into specific types, meaning we cannot easily quantify the number of idioms in the corpus. In contrast to the corpus-specific sense labels seen in other corpora, DiMSUM annotates MWEs with WordNet supersenses, which provide a broad category of meaning for each MWE. Similarly, the PARSEME corpus consists of over 62K MWEs in almost 275K tokens of text across 18 different languages (with the notable exception of English). The main differences with DiMSUM, except for scale and multilingualism, are that it only includes verbal MWEs, and that subcategorisation is performed, including a specific category for idioms. Idioms make up almost a quarter of all verbal MWEs in the corpus, although the proportion varies wildly between languages. In both corpora, MWE annotation was done in an unrestricted manner, i.e. there was not a predefined set of expressions to which annotation was restricted. Related Work ::: Annotated Corpora and Annotation Schemes for Idioms ::: Overview In sum, there is large variation in corpus creation methods, regarding PIE definition, extraction method, annotation schemes, base corpus, and PIE type inventory. Depending on the goal of the corpus, the amount of deviation that is allowed from the PIE's dictionary form to the instances can be very little BIBREF14, to quite a lot BIBREF10. The number of PIE types covered by each corpus is limited, ranging from 17 to 65 types, often limited to one or more syntactic patterns. The extraction of PIE instances is usually done in a semi-automatic manner, by manually defining patterns in a text or parse tree, and doing some manual filtering afterwards. This works well, but an extension to a large number of PIE types (e.g. several hundreds) would also require a large increase in the amount of manual effort involved. Considering the sense annotations done on the PIE corpora, there is significant variation, with BIBREF9 using only three tags, whereas BIBREF10 use six. Outside of PIE-specific corpora there are MWE corpora, which provide a different perspective. A major difference there is that annotation is not restricted to a pre-specified set of expressions, which has not been done for PIEs specifically. Related Work ::: Extracting Idioms from Corpora There are two main approaches to idiom extraction. The first approach aims to distinguish idioms from other multiword phrases, where the main purpose is to expand idiom inventories with rare or novel expressions BIBREF21, BIBREF22, BIBREF23, BIBREF24. The second approach aims to extract all occurrences of a known idiomatic expression in a text. In this paper, we focus on the latter approach. 
We rely on idiom dictionaries to provide a list of PIE types, and build a system that extracts all instances of those PIE types from a corpus. High-quality idiom dictionaries exist for most well-resourced languages, but their reliability and coverage are not known. As such, we quantify the coverage of dictionaries in Section SECREF4. There is, to the best of our knowledge, no existing work that focuses on dictionary-based PIE extraction. However, there is closely-related work by BIBREF25, who present a system for the dictionary-based extraction of verb-noun combinations (VNCs) in English and Spanish. In their case, the VNCs can be any kind of multiword expression, which they subdivide into literal expressions, collocations, light verb constructions, metaphoric expressions, and idioms. They extract 173 English VNCs and 150 Spanish VNCs and annotate these with both their lexico-semantic MWE type and the amount of morphosyntactic variation they exhibit. BIBREF25 then compare a word sequence-based method, a chunking-based method, and a parse-based method for VNC extraction. Each method relies on morpho-syntactic information in order to limit false extractions. Precision is evaluated manually on a sample of the extracted VNCs, and recall is estimated by calculating the overlap between the output of the three methods. Evaluation shows that the methods are highly complementary both in recall, since they extract different VNCs, and in precision, since combining the extractors yields fewer false extractions. Whereas BIBREF25, like the present work, focus on both idiomatic and literal uses of the set of expressions, BIBREF26 tackle only half of that task, namely extracting only literal uses of a given set of VMWEs in Polish. This complicates the task, since it combines extracting all occurrences of the VMWEs and then distinguishing literal from idiomatic uses. Interestingly, they also experiment with models of varying complexity, i.e. just words, part-of-speech tags, and syntactic structures. Their results are hard to put into perspective, however, since literal VMWEs are very rare in their corpus, whereas corpora containing PIEs tend to show a more balanced distribution. Other work similar to ours also focuses on MWEs more generally, or on different subtypes of MWEs. In addition, these tend to combine both extraction and disambiguation in that they aim to extract only idiomatically used instances of the MWE, without extracting literally used instances or non-instances. Within this line of work, BIBREF27 focuses on verb-particle constructions, BIBREF28 on verbal MWEs (including idioms), and BIBREF29 on verbal MWEs (especially non-canonical variants). Both BIBREF28 and BIBREF29 rely on a pre-defined set of expressions, whereas BIBREF27 also extracts unseen expressions, although based on a pre-defined set of particles and within the very narrow syntactic frame of verb-particle constructions. The work of BIBREF27 is most similar to ours in that it builds an unsupervised system using existing NLP tools (PoS taggers, chunkers, parsers) and finds that a combination of systems using those tools performs best, as we find in Section SECREF69. BIBREF28 and BIBREF29, by contrast, use supervised classifiers which require training data, not just for the task in general, but specific to the set of expressions used in the task. 
Although our approach is similar to that of BIBREF25, both in the range of methods used and in the goal of extracting certain multiword expressions regardless of morphosyntactic variation, there are two main differences. First, we use dictionaries, but extract entries automatically and do not manually annotate their type and variability. As a result, our methods rely only on the surface form of the expression taken from the dictionary. Second, we evaluate precision and recall in a more rigorous way, by using an evaluation corpus exhaustively annotated for PIEs. In addition, we do not put any restriction on the syntactic type of the expressions to be extracted, which BIBREF27, BIBREF28, BIBREF25, and BIBREF29 all do. Coverage of Idiom Inventories ::: Background Since our goal is developing a dictionary-based system for extracting potentially idiomatic expressions, we need to devise a proper method for evaluating such a system. This is not straightforward, even though the final goal of such a system is simple: it should extract all potentially idiomatic expressions from a corpus and nothing else, regardless of their sense and the form they are used in. The type of system proposed here hence has two aspects that can be evaluated: the dictionary that it uses as a resource for idiomatic expressions, and the extractor component that finds idioms in a corpus. The difficulty here is that there is no undisputed and unambiguous definition of what counts as an idiom BIBREF30, as is the case with multiword expressions in general BIBREF12. Of course, a complete set of idiomatic expressions for English (or any other language) is impossible to get due to the broad and ever-changing nature of language. This incompleteness is exacerbated by the ambiguity problem: if we had a clear definition of idiom we could make an attempt at evaluating idiom dictionaries on their accuracy, but it is practically impossible to come up with a definition of idiom that leaves no room for ambiguity. This ambiguity, among other factors, creates a large grey area between clearly non-idiomatic phrases on the one hand (e.g. buy a house), and clear potentially idiomatic phrases on the other hand (e.g. buy the farm). As a consequence, we cannot empirically evaluate the coverage of the dictionaries. Instead, in this work, we will quantify the divergence between various idiom dictionaries and corpora, with regard to their idiom inventories. If they show large discrepancies, we take that to mean that either there is little agreement on definitions of idiom or the category is so broad that a single resource can only cover a small proportion. Conversely, if there is large agreement, we assume that idiom resources are largely reliable, and that there is consensus around what is, and what is not, an idiomatic expression. We use different idiom resources and assume that the combined set of resources yields an approximation of the true set of idioms in English. A large divergence between the idiom inventories of these resources would then suggest a low recall for a single resource, since many other idioms are present in the other resources. Conversely, if the idiom inventories largely overlap, that indicates that a single resource can already yield decent coverage of idioms in the English language. The results of the dictionary comparisons are in Section SECREF36. 
Coverage of Idiom Inventories ::: Selected Idiom Resources (Data and Method) We evaluate the quality of three idiom dictionaries by comparing them to each other and to three idiom corpora. Before we report on the comparison we first describe why we select and how we prepare these resources. We investigate the following six idiom resources: Wiktionary; the Oxford Dictionary of English Idioms (ODEI, BIBREF31); UsingEnglish.com (UE); the Sporleder corpus BIBREF10; the VNC dataset BIBREF9; and the SemEval-2013 Task 5 dataset BIBREF15. These dictionaries were selected because they are available in digital format. Wiktionary and UsingEnglish have the added benefit of being freely available. However, they are both crowdsourced, which means they lack professional editing. In contrast, ODEI is a traditional dictionary, created and edited by lexicographers, but it has the downside of not being freely available. For Wiktionary, we extracted all idioms from the category `English Idioms' from the English version of Wiktionary. We took the titles of all pages containing a dictionary entry and considered these idioms. Since we focus on multiword idiomatic expressions, we filtered out all single-word entries in this category. More specifically, since Wiktionary is a constantly changing resource, we used the 8,482 idioms retrieved on 10-03-2017, 15:30. We used a similar extraction method for UE, a web page containing freely available resources for ESL learners, including a list of idioms. We extracted all idioms which have publicly available definitions, which numbered 3,727 on 10-03-2017, 15:30. Again, single-word entries and duplicates were filtered out. Concerning ODEI, all idioms from the e-book version were extracted, amounting to 5,911 idioms scraped on 13-03-2017, 10:34. Here we performed an extra processing step to expand idioms containing content in parentheses, such as a tough (or hard) nut (to crack). Using a set of simple expansion rules and some hand-crafted exceptions, we automatically generated all variants for this idiom, with good, but not perfect accuracy. For the example above, the generated variants are: {a tough nut, a tough nut to crack, a hard nut, a hard nut to crack}. The idioms in the VNC dataset are in the form verb_noun, e.g. blow_top, so they were manually expanded to a regular dictionary form, e.g. blow one's top before comparison. Coverage of Idiom Inventories ::: Method In many cases, using simple string-match to check overlap in idioms does not work, as exact comparison of idioms misses equivalent idioms that differ only slightly in dictionary form. Differences between resources are caused by, for example: inflectional variation (crossing the Rubicon — cross the Rubicon); variation in scope (as easy as ABC — easy as ABC); determiner variation (put the damper on — put a damper on); spelling variation (mind your p's and q's — mind your ps and qs); order variation (call off the dogs — call the dogs off); and different conventions for placeholder words (recharge your batteries — recharge one's batteries), where both your and one's can generalise to any possessive personal pronoun. These minor variations do not fundamentally change the nature of the idiom, and we should count these types of variation as belonging to the same idiom (see also BIBREF32, who devise a measure to quantify different types of variation allowed by specific MWEs). 
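To make the abstraction over these variation types concrete, the sketch below normalises dictionary forms before comparison (lowercasing, splitting hyphens, dropping determiners, unifying possessive placeholders) and falls back on a character-level similarity ratio for near-matches. The specific rules, the placeholder list, and the use of Python's difflib as the similarity measure are simplifying assumptions for illustration; the actual overlap estimation additionally involves manual checking, as described below.

```python
import re
from difflib import SequenceMatcher

DETERMINERS = {"a", "an", "the"}
# Different dictionary conventions for possessive placeholders, unified here
# (an illustrative, non-exhaustive list).
PLACEHOLDERS = {"one's", "someone's", "somebody's", "your"}

def normalise(idiom):
    """Reduce a dictionary form to a rough canonical key."""
    tokens = re.findall(r"[a-z']+", idiom.lower().replace("-", " "))
    return " ".join("<poss>" if t in PLACEHOLDERS else t
                    for t in tokens if t not in DETERMINERS)

def same_idiom(a, b, threshold=0.8):
    """Heuristically decide whether two dictionary forms are variants of the
    same idiom: identical after normalisation, one a word-subset of the other,
    or highly similar at the character level."""
    na, nb = normalise(a), normalise(b)
    if na == nb:
        return True
    if set(na.split()) <= set(nb.split()) or set(nb.split()) <= set(na.split()):
        return True
    return SequenceMatcher(None, na, nb).ratio() >= threshold

print(same_idiom("recharge your batteries", "recharge one's batteries"))  # True
print(same_idiom("as easy as ABC", "easy as ABC"))                        # True
print(same_idiom("spill the beans", "kick the bucket"))                   # False
```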
So, to get a good estimate of the true overlap between idiom resources, these variations need to be accounted for, which we do in our flexible matching approach. There is one other case of variation not listed above, namely lexical variation (e.g. rub someone up the wrong way - stroke someone the wrong way). We do not abstract over this, since we consider lexical variation to be a more fundamental change to the nature of the idiom. That is, a lexical variant is an indicator of the coverage of the dictionary, where the other variations are due to different stylistic conventions and do not indicate actual coverage. In addition, it is easy to abstract over the other types of variation in an NLP application, but this is not the case for lexical variation. The overlap counts are estimated by abstracting over all variations except lexical variation in a semi-automatic manner, using heuristics and manual checking. Potentially overlapping idioms are selected using the following set of heuristics: whether an idiom from one resource is a substring (including gaps) of an idiom in the other resource, whether the words of an idiom form a subset of the words of an idiom in the other resource, and whether there is an idiom in the other resource which has a Levenshtein ratio of over 0.8. The Levenshtein ratio is an indicator of the Levenshtein distance between the two idioms relative to their length. These potential matches are then judged manually on whether they are really forms of the same idiom or not. Coverage of Idiom Inventories ::: Results The results of using exact string matching to quantify the overlap between the dictionaries are illustrated in Figure FIGREF37. Overlap between the three dictionaries is low. A possible explanation for this lies with the different nature of the dictionaries. Oxford is a traditional dictionary, created and edited by professional lexicographers, whereas Wiktionary is a crowdsourced dictionary open to everyone, and UsingEnglish is similar, but focused on ESL-learners. It is likely that these different origins result in different idiom inventories. Similarly, we would expect that the overlap between a pair of traditional dictionaries, such as the ODEI and the Penguin Dictionary of English Idioms BIBREF33, would be significantly higher. It should also be noted, however, that comparisons between more similar dictionaries also found relatively little overlap (BIBREF34; BIBREF35). A counterpoint is provided by BIBREF36, who quantifies coverage of verb-particle constructions in three different dictionaries and finds large overlap – perhaps because verb-particle constructions are a more restricted class. As noted previously, using exact string matching is a very limited approach to calculating overlap. Therefore, we used heuristics and manual checking to get more precise numbers, as shown in Table TABREF39, which also includes the three corpora in addition to the three dictionaries. As the manual checking only involved judging similar idioms found in pairs of resources, we cannot calculate three-way overlap as in Figure FIGREF37. The counts of the pair-wise overlap between dictionaries differ significantly between the two methods, which serves to illustrate the limitations of using only exact string matching and the necessity of using more advanced methods and manual effort. Several insights can be gained from the data in Table TABREF39. The relation between Wiktionary and the SemEval corpus is obvious (cf. Section SECREF12), given the 96.92% coverage. 
For the other dictionary-corpus pairs, the coverage increases proportionally with the size of the dictionary, except in the case of UsingEnglish and the Sporleder corpus. The proportional increase indicates no clear qualitative differences between the dictionaries, i.e. one does not have a significantly higher percentage of non-idioms than the other, when compared to the corpora. Generally, overlap between dictionaries and corpora is low: the two biggest, ODEI and Wiktionary, have only around 30% overlap, while the dictionaries also cover no more than approximately 70% of the idioms used in the various corpora. Overlap between the three corpora is also extremely low, at below 5%. This is unsurprising, since a new dataset is more interesting and useful when it covers a different set of idioms than that used in an existing dataset, and thus is likely constructed with this goal in mind. Corpus Annotation In order to evaluate the PIE extraction methods developed in this work (Section SECREF6), we exhaustively annotate an evaluation corpus with all instances of a pre-defined set of PIEs. As part of this, we come up with a workable definition of PIEs, and measure the reliability of PIE annotation by inter-annotator agreement. Assuming that we have a set of idioms, the main problem of defining what is and what is not a potentially idiomatic expression is caused by variation. In principle, a potentially idiomatic expression is an instance of a phrase that, when seen without context, could have either an idiomatic or a literal meaning. This is clearest for the dictionary form of the idiom, as in Example SECREF5. Literal uses generally allow all kinds of variation, but not all of these variations allow a figurative interpretation, e.g. Example SECREF5. However, how much variation an idiom can undergo while retaining its figurative interpretation is different for each expression, and judgements of this might vary from one speaker to the other. An example of this is spill the bean, a variant of spill the beans in Example SECREF5, which is judged by BIBREF21 as being highly questionable. However, even here a corpus example can be found containing the same variant used in a figurative sense (Example SECREF5). As such, we assume that we cannot know a priori which variants of an expression allow a figurative reading, and thus count as potentially idiomatic expressions. Therefore we consider every possible morpho-syntactic variation of an idiom a PIE, regardless of whether it actually allows a figurative reading. We believe the boundaries of this variation can only be determined based on corpus evidence, and even then they are likely variable. Note that a similar question is tackled by BIBREF26, when they establish the boundary between a `literal reading of a VMWE' and a `coincidental co-occurrence'. BIBREF26's answer is similar to ours, in that they count something as a literal reading of a VMWE if `the same or equivalent dependencies hold between [the expression]'s components as in its canonical form'. . John kicked the bucket last night. . * The bucket, John kicked last night. . ?? Azin spilled the bean. (from BIBREF21) . 
Alba reveals Fantastic Four 2 details The Invisible Woman actress spills the bean on super sequel (from ukWaC) Corpus Annotation ::: Evaluating the Extraction Methods Evaluating the extraction methods is easier than evaluating dictionary coverage, since the goal of the extraction component is more clearly delimited: given a set of PIEs from one or more dictionaries, extract all occurrences of those PIEs from a corpus. Thus, rather than dealing with the undefined set of all PIEs, we can work with a clearly defined and finite set of PIEs from a dictionary. Because we have a clearly defined set of PIEs, we can exhaustively annotate a corpus for PIEs, and use that annotated corpus for automatic evaluation of extraction methods using recall and precision. This allows us to facilitate and speed up annotation by pre-extracting sentences possibly containing a PIE. After the corpus is annotated, the precision and recall can be easily estimated by comparing the extracted PIE instances to those marked in the corpus. The details of the corpus selection, dictionary selection, extraction heuristic and annotation procedure are presented in Section SECREF46, and the details and results of the various extraction methods are presented in Section SECREF6. Corpus Annotation ::: Base Corpus and Idiom Selection As a base corpus, we use the XML version of the British National Corpus BIBREF37, because of its size, variety, and wide availability. The BNC is pre-segmented into s-units, which we take to be sentences, w-units, which we take to be words, and c-units, punctuation. We then extract the text of all w-units and c-units. We keep the sentence segmentation, resulting in a set of plain text sentences. All sentences are included, except for sentences containing <gap> elements, which are filtered out. These <gap> elements indicate places where material from the original has been left out, e.g. for anonymisation purposes. Since this can result in incomplete sentences that cannot be parsed correctly, we filter out sentences containing these gaps. We use only the written part of the BNC. From this, we extract a set of documents with the aim of having as much genre variation as possible. To achieve this, we select the first document in each genre, as defined by the classCode attribute (e.g. nonAc, commerce, letters). The resulting set of 46 documents makes up our base corpus. Note that these documents vary greatly in size, which means the resulting corpus is varied, but not balanced in terms of size (Table TABREF43). The documents are split across a development and test set, as specified at the end of Section SECREF46. We exclude documents with IDs starting with A0 from all annotation and evaluation procedures, as these were used during development of the extraction tool and annotation guidelines. As for the set of potentially idiomatic expressions, we use the intersection of the three dictionaries, Wiktionary, Oxford, and UsingEnglish. Based on the assumption that, if all three resources include a certain idiom, it must unquestionably be an idiom, we choose the intersection (also see Figure FIGREF37). This serves to exclude questionable entries, like at all, which is in Wiktionary. The final set of idioms used for these experiments consists of 591 different multiword expressions. Although we aim for wide coverage, this is a necessary trade-off to ensure quality. At the same time, it leaves us with a set of idiom types that is approximately ten times larger than present in existing corpora. 
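As a concrete illustration of this selection step, the sketch below intersects three dictionaries represented as sets of (already normalised) dictionary forms; the toy contents are invented for illustration and the function name is not taken from the released code.

```python
def select_pie_types(wiktionary, odei, usingenglish):
    """Keep only the expressions listed in all three dictionaries.

    Each argument is assumed to be a set of normalised dictionary forms;
    in the actual setup, minor form variation between resources would also
    have to be abstracted over before intersecting.
    """
    return wiktionary & odei & usingenglish

# Toy example (not real dictionary contents):
wiktionary = {"spill the beans", "jump ship", "at all"}
odei = {"spill the beans", "jump ship", "lose the plot"}
usingenglish = {"spill the beans", "jump ship", "round the bend"}

print(sorted(select_pie_types(wiktionary, odei, usingenglish)))
# ['jump ship', 'spill the beans'] -- questionable entries like 'at all' drop out
```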
The set of 591 idioms includes idioms with a large variety of syntactic patterns, of which the most frequent ones are shown in Table TABREF44. The statistics show that the types most prevalent in existing corpora, verb-noun and preposition-noun combinations, are indeed the most frequent ones, but that there is a sizeable minority of types that do not fall into those categories, including coordinated adjectives, coordinated nouns, and nouns with prepositional phrases. This serves to emphasise the necessity of not restricting corpora to a small set of syntactic patterns. Corpus Annotation ::: Extraction of PIE Candidates To annotate the corpus completely manually would require annotators to read the whole corpus, and cross-reference each sentence to a list of almost 600 PIEs, to check whether one of those PIEs occurs in a sentence. We do not consider this a feasible annotation setting, due to both the difficulty of recognising literal usages of idioms and the time cost needed to find enough PIEs, given their low overall frequency. As such, we use a pre-extraction step to present candidates for annotation to the human annotators. Given the corpus and the set of PIEs, we heuristically extract the PIE candidates as follows: given an idiomatic expression, extract every sentence which contains all the defining words of the idiom, in any form. This ensures that all possibly matching sentences get extracted, while greatly pruning the number of sentences for annotators to look at. In addition, it allows us to present the heuristically matched PIE type and corresponding words to the annotators, which makes it much easier to judge whether something is a PIE or not. This also means that annotators never have to go through the full list of PIEs during the annotation process. Initially, the heuristic simply extracted any sentence containing all the required words, where a word is any of the inflectional variants of the words in the PIE, except for determiners and punctuation. This method produced large amounts of noise, that is, a set of PIE candidates with only a very low percentage of actual PIEs. This was caused by the presence of some highly frequent PIEs with very little defining lexical content, such as on the make, and in the running. For example, with the original method, every sentence containing the preposition on, and any inflectional form of the verb make was extracted, resulting in a huge number of non-PIE candidates. To limit the amount of noise, two restrictions were imposed. The first restriction disallows word order variation for PIEs which do not contain a verb. The rationale behind this is that word order variation is only possible with PIEs like spill the beans (e.g. the beans were spilled), and not with PIEs like in the running (*the running in??). The second restriction is that we limit the number of words that can be inserted between the words of a PIE, but only for PIEs like on the make, and in the running, i.e. PIEs which only contain prepositions, determiners and a single noun. The number of intervening words was limited to three tokens, allowing for some variation, as in Example SECREF45, but preventing sentences like Example SECREF45 from being extracted. This restriction could result in the loss of some PIE candidates with a large number of intervening words. However, the savings in annotation time clearly outweigh the small loss in recall in this situation. . Either at New Year or before July you can anticipate a change in the everyday running of your life. 
(in the running - BNC - document CBC - sentence 458) . [..] if [he] hung around near the goal or in the box for that matter instead of running all over the show [..] (in the running - BNC - document J1C - sentence 1341) Corpus Annotation ::: Annotation Procedure The manual annotation procedure consists of three different phases (pilot, double annotation, single annotation), followed by an adjudication step to resolve conflicting annotations. Two things are annotated: whether something is a PIE or not, and if it is a PIE, which sense the PIE is used in. In the first phase (0-100-*), we randomly select one hundred of the 2,239 PIE candidates which are then annotated by three annotators. All annotators have a good command of English, are computational linguists, and are familiar with the subject. The annotators include the first and last author of this paper. The annotators were provided with a short set of guidelines, of which the main rule-of-thumb for labelling a phrase as a PIE is as follows: any phrase is a PIE when it contains all the words, with the same part-of-speech, and in the same grammatical relations as in the dictionary form of the PIE, ignoring determiners. For sense annotation, annotators were to mark a PIE as idiomatic if it had a sense listed in one of the idiom dictionaries, and as literal if it had a meaning that is a regular composition of its component words. For cases which were undecidable due to lack of context, the ?-label was used. The other-label was used as a container label for all cases in which neither the literal nor the idiomatic sense was correct (e.g. meta-linguistic uses and embeddings in metaphorical frames, see also Section SECREF10). The first phase of annotation serves to bring to light any inconsistencies between annotators and fill in any gaps in the annotation guidelines. The resulting annotations already show a reasonably high agreement of 0.74 Fleiss' Kappa. Table TABREF48 shows annotation details and agreement statistics for all three phases. The annotation tasks suffixed by -PIE indicate agreement on PIE/non-PIE annotation and the tasks suffixed by -sense indicate agreement on sense annotation for PIEs. In the second phase of annotation (100-600-* & 600-1100-*), another 1,000 of the 2,239 PIE candidates are selected to be annotated by two pairs of annotators. This yields very high agreement, as shown in Table TABREF48. This is probably due to the improvement in guidelines and the discussion following the pilot round of annotation. The exceptions to this are the somewhat lower scores for the 600-1100-sense annotation task. Adjudication revealed that this is due almost exclusively to a different interpretation of the literal and idiomatic senses of a single PIE type: on the ground. Excluding this PIE type, Fleiss' Kappa increases from 0.63 to 0.77. Because of the high agreement on PIE annotation, we deem it sufficient for the remainder (1,108 candidates) to be annotated by only the primary annotator in the third phase of annotation (1100-2239-*). The reliability of the single annotation can be checked by comparing the distribution of labels to the multi-annotated parts. This shows that it falls clearly within the ranges of the other parts, both in the proportion of PIEs and idiomatic senses (see Table TABREF49). The single-annotated part has 49.0% PIEs, which is only 4 percentage points above the 44.7% PIEs in the multi-annotated parts. The proportion of idioms is just 2 percentage points higher, with 55.9% versus 53.9%. 
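For reference, the Fleiss' Kappa values reported above can be computed from per-candidate label counts as in the sketch below; this is a generic implementation of the agreement measure, not the evaluation script used for this corpus, and the toy items are invented.

```python
def fleiss_kappa(items):
    """Fleiss' Kappa for a list of items, each a dict mapping a label to the
    number of annotators who chose it; all items must have the same number
    of annotators."""
    n_items = len(items)
    n_raters = sum(items[0].values())
    labels = {label for item in items for label in item}

    # Mean observed agreement per item.
    p_bar = sum(
        (sum(c * c for c in item.values()) - n_raters) / (n_raters * (n_raters - 1))
        for item in items
    ) / n_items

    # Expected agreement from the overall label distribution.
    p_e = sum(
        (sum(item.get(label, 0) for item in items) / (n_items * n_raters)) ** 2
        for label in labels
    )
    return (p_bar - p_e) / (1 - p_e)

# Toy example: three annotators labelling four PIE candidates for sense.
items = [
    {"idiomatic": 3},
    {"idiomatic": 2, "literal": 1},
    {"literal": 3},
    {"idiomatic": 1, "literal": 2},
]
print(round(fleiss_kappa(items), 2))  # 0.33
```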
Although inter-annotator agreement was high, there was still a significant number of cases in the triple and double annotated PIE candidate sets where not all annotators agreed. These cases were adjudicated through discussion by all annotators, until they were in agreement. In addition, all PIE candidates which initially received the ?-label (unclear or undecidable) for sense or PIE were resolved in the same manner. In the adjudication procedure, annotators were provided with additional context on each side of the idiom, in contrast to the single sentence provided during the initial annotation. The main reason to do adjudication, rather than simply discarding all candidates for which there was disagreement, was that we expected exactly those cases for which there are conflicting annotations to be the most interesting ones, since having non-standard properties would cause the annotations to diverge. Examples of such interesting non-standard cases are at sea as part of a larger satirical frame in Example SECREF46 and cut the mustard in Example SECREF46 where it is used in a headline as wordplay on a Cluedo character. . The bovine heroine has connections with Cowpeace International, and deals with a huge treacle slick at sea. (at sea - BNC - document CBC - sentence 13550) . Why not cut the Mustard? [..] WADDINGTON Games's proposal to axe Reverend Green from the board game Cluedo is a bad one. (cut the mustard - BNC - document CBC - sentence 14548) We split the corpus at the document level. The corpus consists of 45 documents from the BNC, and we split it in such a way that the development set has 1,112 candidates across 22 documents and the test set has 1,127 candidates from 23 documents. Note that this means that the development and test set contain different genres. This ensures that we do not optimise our systems on genre-specific aspects of the data. Dictionary-based PIE Extraction We propose and implement four different extraction methods, of differing complexities: exact string match, fuzzy string match, inflectional string match, and parser-based extraction. Because of the absence of existing work on this task, we compare these methods to each other, where the more basic methods function as baselines. More complex methods serve to shine light on the difficulty of the PIE extraction task; if simple methods already work sufficiently well, the task is not as hard as expected, and vice versa. Below, each of the extraction methods is presented and discussed in detail. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Exact String Match This is, very simply, extracting all instances of the exact dictionary form of the PIE, from the tokenized text of the corpus. Word boundaries are taken into account, so at sea does not match `that seawater'. As a result, all inflectional and other variants of the PIE are ignored. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Fuzzy String Match Fuzzy string match is a rough way of dealing with morphological inflection of the words in a PIE. We match all words in the PIE, taking into account word boundaries, and allow for up to 3 additional letters at the end of each word. These 3 additional characters serve to cover inflectional suffixes. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Inflectional String Match In inflectional string match, we aim to match all inflected variations of a PIE. 
This is done by generating all morphological variants of the words in a PIE, generating all combinations of those words, and then using exact string match as described earlier. Generating morphological variations consists of three steps: part-of-speech tagging, morphological analysis, and morphological reinflection. Since inflectional variation only applies to verbs and nouns, we use the Spacy part-of-speech tagger to detect the verbs and nouns. Then, we apply the morphological analyser morpha to get the base, uninflected form of the word, and then use the morphological generation tool morphg to get all possible inflections of the word. Both tools are part of the Morph morphological processing suite BIBREF38. Note that the Morph tools depend on the part-of-speech tag in the input, so that a wrong PoS may lead to an incorrect set of morphological variants. For a PIE like spill the beans, this results in the following set of variants: {spill the bean, spills the bean, spilled the bean, spilling the bean, spill the beans, spills the beans, spilled the beans, spilling the beans}. Since we generate up to 2 variants for each noun, and up to 4 variants for each verb, the number of variants for PIEs containing multiple verbs and nouns can get quite large. On average, 8 additional variants are generated for each potentially idiomatic expression. Dictionary-based PIE Extraction ::: String-based Extraction Methods ::: Additional Steps For all string match-based methods, ways to improve performance are implemented, to make them as competitive as possible. Rather than doing exact string matching, we also allow words to be separated by something other than spaces, e.g. nuts-and-bolts for nuts and bolts. Additionally, there is an option to take into account case distinctions. With the case-sensitive option, case is preserved in the idiom lists, e.g. coals to Newcastle, and the string matching is done in a case-sensitive manner. This increases precision, e.g. by avoiding PIEs as part of proper names, but also comes at a cost of recall, e.g. for sentence-initial PIEs. Thirdly, there is the option to allow for a certain number of intervening words between each pair of words in the PIE. This should improve recall, at the cost of precision. For example, this would yield the true positive make a huge mountain out of a molehill for make a mountain out of a molehill, but also false positives like have a smoke and go for have a go. Another shared property of the string-based methods is the processing of placeholders in PIEs. PIEs containing possessive pronoun placeholders, such as one's and someone's, are expanded. That is, we remove the original PIE, and add copies of the PIE where the placeholder is replaced by one of the possessive personal pronouns. For example, a thorn in someone's side is replaced by a thorn in {my, your, his, ...} side. In the case of someone's, we also add a wildcard for any possessively used word, i.e. a thorn in —'s side, to match e.g. a thorn in Google's side. Similarly, we make sure that PIE entries containing —, such as the mother of all —, will match any word for — during extraction. We do the same for someone, for which we substitute objective pronouns. For one, this is not possible, since it is too hard to distinguish from the one used as a number. 
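As an illustration of the string-based methods, the sketch below builds regular expressions that implement exact and fuzzy matching with word boundaries, hyphen-separated words, optional case sensitivity, and an optional number of intervening words. It is a simplified reconstruction rather than the released implementation; inflectional matching would additionally plug in the variant forms generated with morpha and morphg as described above.

```python
import re

def build_pattern(pie, fuzzy=False, max_gap=0, case_sensitive=False):
    """Build a regex for one PIE dictionary form.

    fuzzy:   allow up to 3 extra letters at the end of each word, as a rough
             stand-in for inflectional suffixes.
    max_gap: maximum number of intervening words between PIE words.
    """
    parts = [re.escape(w) + (r"[a-z]{0,3}" if fuzzy else "") for w in pie.split()]
    # Words may be separated by whitespace or hyphens (nuts-and-bolts),
    # optionally with up to `max_gap` intervening words.
    sep = r"[\s-]+"
    if max_gap:
        sep += r"(?:\w+[\s-]+){0,%d}" % max_gap
    pattern = r"\b" + sep.join(parts) + r"\b"
    return re.compile(pattern, 0 if case_sensitive else re.IGNORECASE)

def extract(pie, sentences, **options):
    """Return (sentence index, matched span) pairs for one PIE type."""
    pat = build_pattern(pie, **options)
    return [(i, m.group(0)) for i, s in enumerate(sentences) for m in pat.finditer(s)]

sentences = ["He finally spilled the beans to the press.",
             "Do not spill the coffee beans on the floor."]
print(extract("spill the beans", sentences, fuzzy=True, max_gap=1))
# [(0, 'spilled the beans'), (1, 'spill the coffee beans')]
```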
Dictionary-based PIE Extraction ::: Parser-Based Extraction Methods Parser-based extraction is potentially the widest-coverage extraction method, with the capacity to extract both morphological and syntactic variants of the PIE. This should be robust against the most common modifications of the PIE, e.g. through word insertions (spill all the beans), passivisation (the beans were spilled), and abstract over articles (spill beans). In this method, PIEs are extracted using the assumption that any sentence which contains the lemmata of the words in the PIE, in the same dependency relations as in the PIE, contains an instance of the PIE type in question. More concretely, this means that the parse of the sentence should contain the parse tree of the PIE as a subtree. This is illustrated in Figure FIGREF57, which shows the parse tree for the PIE lose the plot, parsed without context. Note that this is a subtree of the parse tree for the sentence `you might just lose the plot completely', which is shown in Figure FIGREF58. Since the sentence parse contains the parse of the PIE, we can conclude that the sentence contains an instance of that PIE and extract the span of the PIE instance. All PIEs are parsed in isolation, based on the assumption that all PIEs can be parsed, since they are almost always well-formed phrases. However, not all PIEs will be parsed correctly, especially since there is no context to resolve ambiguity. Errors tend to occur at the part-of-speech level, where, for example, verb-object combinations like jump ship and touch wood are erroneously tagged as noun-noun compounds. An analysis of the impact of parser error on PIE extraction performance is presented in Section SECREF73. Initially, we use the Spacy parser for parsing both the PIEs and the sentences. Next, the sentence is parsed, and the lemma of the top node of the parsed PIE is matched against the lemmata of the sentence parse. If a match is found, the parse tree of the PIE is matched against the subtree of the matching sentence parse node. If the whole PIE parse tree matches, the span ranging from the first PIE token to the last is extracted. This span can thus include words that are not directly part of the PIE's dictionary form, in order to account for insertions like ships were jumped for jump ship, or have a big heart for have a heart. During the matching, articles (a/an/the) are ignored, and passivisation is accounted for with a special rule. In addition, a number of special cases are dealt with. These are PIEs containing someone('s), something('s), one's, or —. These words are used in PIEs as placeholders for a generic possessor (someone's/something's/one's), generic object (someone/something), or any word of the right PoS (—). For someone's, and something's, we match any possessive pronoun, or (proper) noun + possessive marker. For one's, only possessive pronouns are matched, since this is a placeholder for reflexive possessors. For someone and something, any non-possessive pronoun or (proper) noun is matched. For — wildcards, any word can be matched, as long as it has the right relation to the right head. An additional challenge with these wildcards is that PIEs containing them cannot be parsed, e.g. too — for words is not parseable. This is dealt with by substituting the — by a PoS-ambiguous word, such as fine, or back. 
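A much-simplified sketch of this subtree matching is given below, using spaCy (the parser used here; the specific model name is an assumption) but leaving out the special handling of articles, passives, and placeholder words described above. The PIE is parsed in isolation and its parse tree is then searched for, with matching lemmas and dependency labels, in the parse of the sentence.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed model; any English pipeline with a parser works

def match(pie_tok, sent_tok):
    """Return the sentence tokens matching the PIE subtree rooted at pie_tok
    against sent_tok (same lemma and dependency label for every PIE word,
    extra sentence words allowed), or None if there is no match."""
    if pie_tok.lemma_ != sent_tok.lemma_:
        return None
    matched = [sent_tok]
    for pie_child in pie_tok.children:
        for sent_child in sent_tok.children:
            if pie_child.dep_ == sent_child.dep_:
                sub = match(pie_child, sent_child)
                if sub is not None:
                    matched.extend(sub)
                    break
        else:
            return None  # no sentence child matches this PIE child
    return matched

def extract_pie(pie, sentence):
    """Return the extracted span (first to last matched token) or None."""
    pie_doc, sent_doc = nlp(pie), nlp(sentence)
    pie_root = [t for t in pie_doc if t.dep_ == "ROOT"][0]
    for tok in sent_doc:
        matched = match(pie_root, tok)
        if matched:
            start, end = min(t.i for t in matched), max(t.i for t in matched)
            return sent_doc[start:end + 1].text
    return None

print(extract_pie("lose the plot", "You might just lose the plot completely."))
# 'lose the plot' (provided both parses come out as expected)
```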
Two optional features are added to the parser-based method with the goal of making it more robust to parser errors: generalising over dependency relation labels, and generalising over dependency relation direction. We expect this to increase recall at the cost of precision. In the first no labels setting, we match parts of the parse tree which have the same head lemma and the same dependent lemma, regardless of the relation label. An example of this is Figure FIGREF60, which has the wrong relation label between up and ante. If labels are ignored, however, we can still extract the PIE instance in Figure FIGREF61, which has the correct label. In the no directionality setting, relation labels are also ignored, and in addition the directionality of the relation is ignored, that is, we allow for the reversal of heads and dependents. This benefits performance in a case like Figure FIGREF62, which has stock as the head of laughing in a compound relation, whereas the parse of the PIE (Figure FIGREF63) has laughing as the head of stock in a dobj relation. Note that similar settings were implemented by BIBREF26, who detect literal uses of VMWEs using a parser-based method with either full labelled dependencies, unlabelled dependencies, or directionless unlabelled dependencies (which they call BagOfDeps). They find that recall increases when fewer restrictions on the dependencies are used, as we would expect, but that this does not hurt precision. However, we cannot draw too many conclusions from these results due to the small size of their evaluation set, which consists of just 72 literal VMWEs in total. Dictionary-based PIE Extraction ::: Parser-Based Extraction Methods ::: In-Context Parsing Since the parser-based method parses PIEs without any context, it often finds an incorrect parse, as for jump ship in Figure FIGREF65. As such, we add an option to the method that aims to increase the number of correct parses by parsing the PIE within context, that is, within a sentence. This can greatly help to disambiguate the parse, as in Figure FIGREF66. If the number of correct parses goes up, the recall of the extraction method should also increase. Naturally, it can also be the case that a PIE is parsed correctly without context, and incorrectly with context. However, we expect the gains to outweigh the losses. The challenge here is thus to collect example sentences containing the PIE. Since the whole point of this work is to extract PIEs from raw text, this provides a catch-22-like situation: we need to extract a sentence containing a PIE in order to extract sentences containing a PIE. The workaround for this problem is to use the exact string matching method with the dictionary form of the PIE and a very large plain text corpus to gather example sentences. By considering only the exact dictionary form we simplify both the finding of example sentences and the extraction of the PIE's parse from the sentence parse. In case multiple example sentences are found, the shortest sentence is selected, since we assume it is easiest to parse. This is also the reason we make use of very large corpora, to increase the likelihood of finding a short, simple sentence. The example sentence extraction method is modified in such a way that sentences where the PIE is used meta-linguistically in quotes, e.g. “the well-known English idiom `to spill the beans' has no equivalents in other languages”, are excluded, since they do not provide a natural context for parsing. 
When no example sentence can be found in the corpus, we back-off to parsing the PIE without context. After a parse has been found for each PIE (i.e. with or without context), the method proceeds identically to the regular parser-based method. We make use of the combination of two large corpora for the extraction of example sentences: the English Wikipedia, and ukWaC BIBREF17. For the Wikipedia corpus, we use a dump (13-01-2016) of the English-language Wikipedia, and remove all Wikipedia markup. This is done using WikiExtractor. The resulting files still contain some mark-up, which is removed heuristically. The resulting corpus contains mostly clean, raw, untokenized text, numbering approximately 1.78 billion tokens. As for ukWaC, all XML-markup was removed, and the corpus is converted to a one-sentence-per-line format. UkWaC is tokenized, which makes it difficult for a simple string match method to find PIEs containing punctuation, for example day in, day out. Therefore, all spaces before commas, apostrophes, and sentence-final punctuation are removed. The resulting corpus contains approximately 2.05 billion tokens, making for a total of 3.83 billion tokens in the combined ukWaC and Wikipedia corpus. Dictionary-based PIE Extraction ::: Results In order to determine which of the methods described previously produces the highest quality extraction of potentially idiomatic expressions, we evaluate them, in various settings, on the corpus described in Section SECREF5. For parser-based extraction, systems with and without in-context parsing, ignoring labels, and ignoring directionality are tested. For the three string-based extraction methods, varying numbers of intervening words and case sensitivity are evaluated. Evaluation is done using the development set, consisting of 22 documents and 1112 PIE candidates, and the test set, which consists of 23 documents and 1127 PIE candidates. For each method the best set of parameters and/or options is determined using the development set, after which the best variant by F1-score of each method is evaluated on the test set. Since these documents in the corpus are exhaustively annotated for PIEs (see Section SECREF40), we can calculate true and false positives, and false negatives, and thus precision, recall and F1-score. The exact spans are ignored, because the spans annotated in the evaluation corpus are not completely reliable. These were automatically generated during candidate extraction, as described in Section SECREF45. Rather, we count an extraction as a true positive if it finds the correct PIE type in the correct sentence. Note that we judge the system with the highest F1-score to be the best-performing system, since it is a clear and objective criterion. However, when using the system in practice, the best performance depends on the goal. When used as a preprocessing step for PIE disambiguation, the system with the highest F1-score is perhaps the most suitable, but as a corpus building tool, one might want to sacrifice some precision for an increase in recall. This helps to get the most comprehensive annotation of PIEs possible, without overloading the annotators with false extractions (i.e. non-PIEs), by maintaining high precision. The results for each system on the development set are presented in Tables TABREF70 and TABREF71. Generally, results are in line with expectations: (the best) parse-based methods are better than (the best) string-based methods, and within string-based methods, inflectional matching works best. 
The same goes for the different settings: case-sensitivity increases precision at the cost of recall, allowing intervening words increases recall at the cost of precision, and the same goes for the no labels and no directionality options for parser-based extraction. Overall, in-context parser-based extraction works best, with an F1 of 88.54%, whereas fuzzy matching does very poorly. Within string-based methods, exact matching has the highest precision, but low recall. Fuzzy matching increases recall at a disproportionately large precision cost, whereas inflectional matching combines the best of both worlds and has high recall at a small loss in precision. For the parser-based system, it is notable that parsing idioms within context yields a clear overall gain by greatly improving recall at a small cost in precision. We evaluate the best variant of each system, as determined by F1-score, on the test set. This gives us an indication of whether the system is robust enough, or was overfitted on the development data. Results on the test set are shown in Table TABREF72. On average, the results are lower than the results on the development set. The string-based methods perform clearly worse, with drops of about 4% F1-score for exact and inflectional match, and a large drop of almost 9% F1-score for fuzzy matching. The parser-based method, on the other hand, is more robust, with a small 0.59% increase in F1-score on the test set. Dictionary-based PIE Extraction ::: Analysis Broadly speaking, the PIE extraction systems presented above perform in line with expectations. It is nevertheless useful to see where the best-performing system misses out, and where improvements like in-context parsing help performance. We analyse the shortcomings of the in-context parser-based system by looking at the false positives and false negatives on the development set. We consider the output of the system with the best overall performance, since it will provide the clearest picture. The system extracts 529 PIEs in total, of which 54 are false extractions (false positives), and it misses 69 annotated PIE instances (false negatives). Most false positives stem from the system's failure to capture nuances of PIE annotation. This includes cases where PIEs contain, or are part of, proper nouns (Example SECREF73), PIEs that are part of coordination constructions (Example SECREF73), and incorrect attachments (Example SECREF73). Among these errors, sentences containing proper nouns are an especially frequent problem. . Drama series include [..] airline security thrills in Cleared For Takeoff and Head Over Heels [..] (in the clear - BNC - document CBC - sentence 5177) . They prefer silk, satin or lace underwear in tasteful black or ivory. (in the black - BNC - document CBC - sentence 14673) . [..] `I saw this chap make something out of an ordinary piece of wood — he fashioned it into an exquisite work of art.' (out of the woods - BNC - document ABV - sentence 1300) The main cause of false negatives is errors made by the parser. In order to correctly extract a PIE from a sentence, both the PIE and the sentence have to be parsed correctly, or at least parsed in the same way. This means a missed extraction can be caused by a wrong parse for the PIE or a wrong parse for the sentence. These two error types form the largest class of false negatives. Since some PIE types are rather frequent, a wrong parse for a single PIE type can potentially lead to a large number of missed extractions. 
It is not surprising that the parser makes many mistakes, since idioms often have unusual syntactic constructions (e.g. come a cropper) and contain words where default part-of-speech tags lead to the wrong interpretation (e.g. round is a preposition in round the bend, not a noun or adjective). This is especially true when idioms are parsed without context, and hence this is where in-context parsing provides the largest benefit: the number of PIEs which are parsed incorrectly drops, which leads to F1-scores on those types going from 0% to almost 100% (e.g. in light of and ring a bell). Since parser errors are the main contributor to false negatives, hurting recall, we observe that parsing idioms in context benefits only recall, improving it by 7 percentage points at only a small loss in precision. We find that adding context mainly helps for parsing expressions which are structurally relatively simple, but still ambiguous, such as rub shoulders, laughing stock, and round the bend. Compare, for example, the parse trees for laughing stock in isolation and within the extracted context sentence in Figures FIGREF74 and FIGREF75. When parsed in isolation, the relation between the two words is incorrectly labelled as a compound relation, whereas in context it is correctly labelled as a direct object relation. Note, however, that for the most difficult PIEs, embedding them in a context does not solve the parsing problem: a syntactically odd phrase is hard to parse (e.g. for the time being), and a syntactically odd phrase in a sentence makes for a syntactically odd sentence that is still hard to parse (e.g. `London for the time being had been abandoned.'). Finding example sentences turned out not to be a problem, since appropriate sentences were found for 559 of 591 PIE types. An alternative method for reducing parser error is to use a different, better parser. The Spacy parser was mainly chosen for implementation convenience and speed, and there are parsers which have better performance, as measured on established parsing benchmarks. To investigate the effectiveness of this method, we used the Stanford Neural Dependency Parser BIBREF39 to extract PIEs in the regular parsing, in-context parsing, and no labels settings. In all cases, using the Stanford parser yielded worse extraction performance than the Spacy parser. A possible explanation for why a supposedly better parser performs worse here is that parsers are optimised and trained to do well on established benchmarks, which consist of complete sentences, often from news texts. This does not necessarily correlate with parsing performance on short (sentences containing) idiomatic phrases. As such, we cannot assume that better overall parsing performance implies better PIE extraction performance. It should be noted that, when assessing the quality of PIE extraction performance, the parser-based methods are sensitive to specific PIE types. That is, if a single PIE type is parsed incorrectly, then it is highly probable that all instances of that type are missed. If this type is also highly frequent, this means that a small change in actual performance yields a large change in evaluation scores. Our goal is to have a PIE extraction system that is robust across all PIE types, and thus the current evaluation setting does not align exactly with our aim. Splitting out performance per PIE type reveals whether there is indeed a large variance in performance across types. 
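Such a per-type breakdown is straightforward to compute once gold annotations and system output are both represented as (sentence id, PIE type) pairs, which is the level at which true positives are counted here; the sketch below uses this representation (an illustrative data format, not the corpus format) with invented toy data.

```python
def per_type_scores(gold, predicted):
    """Precision and recall per PIE type, with gold and predicted extractions
    given as sets of (sentence_id, pie_type) pairs."""
    scores = {}
    for pie_type in sorted({t for _, t in gold | predicted}):
        g = {pair for pair in gold if pair[1] == pie_type}
        p = {pair for pair in predicted if pair[1] == pie_type}
        tp = len(g & p)
        scores[pie_type] = (tp / len(p) if p else 0.0,   # precision
                            tp / len(g) if g else 0.0)   # recall
    return scores

gold = {(1, "spill the beans"), (2, "in the black"), (3, "spill the beans")}
pred = {(1, "spill the beans"), (3, "spill the beans"), (4, "in the black")}
for pie_type, (prec, rec) in per_type_scores(gold, pred).items():
    print(f"{pie_type}: P={prec:.2f} R={rec:.2f}")
# in the black: P=0.00 R=0.00
# spill the beans: P=1.00 R=1.00
```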
Table TABREF76 shows the 25 most frequent PIE types in the corpus, and the performance of the in-context-parsing-based system on each. Except two cases (in the black and round the bend), we see that the performance is in the 80–100% range, even showing perfect performance on the majority of types. For none of the types do we see low precision paired with high recall, which indicates that the parser never matches a highly frequent non-PIE phrase. For the system with the no labels and no-directionality options (per-type numbers not shown here), however, this does occur. For example, ignoring the labels for the parse of the PIE have a go leads to the erroneous matching of many sentences containing a form of have to go, which is highly frequent, thus leading to a large drop in precision. Although performance is stable across the most frequent types, among the less frequent types it is more spotty. This hurts overall performance, and there are potential gains in mitigating the poor performance on these types, such as for the time being. At the same time, the string matching methods show much more stable performance across types, and some of them do so with very high precision. As such, a combination of two such methods could boost performance significantly. If we use a high-precision string match-based method, such as the exact string match variant with a precision of 97.35%, recall could be improved for the wrongly parsed PIE types, without a significant loss of precision. We experiment with two such combinations, by simply taking the union of the sets of extracted idioms of both systems, and filtering out duplicates. Results are shown in Table TABREF77. Both combinations show the expected effect: a clear gain in recall at a minimal loss in precision. Compared to the in-context-parsing-based system, the combination with exact string matching yields a gain in recall of over 6%, and the combination with inflectional string matching yields an even bigger gain of almost 8%, at precision losses of 0.6% and 0.8%, respectively. This indicates that the systems are very much complementary in the PIEs they extract. It also means that, when used in practice, combining inflectional string matching and parse-based extraction is the most reliable configuration. Conclusions and Outlook We present an in-depth study on the automatic extraction of potentially idiomatic expressions based on dictionaries. The purpose of automatic dictionary-based extraction is, on the one hand, to function as a pre-extraction step in the building of a large idiom-annotated corpus. On the other hand, it can function as part of an idiom extraction system when combined with a disambiguation component. In both cases, the ultimate goal is to improve the processing of idiomatic expressions within NLP. This work consists of three parts: a comparative evaluation of the coverage of idiom dictionaries, the annotation of a PIE corpus, and the development and evaluation of several dictionary-based PIE extraction methods. In the first part, we present a study of idiom dictionary coverage, which serves to answer the question of whether a single idiom dictionary, or a combination of dictionaries, can provide good coverage of the set of all English idioms. Based on the comparison of dictionaries to each other, we estimate that the overlap between them is limited, varying from 20% to 55%, which indicates a large divergence between the dictionaries. 
This can be explained by the fact that idioms vary widely by register, genre, language variety, and time period. In our case, it is also likely that the divergence is caused partly by the gap between crowdsourced dictionaries on the one hand, and a dictionary compiled by professional lexicographers on the other. Given these factors, we can conclude that a single dictionary cannot provide even close to complete coverage of English idioms, but that by combining dictionaries from various sources, significant gains can be made. Since `English idioms' are a diffuse and constantly changing set, we have no gold standard to compare to. As such, we conclude that multiple dictionaries should be used when possible, but that we cannot say anything definitive on the coverage of dictionaries with regard to the complete set of English idioms (which can only be approximated in the first place). A more comprehensive comparison of idiom resources could be made in the future by using more advanced automatic methods for matching, for example the method of BIBREF32 for measuring expression variability. This would make it easier to evaluate a larger number of dictionaries, since no manual effort would be required. In the second part, we experiment with the exhaustive annotation of PIEs in a corpus of documents from the BNC. Using a set of 591 PIE types, much larger and more varied than in existing resources, we show that it is very much possible to establish a working definition of PIE that allows for a large amount of variation, while still being useful for reliable annotation. This resulted in high inter-annotator agreement, ranging from 0.74 to 0.91 Fleiss' Kappa. This means that we can build a resource for evaluating a wide-coverage idiom extraction system with relatively little effort. The final corpus of PIEs with sense annotations is publicly available; it consists of 2,239 PIE candidates, of which 1,050 are actual PIE instances, and covers 278 different PIE types. Finally, several methods for the automatic extraction of PIE instances were developed and evaluated on the annotated PIE corpus. We tested methods of differing complexity, from simple string matching to dependency parse-based extraction. Comparison of these methods revealed that the more computationally complex method, parser-based extraction, works best. Parser-based extraction is especially effective in capturing a larger amount of variation, but is less precise than string-based methods, mostly because of parser error. The best overall setting of this method, which parses idioms within context, yielded an F1-score of 89.13% on the test set. Parser error can be partly compensated for by combining the parse-based method and the inflectional string match method, which yields an F1-score of 92.01% (on the development set). This aligns well with the findings of BIBREF27, who found that combining simpler and more complex methods improves over using only a simple method, in their case for extracting verb-particle constructions. This level of performance means that we can use the tool in corpus building. This greatly reduces the amount of manual extraction effort involved, while still maintaining a high level of recall. We make the source code for the different systems publicly available. Note that, although used here in the context of PIE extraction, our methods are equally applicable to other phrase extraction tasks, for example the extraction of light-verb constructions, metaphoric constructions, collocations, or any other type of multiword expression (cf.
BIBREF27, BIBREF25, BIBREF26). Similarly, our method can be conceived as a blueprint and extended to languages other than English. For this to be possible, for any given new language one would need a list of target expressions and, in the case of the parser-based method, a reliable syntactic parser. If this is not the case, the inflectional matching method can be used, which requires only a morphological analyser and generator. Obviously, for languages that are morphologically richer than English, one would need to develop strategies aimed at controlling non-exact matches, so as to enhance recall without sacrificing precision. Previous work on Italian, for example, has shown the feasibility of achieving such balance through controlled pattern matching BIBREF40. Languages that are typologically very different from English would obviously require a dedicated approach for the matching of PIEs in corpora, but the overall principles of extraction, using language-specific tools, could stay the same. Currently, no corpora containing annotation of PIEs exist for languages other than English. However, the PARSEME corpus BIBREF19 already contains idioms (only idiomatic readings) for many languages and would only need annotation of literal usages of idioms to make up a set of PIEs. Paired with the Universal Dependencies project BIBREF41, which increasingly provides annotated data as well as processing tools for an ever growing number of languages, this seems an excellent starting point for creating PIE resources in multiple languages.
Wiktionary, Oxford Dictionary of English Idioms, UsingEnglish.com (UE), Sporleder corpus, VNC dataset, SemEval-2013 Task 5 dataset
191d4fe8a37611b2485e715bb55ff1a30038ad6a
191d4fe8a37611b2485e715bb55ff1a30038ad6a_0
Q: Are experiments performed with any other pair of languages, how did proposed method perform compared to other models? Text: Introduction Machine translation (MT) research is biased towards language pairs including English due to the ease of collecting parallel corpora. Translation between non-English languages, e.g., French$\rightarrow $German, is usually done with pivoting through English, i.e., translating French (source) input to English (pivot) first with a French$\rightarrow $English model which is later translated to German (target) with a English$\rightarrow $German model BIBREF0, BIBREF1, BIBREF2. However, pivoting requires doubled decoding time and the translation errors are propagated or expanded via the two-step process. Therefore, it is more beneficial to build a single source$\rightarrow $target model directly for both efficiency and adequacy. Since non-English language pairs often have little or no parallel text, common choices to avoid pivoting in NMT are generating pivot-based synthetic data BIBREF3, BIBREF4 or training multilingual systems BIBREF5, BIBREF6. In this work, we present novel transfer learning techniques to effectively train a single, direct NMT model for a non-English language pair. We pre-train NMT models for source$\rightarrow $pivot and pivot$\rightarrow $target, which are transferred to a source$\rightarrow $target model. To optimize the usage of given source-pivot and pivot-target parallel data for the source$\rightarrow $target direction, we devise the following techniques to smooth the discrepancy between the pre-trained and final models: Step-wise pre-training with careful parameter freezing. Additional adapter component to familiarize the pre-trained decoder with the outputs of the pre-trained encoder. Cross-lingual encoder pre-training with autoencoding of the pivot language. Our methods are evaluated in two non-English language pairs of WMT 2019 news translation tasks: high-resource (French$\rightarrow $German) and low-resource (German$\rightarrow $Czech). We show that NMT models pre-trained with our methods are highly effective in various data conditions, when fine-tuned for source$\rightarrow $target with: Real parallel corpus Pivot-based synthetic parallel corpus (zero-resource) None (zero-shot) For each data condition, we consistently outperform strong baselines, e.g., multilingual, pivoting, or teacher-student, showing the universal effectiveness of our transfer learning schemes. The rest of the paper is organized as follows. We first review important previous works on pivot-based MT in Section SECREF2. Our three pre-training techniques are presented in Section SECREF3. Section SECREF4 shows main results of our methods with a detailed description of the experimental setups. Section SECREF5 studies variants of our methods and reports the results without source-target parallel resources or with large synthetic parallel data. Section 6 draws conclusion of this work with future research directions. Related Work In this section, we first review existing approaches to leverage a pivot language in low-resource/zero-resource MT. They can be divided into three categories: Pivot translation (pivoting). The most naive approach is reusing (already trained) source$\rightarrow $pivot and pivot$\rightarrow $target models directly, decoding twice via the pivot language BIBREF7, BIBREF0. 
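As a simple illustration of this two-step decoding, the sketch below chains two arbitrary translation functions; the lambda "models" are dummies standing in for trained French-to-English and English-to-German systems, and no particular toolkit API is implied.

def pivot_translate(source_sentence, src2piv, piv2tgt):
    # Decode twice: source -> pivot (English), then pivot -> target.
    # Any error in the first hypothesis is carried into the second step,
    # and total decoding time is roughly the sum of both models' times.
    pivot_hypothesis = src2piv(source_sentence)
    return piv2tgt(pivot_hypothesis)

# Dummy stand-ins for trained NMT models:
fr2en = lambda s: "the weather is nice today"
en2de = lambda s: "das Wetter ist heute schön"
print(pivot_translate("il fait beau aujourd'hui", fr2en, en2de))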
One can keep $N$-best hypotheses in the pivot language to reduce the prediction bias BIBREF1 and improve the final translation by system combination BIBREF8, which however increases the translation time even more. In multilingual NMT, firat2016zero modify the second translation step (pivot$\rightarrow $target) to use source and pivot language sentences together as the input. Pivot-based synthetic parallel data. We may translate the pivot side of given pivot-target parallel data using a pivot$\rightarrow $source model BIBREF3, or the other way around translating source-pivot data using a pivot$\rightarrow $target model BIBREF0. For NMT, the former is extended by zheng2017maximum to compute the expectation over synthetic source sentences. The latter is also called teacher-student approach BIBREF4, where the pivot$\rightarrow $target model (teacher) produces target hypotheses for training the source$\rightarrow $target model (student). Pivot-based model training. In phrase-based MT, there have been many efforts to combine phrase/word level features of source-pivot and pivot-target into a source$\rightarrow $target system BIBREF1, BIBREF2, BIBREF9, BIBREF10, BIBREF11, BIBREF12. In NMT, cheng2017joint jointly train for three translation directions of source-pivot-target by sharing network components, where ren2018triangular use the expectation-maximization algorithm with the target sentence as a latent variable. lu2018neural deploy intermediate recurrent layers which are common for multiple encoders and decoders, while johnson2017google share all components of a single multilingual model. Both methods train the model for language pairs involving English but enable zero-shot translation for unseen non-English language pairs. For this, ha2017effective encode the target language as an additional embedding and filter out non-target tokens in the output. lakew2017improving combine the multilingual training with synthetic data generation to improve the zero-shot performance iteratively, where sestorain2018zero applies the NMT prediction score and a language model score to each synthetic example as gradient weights. Our work is based on transfer learning BIBREF13 and belongs to the third category: model training. On the contrary to the multilingual joint training, we suggest two distinct steps: pre-training (with source-pivot and pivot-target data) and fine-tuning (with source-target data). With our proposed methods, we prevent the model from losing its capacity to other languages while utilizing the information from related language pairs well, as shown in the experiments (Section SECREF4). Our pivot adapter (Section SECREF18) shares the same motivation with the interlingua component of lu2018neural, but is much compact, independent of variable input length, and easy to train offline. The adapter training algorithm is adopted from bilingual word embedding mapping BIBREF14. Our cross-lingual encoder (Section SECREF26) is inspired by cross-lingual sentence embedding algorithms using NMT BIBREF15, BIBREF16. Transfer learning was first introduced to NMT by zoph2016transfer, where only the source language is switched before/after the transfer. nguyen2017transfer and kocmi2018trivial use shared subword vocabularies to work with more languages and help target language switches. kim2019effective propose additional techniques to enable NMT transfer even without shared vocabularies. 
To the best of our knowledge, we are the first to propose transfer learning strategies specialized in utilizing a pivot language, transferring a source encoder and a target decoder at the same time. Also, for the first time, we present successful zero-shot translation results only with pivot-based NMT pre-training. Pivot-based Transfer Learning Our methods are based on a simple transfer learning principle for NMT, adjusted to a usual data condition for non-English language pairs: lots of source-pivot and pivot-target parallel data, little (low-resource) or no (zero-resource) source-target parallel data. Here are the core steps of the plain transfer (Figure FIGREF10): Pre-train a source$\rightarrow $pivot model with a source-pivot parallel corpus and a pivot$\rightarrow $target model with a pivot-target parallel corpus. Initialize the source$\rightarrow $target model with the source encoder from the pre-trained source$\rightarrow $pivot model and the target decoder from the pre-trained pivot$\rightarrow $target model. Continue the training with a source-target parallel corpus. If we skip the last step (for zero-resource cases) and perform the source$\rightarrow $target translation directly, it corresponds to zero-shot translation. Thanks to the pivot language, we can pre-train a source encoder and a target decoder without changing the model architecture or training objective for NMT. On the contrary to other NMT transfer scenarios BIBREF13, BIBREF17, BIBREF18, this principle has no language mismatch between transferor and transferee on each source/target side. Experimental results (Section SECREF4) also show its competitiveness despite its simplicity. Nonetheless, the main caveat of this basic pre-training is that the source encoder is trained to be used by an English decoder, while the target decoder is trained to use the outputs of an English encoder — not of a source encoder. In the following, we propose three techniques to mitigate the inconsistency of source$\rightarrow $pivot and pivot$\rightarrow $target pre-training stages. Note that these techniques are not exclusive and some of them can complement others for a better performance of the final model. Pivot-based Transfer Learning ::: Step-wise Pre-training A simple remedy to make the pre-trained encoder and decoder refer to each other is to train a single NMT model for source$\rightarrow $pivot and pivot$\rightarrow $target in consecutive steps (Figure FIGREF15): Train a source$\rightarrow $pivot model with a source-pivot parallel corpus. Continue the training with a pivot-target parallel corpus, while freezing the encoder parameters of 1. In the second step, a target decoder is trained to use the outputs of the pre-trained source encoder as its input. Freezing the pre-trained encoder ensures that, even after the second step, the encoder is still modeling the source language although we train the NMT model for pivot$\rightarrow $target. Without the freezing, the encoder completely adapts to the pivot language input and is likely to forget source language sentences. We build a joint vocabulary of the source and pivot languages so that the encoder effectively represents both languages. The frozen encoder is pre-trained for the source language in the first step, but also able to encode a pivot language sentence in a similar representation space. It is more effective for linguistically similar languages where many tokens are common for both languages in the joint vocabulary. 
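The following is a minimal sketch, in PyTorch terms, of plain transfer and of the encoder freezing used in step-wise pre-training; the TinyNMT class with .encoder/.decoder submodules is a toy stand-in for the Transformer used here, not the actual OpenNMT model code.

import torch.nn as nn

class TinyNMT(nn.Module):
    # Toy stand-in: a real model would be a Transformer encoder/decoder.
    def __init__(self, src_vocab, tgt_vocab, dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Embedding(src_vocab, dim), nn.Linear(dim, dim))
        self.decoder = nn.Sequential(nn.Embedding(tgt_vocab, dim), nn.Linear(dim, dim))

# Pre-trained source->pivot and pivot->target models (untrained stand-ins here).
src2piv = TinyNMT(src_vocab=100, tgt_vocab=80)
piv2tgt = TinyNMT(src_vocab=80, tgt_vocab=120)

# Plain transfer: initialise the source->target model with the source encoder
# and the target decoder, then fine-tune on source-target data.
src2tgt = TinyNMT(src_vocab=100, tgt_vocab=120)
src2tgt.encoder.load_state_dict(src2piv.encoder.state_dict())
src2tgt.decoder.load_state_dict(piv2tgt.decoder.state_dict())

# Step-wise pre-training instead reuses the source->pivot encoder (built over a
# joint source+pivot vocabulary), freezes it, and trains a target-language decoder
# on pivot-target data, so the encoder keeps modelling the source side.
stepwise = TinyNMT(src_vocab=100, tgt_vocab=120)
stepwise.encoder.load_state_dict(src2piv.encoder.state_dict())
for p in stepwise.encoder.parameters():
    p.requires_grad = False
trainable = sum(p.numel() for p in stepwise.parameters() if p.requires_grad)
print(f"{trainable} decoder parameters remain trainable during step 2")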
Pivot-based Transfer Learning ::: Pivot Adapter Instead of the step-wise pre-training, we can also postprocess the network to enhance the connection between the source encoder and the target decoder which are pre-trained individually. Our idea is that, after the pre-training steps, we adapt the source encoder outputs to the pivot encoder outputs to which the target decoder is more familiar (Figure FIGREF19). We learn a linear mapping between the two representation spaces with a small source-pivot parallel corpus: Encode the source sentences with the source encoder of the pre-trained source$\rightarrow $pivot model. Encode the pivot sentences with the pivot encoder of the pre-trained pivot$\rightarrow $target model. Apply a pooling to each sentence of 1 and 2, extracting representation vectors for each sentence pair: ($\mathbf {s}$, $\mathbf {p}$). Train a mapping $\mathbf {M}\in \mathbb {R}^{d \times d}$ to minimize the distance between the pooled representations $\mathbf {s}\in \mathbb {R}^{d \times 1}$ and $\mathbf {p}\in \mathbb {R}^{d \times 1}$, where the source representation is first fed to the mapping: where $d$ is the hidden layer size of the encoders. Introducing matrix notations $\mathbf {S}\in \mathbb {R}^{d \times n}$ and $\mathbf {P}\in \mathbb {R}^{d \times n}$, which concatenate the pooled representations of all $n$ sentences for each side in the source-pivot corpus, we rewrite Equation DISPLAY_FORM24 as: which can be easily computed by the singular value decomposition (SVD) for a closed-form solution, if we put an orthogonality constraint on $\mathbf {M}$ BIBREF14. The resulting optimization is also called Procrustes problem. The learned mapping is multiplied to encoder outputs of all positions in the final source$\rightarrow $target tuning step. With this mapping, the source encoder emits sentence representations that lie in a similar space of the pivot encoder. Since the target decoder is pre-trained for pivot$\rightarrow $target and accustomed to receive the pivot encoder outputs, it should process the mapped encoder outputs better than the original source encoder outputs. Pivot-based Transfer Learning ::: Cross-lingual Encoder As a third technique, we modify the source$\rightarrow $pivot pre-training procedure to force the encoder to have cross-linguality over source and pivot languages; modeling source and pivot sentences in the same mathematical space. We achieve this by an additional autoencoding objective from a pivot sentence to the same pivot sentence (Figure FIGREF27). The encoder is fed with sentences of both source and pivot languages, which are processed by a shared decoder that outputs only the pivot language. In this way, the encoder is learned to produce representations in a shared space regardless of the input language, since they are used in the same decoder. This cross-lingual space facilitates smoother learning of the final source$\rightarrow $target model, because the decoder is pre-trained to translate the pivot language. The same input/output in autoencoding encourages, however, merely copying the input; it is said to be not proper for learning complex structure of the data domain BIBREF19. Denoising autoencoder addresses this by corrupting the input sentences by artificial noises BIBREF20. Learning to reconstruct clean sentences, it encodes linguistic structures of natural language sentences, e.g., word order, better than copying. 
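Before the concrete noise types are listed below, here is a minimal sketch of how the mixed training examples for the cross-lingual encoder could be assembled: ordinary source-to-pivot translation pairs plus noised-pivot-to-clean-pivot autoencoding pairs, all handled by one shared encoder and pivot-language decoder. The corrupt() argument is a placeholder for the noise operations defined next, and the data-assembly details are our own guess rather than the exact pipeline used here.

def build_cross_lingual_examples(src_piv_pairs, pivot_sentences, corrupt):
    # Translation objective: source sentence in, pivot sentence out.
    examples = [(src, piv) for src, piv in src_piv_pairs]
    # Denoising autoencoding objective: corrupted pivot sentence in, clean pivot sentence out.
    examples += [(corrupt(piv), piv) for piv in pivot_sentences]
    return examples

pairs = [("il fait beau aujourd'hui", "the weather is nice today")]
pivot_side = ["the weather is nice today", "we will meet again tomorrow"]
no_noise = lambda s: s   # placeholder; a real corrupt() applies the noise types below
for inp, out in build_cross_lingual_examples(pairs, pivot_side, no_noise):
    print(inp, "->", out)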
Here are the noise types we use BIBREF21: Drop tokens randomly with a probability $p_\mathrm {del}$ Replace tokens with a <BLANK> token randomly with a probability $p_\mathrm {rep}$ Permute the token positions randomly so that the difference between an original index and its new index is less than or equal to $d_\mathrm {per}$ We set $p_\mathrm {del}=0.1$, $p_\mathrm {rep}=0.1$, and $d_\mathrm {per}=3$ in our experiments. The key idea of all three methods is to build a closer connection between the pre-trained encoder and decoder via a pivot language. The difference is in when we do this job: Cross-lingual encoder (Section SECREF26) changes the encoder pre-training stage (source$\rightarrow $pivot), while step-wise pre-training (Section SECREF14) modifies decoder pre-training stage (pivot$\rightarrow $target). Pivot adapter (Section SECREF18) is applied after all pre-training steps. Main Results We evaluate the proposed transfer learning techniques in two non-English language pairs of WMT 2019 news translation tasks: French$\rightarrow $German and German$\rightarrow $Czech. Data We used the News Commentary v14 parallel corpus and newstest2008-2010 test sets as the source-target training data for both tasks. The newstest sets were oversampled four times. The German$\rightarrow $Czech task was originally limited to unsupervised learning (using only monolingual corpora) in WMT 2019, but we relaxed this constraint by the available parallel data. We used newstest2011 as a validation set and newstest2012/newstest2013 as the test sets. Both language pairs have much abundant parallel data in source-pivot and pivot-target with English as the pivot language. Detailed corpus statistics are given in Table TABREF33. Preprocessing We used the Moses tokenizer and applied true-casing on all corpora. For all transfer learning setups, we learned byte pair encoding (BPE) BIBREF22 for each language individually with 32k merge operations, except for cross-lingual encoder training with joint BPE only over source and pivot languages. This is for modularity of pre-trained models: for example, a French$\rightarrow $English model trained with joint French/English/German BPE could be transferred smoothly to a French$\rightarrow $German model, but would not be optimal for a transfer to e.g., a French$\rightarrow $Korean model. Once we pre-train an NMT model with separate BPE vocabularies, we can reuse it for various final language pairs without wasting unused portion of subword vocabularies (e.g., German-specific tokens in building a French$\rightarrow $Korean model). On the contrary, baselines used joint BPE over all languages with also 32k merges. Model and Training The 6-layer base Transformer architecture BIBREF23 was used for all of our experiments. Batch size was set to 4,096 tokens. Each checkpoint amounts to 10k updates for pre-training and 20k updates for fine-tuning. Each model was optimized with Adam BIBREF24 with an initial learning rate of 0.0001, which was multiplied by 0.7 whenever perplexity on the validation set was not improved for three checkpoints. When it was not improved for eight checkpoints, we stopped the training. The NMT model training and transfer were done with the OpenNMT toolkit BIBREF25. Pivot adapter was trained using the Muse toolkit BIBREF26, which was originally developed for bilingual word embeddings but we adjusted for matching sentence representations. 
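To make the adapter training concrete, here is a minimal NumPy sketch of the closed-form solution on mean-pooled sentence representations: stack the pooled source and pivot vectors into d x n matrices S and P and take M = U V^T from the SVD of P S^T, the orthogonal Procrustes solution; at fine-tuning time M would be applied to the source encoder outputs at every position. The random matrices stand in for real encoder outputs, and this is a simplified re-implementation of the idea rather than the adjusted Muse code.

import numpy as np

def mean_pool(states):
    # states: (sentence_length, d) encoder outputs for one sentence -> (d,) vector.
    return states.mean(axis=0)

def fit_pivot_adapter(S, P):
    # S, P: (d, n) pooled representations of the same n source-pivot sentence pairs.
    # Orthogonal Procrustes: M = argmin ||M S - P||_F subject to M orthogonal,
    # solved in closed form as M = U V^T where U Sigma V^T = SVD(P S^T).
    U, _, Vt = np.linalg.svd(P @ S.T)
    return U @ Vt

d, n = 512, 1000
rng = np.random.default_rng(0)
S = rng.standard_normal((d, n))
rotation = np.linalg.qr(rng.standard_normal((d, d)))[0]   # pretend the pivot space is a rotation of the source space
P = rotation @ S
M = fit_pivot_adapter(S, P)
print(np.allclose(M @ S, P, atol=1e-6))   # the learned mapping recovers the rotation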
Baselines We thoroughly compare our approaches to the following baselines: Direct source$\rightarrow $target: A standard NMT model trained on given source$\rightarrow $target parallel data. Multilingual: A single, shared NMT model for multiple translation directions BIBREF6. Many-to-many: Trained for all possible directions among source, target, and pivot languages. Many-to-one: Trained for only the directions to target language, i.e., source$\rightarrow $target and pivot$\rightarrow $target, which tends to work better than many-to-many systems BIBREF27. In Table TABREF34, we report principal results after fine-tuning the pre-trained models using source-target parallel data. As for baselines, multilingual models are better than a direct NMT model. The many-to-many models surpass the many-to-one models; since both tasks are in a low-resource setup, the model gains a lot from related language pairs even if the target languages do not match. Plain transfer of pre-trained encoder/decoder without additional techniques (Figure FIGREF10) shows a nice improvement over the direct baseline: up to +2.7% Bleu for French$\rightarrow $German and +5.2% Bleu for German$\rightarrow $Czech. Pivot adapter provides an additional boost of maximum +0.7% Bleu or -0.7% Ter. Cross-lingual encoder pre-training is proved to be not effective in the plain transfer setup. It shows no improvements over plain transfer in French$\rightarrow $German, and 0.4% Bleu worse performance in German$\rightarrow $Czech. We conjecture that the cross-lingual encoder needs a lot more data to be fine-tuned for another decoder, where the encoder capacity is basically divided into two languages at the beginning of the fine-tuning. On the other hand, the pivot adapter directly improves the connection to an individually pre-trained decoder, which works nicely with small fine-tuning data. Pivot adapter gives an additional improvement on top of the cross-lingual encoder; up to +0.4% Bleu in French$\rightarrow $German and +0.6% Bleu in German$\rightarrow $Czech. In this case, we extract source and pivot sentence representations from the same shared encoder for training the adapter. Step-wise pre-training gives a big improvement up to +1.2% Bleu or -1.6% Ter against plain transfer in French$\rightarrow $German. It shows the best performance in both tasks when combined with the cross-lingual encoder: up to +1.2% Bleu in French$\rightarrow $German and +2.6% Bleu in German$\rightarrow $Czech, compared to the multilingual baseline. Step-wise pre-training prevents the cross-lingual encoder from degeneration, since the pivot$\rightarrow $target pre-training (Step 2 in Section SECREF14) also learns the encoder-decoder connection with a large amount of data — in addition to the source$\rightarrow $target tuning step afterwards. Note that the pivot adapter, which inserts an extra layer between the encoder and decoder, is not appropriate after the step-wise pre-training; the decoder is already trained to correlate well with the pre-trained encoder. We experimented with the pivot adapter on top of step-wise pre-trained models — with or without cross-lingual encoder — but obtained detrimental results. Compared to pivot translation (Table TABREF43), our best results are also clearly better in French $\rightarrow $German and comparable in German$\rightarrow $Czech. Analysis In this section, we conduct ablation studies on the variants of our methods and see how they perform in different data conditions. 
Analysis ::: Pivot Adapter Firstly, we compare variants of the pivot adapter (Section SECREF18) in Table TABREF40. The row “None” shows that a randomly initialized linear layer already guides the pre-trained encoder/decoder to harmonize with each other. Of course, when we train the adapter to map source encoder outputs to pivot encoder outputs, the performance gets better. For compressing encoder outputs over positions, average-pooling is better than max-pooling. We observed the same trend in the other test set and in French$\rightarrow $German. We also tested nonlinear pivot adapter, e.g., a 2-layer feedforward network with ReLU activations, but the performance was not better than just a linear adapter. Analysis ::: Cross-lingual Encoder Table TABREF42 verifies that the noisy input in autoencoding is indeed beneficial to our cross-lingual encoder. It improves the final translation performance by maximum +2.1% Bleu, compared to using the copying autoencoding objective. As the training data for autoencoding, we also compare between purely monolingual data and the pivot side of the source-pivot parallel data. By the latter, one can expect a stronger signal for a joint encoder representation space, since two different inputs (in source/pivot languages) are used to produce the exactly same output sentence (in pivot language). The results also tell that there are slight but consistent improvements by using the pivot part of the parallel data. Again, we performed these comparisons in the other test set and German$\rightarrow $Czech, observing the same tendency in results. Analysis ::: Zero-resource/Zero-shot Scenarios If we do not have an access to any source-target parallel data (zero-resource), non-English language pairs have two options for still building a working NMT system, given source-English and target-English parallel data: Zero-shot: Perform source$\rightarrow $target translation using models which have not seen any source-target parallel sentences, e.g., multilingual models or pivoting (Section SECREF2.UNKREF7). Pivot-based synthetic data: Generate synthetic source-target parallel data using source$\leftrightarrow $English and target$\leftrightarrow $English models (Section SECREF2.UNKREF8). Use this data to train a model for source$\rightarrow $target. Table TABREF43 shows how our pre-trained models perform in zero-resource scenarios with the two options. Note that, unlike Table TABREF34, the multilingual baselines exclude source$\rightarrow $target and target$\rightarrow $source directions. First of all, plain transfer, where the encoder and the decoder are pre-trained separately, is poor in zero-shot scenarios. It simply fails to connect different representation spaces of the pre-trained encoder and decoder. In our experiments, neither pivot adapter nor cross-lingual encoder could enhance the zero-shot translation of plain transfer. Step-wise pre-training solves this problem by changing the decoder pre-training to familiarize itself with representations from an already pre-trained encoder. It achieves zero-shot performance of 11.5% Bleu in French$\rightarrow $German and 6.5% Bleu in German$\rightarrow $Czech (newstest2013), while showing comparable or better fine-tuned performance against plain transfer (see also Table TABREF34). With the pre-trained cross-lingual encoder, the zero-shot performance of step-wise pre-training is superior to that of pivot translation in French$\rightarrow $German with only a single model. 
It is worse than pivot translation in German$\rightarrow $Czech. We think that the data size of pivot-target is critical in pivot translation; relatively huge data for English$\rightarrow $Czech make the pivot translation stronger. Note again that, nevertheless, pivoting (second row) is very poor in efficiency since it performs decoding twice with the individual models. For the second option (pivot-based synthetic data), we compare our methods against the sentence-level beam search version of the teacher-student framework BIBREF4, with which we generated 10M synthetic parallel sentence pairs. We also tried other variants of chen2017teacher, e.g., $N$-best hypotheses with weights, but there were no consistent improvements. Due to enormous bilingual signals, the model trained with the teacher-student synthetic data outperforms pivot translation. If tuned with the same synthetic data, our pre-trained model performs even better (last row), achieving the best zero-resource results on three of the four test sets. We also evaluate our best German$\rightarrow $Czech zero-resource model on newstest2019 and compare it with the participants of the WMT 2019 unsupervised news translation task. Ours yield 17.2% Bleu, which is much better than the best single unsupervised system of the winner of the task (15.5%) BIBREF28. We argue that, if one has enough source-English and English-target parallel data for a non-English language pair, it is more encouraged to adopt pivot-based transfer learning than unsupervised MT — even if there is no source-target parallel data. In this case, unsupervised MT unnecessarily restricts the data condition to using only monolingual data and its high computational cost does not pay off; simple pivot-based pre-training steps are more efficient and effective. Analysis ::: Large-scale Results We also study the effect of pivot-based transfer learning in more data-rich scenarios: 1) with large synthetic source-target data (German$\rightarrow $Czech), and 2) with larger real source-target data in combination with the synthetic data (French$\rightarrow $German). We generated synthetic parallel data using pivot-based back-translation BIBREF3: 5M sentence pairs for German$\rightarrow $Czech and 9.1M sentence pairs for French$\rightarrow $German. For the second scenario, we also prepared 2.3M more lines of French$\rightarrow $German real parallel data from Europarl v7 and Common Crawl corpora. Table TABREF47 shows our transfer learning results fine-tuned with a combination of given parallel data and generated synthetic parallel data. The real source-target parallel data are oversampled to make the ratio of real and synthetic data to be 1:2. As expected, the direct source$\rightarrow $target model can be improved considerably by training with large synthetic data. Plain pivot-based transfer outperforms the synthetic data baseline by up to +1.9% Bleu or -3.3% Ter. However, the pivot adapter or cross-lingual encoder gives marginal or inconsistent improvements over the plain transfer. We suppose that the entire model can be tuned sufficiently well without additional adapter layers or a well-curated training process, once we have a large source-target parallel corpus for fine-tuning. Conclusion In this paper, we propose three effective techniques for transfer learning using pivot-based parallel data. The principle is to pre-train NMT models with source-pivot and pivot-target parallel data and transfer the source encoder and the target decoder. 
To resolve the input/output discrepancy of the pre-trained encoder and decoder, we 1) consecutively pre-train the model for source$\rightarrow $pivot and pivot$\rightarrow $target, 2) append an additional layer after the source encoder which adapts the encoder output to the pivot language space, or 3) train a cross-lingual encoder over source and pivot languages. Our methods are suitable for most of the non-English language pairs with lots of parallel data involving English. Experiments in WMT 2019 French$\rightarrow $German and German$\rightarrow $Czech tasks show that our methods significantly improve the final source$\rightarrow $target translation performance, outperforming multilingual models by up to +2.6% Bleu. The methods are applicable also to zero-resource language pairs, showing a strong performance in the zero-shot setting or with pivot-based synthetic data. We claim that our methods expand the advances in NMT to many more non-English language pairs that are not yet studied well. Future work will be zero-shot translation without step-wise pre-training, i.e., combining individually pre-trained encoders and decoders freely for a fast development of NMT systems for a new non-English language pair. Acknowledgments This work has received funding from the European Research Council (ERC) (under the European Union's Horizon 2020 research and innovation programme, grant agreement No 694537, project "SEQCLAS") and eBay Inc. The work reflects only the authors' views and none of the funding agencies is responsible for any use that may be made of the information it contains.
No
6e76f114209f59b027ec3b3c8c9cdfc3e682589f
6e76f114209f59b027ec3b3c8c9cdfc3e682589f_0
Q: Is pivot language used in experiments English or some other language? Text: Introduction Machine translation (MT) research is biased towards language pairs including English due to the ease of collecting parallel corpora. Translation between non-English languages, e.g., French$\rightarrow $German, is usually done with pivoting through English, i.e., translating French (source) input to English (pivot) first with a French$\rightarrow $English model which is later translated to German (target) with a English$\rightarrow $German model BIBREF0, BIBREF1, BIBREF2. However, pivoting requires doubled decoding time and the translation errors are propagated or expanded via the two-step process. Therefore, it is more beneficial to build a single source$\rightarrow $target model directly for both efficiency and adequacy. Since non-English language pairs often have little or no parallel text, common choices to avoid pivoting in NMT are generating pivot-based synthetic data BIBREF3, BIBREF4 or training multilingual systems BIBREF5, BIBREF6. In this work, we present novel transfer learning techniques to effectively train a single, direct NMT model for a non-English language pair. We pre-train NMT models for source$\rightarrow $pivot and pivot$\rightarrow $target, which are transferred to a source$\rightarrow $target model. To optimize the usage of given source-pivot and pivot-target parallel data for the source$\rightarrow $target direction, we devise the following techniques to smooth the discrepancy between the pre-trained and final models: Step-wise pre-training with careful parameter freezing. Additional adapter component to familiarize the pre-trained decoder with the outputs of the pre-trained encoder. Cross-lingual encoder pre-training with autoencoding of the pivot language. Our methods are evaluated in two non-English language pairs of WMT 2019 news translation tasks: high-resource (French$\rightarrow $German) and low-resource (German$\rightarrow $Czech). We show that NMT models pre-trained with our methods are highly effective in various data conditions, when fine-tuned for source$\rightarrow $target with: Real parallel corpus Pivot-based synthetic parallel corpus (zero-resource) None (zero-shot) For each data condition, we consistently outperform strong baselines, e.g., multilingual, pivoting, or teacher-student, showing the universal effectiveness of our transfer learning schemes. The rest of the paper is organized as follows. We first review important previous works on pivot-based MT in Section SECREF2. Our three pre-training techniques are presented in Section SECREF3. Section SECREF4 shows main results of our methods with a detailed description of the experimental setups. Section SECREF5 studies variants of our methods and reports the results without source-target parallel resources or with large synthetic parallel data. Section 6 draws conclusion of this work with future research directions. Related Work In this section, we first review existing approaches to leverage a pivot language in low-resource/zero-resource MT. They can be divided into three categories: Pivot translation (pivoting). The most naive approach is reusing (already trained) source$\rightarrow $pivot and pivot$\rightarrow $target models directly, decoding twice via the pivot language BIBREF7, BIBREF0. One can keep $N$-best hypotheses in the pivot language to reduce the prediction bias BIBREF1 and improve the final translation by system combination BIBREF8, which however increases the translation time even more. 
In multilingual NMT, firat2016zero modify the second translation step (pivot$\rightarrow $target) to use source and pivot language sentences together as the input. Pivot-based synthetic parallel data. We may translate the pivot side of given pivot-target parallel data using a pivot$\rightarrow $source model BIBREF3, or the other way around translating source-pivot data using a pivot$\rightarrow $target model BIBREF0. For NMT, the former is extended by zheng2017maximum to compute the expectation over synthetic source sentences. The latter is also called teacher-student approach BIBREF4, where the pivot$\rightarrow $target model (teacher) produces target hypotheses for training the source$\rightarrow $target model (student). Pivot-based model training. In phrase-based MT, there have been many efforts to combine phrase/word level features of source-pivot and pivot-target into a source$\rightarrow $target system BIBREF1, BIBREF2, BIBREF9, BIBREF10, BIBREF11, BIBREF12. In NMT, cheng2017joint jointly train for three translation directions of source-pivot-target by sharing network components, where ren2018triangular use the expectation-maximization algorithm with the target sentence as a latent variable. lu2018neural deploy intermediate recurrent layers which are common for multiple encoders and decoders, while johnson2017google share all components of a single multilingual model. Both methods train the model for language pairs involving English but enable zero-shot translation for unseen non-English language pairs. For this, ha2017effective encode the target language as an additional embedding and filter out non-target tokens in the output. lakew2017improving combine the multilingual training with synthetic data generation to improve the zero-shot performance iteratively, where sestorain2018zero applies the NMT prediction score and a language model score to each synthetic example as gradient weights. Our work is based on transfer learning BIBREF13 and belongs to the third category: model training. On the contrary to the multilingual joint training, we suggest two distinct steps: pre-training (with source-pivot and pivot-target data) and fine-tuning (with source-target data). With our proposed methods, we prevent the model from losing its capacity to other languages while utilizing the information from related language pairs well, as shown in the experiments (Section SECREF4). Our pivot adapter (Section SECREF18) shares the same motivation with the interlingua component of lu2018neural, but is much compact, independent of variable input length, and easy to train offline. The adapter training algorithm is adopted from bilingual word embedding mapping BIBREF14. Our cross-lingual encoder (Section SECREF26) is inspired by cross-lingual sentence embedding algorithms using NMT BIBREF15, BIBREF16. Transfer learning was first introduced to NMT by zoph2016transfer, where only the source language is switched before/after the transfer. nguyen2017transfer and kocmi2018trivial use shared subword vocabularies to work with more languages and help target language switches. kim2019effective propose additional techniques to enable NMT transfer even without shared vocabularies. To the best of our knowledge, we are the first to propose transfer learning strategies specialized in utilizing a pivot language, transferring a source encoder and a target decoder at the same time. Also, for the first time, we present successful zero-shot translation results only with pivot-based NMT pre-training. 
Pivot-based Transfer Learning Our methods are based on a simple transfer learning principle for NMT, adjusted to a usual data condition for non-English language pairs: lots of source-pivot and pivot-target parallel data, little (low-resource) or no (zero-resource) source-target parallel data. Here are the core steps of the plain transfer (Figure FIGREF10): Pre-train a source$\rightarrow $pivot model with a source-pivot parallel corpus and a pivot$\rightarrow $target model with a pivot-target parallel corpus. Initialize the source$\rightarrow $target model with the source encoder from the pre-trained source$\rightarrow $pivot model and the target decoder from the pre-trained pivot$\rightarrow $target model. Continue the training with a source-target parallel corpus. If we skip the last step (for zero-resource cases) and perform the source$\rightarrow $target translation directly, it corresponds to zero-shot translation. Thanks to the pivot language, we can pre-train a source encoder and a target decoder without changing the model architecture or training objective for NMT. On the contrary to other NMT transfer scenarios BIBREF13, BIBREF17, BIBREF18, this principle has no language mismatch between transferor and transferee on each source/target side. Experimental results (Section SECREF4) also show its competitiveness despite its simplicity. Nonetheless, the main caveat of this basic pre-training is that the source encoder is trained to be used by an English decoder, while the target decoder is trained to use the outputs of an English encoder — not of a source encoder. In the following, we propose three techniques to mitigate the inconsistency of source$\rightarrow $pivot and pivot$\rightarrow $target pre-training stages. Note that these techniques are not exclusive and some of them can complement others for a better performance of the final model. Pivot-based Transfer Learning ::: Step-wise Pre-training A simple remedy to make the pre-trained encoder and decoder refer to each other is to train a single NMT model for source$\rightarrow $pivot and pivot$\rightarrow $target in consecutive steps (Figure FIGREF15): Train a source$\rightarrow $pivot model with a source-pivot parallel corpus. Continue the training with a pivot-target parallel corpus, while freezing the encoder parameters of 1. In the second step, a target decoder is trained to use the outputs of the pre-trained source encoder as its input. Freezing the pre-trained encoder ensures that, even after the second step, the encoder is still modeling the source language although we train the NMT model for pivot$\rightarrow $target. Without the freezing, the encoder completely adapts to the pivot language input and is likely to forget source language sentences. We build a joint vocabulary of the source and pivot languages so that the encoder effectively represents both languages. The frozen encoder is pre-trained for the source language in the first step, but also able to encode a pivot language sentence in a similar representation space. It is more effective for linguistically similar languages where many tokens are common for both languages in the joint vocabulary. Pivot-based Transfer Learning ::: Pivot Adapter Instead of the step-wise pre-training, we can also postprocess the network to enhance the connection between the source encoder and the target decoder which are pre-trained individually. 
Our idea is that, after the pre-training steps, we adapt the source encoder outputs to the pivot encoder outputs to which the target decoder is more familiar (Figure FIGREF19). We learn a linear mapping between the two representation spaces with a small source-pivot parallel corpus: Encode the source sentences with the source encoder of the pre-trained source$\rightarrow $pivot model. Encode the pivot sentences with the pivot encoder of the pre-trained pivot$\rightarrow $target model. Apply a pooling to each sentence of 1 and 2, extracting representation vectors for each sentence pair: ($\mathbf {s}$, $\mathbf {p}$). Train a mapping $\mathbf {M}\in \mathbb {R}^{d \times d}$ to minimize the distance between the pooled representations $\mathbf {s}\in \mathbb {R}^{d \times 1}$ and $\mathbf {p}\in \mathbb {R}^{d \times 1}$, where the source representation is first fed to the mapping: where $d$ is the hidden layer size of the encoders. Introducing matrix notations $\mathbf {S}\in \mathbb {R}^{d \times n}$ and $\mathbf {P}\in \mathbb {R}^{d \times n}$, which concatenate the pooled representations of all $n$ sentences for each side in the source-pivot corpus, we rewrite Equation DISPLAY_FORM24 as: which can be easily computed by the singular value decomposition (SVD) for a closed-form solution, if we put an orthogonality constraint on $\mathbf {M}$ BIBREF14. The resulting optimization is also called Procrustes problem. The learned mapping is multiplied to encoder outputs of all positions in the final source$\rightarrow $target tuning step. With this mapping, the source encoder emits sentence representations that lie in a similar space of the pivot encoder. Since the target decoder is pre-trained for pivot$\rightarrow $target and accustomed to receive the pivot encoder outputs, it should process the mapped encoder outputs better than the original source encoder outputs. Pivot-based Transfer Learning ::: Cross-lingual Encoder As a third technique, we modify the source$\rightarrow $pivot pre-training procedure to force the encoder to have cross-linguality over source and pivot languages; modeling source and pivot sentences in the same mathematical space. We achieve this by an additional autoencoding objective from a pivot sentence to the same pivot sentence (Figure FIGREF27). The encoder is fed with sentences of both source and pivot languages, which are processed by a shared decoder that outputs only the pivot language. In this way, the encoder is learned to produce representations in a shared space regardless of the input language, since they are used in the same decoder. This cross-lingual space facilitates smoother learning of the final source$\rightarrow $target model, because the decoder is pre-trained to translate the pivot language. The same input/output in autoencoding encourages, however, merely copying the input; it is said to be not proper for learning complex structure of the data domain BIBREF19. Denoising autoencoder addresses this by corrupting the input sentences by artificial noises BIBREF20. Learning to reconstruct clean sentences, it encodes linguistic structures of natural language sentences, e.g., word order, better than copying. 
Here are the noise types we use BIBREF21: Drop tokens randomly with a probability $p_\mathrm {del}$ Replace tokens with a <BLANK> token randomly with a probability $p_\mathrm {rep}$ Permute the token positions randomly so that the difference between an original index and its new index is less than or equal to $d_\mathrm {per}$ We set $p_\mathrm {del}=0.1$, $p_\mathrm {rep}=0.1$, and $d_\mathrm {per}=3$ in our experiments. The key idea of all three methods is to build a closer connection between the pre-trained encoder and decoder via a pivot language. The difference is in when we do this job: Cross-lingual encoder (Section SECREF26) changes the encoder pre-training stage (source$\rightarrow $pivot), while step-wise pre-training (Section SECREF14) modifies decoder pre-training stage (pivot$\rightarrow $target). Pivot adapter (Section SECREF18) is applied after all pre-training steps. Main Results We evaluate the proposed transfer learning techniques in two non-English language pairs of WMT 2019 news translation tasks: French$\rightarrow $German and German$\rightarrow $Czech. Data We used the News Commentary v14 parallel corpus and newstest2008-2010 test sets as the source-target training data for both tasks. The newstest sets were oversampled four times. The German$\rightarrow $Czech task was originally limited to unsupervised learning (using only monolingual corpora) in WMT 2019, but we relaxed this constraint by the available parallel data. We used newstest2011 as a validation set and newstest2012/newstest2013 as the test sets. Both language pairs have much abundant parallel data in source-pivot and pivot-target with English as the pivot language. Detailed corpus statistics are given in Table TABREF33. Preprocessing We used the Moses tokenizer and applied true-casing on all corpora. For all transfer learning setups, we learned byte pair encoding (BPE) BIBREF22 for each language individually with 32k merge operations, except for cross-lingual encoder training with joint BPE only over source and pivot languages. This is for modularity of pre-trained models: for example, a French$\rightarrow $English model trained with joint French/English/German BPE could be transferred smoothly to a French$\rightarrow $German model, but would not be optimal for a transfer to e.g., a French$\rightarrow $Korean model. Once we pre-train an NMT model with separate BPE vocabularies, we can reuse it for various final language pairs without wasting unused portion of subword vocabularies (e.g., German-specific tokens in building a French$\rightarrow $Korean model). On the contrary, baselines used joint BPE over all languages with also 32k merges. Model and Training The 6-layer base Transformer architecture BIBREF23 was used for all of our experiments. Batch size was set to 4,096 tokens. Each checkpoint amounts to 10k updates for pre-training and 20k updates for fine-tuning. Each model was optimized with Adam BIBREF24 with an initial learning rate of 0.0001, which was multiplied by 0.7 whenever perplexity on the validation set was not improved for three checkpoints. When it was not improved for eight checkpoints, we stopped the training. The NMT model training and transfer were done with the OpenNMT toolkit BIBREF25. Pivot adapter was trained using the Muse toolkit BIBREF26, which was originally developed for bilingual word embeddings but we adjusted for matching sentence representations. 
Baselines We thoroughly compare our approaches to the following baselines: Direct source$\rightarrow $target: A standard NMT model trained on given source$\rightarrow $target parallel data. Multilingual: A single, shared NMT model for multiple translation directions BIBREF6. Many-to-many: Trained for all possible directions among source, target, and pivot languages. Many-to-one: Trained for only the directions to target language, i.e., source$\rightarrow $target and pivot$\rightarrow $target, which tends to work better than many-to-many systems BIBREF27. In Table TABREF34, we report principal results after fine-tuning the pre-trained models using source-target parallel data. As for baselines, multilingual models are better than a direct NMT model. The many-to-many models surpass the many-to-one models; since both tasks are in a low-resource setup, the model gains a lot from related language pairs even if the target languages do not match. Plain transfer of pre-trained encoder/decoder without additional techniques (Figure FIGREF10) shows a nice improvement over the direct baseline: up to +2.7% Bleu for French$\rightarrow $German and +5.2% Bleu for German$\rightarrow $Czech. Pivot adapter provides an additional boost of maximum +0.7% Bleu or -0.7% Ter. Cross-lingual encoder pre-training is proved to be not effective in the plain transfer setup. It shows no improvements over plain transfer in French$\rightarrow $German, and 0.4% Bleu worse performance in German$\rightarrow $Czech. We conjecture that the cross-lingual encoder needs a lot more data to be fine-tuned for another decoder, where the encoder capacity is basically divided into two languages at the beginning of the fine-tuning. On the other hand, the pivot adapter directly improves the connection to an individually pre-trained decoder, which works nicely with small fine-tuning data. Pivot adapter gives an additional improvement on top of the cross-lingual encoder; up to +0.4% Bleu in French$\rightarrow $German and +0.6% Bleu in German$\rightarrow $Czech. In this case, we extract source and pivot sentence representations from the same shared encoder for training the adapter. Step-wise pre-training gives a big improvement up to +1.2% Bleu or -1.6% Ter against plain transfer in French$\rightarrow $German. It shows the best performance in both tasks when combined with the cross-lingual encoder: up to +1.2% Bleu in French$\rightarrow $German and +2.6% Bleu in German$\rightarrow $Czech, compared to the multilingual baseline. Step-wise pre-training prevents the cross-lingual encoder from degeneration, since the pivot$\rightarrow $target pre-training (Step 2 in Section SECREF14) also learns the encoder-decoder connection with a large amount of data — in addition to the source$\rightarrow $target tuning step afterwards. Note that the pivot adapter, which inserts an extra layer between the encoder and decoder, is not appropriate after the step-wise pre-training; the decoder is already trained to correlate well with the pre-trained encoder. We experimented with the pivot adapter on top of step-wise pre-trained models — with or without cross-lingual encoder — but obtained detrimental results. Compared to pivot translation (Table TABREF43), our best results are also clearly better in French $\rightarrow $German and comparable in German$\rightarrow $Czech. Analysis In this section, we conduct ablation studies on the variants of our methods and see how they perform in different data conditions. 
Analysis ::: Pivot Adapter Firstly, we compare variants of the pivot adapter (Section SECREF18) in Table TABREF40. The row “None” shows that a randomly initialized linear layer already guides the pre-trained encoder/decoder to harmonize with each other. Of course, when we train the adapter to map source encoder outputs to pivot encoder outputs, the performance gets better. For compressing encoder outputs over positions, average-pooling is better than max-pooling. We observed the same trend in the other test set and in French$\rightarrow $German. We also tested nonlinear pivot adapter, e.g., a 2-layer feedforward network with ReLU activations, but the performance was not better than just a linear adapter. Analysis ::: Cross-lingual Encoder Table TABREF42 verifies that the noisy input in autoencoding is indeed beneficial to our cross-lingual encoder. It improves the final translation performance by maximum +2.1% Bleu, compared to using the copying autoencoding objective. As the training data for autoencoding, we also compare between purely monolingual data and the pivot side of the source-pivot parallel data. By the latter, one can expect a stronger signal for a joint encoder representation space, since two different inputs (in source/pivot languages) are used to produce the exactly same output sentence (in pivot language). The results also tell that there are slight but consistent improvements by using the pivot part of the parallel data. Again, we performed these comparisons in the other test set and German$\rightarrow $Czech, observing the same tendency in results. Analysis ::: Zero-resource/Zero-shot Scenarios If we do not have an access to any source-target parallel data (zero-resource), non-English language pairs have two options for still building a working NMT system, given source-English and target-English parallel data: Zero-shot: Perform source$\rightarrow $target translation using models which have not seen any source-target parallel sentences, e.g., multilingual models or pivoting (Section SECREF2.UNKREF7). Pivot-based synthetic data: Generate synthetic source-target parallel data using source$\leftrightarrow $English and target$\leftrightarrow $English models (Section SECREF2.UNKREF8). Use this data to train a model for source$\rightarrow $target. Table TABREF43 shows how our pre-trained models perform in zero-resource scenarios with the two options. Note that, unlike Table TABREF34, the multilingual baselines exclude source$\rightarrow $target and target$\rightarrow $source directions. First of all, plain transfer, where the encoder and the decoder are pre-trained separately, is poor in zero-shot scenarios. It simply fails to connect different representation spaces of the pre-trained encoder and decoder. In our experiments, neither pivot adapter nor cross-lingual encoder could enhance the zero-shot translation of plain transfer. Step-wise pre-training solves this problem by changing the decoder pre-training to familiarize itself with representations from an already pre-trained encoder. It achieves zero-shot performance of 11.5% Bleu in French$\rightarrow $German and 6.5% Bleu in German$\rightarrow $Czech (newstest2013), while showing comparable or better fine-tuned performance against plain transfer (see also Table TABREF34). With the pre-trained cross-lingual encoder, the zero-shot performance of step-wise pre-training is superior to that of pivot translation in French$\rightarrow $German with only a single model. 
It is worse than pivot translation in German$\rightarrow $Czech. We think that the data size of pivot-target is critical in pivot translation; relatively huge data for English$\rightarrow $Czech make the pivot translation stronger. Note again that, nevertheless, pivoting (second row) is very poor in efficiency since it performs decoding twice with the individual models. For the second option (pivot-based synthetic data), we compare our methods against the sentence-level beam search version of the teacher-student framework BIBREF4, with which we generated 10M synthetic parallel sentence pairs. We also tried other variants of chen2017teacher, e.g., $N$-best hypotheses with weights, but there were no consistent improvements. Due to enormous bilingual signals, the model trained with the teacher-student synthetic data outperforms pivot translation. If tuned with the same synthetic data, our pre-trained model performs even better (last row), achieving the best zero-resource results on three of the four test sets. We also evaluate our best German$\rightarrow $Czech zero-resource model on newstest2019 and compare it with the participants of the WMT 2019 unsupervised news translation task. Ours yield 17.2% Bleu, which is much better than the best single unsupervised system of the winner of the task (15.5%) BIBREF28. We argue that, if one has enough source-English and English-target parallel data for a non-English language pair, it is more encouraged to adopt pivot-based transfer learning than unsupervised MT — even if there is no source-target parallel data. In this case, unsupervised MT unnecessarily restricts the data condition to using only monolingual data and its high computational cost does not pay off; simple pivot-based pre-training steps are more efficient and effective. Analysis ::: Large-scale Results We also study the effect of pivot-based transfer learning in more data-rich scenarios: 1) with large synthetic source-target data (German$\rightarrow $Czech), and 2) with larger real source-target data in combination with the synthetic data (French$\rightarrow $German). We generated synthetic parallel data using pivot-based back-translation BIBREF3: 5M sentence pairs for German$\rightarrow $Czech and 9.1M sentence pairs for French$\rightarrow $German. For the second scenario, we also prepared 2.3M more lines of French$\rightarrow $German real parallel data from Europarl v7 and Common Crawl corpora. Table TABREF47 shows our transfer learning results fine-tuned with a combination of given parallel data and generated synthetic parallel data. The real source-target parallel data are oversampled to make the ratio of real and synthetic data to be 1:2. As expected, the direct source$\rightarrow $target model can be improved considerably by training with large synthetic data. Plain pivot-based transfer outperforms the synthetic data baseline by up to +1.9% Bleu or -3.3% Ter. However, the pivot adapter or cross-lingual encoder gives marginal or inconsistent improvements over the plain transfer. We suppose that the entire model can be tuned sufficiently well without additional adapter layers or a well-curated training process, once we have a large source-target parallel corpus for fine-tuning. Conclusion In this paper, we propose three effective techniques for transfer learning using pivot-based parallel data. The principle is to pre-train NMT models with source-pivot and pivot-target parallel data and transfer the source encoder and the target decoder. 
To resolve the input/output discrepancy of the pre-trained encoder and decoder, we 1) consecutively pre-train the model for source$\rightarrow $pivot and pivot$\rightarrow $target, 2) append an additional layer after the source encoder which adapts the encoder output to the pivot language space, or 3) train a cross-lingual encoder over source and pivot languages. Our methods are suitable for most of the non-English language pairs with lots of parallel data involving English. Experiments in WMT 2019 French$\rightarrow $German and German$\rightarrow $Czech tasks show that our methods significantly improve the final source$\rightarrow $target translation performance, outperforming multilingual models by up to +2.6% Bleu. The methods are applicable also to zero-resource language pairs, showing a strong performance in the zero-shot setting or with pivot-based synthetic data. We claim that our methods expand the advances in NMT to many more non-English language pairs that are not yet studied well. Future work will be zero-shot translation without step-wise pre-training, i.e., combining individually pre-trained encoders and decoders freely for a fast development of NMT systems for a new non-English language pair. Acknowledgments This work has received funding from the European Research Council (ERC) (under the European Union's Horizon 2020 research and innovation programme, grant agreement No 694537, project "SEQCLAS") and eBay Inc. The work reflects only the authors' views and none of the funding agencies is responsible for any use that may be made of the information it contains.
Yes
6583e8bfa7bcc3a792a90b30abb316e6d423f49b
6583e8bfa7bcc3a792a90b30abb316e6d423f49b_0
Q: What are multilingual models that were outperformed in performed experiment? Text: Introduction Machine translation (MT) research is biased towards language pairs including English due to the ease of collecting parallel corpora. Translation between non-English languages, e.g., French$\rightarrow $German, is usually done with pivoting through English, i.e., translating French (source) input to English (pivot) first with a French$\rightarrow $English model which is later translated to German (target) with a English$\rightarrow $German model BIBREF0, BIBREF1, BIBREF2. However, pivoting requires doubled decoding time and the translation errors are propagated or expanded via the two-step process. Therefore, it is more beneficial to build a single source$\rightarrow $target model directly for both efficiency and adequacy. Since non-English language pairs often have little or no parallel text, common choices to avoid pivoting in NMT are generating pivot-based synthetic data BIBREF3, BIBREF4 or training multilingual systems BIBREF5, BIBREF6. In this work, we present novel transfer learning techniques to effectively train a single, direct NMT model for a non-English language pair. We pre-train NMT models for source$\rightarrow $pivot and pivot$\rightarrow $target, which are transferred to a source$\rightarrow $target model. To optimize the usage of given source-pivot and pivot-target parallel data for the source$\rightarrow $target direction, we devise the following techniques to smooth the discrepancy between the pre-trained and final models: Step-wise pre-training with careful parameter freezing. Additional adapter component to familiarize the pre-trained decoder with the outputs of the pre-trained encoder. Cross-lingual encoder pre-training with autoencoding of the pivot language. Our methods are evaluated in two non-English language pairs of WMT 2019 news translation tasks: high-resource (French$\rightarrow $German) and low-resource (German$\rightarrow $Czech). We show that NMT models pre-trained with our methods are highly effective in various data conditions, when fine-tuned for source$\rightarrow $target with: Real parallel corpus Pivot-based synthetic parallel corpus (zero-resource) None (zero-shot) For each data condition, we consistently outperform strong baselines, e.g., multilingual, pivoting, or teacher-student, showing the universal effectiveness of our transfer learning schemes. The rest of the paper is organized as follows. We first review important previous works on pivot-based MT in Section SECREF2. Our three pre-training techniques are presented in Section SECREF3. Section SECREF4 shows main results of our methods with a detailed description of the experimental setups. Section SECREF5 studies variants of our methods and reports the results without source-target parallel resources or with large synthetic parallel data. Section 6 draws conclusion of this work with future research directions. Related Work In this section, we first review existing approaches to leverage a pivot language in low-resource/zero-resource MT. They can be divided into three categories: Pivot translation (pivoting). The most naive approach is reusing (already trained) source$\rightarrow $pivot and pivot$\rightarrow $target models directly, decoding twice via the pivot language BIBREF7, BIBREF0. One can keep $N$-best hypotheses in the pivot language to reduce the prediction bias BIBREF1 and improve the final translation by system combination BIBREF8, which however increases the translation time even more. 
In multilingual NMT, firat2016zero modify the second translation step (pivot$\rightarrow $target) to use source and pivot language sentences together as the input. Pivot-based synthetic parallel data. We may translate the pivot side of given pivot-target parallel data using a pivot$\rightarrow $source model BIBREF3, or the other way around translating source-pivot data using a pivot$\rightarrow $target model BIBREF0. For NMT, the former is extended by zheng2017maximum to compute the expectation over synthetic source sentences. The latter is also called teacher-student approach BIBREF4, where the pivot$\rightarrow $target model (teacher) produces target hypotheses for training the source$\rightarrow $target model (student). Pivot-based model training. In phrase-based MT, there have been many efforts to combine phrase/word level features of source-pivot and pivot-target into a source$\rightarrow $target system BIBREF1, BIBREF2, BIBREF9, BIBREF10, BIBREF11, BIBREF12. In NMT, cheng2017joint jointly train for three translation directions of source-pivot-target by sharing network components, where ren2018triangular use the expectation-maximization algorithm with the target sentence as a latent variable. lu2018neural deploy intermediate recurrent layers which are common for multiple encoders and decoders, while johnson2017google share all components of a single multilingual model. Both methods train the model for language pairs involving English but enable zero-shot translation for unseen non-English language pairs. For this, ha2017effective encode the target language as an additional embedding and filter out non-target tokens in the output. lakew2017improving combine the multilingual training with synthetic data generation to improve the zero-shot performance iteratively, where sestorain2018zero applies the NMT prediction score and a language model score to each synthetic example as gradient weights. Our work is based on transfer learning BIBREF13 and belongs to the third category: model training. On the contrary to the multilingual joint training, we suggest two distinct steps: pre-training (with source-pivot and pivot-target data) and fine-tuning (with source-target data). With our proposed methods, we prevent the model from losing its capacity to other languages while utilizing the information from related language pairs well, as shown in the experiments (Section SECREF4). Our pivot adapter (Section SECREF18) shares the same motivation with the interlingua component of lu2018neural, but is much compact, independent of variable input length, and easy to train offline. The adapter training algorithm is adopted from bilingual word embedding mapping BIBREF14. Our cross-lingual encoder (Section SECREF26) is inspired by cross-lingual sentence embedding algorithms using NMT BIBREF15, BIBREF16. Transfer learning was first introduced to NMT by zoph2016transfer, where only the source language is switched before/after the transfer. nguyen2017transfer and kocmi2018trivial use shared subword vocabularies to work with more languages and help target language switches. kim2019effective propose additional techniques to enable NMT transfer even without shared vocabularies. To the best of our knowledge, we are the first to propose transfer learning strategies specialized in utilizing a pivot language, transferring a source encoder and a target decoder at the same time. Also, for the first time, we present successful zero-shot translation results only with pivot-based NMT pre-training. 
Pivot-based Transfer Learning Our methods are based on a simple transfer learning principle for NMT, adjusted to a usual data condition for non-English language pairs: lots of source-pivot and pivot-target parallel data, little (low-resource) or no (zero-resource) source-target parallel data. Here are the core steps of the plain transfer (Figure FIGREF10): Pre-train a source$\rightarrow $pivot model with a source-pivot parallel corpus and a pivot$\rightarrow $target model with a pivot-target parallel corpus. Initialize the source$\rightarrow $target model with the source encoder from the pre-trained source$\rightarrow $pivot model and the target decoder from the pre-trained pivot$\rightarrow $target model. Continue the training with a source-target parallel corpus. If we skip the last step (for zero-resource cases) and perform the source$\rightarrow $target translation directly, it corresponds to zero-shot translation. Thanks to the pivot language, we can pre-train a source encoder and a target decoder without changing the model architecture or training objective for NMT. On the contrary to other NMT transfer scenarios BIBREF13, BIBREF17, BIBREF18, this principle has no language mismatch between transferor and transferee on each source/target side. Experimental results (Section SECREF4) also show its competitiveness despite its simplicity. Nonetheless, the main caveat of this basic pre-training is that the source encoder is trained to be used by an English decoder, while the target decoder is trained to use the outputs of an English encoder — not of a source encoder. In the following, we propose three techniques to mitigate the inconsistency of source$\rightarrow $pivot and pivot$\rightarrow $target pre-training stages. Note that these techniques are not exclusive and some of them can complement others for a better performance of the final model. Pivot-based Transfer Learning ::: Step-wise Pre-training A simple remedy to make the pre-trained encoder and decoder refer to each other is to train a single NMT model for source$\rightarrow $pivot and pivot$\rightarrow $target in consecutive steps (Figure FIGREF15): Train a source$\rightarrow $pivot model with a source-pivot parallel corpus. Continue the training with a pivot-target parallel corpus, while freezing the encoder parameters of 1. In the second step, a target decoder is trained to use the outputs of the pre-trained source encoder as its input. Freezing the pre-trained encoder ensures that, even after the second step, the encoder is still modeling the source language although we train the NMT model for pivot$\rightarrow $target. Without the freezing, the encoder completely adapts to the pivot language input and is likely to forget source language sentences. We build a joint vocabulary of the source and pivot languages so that the encoder effectively represents both languages. The frozen encoder is pre-trained for the source language in the first step, but also able to encode a pivot language sentence in a similar representation space. It is more effective for linguistically similar languages where many tokens are common for both languages in the joint vocabulary. Pivot-based Transfer Learning ::: Pivot Adapter Instead of the step-wise pre-training, we can also postprocess the network to enhance the connection between the source encoder and the target decoder which are pre-trained individually. 
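Before turning to the details of the pivot adapter, the step-wise pre-training schedule described just above can be sketched as a simple parameter-freezing routine. This is a generic PyTorch illustration assuming an encoder-decoder `model` whose forward pass returns the training loss; the actual experiments use OpenNMT, so all names here are placeholders:

```python
import torch

def stepwise_pretrain(model, src_pivot_batches, pivot_tgt_batches, lr=1e-4):
    """Step 1: train on source->pivot with all parameters trainable.
    Step 2: continue on pivot->target with the encoder frozen, so the encoder
    keeps representing the source language while the decoder learns to use
    its outputs."""
    def run(batches, params):
        opt = torch.optim.Adam(params, lr=lr)
        for src, tgt in batches:
            opt.zero_grad()
            model(src, tgt).backward()   # assumes forward() returns the loss
            opt.step()

    run(src_pivot_batches, list(model.parameters()))          # Step 1

    for p in model.encoder.parameters():                      # freeze the encoder
        p.requires_grad = False
    run(pivot_tgt_batches,
        [p for p in model.parameters() if p.requires_grad])   # Step 2

    for p in model.encoder.parameters():                      # unfreeze before the
        p.requires_grad = True                                # source->target tuning
    return model
```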
Our idea is that, after the pre-training steps, we adapt the source encoder outputs to the pivot encoder outputs to which the target decoder is more familiar (Figure FIGREF19). We learn a linear mapping between the two representation spaces with a small source-pivot parallel corpus: Encode the source sentences with the source encoder of the pre-trained source$\rightarrow $pivot model. Encode the pivot sentences with the pivot encoder of the pre-trained pivot$\rightarrow $target model. Apply a pooling to each sentence of 1 and 2, extracting representation vectors for each sentence pair: ($\mathbf {s}$, $\mathbf {p}$). Train a mapping $\mathbf {M}\in \mathbb {R}^{d \times d}$ to minimize the distance between the pooled representations $\mathbf {s}\in \mathbb {R}^{d \times 1}$ and $\mathbf {p}\in \mathbb {R}^{d \times 1}$, where the source representation is first fed to the mapping: where $d$ is the hidden layer size of the encoders. Introducing matrix notations $\mathbf {S}\in \mathbb {R}^{d \times n}$ and $\mathbf {P}\in \mathbb {R}^{d \times n}$, which concatenate the pooled representations of all $n$ sentences for each side in the source-pivot corpus, we rewrite Equation DISPLAY_FORM24 as: which can be easily computed by the singular value decomposition (SVD) for a closed-form solution, if we put an orthogonality constraint on $\mathbf {M}$ BIBREF14. The resulting optimization is also called Procrustes problem. The learned mapping is multiplied to encoder outputs of all positions in the final source$\rightarrow $target tuning step. With this mapping, the source encoder emits sentence representations that lie in a similar space of the pivot encoder. Since the target decoder is pre-trained for pivot$\rightarrow $target and accustomed to receive the pivot encoder outputs, it should process the mapped encoder outputs better than the original source encoder outputs. Pivot-based Transfer Learning ::: Cross-lingual Encoder As a third technique, we modify the source$\rightarrow $pivot pre-training procedure to force the encoder to have cross-linguality over source and pivot languages; modeling source and pivot sentences in the same mathematical space. We achieve this by an additional autoencoding objective from a pivot sentence to the same pivot sentence (Figure FIGREF27). The encoder is fed with sentences of both source and pivot languages, which are processed by a shared decoder that outputs only the pivot language. In this way, the encoder is learned to produce representations in a shared space regardless of the input language, since they are used in the same decoder. This cross-lingual space facilitates smoother learning of the final source$\rightarrow $target model, because the decoder is pre-trained to translate the pivot language. The same input/output in autoencoding encourages, however, merely copying the input; it is said to be not proper for learning complex structure of the data domain BIBREF19. Denoising autoencoder addresses this by corrupting the input sentences by artificial noises BIBREF20. Learning to reconstruct clean sentences, it encodes linguistic structures of natural language sentences, e.g., word order, better than copying. 
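Returning briefly to the pivot adapter described earlier in this passage: the closed-form solution under the orthogonality constraint is the standard orthogonal Procrustes recipe, where the optimal mapping is obtained from an SVD of the product of the stacked pivot and source representations. A small NumPy sketch is given below (the paper trains the adapter with the Muse toolkit; this only illustrates the underlying computation, with the sentence pooling step omitted):

```python
import numpy as np

def train_pivot_adapter(S, P):
    """Orthogonal Procrustes: find the orthogonal d x d matrix M minimizing
    ||M S - P||_F, where S and P are d x n matrices of pooled source- and
    pivot-encoder representations of the same n source-pivot sentence pairs."""
    U, _, Vt = np.linalg.svd(P @ S.T)
    return U @ Vt

def apply_adapter(M, encoder_outputs):
    """Map source encoder outputs (d x positions) into the pivot space
    before they are fed to the pre-trained target decoder."""
    return M @ encoder_outputs

# Tiny check with random data (hidden size d=4, n=100 sentence pairs):
rng = np.random.default_rng(0)
S, P = rng.standard_normal((4, 100)), rng.standard_normal((4, 100))
M = train_pivot_adapter(S, P)
assert np.allclose(M @ M.T, np.eye(4), atol=1e-6)   # M is orthogonal
```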
Here are the noise types we use BIBREF21: Drop tokens randomly with a probability $p_\mathrm {del}$ Replace tokens with a <BLANK> token randomly with a probability $p_\mathrm {rep}$ Permute the token positions randomly so that the difference between an original index and its new index is less than or equal to $d_\mathrm {per}$ We set $p_\mathrm {del}=0.1$, $p_\mathrm {rep}=0.1$, and $d_\mathrm {per}=3$ in our experiments. The key idea of all three methods is to build a closer connection between the pre-trained encoder and decoder via a pivot language. The difference is in when we do this job: Cross-lingual encoder (Section SECREF26) changes the encoder pre-training stage (source$\rightarrow $pivot), while step-wise pre-training (Section SECREF14) modifies decoder pre-training stage (pivot$\rightarrow $target). Pivot adapter (Section SECREF18) is applied after all pre-training steps. Main Results We evaluate the proposed transfer learning techniques in two non-English language pairs of WMT 2019 news translation tasks: French$\rightarrow $German and German$\rightarrow $Czech. Data We used the News Commentary v14 parallel corpus and newstest2008-2010 test sets as the source-target training data for both tasks. The newstest sets were oversampled four times. The German$\rightarrow $Czech task was originally limited to unsupervised learning (using only monolingual corpora) in WMT 2019, but we relaxed this constraint by the available parallel data. We used newstest2011 as a validation set and newstest2012/newstest2013 as the test sets. Both language pairs have much abundant parallel data in source-pivot and pivot-target with English as the pivot language. Detailed corpus statistics are given in Table TABREF33. Preprocessing We used the Moses tokenizer and applied true-casing on all corpora. For all transfer learning setups, we learned byte pair encoding (BPE) BIBREF22 for each language individually with 32k merge operations, except for cross-lingual encoder training with joint BPE only over source and pivot languages. This is for modularity of pre-trained models: for example, a French$\rightarrow $English model trained with joint French/English/German BPE could be transferred smoothly to a French$\rightarrow $German model, but would not be optimal for a transfer to e.g., a French$\rightarrow $Korean model. Once we pre-train an NMT model with separate BPE vocabularies, we can reuse it for various final language pairs without wasting unused portion of subword vocabularies (e.g., German-specific tokens in building a French$\rightarrow $Korean model). On the contrary, baselines used joint BPE over all languages with also 32k merges. Model and Training The 6-layer base Transformer architecture BIBREF23 was used for all of our experiments. Batch size was set to 4,096 tokens. Each checkpoint amounts to 10k updates for pre-training and 20k updates for fine-tuning. Each model was optimized with Adam BIBREF24 with an initial learning rate of 0.0001, which was multiplied by 0.7 whenever perplexity on the validation set was not improved for three checkpoints. When it was not improved for eight checkpoints, we stopped the training. The NMT model training and transfer were done with the OpenNMT toolkit BIBREF25. Pivot adapter was trained using the Muse toolkit BIBREF26, which was originally developed for bilingual word embeddings but we adjusted for matching sentence representations. 
Baselines We thoroughly compare our approaches to the following baselines: Direct source$\rightarrow $target: A standard NMT model trained on given source$\rightarrow $target parallel data. Multilingual: A single, shared NMT model for multiple translation directions BIBREF6. Many-to-many: Trained for all possible directions among source, target, and pivot languages. Many-to-one: Trained for only the directions to target language, i.e., source$\rightarrow $target and pivot$\rightarrow $target, which tends to work better than many-to-many systems BIBREF27. In Table TABREF34, we report principal results after fine-tuning the pre-trained models using source-target parallel data. As for baselines, multilingual models are better than a direct NMT model. The many-to-many models surpass the many-to-one models; since both tasks are in a low-resource setup, the model gains a lot from related language pairs even if the target languages do not match. Plain transfer of pre-trained encoder/decoder without additional techniques (Figure FIGREF10) shows a nice improvement over the direct baseline: up to +2.7% Bleu for French$\rightarrow $German and +5.2% Bleu for German$\rightarrow $Czech. Pivot adapter provides an additional boost of maximum +0.7% Bleu or -0.7% Ter. Cross-lingual encoder pre-training is proved to be not effective in the plain transfer setup. It shows no improvements over plain transfer in French$\rightarrow $German, and 0.4% Bleu worse performance in German$\rightarrow $Czech. We conjecture that the cross-lingual encoder needs a lot more data to be fine-tuned for another decoder, where the encoder capacity is basically divided into two languages at the beginning of the fine-tuning. On the other hand, the pivot adapter directly improves the connection to an individually pre-trained decoder, which works nicely with small fine-tuning data. Pivot adapter gives an additional improvement on top of the cross-lingual encoder; up to +0.4% Bleu in French$\rightarrow $German and +0.6% Bleu in German$\rightarrow $Czech. In this case, we extract source and pivot sentence representations from the same shared encoder for training the adapter. Step-wise pre-training gives a big improvement up to +1.2% Bleu or -1.6% Ter against plain transfer in French$\rightarrow $German. It shows the best performance in both tasks when combined with the cross-lingual encoder: up to +1.2% Bleu in French$\rightarrow $German and +2.6% Bleu in German$\rightarrow $Czech, compared to the multilingual baseline. Step-wise pre-training prevents the cross-lingual encoder from degeneration, since the pivot$\rightarrow $target pre-training (Step 2 in Section SECREF14) also learns the encoder-decoder connection with a large amount of data — in addition to the source$\rightarrow $target tuning step afterwards. Note that the pivot adapter, which inserts an extra layer between the encoder and decoder, is not appropriate after the step-wise pre-training; the decoder is already trained to correlate well with the pre-trained encoder. We experimented with the pivot adapter on top of step-wise pre-trained models — with or without cross-lingual encoder — but obtained detrimental results. Compared to pivot translation (Table TABREF43), our best results are also clearly better in French $\rightarrow $German and comparable in German$\rightarrow $Czech. Analysis In this section, we conduct ablation studies on the variants of our methods and see how they perform in different data conditions. 
Analysis ::: Pivot Adapter Firstly, we compare variants of the pivot adapter (Section SECREF18) in Table TABREF40. The row “None” shows that a randomly initialized linear layer already guides the pre-trained encoder/decoder to harmonize with each other. Of course, when we train the adapter to map source encoder outputs to pivot encoder outputs, the performance gets better. For compressing encoder outputs over positions, average-pooling is better than max-pooling. We observed the same trend in the other test set and in French$\rightarrow $German. We also tested nonlinear pivot adapter, e.g., a 2-layer feedforward network with ReLU activations, but the performance was not better than just a linear adapter. Analysis ::: Cross-lingual Encoder Table TABREF42 verifies that the noisy input in autoencoding is indeed beneficial to our cross-lingual encoder. It improves the final translation performance by maximum +2.1% Bleu, compared to using the copying autoencoding objective. As the training data for autoencoding, we also compare between purely monolingual data and the pivot side of the source-pivot parallel data. By the latter, one can expect a stronger signal for a joint encoder representation space, since two different inputs (in source/pivot languages) are used to produce the exactly same output sentence (in pivot language). The results also tell that there are slight but consistent improvements by using the pivot part of the parallel data. Again, we performed these comparisons in the other test set and German$\rightarrow $Czech, observing the same tendency in results. Analysis ::: Zero-resource/Zero-shot Scenarios If we do not have an access to any source-target parallel data (zero-resource), non-English language pairs have two options for still building a working NMT system, given source-English and target-English parallel data: Zero-shot: Perform source$\rightarrow $target translation using models which have not seen any source-target parallel sentences, e.g., multilingual models or pivoting (Section SECREF2.UNKREF7). Pivot-based synthetic data: Generate synthetic source-target parallel data using source$\leftrightarrow $English and target$\leftrightarrow $English models (Section SECREF2.UNKREF8). Use this data to train a model for source$\rightarrow $target. Table TABREF43 shows how our pre-trained models perform in zero-resource scenarios with the two options. Note that, unlike Table TABREF34, the multilingual baselines exclude source$\rightarrow $target and target$\rightarrow $source directions. First of all, plain transfer, where the encoder and the decoder are pre-trained separately, is poor in zero-shot scenarios. It simply fails to connect different representation spaces of the pre-trained encoder and decoder. In our experiments, neither pivot adapter nor cross-lingual encoder could enhance the zero-shot translation of plain transfer. Step-wise pre-training solves this problem by changing the decoder pre-training to familiarize itself with representations from an already pre-trained encoder. It achieves zero-shot performance of 11.5% Bleu in French$\rightarrow $German and 6.5% Bleu in German$\rightarrow $Czech (newstest2013), while showing comparable or better fine-tuned performance against plain transfer (see also Table TABREF34). With the pre-trained cross-lingual encoder, the zero-shot performance of step-wise pre-training is superior to that of pivot translation in French$\rightarrow $German with only a single model. 
It is worse than pivot translation in German$\rightarrow $Czech. We think that the data size of pivot-target is critical in pivot translation; relatively huge data for English$\rightarrow $Czech make the pivot translation stronger. Note again that, nevertheless, pivoting (second row) is very poor in efficiency since it performs decoding twice with the individual models. For the second option (pivot-based synthetic data), we compare our methods against the sentence-level beam search version of the teacher-student framework BIBREF4, with which we generated 10M synthetic parallel sentence pairs. We also tried other variants of chen2017teacher, e.g., $N$-best hypotheses with weights, but there were no consistent improvements. Due to enormous bilingual signals, the model trained with the teacher-student synthetic data outperforms pivot translation. If tuned with the same synthetic data, our pre-trained model performs even better (last row), achieving the best zero-resource results on three of the four test sets. We also evaluate our best German$\rightarrow $Czech zero-resource model on newstest2019 and compare it with the participants of the WMT 2019 unsupervised news translation task. Ours yield 17.2% Bleu, which is much better than the best single unsupervised system of the winner of the task (15.5%) BIBREF28. We argue that, if one has enough source-English and English-target parallel data for a non-English language pair, it is more encouraged to adopt pivot-based transfer learning than unsupervised MT — even if there is no source-target parallel data. In this case, unsupervised MT unnecessarily restricts the data condition to using only monolingual data and its high computational cost does not pay off; simple pivot-based pre-training steps are more efficient and effective. Analysis ::: Large-scale Results We also study the effect of pivot-based transfer learning in more data-rich scenarios: 1) with large synthetic source-target data (German$\rightarrow $Czech), and 2) with larger real source-target data in combination with the synthetic data (French$\rightarrow $German). We generated synthetic parallel data using pivot-based back-translation BIBREF3: 5M sentence pairs for German$\rightarrow $Czech and 9.1M sentence pairs for French$\rightarrow $German. For the second scenario, we also prepared 2.3M more lines of French$\rightarrow $German real parallel data from Europarl v7 and Common Crawl corpora. Table TABREF47 shows our transfer learning results fine-tuned with a combination of given parallel data and generated synthetic parallel data. The real source-target parallel data are oversampled to make the ratio of real and synthetic data to be 1:2. As expected, the direct source$\rightarrow $target model can be improved considerably by training with large synthetic data. Plain pivot-based transfer outperforms the synthetic data baseline by up to +1.9% Bleu or -3.3% Ter. However, the pivot adapter or cross-lingual encoder gives marginal or inconsistent improvements over the plain transfer. We suppose that the entire model can be tuned sufficiently well without additional adapter layers or a well-curated training process, once we have a large source-target parallel corpus for fine-tuning. Conclusion In this paper, we propose three effective techniques for transfer learning using pivot-based parallel data. The principle is to pre-train NMT models with source-pivot and pivot-target parallel data and transfer the source encoder and the target decoder. 
To resolve the input/output discrepancy of the pre-trained encoder and decoder, we 1) consecutively pre-train the model for source$\rightarrow $pivot and pivot$\rightarrow $target, 2) append an additional layer after the source encoder which adapts the encoder output to the pivot language space, or 3) train a cross-lingual encoder over source and pivot languages. Our methods are suitable for most of the non-English language pairs with lots of parallel data involving English. Experiments in WMT 2019 French$\rightarrow $German and German$\rightarrow $Czech tasks show that our methods significantly improve the final source$\rightarrow $target translation performance, outperforming multilingual models by up to +2.6% Bleu. The methods are applicable also to zero-resource language pairs, showing a strong performance in the zero-shot setting or with pivot-based synthetic data. We claim that our methods expand the advances in NMT to many more non-English language pairs that are not yet studied well. Future work will be zero-shot translation without step-wise pre-training, i.e., combining individually pre-trained encoders and decoders freely for a fast development of NMT systems for a new non-English language pair. Acknowledgments This work has received funding from the European Research Council (ERC) (under the European Union's Horizon 2020 research and innovation programme, grant agreement No 694537, project "SEQCLAS") and eBay Inc. The work reflects only the authors' views and none of the funding agencies is responsible for any use that may be made of the information it contains.
Direct source$\rightarrow $target: A standard NMT model trained on given source$\rightarrow $target, Multilingual: A single, shared NMT model for multiple translation directions, Many-to-many: Trained for all possible directions among source, target, and pivot languages, Many-to-one: Trained for only the directions to target language
9a5d02062fa7eec7097f1dc1c38b5e6d5c82acdf
9a5d02062fa7eec7097f1dc1c38b5e6d5c82acdf_0
Q: What are the common captioning metrics? Text: Introduction Image captioning—the task of providing a natural language description of the content within an image—lies at the intersection of computer vision and natural language processing. As both of these research areas are highly active and have experienced many recent advances, the progress in image captioning has naturally followed suit. On the computer vision side, improved convolutional neural network and object detection architectures have contributed to improved image captioning systems. On the natural language processing side, more sophisticated sequential models, such as attention-based recurrent neural networks, have similarly resulted in more accurate image caption generation. Inspired by neural machine translation, most conventional image captioning systems utilize an encoder-decoder framework, in which an input image is encoded into an intermediate representation of the information contained within the image, and subsequently decoded into a descriptive text sequence. This encoding can consist of a single feature vector output of a CNN (as in BIBREF0 ), or multiple visual features obtained from different regions within the image. In the latter case, the regions can be uniformly sampled (e.g., BIBREF1 ), or guided by an object detector (e.g., BIBREF2 ) which has been shown to yield improved performance. While these detection based encoders represent the state-of-the art, at present they do not utilize information about the spatial relationships between the detected objects such as relative position and size. This information can often be critical to understanding the content within an image, however, and is used by humans when reasoning about the physical world. Relative position, for example, can aid in distinguishing “a girl riding a horse” from “a girl standing beside a horse”. Similarly, relative size can help differentiate between “a woman playing the guitar” and “a woman playing the ukelele”. Incorporating spatial relationships has been shown to improve the performance of object detection itself, as demonstrated in BIBREF3 . Furthermore, in machine translation encoders, positional relationships are often encoded, in particular in the case of the Transformer BIBREF4 , an attention-based encoder architecture. The use of relative positions and sizes of detected objects, then, should be of benefit to image captioning visual encoders as well, as evidenced in Figure FIGREF1 . In this work, we propose and demonstrate the use of object spatial relationship modeling for image captioning, specifically within the Transformer encoder-decoder architecture. This is achieved by incorporating the object relation module of BIBREF3 within the Transformer encoder. The contributions of this paper are as follows: Related Work Many early neural models for image captioning BIBREF5 , BIBREF6 , BIBREF7 , BIBREF0 encoded visual information using a single feature vector representing the image as a whole and hence did not utilize information about objects and their spatial relationships. Karpathy and Fei-Fei in BIBREF8 , as a notable exception to this global representation approach, extracted features from multiple image regions based on an R-CNN object detector BIBREF9 and generated separate captions for the regions. As a separate caption was generated for each region, however, the spatial relationship between the detected objects was not modeled. 
This is also true of their follow-on dense captioning work BIBREF10 , which presented an end-to-end approach for obtaining captions relating to different regions within an image. Fang et al. in BIBREF11 generated image descriptions by first detecting words associated with different regions within the image. The spatial association was made by applying a fully convolutional neural network to the image and generating spatial response maps for the target words. Here again, the authors do not explicitly model any relationship between the spatial regions. A family of attention based approaches BIBREF1 , BIBREF12 , BIBREF13 to image captioning have also been proposed that seek to ground the words in the predicted caption to regions in the image. As the visual attention is often derived from higher convolutional layers from a CNN, the spatial localization is limited and often not semantically meaningful. Most similar to our work, Anderson et al. in BIBREF2 addressed this limitation of typical attention models by combining a “bottom-up” attention model with a “top-down” LSTM. The bottom-up attention acts on mean-pooled convolutional features obtained from the proposed regions of interest of a Faster R-CNN object detector BIBREF14 . The top-down LSTM is a two-layer LSTM in which the first layer acts as a visual attention model that attends to the relevant detections for the current token and a language LSTM that generates the next token. The authors demonstrated state-of-the-art performance for both visual question answering and image captioning using this approach, indicating the benefits of combining features derived from object detection with visual attention. Again, spatial information is not utilized, which we propose in this work via geometric attention, as first introduced by Hu et al. for object detection in BIBREF3 . The authors used bounding box coordinates and sizes to infer the importance of the relationship of pairs of objects, the assumption being that if two bounding boxes are closer and more similar in size to each other, then their relationship is stronger. Recent developments in NLP, namely the Transformer architecture BIBREF4 have led to significant performance improvements for various tasks such as translation BIBREF4 , text generation BIBREF15 , and language understanding BIBREF16 . In BIBREF17 , the Transformer was applied to the task of image captioning. The authors explored extracting a single global image feature from the image as well as uniformly sampling features by dividing the image into 8x8 partitions. In the latter case, the feature vectors were fed in a sequence to the Transformer encoder. In this paper we propose to improve upon this uniform sampling by adopting the bottom-up approach of BIBREF2 . The Transformer architecture is particularly well suited as a bottom-up visual encoder for captioning since it does not have a notion of order for its inputs, unlike an RNN. It can, however, successfully model sequential data with the use of positional encoding, which we apply to the decoded tokens in the caption text. Rather than encode an order to objects, our Object Relation Transformer seeks to encode how two objects are spatially related to each other and weight them accordingly. Proposed Approach Figure FIGREF5 shows an overview of the proposed image caption algorithm. We first use an object detector to extract appearance and geometry features from all the detected objects in the image. Thereafter we use the Object Relation Transformer to generate the caption text. 
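The two-stage pipeline just described (detect objects, then decode a caption from their features and geometry) can be sketched at a high level as follows; `detector`, `transformer`, and `tokenizer` are placeholder interfaces used only to make the data flow explicit, not the authors' code:

```python
def generate_caption(image, detector, transformer, tokenizer, max_len=20):
    """High-level flow: object detection first, then caption decoding from
    the detected objects' appearance features and bounding-box geometry."""
    features, boxes = detector(image)          # e.g. (N, 2048) and (N, 4)
    memory = transformer.encode(features, boxes)
    tokens = [tokenizer.bos_id]                # start-of-sequence token
    for _ in range(max_len):
        next_id = transformer.decode_step(memory, tokens)   # greedy decoding
        if next_id == tokenizer.eos_id:
            break
        tokens.append(next_id)
    return tokenizer.decode(tokens[1:])
```

In the paper itself, decoding is done with beam search rather than the greedy loop shown here.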
Section SECREF7 describes how we use the Transformer architecture BIBREF4 in general for image captioning. Section SECREF13 explains our novel addition of box relational encoding to the encoder layer of the Transformer. Object Detection Following BIBREF2 , we use Faster R-CNN BIBREF14 with ResNet-101 BIBREF18 as the base CNN for object detection and feature extraction. Using intermediate feature maps from the ResNet-101 as inputs, a Region Proposal Network (RPN) generates bounding boxes for object proposals. Using non-maximum suppression, overlapping bounding boxes with an intersection-over-union (IoU) exceeding a threshold of 0.7 are discarded. A region-of-interest (RoI) pooling layer is then used to convert all remaining bounding boxes to the same spatial size (e.g. INLINEFORM0 2048). Additional CNN layers are applied to predict class labels and bounding box refinements for each box proposal. We further discard all bounding boxes where the class prediction probability is below a threshold of 0.2. Finally, we apply mean-pooling over the spatial dimension to generate a 2048-dimensional feature vector for each object bounding box. These feature vectors are then used as inputs to the Transformer model. Standard Transformer Model This section describes how to apply the Transformer architecture of BIBREF4 to the task of image captioning. The Transformer model consists of an encoder and a decoder, both of which are composed of a stack of layers (in our case 6). Our architecture uses the feature vectors from the object detector as inputs and generates a sequence of words (i.e., the image caption) as outputs. Every image feature vector is first processed through an input embedding layer, which consists of a fully-connected layer to reduce the dimension from 2048 to INLINEFORM0 followed by a ReLU and a dropout layer. The embedded feature vectors are then used as input tokens to the first encoder layer of the Transformer model. We denote INLINEFORM1 as the n-th token of a set of INLINEFORM2 tokens. For encoder layers 2 to 6, we use the output tokens of the previous encoder layer as the input to the current layer. Each encoder layer consists of a multi-head self-attention layer followed by a small feed-forward neural network. The self-attention layer itself consists of 8 identical heads. Each attention head first calculates a query INLINEFORM0 , key INLINEFORM1 and value INLINEFORM2 for each of the INLINEFORM3 tokens given by DISPLAYFORM0 where INLINEFORM0 contains all the input vectors INLINEFORM1 stacked into a matrix and INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 are learned projection matrices. The attention weights for the appearance features are then computed according to DISPLAYFORM0 where INLINEFORM0 is an INLINEFORM1 attention weight matrix, whose elements INLINEFORM2 are the attention weights between the m-th and n-th token. Following the implementation of BIBREF4 , we choose a constant scaling factor of INLINEFORM3 , which is the dimension of the key, query, and value vectors. The output of the head is then calculated as DISPLAYFORM0 Equations EQREF8 to EQREF10 are calculated for every head independently. The outputs of all 8 heads are then concatenated into one output vector, INLINEFORM0 , and multiplied with a learned projection matrix INLINEFORM1 , i.e., DISPLAYFORM0 The next component of the encoder layer is the point-wise feed-forward network (FFN), which is applied to each output of the attention layer. 
DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 and INLINEFORM2 , INLINEFORM3 are the weights and biases of two fully connected layers. In addition, skip-connections and layer-norm are applied to the outputs of the self-attention and the feed-forward layer. The decoder then uses the generated tokens from the last encoder layer as input to generate the caption text. Since the dimensions of the output tokens of the Transformer encoder are identical to the tokens used in the original Transformer implementation, we make no modifications to the decoder side. We refer the reader to the original publication BIBREF4 for a detailed explanation of the decoder. Object Relation Transformer In our proposed model, we incorporate relative geometry by modifying the attention weight matrix INLINEFORM0 in Equation EQREF9 . We multiply the appearance based attention weights INLINEFORM1 of two objects INLINEFORM2 and INLINEFORM3 by a learned function of their relative position and size. We use the same function that was first introduced in BIBREF3 to improve the classification and non-maximum suppression stages of a Faster R-CNN object detector. First we calculate a displacement vector INLINEFORM0 for two bounding boxes INLINEFORM1 and INLINEFORM2 from their geometry features INLINEFORM3 and INLINEFORM4 (center coordinates, widths, and heights) as DISPLAYFORM0 The geometric attention weights are then calculated as follows DISPLAYFORM0 where Emb(.) calculates a high-dimensional embedding following the functions INLINEFORM0 described in BIBREF4 , where sinusoid functions are computed for each value of INLINEFORM1 . In addition, we multiply the embedding with the learned vector INLINEFORM2 to project down to a scalar and apply the ReLU non-linearity. The geometric attention weights INLINEFORM3 are then incorporated into the attention mechanism according to DISPLAYFORM0 where INLINEFORM0 are the appearance based attention weights from Equation EQREF9 and INLINEFORM1 are the new combined attention weights. The output of the head can be calculated as follows DISPLAYFORM0 where INLINEFORM0 is the INLINEFORM1 matrix, whose elements are given by INLINEFORM2 . The Bounding Box Relational Encoding diagram in Figure FIGREF5 shows the multi-head self-attention layer of the Object Relation Transformer. Equations EQREF14 to EQREF17 are represented with the relation boxes. Implementation Details Our algorithm was developed in PyTorch using the image captioning implementation in BIBREF19 as our basis. We ran our experiments on NVIDIA Tesla V100 GPUs on AWS. Our best performing model is pre-trained for 30 epochs with a softmax cross-entropy loss, using the ADAM optimizer with learning rate defined as in the original Transformer paper with 20000 warmup steps, and a batch size of 10. We trained for an additional 30 epochs using self-critical reinforcement learning BIBREF20 optimizing for CIDEr-D score, and did early-stopping for best performance on the validation set (of 5000 images). On a single GPU the training with cross-entropy loss and the self-critical training take about 1 day and 3.5 days, respectively. The models compared in sections SECREF22 - SECREF29 are evaluated after training for 30 epochs, with ADAM optimization with the above learning rate schedule, and with batch size 15. Dataset and Metrics We trained and evaluated our algorithm on the Microsoft COCO (MS-COCO) 2014 Captions dataset BIBREF21 . 
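To make the attention computations of the two preceding sections concrete, here is a minimal PyTorch sketch of a single head with the geometric weights folded in. The displacement features use the relation-networks-style log-ratios of center offsets and box sizes (our reading of BIBREF3, since the exact components are not spelled out here), a plain linear layer stands in for the sinusoidal embedding Emb(.), and the dimensions in the usage example are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def box_displacement(boxes):
    """Pairwise geometry features for N boxes given as (x_center, y_center, w, h).
    Returns a (N, N, 4) tensor of log-scaled relative offsets and size ratios."""
    x, y, w, h = boxes.unbind(-1)
    dx = torch.log((x[:, None] - x[None, :]).abs().clamp(min=1e-3) / w[:, None])
    dy = torch.log((y[:, None] - y[None, :]).abs().clamp(min=1e-3) / h[:, None])
    dw = torch.log(w[None, :] / w[:, None])
    dh = torch.log(h[None, :] / h[:, None])
    return torch.stack([dx, dy, dw, dh], dim=-1)

def relation_attention_head(X, boxes, W_q, W_k, W_v, geo_emb, w_g, d_k=64):
    """One attention head combining appearance and geometric weights.
    X: (N, d_model) object tokens; boxes: (N, 4) box geometry;
    geo_emb: callable mapping (N, N, 4) displacements to (N, N, d_g);
    w_g: (d_g, 1) learned projection vector."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    omega_a = Q @ K.T / d_k ** 0.5                       # appearance logits (N, N)
    omega_g = F.relu(geo_emb(box_displacement(boxes)) @ w_g).squeeze(-1)
    # Combined weights: omega_g * exp(omega_a), normalized over each row;
    # the clamp avoids log(0) after the ReLU.
    logits = omega_a + torch.log(omega_g.clamp(min=1e-6))
    return F.softmax(logits, dim=-1) @ V                 # head output (N, d_k)

# Stand-in usage: 36 detected objects, d_model=512, d_k=64, and a linear layer
# in place of the sinusoidal geometry embedding.
N, d_model, d_k, d_g = 36, 512, 64, 64
X, boxes = torch.randn(N, d_model), torch.rand(N, 4) + 0.1
W_q, W_k, W_v = (torch.randn(d_model, d_k) * 0.02 for _ in range(3))
out = relation_attention_head(X, boxes, W_q, W_k, W_v,
                              torch.nn.Linear(4, d_g), torch.randn(d_g, 1), d_k)
```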
We report results on the Karpathy validation and test splits BIBREF8 , which are commonly used in other image captioning publications. The dataset contains 113K training images with 5 human annotated captions for each image. The Karpathy test and validation sets contain 5K images each. We evaluate our models using the CIDEr-D BIBREF22 , SPICE BIBREF23 , BLEU BIBREF24 , METEOR BIBREF25 , and ROUGE-L BIBREF26 metrics. While it has been shown experimentally that BLEU and ROUGE have lower correlation with human judgments than the other metrics BIBREF23 , BIBREF22 , the common practice in the image captioning literature is to report all the mentioned metrics. Comparative Analysis We compare our proposed algorithm against the best results from a single model of the self-critical sequence training (Att2all) BIBREF20 and the Bottom-up Top-down (Up-Down) BIBREF2 algorithm. Table TABREF21 shows the metrics for the test split as reported by the authors. Following the implementation of BIBREF2 , we fine-tune our model using the self-critical training optimized for CIDEr-D score BIBREF20 and apply beam search with beam size 5, achieving a 6.8% relative improvement over the previous state-of-the-art. Positional Encoding Our proposed geometric attention can be seen as a replacement for the positional encoding of the original Transformer network. While objects do not have an inherent notion of order, there do exist some simpler analogues to positional encoding, such as ordering by object size, or left-to-right or top-to-bottom based on bounding box coordinates. We provide a comparison between our geometric attention and these object orderings in Table TABREF23 . For box size, we simply calculate the area of each bounding box and order from largest to smallest. For left-to-right we order bounding boxes according to the x-coordinate of their centroids. Analogous ordering is performed for top-to-bottom using the centroid y-coordinate. Based on the CIDEr-D scores shown, adding such an artificial ordering to the detected objects decreases the performance. We observed similar decreases in performance across all other metrics (SPICE, BLEU, METEOR and ROUGE-L). Ablation Study Table TABREF25 shows the results for our ablation study. We show the Bottom-Up and Top-Down algorithm BIBREF2 as our baseline algorithm. The second row replaces the LSTM with a Transformer network. The third row includes the proposed geometric attention. The last row includes beam search with beam size 2. The contribution of the Object Relation Transformer is small for METEOR, but significant for CIDEr-D and the BLEU metrics. Overall we can see the most improvements on the CIDEr-D and BLEU-4 score. Geometric Improvement In order to demonstrate the advantages of the geometric relative attention layer, we performed a more detailed comparison of the Standard Transformer against the Object Relation Transformer. For each of the metrics, we performed a two-tailed t-test with paired samples in order to determine whether the difference caused by adding the geometric relative attention layer was statistically significant. The metrics were computed for each individual image in the test set for each of the Transformer models, so that we are able to run the paired tests. In addition to the standard evaluation metrics, we also report metrics obtained from SPICE by splitting up the tuples of the scene graphs according to different semantic subcategories. For each subcategory, we are able to compute precision, recall, and F-scores. 
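As an aside on the metrics themselves, the n-gram-overlap idea behind BLEU can be made concrete with a short self-contained sketch: a simplified corpus-level BLEU with clipped n-gram precisions and a brevity penalty. It omits the smoothing and tokenization details of the official scripts and is purely illustrative; it is not how the scores reported here were computed:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(candidates, references, max_n=4):
    """candidates: list of token lists; references: list of lists of token
    lists (several references per candidate).  Returns simplified BLEU-4."""
    match, total = [0] * max_n, [0] * max_n
    cand_len = ref_len = 0
    for cand, refs in zip(candidates, references):
        cand_len += len(cand)
        # closest reference length, for the brevity penalty
        ref_len += min((abs(len(r) - len(cand)), len(r)) for r in refs)[1]
        for n in range(1, max_n + 1):
            c_counts = ngrams(cand, n)
            max_ref = Counter()
            for r in refs:
                for g, cnt in ngrams(r, n).items():
                    max_ref[g] = max(max_ref[g], cnt)
            match[n - 1] += sum(min(cnt, max_ref[g]) for g, cnt in c_counts.items())
            total[n - 1] += sum(c_counts.values())
    if min(match) == 0:
        return 0.0
    precisions = [m / t for m, t in zip(match, total)]
    bp = 1.0 if cand_len > ref_len else math.exp(1 - ref_len / max(cand_len, 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

# corpus_bleu([["a", "man", "riding", "a", "horse"]],
#             [[["a", "man", "riding", "a", "horse"]]])  # -> 1.0 for an exact match
```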
The reported measures are the F-scores computed by taking only the tuples in each subcategory. More specifically, we report SPICE scores for: Object, Relation, Attribute, Color, Count, and Size BIBREF23 . Note that for a given image, not all SPICE subcategory scores might be available. For example, if the reference captions for a given image have no mention of color, then the SPICE Color score is not defined and therefore we omit that image from that particular analysis. In spite of this, each subcategory analyzed had at least 1000 samples. For this experiment, we did not use self-critical training for either Transformer and they were both run with a beam size of 2. The metrics computed over the 5000 images of the test set are shown in Tables TABREF27 and TABREF28 . We first note that for all of the metrics, the Object Relation Transformer presents higher scores than the Standard Transformer. The score difference was statistically significant (using a significance level INLINEFORM0 ) for CIDEr-D, BLEU-1, ROUGE-L (Table TABREF27 ), Relation, and Count (Table TABREF28 ). The significant improvements in CIDEr-D and Relation are in line with our expectation that adding the geometric attention layer would help the model in determining the correct relationships between objects. In addition, it is interesting to see a significant improvement in the Count subcategory of SPICE, from 11.30 to 17.51. Image captioning methods in general show a large deficit in Count scores when compared with humans BIBREF23 , while we are able to show a significant improvement by adding explicit positional information. Some example images and captions illustrating these improvements are presented in Section SECREF29 . Qualitative Analysis To illustrate the advantages of the Object Relation Transformer relative to the Standard Transformer, we present example images with the corresponding captions generated by each model. The captions presented were generated using the following setup: both the Object Relation Transformer and the Standard Transformer were trained without self-critical training and both were run with a beam size of 2 on the 5000 images of the test set. We chose examples for which there was a marked improvement in the score of the Object Relation Transformer relative to the Standard Transformer. This was done for the Relation and Count subcategories of SPICE scores. The example images and captions are presented in Tables TABREF30 and TABREF30 . The images in Table TABREF30 illustrate an improvement in determining when a relationship between objects should be expressed, as well as in determining what that relationship should be. An example of correctly determining that a relationship should exist is shown in the third image of Table TABREF30 , where the two chairs are actually related to the umbrella, by being underneath it. Additionally, an example where the Object Relation Transformer correctly infers the type of relationship between objects is shown in the first image of Table TABREF30 , where the man in fact is not on the motorcycle, but is working on it. The examples in Table TABREF30 specifically illustrate the Object Relation Transformer's marked ability to better count objects. Conclusion We present the Object Relation Transformer, a modification of the conventional Transformer specifically adapted to the task of image captioning. 
The proposed Transformer encodes 2D position and size relationships between detected objects in images, building upon the bottom-up and top-down image captioning approach. Our results on the MS-COCO dataset demonstrate that the Transformer does indeed benefit from incorporating spatial relationship information, most evidently when comparing the relevant sub-metrics of the SPICE captioning metric. We also present qualitative examples of how incorporating this information can yield captioning results demonstrating better spatial awareness. At present, our model only takes into account geometric information in the encoder phase. As a next step, we intend to incorporate geometric attention in our decoder cross-attention layers between objects and words. We aim to do this by explicitly associating decoded words with object bounding boxes. This should lead to additional performance gains as well as improved interpretability of the model.
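The conclusion above refers to encoding 2D position and size relationships between detected objects. As a rough illustration only (the exact formulation used by the Object Relation Transformer is defined earlier in the paper and may differ), the sketch below computes a common style of pairwise box-geometry feature from two bounding boxes; the function name and the epsilon constant are our own.

```python
import math

def box_geometry_features(box_a, box_b):
    """Pairwise geometry features for two boxes given as (x_min, y_min, x_max, y_max).

    A generic log-ratio formulation of centre displacement and relative size,
    not necessarily the exact one used in the paper.
    """
    xa, ya = (box_a[0] + box_a[2]) / 2.0, (box_a[1] + box_a[3]) / 2.0
    xb, yb = (box_b[0] + box_b[2]) / 2.0, (box_b[1] + box_b[3]) / 2.0
    wa, ha = box_a[2] - box_a[0], box_a[3] - box_a[1]
    wb, hb = box_b[2] - box_b[0], box_b[3] - box_b[1]
    return (
        math.log(abs(xb - xa) / wa + 1e-6),  # horizontal displacement relative to width
        math.log(abs(yb - ya) / ha + 1e-6),  # vertical displacement relative to height
        math.log(wb / wa),                   # relative width
        math.log(hb / ha),                   # relative height
    )

print(box_geometry_features((10, 10, 50, 90), (40, 20, 120, 100)))
```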
Q: Which English domains do they evaluate on? Text: Semi-Supervised Methods for Out-of-Domain Dependency Parsing Juntao Yu School of Computer Science Introduction Syntactic parsing is an important natural language processing (NLP) task that focuses on analysing the syntactic structures of sentences. The syntax of a sentence has been found to be important to many other NLP tasks that require deeper analysis of the sentences, such as semantic parsing BIBREF0 , BIBREF1 , anaphora resolution BIBREF2 , BIBREF3 and machine translation BIBREF4 . There are two major families of syntactic parsing: constituency parsing, which generates parse trees of sentences according to phrase structure grammars, and dependency parsing, which assigns head-child relations to the words of a sentence. Initially, the parsing community mainly focused on constituency parsing systems; as a result, a number of high-accuracy constituency parsers have been introduced, such as the Collins Parser BIBREF5 , the Stanford PCFG Parser BIBREF6 , the BLLIP reranking parser BIBREF7 and the Berkeley Parser BIBREF8 . In the past decade, dependency-based systems have gained more and more attention BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , as they have better multi-lingual capacity and are more efficient. For a long period, dependency parsing systems were mainly based on carefully selected feature sets; we denote those systems as conventional dependency parsers. In recent years, a number of dependency parsing systems based on neural networks have also been investigated, some of which achieve better accuracies than conventional dependency parsers. We evaluated our approaches only on conventional dependency parsers, as these neural network-based systems were introduced after we had finished most of the work. However, the techniques evaluated in this thesis have the potential to be adapted to neural network-based parsers as well. Many dependency parsers are based on supervised learning techniques, which can produce high accuracy when trained on a large amount of training data from the same domain BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . However, models trained on such specific training data are vulnerable when dealing with data from domains different from the training data BIBREF14 , BIBREF15 . One effective way to make models less domain-specific is to annotate more balanced corpora. However, annotation is very time-consuming and expensive; as a result, only very limited annotated resources are available to the community. As an alternative to annotating new corpora, domain adaptation techniques have been introduced to train more robust models for out-of-domain parsing. Semi-supervised methods are one family of such techniques; they aim to improve out-of-domain parsing performance by enhancing the in-domain models with a large amount of unlabelled data. Some semi-supervised methods use the unlabelled data directly as additional training data, such as co-training BIBREF16 , BIBREF17 , BIBREF18 and self-training BIBREF19 , BIBREF20 , BIBREF21 . Other research uses the unlabelled data indirectly; word clusters BIBREF22 , BIBREF23 and word embeddings BIBREF24 , BIBREF25 are examples of this direction. Research Questions The focus of this thesis is on using semi-supervised techniques to bridge the accuracy gap between in-domain and out-of-domain dependency parsing.
More precisely, this thesis evaluates three important semi-supervised methods, namely co-training, self-training and dependency language models. Two of the methods use unlabelled data directly as additional training data (i.e. co-/self-training). Co-training is a method that has been used in many domain adaptation tasks; it uses multiple learners to derive additional training data from unlabelled target domain data. The successful use of co-training is conditioned on the learners being as different as possible. Previous work on parsing with co-training mainly focused on learners that are carefully designed to be very different. In this thesis, we use only off-the-shelf dependency parsers as our learners to form our co-training approaches. In total, we evaluate two co-training approaches: normal co-training (which uses two parsers) and tri-training (which uses three parsers). For both approaches, the evaluation learner is retrained on additional training data annotated identically by two source learners. In normal co-training, the evaluation learner is also used as one of the two source learners, while in tri-training two of the three learners are used as source learners and the third one is used as the evaluation learner. Compared to normal co-training, the tri-training approach allows the evaluation learner to learn from novel annotations that it did not predict itself. For our evaluation on co-training, we try to answer the following research questions: Q1. Could off-the-shelf dependency parsers be successfully used in co-training for domain adaptation? Q2. Would tri-training be more effective for out-of-domain parsing when off-the-shelf dependency parsers are used? In contrast to co-training, which retrains the parser on additional training data annotated by multiple learners, self-training retrains the parser on training data enlarged by its own automatically labelled data. Previous research mainly focused on applying self-training to constituency parsers BIBREF19 , BIBREF20 , BIBREF21 . Attempts to use self-training for dependency parsing either need additional classifiers BIBREF26 or only use partial parse trees BIBREF27 . In this thesis, we aim to find a more effective way to use self-training for dependency parsing. We intend to answer the following research questions for our self-training evaluation: Q3. How could self-training be effectively used in out-of-domain dependency parsing? Q4. If self-training works for English dependency parsing, can it be adapted to other languages? Using auto-labelled data as additional training data is effective but comes with consequences. First of all, the retrained models usually have a lower performance on the source domain data. Secondly, those approaches can only use a relatively small amount of unlabelled data, as training parsers on a corpus of millions of sentences might be time-consuming or even intractable. To overcome those limitations we investigate dependency language models, which use the unlabelled data indirectly. Dependency language models (DLMs) were previously used by chen2012utilizing to improve the performance and efficiency of a weak second-order graph-based parser BIBREF9 . In this thesis, we adapt this method to a strong transition-based parser BIBREF12 that on its own produces very promising accuracies. The research questions for this part are as follows: Q5. Can dependency language models be adapted to strong transition-based parsers? Q6.
Can dependency language models be used for out-of-domain parsing? Q7. Quality or quantity of the auto-parsed data, which one is more important to the successful use of dependency language models? Thesis Structure After the introduction, in Chapter SECREF7 we begin by discussing the background knowledge and previous work related to this thesis. This mainly covers two topics, dependency parsing and domain adaptation. We then introduce the Mate parser in detail. Mate is a strong transition-based parser which is used in all of our evaluations. After that, we introduce the corpora and the evaluation/analysis methods. In Chapter SECREF14 we introduce our experiments on agreement-based co-training. It first discusses the effect of using different off-the-shelf parsers on a normal agreement-based co-training setting (i.e. only involves two parsers). And then we introduce our experiments on its variant that uses three parsers (tri-training). Chapter SECREF20 and Chapter SECREF26 introduce our confidence-based self-training approaches. In Chapter SECREF20 , we introduce our evaluations on confidence-based self-training for English out-of-domain dependency parsing. In total, two confidence-based methods are compared in our experiments. Chapter SECREF26 introduces our experiments on multi-lingual datasets. The confidence-based self-training approach is evaluated on nine languages. Chapter SECREF32 discusses our dependency language models method that is able to improve both in-domain and out-of-domain parsing. The evaluations on English include both in-domain and out-of-domain datasets, in addition to that, we also evaluated on the Chinese in-domain data. Chapter SECREF38 provides a summary of the thesis and gives conclusions. Published Work In total, there are four publications based on this thesis. Each of the publications is related to one chapter of this thesis, pekar2014exploring is related to our evaluation on co-training (Chapter SECREF14 ). yu2015iwpt is made from our English self-training evaluation (Chapter SECREF20 ). yu2015depling is associated with our multi-lingual self-training experiments (Chapter SECREF26 ). yu2017iwpt presents our work on dependency language models (Chapter SECREF32 ). Chapter Summary In this chapter, we first briefly introduced dependency parsing and the problems of out-of-domain parsing that we are trying to address in this thesis. We then discussed the research questions that we intend to answer. The chapter also gave a brief introduction of the thesis structure. Finally, the chapter illustrated the published works based on this thesis. This chapter introduced the background and the experiment set-up. The first part focused on dependency parsers, it introduced three major types of dependency parsers and gave a detailed introduction of the base parser used in this thesis. The second part discussed the problem caused by parsing out-of-domain text and the techniques that have been used by previous work to solve the problem. The third part introduced the corpora we used. The last two parts showed our evaluation methods and analysis techniques. In this chapter we present our evaluations on two co-training approaches (co-training and tri-training). The main contribution of our evaluation on co-training is to assess the suitability of using the off-the-shelf parsers to form co-training. We first evaluated on the normal agreement based co-training with four off-the-shelf parsers. 
Three of them are paired with the Mate parser to generate additional training data for retraining the Mate parser. We evaluated the parser pairs by adding different numbers of sentences to the training data. We also evaluated the pairs with additional training data that excluded short sentences. The results show that co-training is able to improve accuracy substantially on the target domain, and additional gains are achieved when short sentences are excluded. We then evaluated the second approach (tri-training), which retrains the Mate parser on additional training data annotated identically by the MST and Malt parsers. Benefiting from novel annotations that were not predicted by the Mate parser itself, tri-training outperforms our best co-training setting. The further evaluation of tri-training shows large improvements on all four test domains. The method achieved its largest improvements of 1.8% and 0.6% for labelled and unlabelled accuracies. We then applied both token-level and sentence-level analysis to find out where the improvement comes from. The analysis suggests that tri-training gained particularly large improvements on the labels OBJ (object) and PRD (predicative complement). The analysis of unknown words on both the token level and the sentence level shows only a slightly larger improvement on unknown words when compared with known words. The analysis of sentence length suggests that tri-training helped mainly on sentences with a length between 15 and 30 tokens. The analysis of prepositions and conjunctions shows that larger gains are achieved on sentences containing prepositions or conjunctions. Overall we demonstrated that co-/tri-training are powerful techniques for out-of-domain parsing when off-the-shelf parsers are used. In this chapter, we introduced two novel confidence-based self-training approaches to domain adaptation for dependency parsing. We compared a self-training approach that uses random selection with two confidence-based approaches. The random selection-based self-training method did not improve the accuracy, which is in line with previously published negative results, while both confidence-based methods achieved statistically significant improvements and showed relatively high accuracy gains. We tested both confidence-based approaches on three web-related domains of our main evaluation corpora (Weblogs, Newsgroups, Reviews) and on the Chemical domain. Our confidence-based approaches achieved statistically significant improvements in all tested domains. For the web domains, we gained up to 0.8 percentage points for both labelled and unlabelled accuracies. On average the Delta-based approach improved the accuracy by 0.6% for both labelled and unlabelled scores. Similarly, the parse score-based method improved labelled accuracy by 0.6% and unlabelled accuracy by 0.5%. In the Chemical domain, the Delta-based and the parse score-based approaches gained 1.42% and 1.12% labelled accuracy respectively when using predicted PoS tags. When we used gold PoS tags, a larger labelled improvement of 1.62% was achieved by the Delta method and 1.48% by the parse score method. The unlabelled improvements for both methods are similar to their labelled improvements in all the experiments. In total, our approaches achieved significantly better accuracy for all four domains.
We conclude from the experiments that confidence-based self-training is worth applying in a domain adaptation scenario and that a confidence-based approach seems to be crucial for the successful application of self-training in dependency parsing. Our evaluation underlines the finding that the pre-selection of parse trees is probably a precondition for self-training to become effective for dependency parsing and to reach a significant accuracy gain. The further analysis compared the behaviour of the two approaches and gave a clearer picture of where self-training helps most. As a preliminary analysis, we assessed the overlap between the top-ranked sentences of the two methods. When we compared the top-ranked 50% of the development set under the two methods, 56% of the sentences are identical. As more than 40% of the sentences are selected differently by the two methods, we expected some clear differences in our in-depth analysis at the token and sentence level. Surprisingly, the further analysis suggested that the two methods behave similarly in most of the analyses; the behavioural differences are rather small. In our token-level analysis, both methods gained large improvements on the root, coordination, modifiers and unclassified relations. We also found much larger unlabelled improvements for unknown words. At the sentence level, we noticed that our approaches helped medium-length sentences (10-30 tokens per sentence) the most. Generally speaking, they also perform better on sentences with a certain level of complexity, such as sentences that have more than 2 unknown words or at least 2 prepositions. This might be because simpler sentences already have a reasonably good accuracy under the baseline model and are thus harder to improve. In this chapter, we evaluated an effective confidence-based self-training approach on nine languages. Due to the lack of out-of-domain resources, we used an under-resourced in-domain setting instead. We used a unified setting for all languages: the parser is retrained on the training set extended with the top 50k ranked parse trees selected from a 100k-sentence auto-parsed dataset. Our approach successfully improved the accuracies of five languages (Basque, German, Hungarian, Korean and Swedish) without tuning any variables for the individual languages. We report the largest labelled and unlabelled accuracy gains of 2.14% and 1.79% on Korean; on average we improved the baselines of the five languages by 0.87% (LAS) and 0.78% (UAS). We further carried out an in-depth analysis on Korean and French. For Korean, we performed a number of analyses at both the token level and the sentence level to understand where the improvement comes from. The per-label analysis showed that the self-trained model achieved large improvements on all the major labels, with the largest gain on conjuncts (conj). The analysis of unknown words showed that the self-trained model gained a larger labelled improvement for unknown words. The analysis of sentence length suggested that the self-training approach achieved larger improvements on longer sentences. For French, we aimed to understand why self-training did not work. The analysis showed that the confidence scores have a reasonably high correlation with the annotation quality, and are hence unlikely to be the reason for self-training's negative effect; the large difference between the unlabelled data and the training/test sets is a more likely contributor to the accuracy drop.
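As a minimal sketch of the confidence-based selection step summarised above: `parse_with_confidence` is a hypothetical wrapper around the base parser that returns a predicted tree together with a confidence estimate (either a normalised parse score or a Delta-style score), and the 50% keep ratio mirrors the top-50k-of-100k setting; the interface itself is an assumption, not the thesis's actual code.

```python
def self_training_selection(unlabelled_sentences, parse_with_confidence, keep_ratio=0.5):
    """Rank auto-parsed sentences by parser confidence and keep the top fraction.

    parse_with_confidence(sentence) -> (tree, confidence) is a hypothetical
    interface; confidence may be a normalised parse score or a Delta-based score.
    """
    parsed = []
    for sentence in unlabelled_sentences:
        tree, confidence = parse_with_confidence(sentence)
        parsed.append((confidence, sentence, tree))

    # Highest-confidence parses first.
    parsed.sort(key=lambda item: item[0], reverse=True)
    n_keep = int(len(parsed) * keep_ratio)  # e.g. the top 50k of 100k sentences
    return [(sentence, tree) for _, sentence, tree in parsed[:n_keep]]

# The parser is then retrained on the original training set extended with the
# selected (sentence, tree) pairs.
```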
In this chapter, we adapted the dependency language models (DLM) approach of chen2012utilizing to a strong transition-based parser. We integrated a small number of DLM-based features into the parser to allow the parser to explore DLMs extracted from a large auto-parsed corpus. We evaluated the parser with single and multiple DLMs extracted from corpora of different size and quality to improve the in-domain accuracy of the English and Chinese texts. The English model enhanced by a unigram DLM extracted from double parsed high-quality sentences achieved statistically significant improvements of 0.46% and 0.51% for labelled and unlabelled accuracies respectively. Our results outperform most of the latest systems and are close to the state-of-the-art. By using all unigram, bigram and trigram DLMs in our Chinese experiments, we achieved large improvements of 0.93% and 0.98% for both labelled and unlabelled scores. When increasing the beam size to 150, our system outperforms the best reported results by 0.2%. In addition to that, our approach gained an improvement of 0.4% on Chinese part-of-speech tagging. We further evaluate our approach on our main evaluation corpus. The method is tested on both in-domain and out-of-domain parsing. Our DLM-based approach achieved large improvement on all five domains evaluated (Conll, Weblogs, Newsgroups, Reviews, Answers). We achieved the labelled and unlabelled improvements of up to 0.91% and 0.82% on Newsgroups domain. On average we achieved 0.6% gains for both labelled and unlabelled scores on four out-of-domain test sets. We also improved the in-domain accuracy by 0.36% (LAS) and 0.4% (UAS). The analysis on our English main evaluation corpus suggests that the DLM model behaves differently on in-domain and out-of-domain parsing for a number of factors. Firstly, the DLM model achieved the largest improvement on label CONJ (conjunct) and LOC (locative adverbial) for in-domain parsing, while the largest improvement for out-of-domain dataset is contributed by OBJ (object) and PRD (predicative complement). Secondly, the DLM model improved more on unknown words for in-domain data but for out-of-domain text, DLM model delivered larger gains on known words. Thirdly, the analysis on sentence level shows that our model achieved most improvement on sentences of a length between 10 and 20, the range is wider (10-35) for out-of-domain data. We also analysed the Chinese results. The analysis shows the improvement on Chinese data is mainly contributed by the objects (OBJ, POBJ), dependent of DE (DEC, DEG) and children of localizer (LC). The DLM model only shows a large improvement on the known words, it nearly does not affect the unknown words accuracy. The DLM model mostly helped the sentences that have at least 20 tokens. In this chapter, we summarised our work of this thesis by answering seven research questions that we introduced in Chapter SECREF2 . We successfully answered all the questions using our findings in the previous chapters. Background and Experiment Set-up In this chapter, we first introduce the background and related work of this thesis, which includes a brief introduction of dependency parsing systems, a detailed introduction of the baseline parser BIBREF12 and previous work on out-of-domain parsing (especially those on semi-supervised approaches). We then introduce the corpora that have been used in this thesis. Finally, we introduce the evaluation metric and the analysis methods. 
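Because the DLM method is only summarised in the chapter overview above (the details follow in the corresponding chapter), the sketch below shows one plausible way to build a unigram dependency language model from an auto-parsed corpus and to map its probabilities onto coarse feature buckets for the parser; the bucket thresholds and the CoNLL-style head indexing are our assumptions.

```python
from collections import Counter

def build_unigram_dlm(parsed_corpus):
    """parsed_corpus: iterable of sentences, each a list of (word, head_index) pairs,
    where head_index is 1-based and 0 denotes the artificial root (CoNLL convention).

    Returns an estimate of P(dependent word | head word) from the auto-parsed data.
    """
    pair_counts = Counter()
    head_counts = Counter()
    for sentence in parsed_corpus:
        words = [w for w, _ in sentence]
        for word, head in sentence:
            head_word = "<root>" if head == 0 else words[head - 1]
            pair_counts[(head_word, word)] += 1
            head_counts[head_word] += 1
    return {pair: count / head_counts[pair[0]] for pair, count in pair_counts.items()}

def dlm_feature(dlm, head_word, dep_word, thresholds=(0.1, 0.01)):
    """Map the DLM probability onto a coarse bucket usable as a parser feature.

    The bucket thresholds are illustrative, not values taken from the thesis.
    """
    p = dlm.get((head_word, dep_word), 0.0)
    if p >= thresholds[0]:
        return "HIGH"
    if p >= thresholds[1]:
        return "MID"
    return "LOW"
```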
Dependency parsing Dependency parsing is one important way to analyse the syntactic structures of natural language. It has been widely studied in the past decade. A dependency parsing task takes natural language (usually tokenised sentence) as input and outputs a sequence of head-dependent relations. Figure FIGREF5 shows the dependency relations of a sentence (Tom played football with his classmate .) parsed by an off-the-shelf dependency parser. During the past decade, many dependency parsing systems have been introduced, most of them are graph-based or transition-based systems. The graph-based system solves the parsing problem by searching for maximum spanning trees (MST). A first-order MST parser first assigns scores to directed edges between tokens of a sentence. It then uses an algorithm to search a valid dependency tree with the highest score. By contrast, the transition-based system solves the parsing task as a sequence of transition decisions, in each step the parser deciding the next transition. In Section SECREF6 and SECREF9 we briefly describe the two major system types. In recent years, deep learning has been playing an important role in the machine learning community. As a result, several neural network-based systems have been introduced, some of them surpassing the state-of-the-art accuracy achieved by the conventional dependency parsers based on perceptions or SVMs. We briefly touch on neural network-based systems in Section SECREF11 , although most of them are still transition/graph-based systems. The evaluation of the neural network-based parsers is beyond the scope of this thesis, as they become popular after most of the work of this thesis has been done. We mainly use the Mate parser BIBREF12 , a transition-based approach that was state-of-the-art at the beginning of this work and whose performance remained competitive even after the introduction of the parsers based on neural network. Section SECREF13 introduces the technical details of the Mate parser. Graph-based Systems The graph-based dependency parser solves the parsing problem by searching for maximum spanning trees (MST). In the following, we consider the first-order MST parser of mcdonald05acl. Let INLINEFORM0 be the input sentence, INLINEFORM1 be the dependency tree of INLINEFORM2 , INLINEFORM3 is the INLINEFORM4 th word of INLINEFORM5 , INLINEFORM6 is the directed edge between INLINEFORM7 (head) and INLINEFORM8 (dependent). INLINEFORM9 is used to represent the set of possible dependency trees of the input sentence where INLINEFORM10 . The parser considers all valid directed edges between tokens in INLINEFORM11 and builds the parse trees in a bottom-up fashion by applying a CKY parsing algorithm. It scores a parse tree INLINEFORM12 by summing up the scores INLINEFORM13 of all the edges INLINEFORM14 . The INLINEFORM15 is calculated according to a high-dimensional binary feature representation INLINEFORM16 and a weight vector INLINEFORM17 learned from training data INLINEFORM18 ( INLINEFORM19 ). To be more specific, the score of a parse tree INLINEFORM20 of an input sentence INLINEFORM21 is calculated as follows: INLINEFORM22 Where INLINEFORM0 consists of a set of binary feature representations associated with a number of feature templates. For example, an edge INLINEFORM1 with a bi-gram feature template INLINEFORM2 will give a value of 1 for the following feature representation: INLINEFORM3 After scoring the possible parse trees INLINEFORM0 , the parser outputs the highest-scored dependency tree INLINEFORM1 . 
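To make the first-order factorisation concrete, the following sketch scores a candidate tree as the sum of its edge scores, where each edge fires a handful of binary feature templates looked up in a learned weight vector; the template strings are simplified illustrations rather than the MST parser's actual feature set.

```python
def edge_features(sentence, head, dep):
    """sentence: list of (word, pos) tuples; head, dep: token indices."""
    h_word, h_pos = sentence[head]
    d_word, d_pos = sentence[dep]
    return [
        f"bigram:{h_word}_{h_pos}_{d_word}_{d_pos}",  # head/dependent word+PoS bigram
        f"pos_pair:{h_pos}_{d_pos}",
        f"word_pair:{h_word}_{d_word}",
    ]

def edge_score(weights, sentence, head, dep):
    # Binary features: each firing template contributes its learned weight.
    return sum(weights.get(f, 0.0) for f in edge_features(sentence, head, dep))

def tree_score(weights, sentence, heads):
    """sentence includes an artificial <root> token at index 0; heads[i] is the
    head index of token i. The tree score is the sum of its edge scores."""
    return sum(edge_score(weights, sentence, heads[dep], dep)
               for dep in range(1, len(sentence)))
```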
Figure FIGREF7 shows an example of a sentence being parsed with a first-order graph-based parser. In terms of training, the parser uses an online learning algorithm to learn the weight vector INLINEFORM0 from the training set INLINEFORM1 . In each training step, only one training instance INLINEFORM2 ( INLINEFORM3 ) is considered, the INLINEFORM4 is updated after each step. More precisely, the Margin Infused Relaxed Algorithm (MIRA) BIBREF28 is used to create a margin between the score of a correct parse tree INLINEFORM5 and the incorrect ones INLINEFORM6 ( INLINEFORM7 ). The loss INLINEFORM8 of a dependency tree is defined as the number of incorrect edges. Let INLINEFORM9 , INLINEFORM10 be the weight vector before and after the update of the INLINEFORM11 th training step, INLINEFORM12 is updated subject to keeping the margin at least as large as the INLINEFORM13 , while at the same time, keeping the norm of the changes to the INLINEFORM14 as small as possible. A more detailed training algorithm is showed in algorithm SECREF6 . [h] INLINEFORM0 INLINEFORM1 INLINEFORM2 (*[h]N training iterations) INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 INLINEFORM8 MIRA algorithm for MST parser The MST parser is later improved by mcdonald2006online to include second-order features, however, the system is still weaker than its successors which also include third-order features BIBREF29 . Other mostly used strong graph-based parsers include Mate graph-based parser BIBREF30 and Turbo Parser BIBREF13 . Transition-based Systems The transition-based parsers build the dependency trees in a very different fashion compared to graph-based systems. Instead of searching for the maximum spanning trees, transition-based systems parse a sentence with a few pre-defined transitions. The Malt parser BIBREF31 is one of the earliest transition-based parsers which has been later widely used by researchers. The parser is well engineered and can be configured to use different transition systems. We take the parser's default transition system (arc-eager) as an example to show how the transition-based parser works. The Malt parser starts with an initial configuration and performs one transition at a time in a deterministic fashion until it reaches the final configuration. The parser's configurations are represented by triples INLINEFORM0 , where INLINEFORM1 is the stack that stores partially visited tokens, INLINEFORM2 is a list of remaining tokens that are unvisited, and INLINEFORM3 stores the directed arcs between token pairs that have already been parsed. The parser's initial configuration consists of an empty INLINEFORM4 and an empty INLINEFORM5 , while all the input tokens are stored in INLINEFORM6 . The final configuration is required to have an empty INLINEFORM7 . A set of four transitions (Shift, Left-Arc, Right-Arc and Reduce) are defined to build the parse trees. The Shift transition moves the token on the top of INLINEFORM8 into INLINEFORM9 , the Left-Arc transition adds an arc from the top of INLINEFORM10 to the top of INLINEFORM11 and removes the token on the top of INLINEFORM12 , the Right-Arc transition adds an arc from the top of INLINEFORM13 to the top of INLINEFORM14 and moves the token on the top of INLINEFORM15 into INLINEFORM16 , and the Reduce transition simply removes the token on the top of INLINEFORM17 . More precisely, table TABREF10 shows the details of the transitions of an arc-eager system. 
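The four arc-eager transitions can be written down compactly. The sketch below operates on a configuration of (stack, buffer, arcs) holding token indices; dependency labels and the permissibility checks (for example, that Reduce requires the stack top to already have a head) are omitted for brevity.

```python
def shift(stack, buffer, arcs):
    # Move the first buffer token onto the stack.
    return stack + [buffer[0]], buffer[1:], arcs

def left_arc(stack, buffer, arcs):
    # Add an arc from the front of the buffer (head) to the top of the stack
    # (dependent), then pop the dependent off the stack.
    return stack[:-1], buffer, arcs + [(buffer[0], stack[-1])]

def right_arc(stack, buffer, arcs):
    # Add an arc from the top of the stack (head) to the front of the buffer
    # (dependent), then move the buffer token onto the stack.
    return stack + [buffer[0]], buffer[1:], arcs + [(stack[-1], buffer[0])]

def reduce_(stack, buffer, arcs):
    # Pop the top of the stack; it must already have been assigned a head.
    return stack[:-1], buffer, arcs
```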
To train the parser, support vector machine classifier (SVM) with the one-versus-all strategy is used to solve the transition-based parser as a multi-classification problem. In a transition-based parsing scenario, the classes are different transitions. Each of the SVMs is trained to maximise the margin between the target transition and the other transitions, as in the one-versus-all strategy the classes other than the target class are treated the same as the negative examples. Since the data may not be linearly separable, they use in additional a quadratic kernel ( INLINEFORM0 ) to map the data into a higher dimensional space. The SVMs are trained to predict the next transition based on a given parser configuration. They used similar binary feature representations as those of the MST parser, in which the features are mapped into a high dimensional vector. The feature templates for the transition-based system are mainly associated with the configurations, for example, a feature between the INLINEFORM1 (the top of the stack) and the INLINEFORM2 (the top of the Buffer) is as follows: INLINEFORM3 Figure FIGREF8 shows an example of parsing the sentence (Tom plays football) with the Malt transition-based parser. Benefiting from the deterministic algorithm, the Malt parser is able to parse the non-projective sentences in linear time BIBREF10 , which is much faster compared to the second-order MST parser's cubic-time parsing BIBREF9 . Although the deterministic parsing is fast, the error made in the previous transitions will largely affect the decisions taken afterwards, which results in a lower accuracy. To overcome this problem beam search has been introduced to the transition-based systems, which leads to significant accuracy improvements BIBREF12 . Neural Network-based Systems Neural network-based systems have only been recently introduced to the literature. chen2014neural were the first to introduce a simple neural network to a deterministic transition-based parser, yielding good results. The parser used an arc-standard transition system. Similar to arc-eager, the arc-standard is another highly used transition-based system. Many dependency parsers are based on or have options to use an arc-standard approach, which include the Malt parser we introduced in the previous section (section SECREF9 ) and our main evaluation parser (Mate parser). We will introduce the arc-standard transition system in more detail in section SECREF13 . One of the major differences between the neural network based systems and the conventional systems is the use of feature representations. Instead of using the binary feature representations (commonly used by the conventional systems), the neural network based approaches represent the features by embeddings. During training, feature embeddings (e.g. word, part-of-speech embeddings) are capable of capturing the semantic information of the features. Take the part-of-speech tags as an example, adjective tags INLINEFORM0 will have similar embeddings. This allows the neural network-based systems to reduce the feature sparsity problem of the conventional parser systems. Conventional parsers usually represent different tokens or token combinations by independent feature spaces, thus are highly sparse. Another advantage of using the neural network based approach is that the system allows using the pre-trained word embeddings. 
Word embeddings extracted from large unlabelled data carry the statistical strength of the words, this could be a better bases for the system when compared to the randomly initialised embeddings. The empirical results confirmed that large improvements can be achieved by using the pre-trained word embeddings. The idea of using the pre-trained word embeddings goes into the same direction of the semi-supervised approaches that use unlabelled data indirectly, such as dependency language models evaluated in this thesis, or word clusters. In terms of the network architecture, chen2014neural used a single hidden layer and a softmax layer to predict the next transition based on the current configuration. To map the input layer to the hidden layer they used a cube activation function ( INLINEFORM0 ), in which INLINEFORM1 are feature embeddings of the words, part-of-speech tags and arc labels and INLINEFORM2 are the relative weights. Figure FIGREF12 shows the details of their neural network architecture. This first attempt of using the neural network for dependency parsing leads to many subsequent research. chen2014neural's system has been later extended by weiss2015neural who introduced beam search to the system and achieved state-of-the-art accuracy. Since then a number of more complex and powerful neural networks have been evaluated, such as the stack-LSTM BIBREF32 and the bi-directional LSTM BIBREF33 . The current state-of-the-art is achieved by the parser of dozat2017deep who used the bi-directional LSTM in their system. The Mate Parser In this thesis, we mainly used the Mate transition-based parser BIBREF34 , BIBREF35 , BIBREF12 . The parser is one of the best performing parsers on the data set of the major shared task (CoNLL 2009) on dependency parsing BIBREF1 and it is freely available . The parser uses the arc-standard transition system, it is also integrated with a number of techniques to maximise the parser's performance. Firstly, the parser employs a beam search to go beyond the greedy approach. Secondly, it uses an additional optional graph-based model to rescore the beam entries. In their paper BIBREF34 , they name it completion model as it scores factors of the graph as soon as they are finished by the parser. Furthermore, the parser has an option for joint tagging and parsing BIBREF35 . Same as the pipeline system, the tagger model is trained separately from the parser model. However, during the parsing, instead of using only the best-predicted part-of-speech (PoS) tag, they made the n-best ( INLINEFORM0 ) PoS tags of a token available to the parser. The joint system is able to gain a higher accuracy for both PoS tagging and parsing compared to a pipeline system. In this thesis, we use the Mate parser as our baseline and make the necessary modifications, where appropriate to comply with the requirements of our approaches. The transition-based part of the parser uses a modified arc-standard transition system. Comparing to the original arc-standard transition system (has only three transitions: Left-Arc, Right-Arc and Shift) of nivre2004incrementality, the Mate parser modified the Shift transition for joint tagging and parsing and included the Swap transition to handling non-projective parsing. More precisely, the parser tags and parses a sentence INLINEFORM0 using a sequence of transitions listed in Table TABREF15 . 
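A rough sketch of the feed-forward transition classifier of chen2014neural described above is given below; the feature count, embedding size and hidden size are illustrative choices, and for brevity a single embedding table stands in for the separate word, part-of-speech and arc-label embeddings.

```python
import torch
import torch.nn as nn

class CubeActivationParser(nn.Module):
    """Single-hidden-layer transition classifier with a cube activation,
    in the spirit of chen2014neural (dimensions here are illustrative)."""

    def __init__(self, vocab_size, n_features=48, embed_dim=50,
                 hidden_dim=200, n_transitions=3):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, embed_dim)
        self.hidden = nn.Linear(n_features * embed_dim, hidden_dim)
        self.output = nn.Linear(hidden_dim, n_transitions)  # scores for the next transition

    def forward(self, feature_ids):
        # feature_ids: (batch, n_features) indices of words, PoS tags and arc
        # labels extracted from the current parser configuration.
        x = self.embeddings(feature_ids).view(feature_ids.size(0), -1)
        h = self.hidden(x) ** 3   # cube activation
        return self.output(h)     # softmax / cross-entropy is applied during training
```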
An additional artificial token INLINEFORM1 root INLINEFORM2 INLINEFORM3 is added to the beginning of the sentence to allow the parser assigning a Root to the sentence at the last step of the transitions. The transitions change the initial configuration ( INLINEFORM4 ) in steps until reaching a terminal configuration ( INLINEFORM5 ). bohnet2013joint used the 5-tuples INLINEFORM6 to represent all configurations, where INLINEFORM7 (the stack) and INLINEFORM8 (the buffer) refers to disjoint sublists of the sentence INLINEFORM9 , INLINEFORM10 is a set of arcs, INLINEFORM11 and INLINEFORM12 are functions to assign a part-of-speech tag to each word and a dependency label to each arc. The initial configuration ( INLINEFORM13 ) has an empty stack, the buffer consists of the full input sentence INLINEFORM14 , and the arc set INLINEFORM15 is empty. The terminal configuration ( INLINEFORM16 ) is characterised by an empty stack and buffer, hence no further transitions can be taken. The arc set INLINEFORM17 consists of a sequence of arc pairs ( INLINEFORM18 ), where INLINEFORM19 is the head and INLINEFORM20 is the dependent. They use Tree INLINEFORM21 to represent the tagged dependency tree defined for INLINEFORM22 by INLINEFORM23 . As shown in Table TABREF15 , the Left-Arc INLINEFORM0 adds an arc from the token ( INLINEFORM1 ) at the top of the stack ( INLINEFORM2 ) to the token ( INLINEFORM3 ) at the second top of the stack and removes the dependent ( INLINEFORM4 ) from the stack. At the same time, the INLINEFORM5 function assigns a dependency label ( INLINEFORM6 ) to the newly created arc INLINEFORM7 . The Left-Arc INLINEFORM8 transition is permissible as long as the token at the second top of the stack is not the INLINEFORM9 root INLINEFORM10 (i.e. INLINEFORM11 ). The Right-Arc INLINEFORM12 adds a labelled arc from the token ( INLINEFORM13 ) at the second top of the stack to the token ( INLINEFORM14 ) at the top of the stack and removes the later. The Shift INLINEFORM15 transition assigns a PoS tag INLINEFORM16 to the first node of the buffer and moves it to the top of the stack. The Swap transition that is used to handling non-projective tree extracts the token ( INLINEFORM17 ) at the second top of the stack and moves it back to the buffer. The Swap transition is only permissible when the top two tokens of the stack are in the original word order (i.e. INLINEFORM18 ), this prevents the same two tokens from being swapped more than once. In additional, the artificial INLINEFORM19 root INLINEFORM20 token is not allowed to be swapped back to the buffer (i.e. INLINEFORM21 ). Figure FIGREF16 shows an example of joint tagging and parsing a sentence by the Mate parser. The graph-based completion model consists of a number of different second- and third-order feature models to rescore the partial parse tree Tree INLINEFORM0 . Some feature models are similar to carreras07 and koo10acl. Take one of the models INLINEFORM1 as an example, which consists of the second-order factors of carreras07: The head and the dependent. The head, the dependent and the right/left-most grandchild in between. The head, the dependent and the right/left-most grandchild away from the head. The head, the dependent and between those words the right/left-most sibling. Feature models are independent to each other and can be easily turned on/off by configuration. 
The score of a parse tree Tree INLINEFORM0 or a partial parse tree Tree INLINEFORM1 is then defined as the sum of the scores from the both parts: INLINEFORM2 Where INLINEFORM0 is the score of the transition-based part of the parser and INLINEFORM1 is the score from the graph-based completion model. [t] INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 INLINEFORM11 INLINEFORM12 INLINEFORM13 INLINEFORM14 INLINEFORM15 return INLINEFORM16 Beam search algorithm for the Mate parser Mate parser uses similar binary feature representations as those of the MST/Malt parser (the features are represented by a high dimensional feature vector ( INLINEFORM0 )). A learned weight vector ( INLINEFORM1 ) is used with the feature vector ( INLINEFORM2 ) to score the configurations in conjunction with the next transition. In addition, the parser uses the beam search to mitigate error propagation. Comparing with the deterministic parsing algorithm that only keeps the best partial parse tree, the beam search approach keeps the n-best partial parse trees during the inference. By using the beam search, errors made in the early stage can potentially be recovered in the late stage, as long as the correct configuration has not fallen out of the beam. The beam search algorithm takes a sentence ( INLINEFORM3 ), the weight vector ( INLINEFORM4 ) and the beam size parameter ( INLINEFORM5 ) and returns the best scoring parse tree (Tree INLINEFORM6 ). A parse hypothesis ( INLINEFORM7 ) of a sentence consists of a configuration ( INLINEFORM8 ), a score ( INLINEFORM9 ) and a feature vector ( INLINEFORM10 ). Initially the Beam only consists of the initial hypothesis ( INLINEFORM11 ), in which INLINEFORM12 contains a initial configuration of the sentence ( INLINEFORM13 ), a score of INLINEFORM14 and a initial feature vector ( INLINEFORM15 ). The transitions ( INLINEFORM16 ) change the hypotheses in steps and create new hypotheses by applying different permissible transitions to them. For each step, the top INLINEFORM17 scoring hypotheses are kept in the Beam. The beam search terminates when every hypothesis in the Beam contains a terminal configuration ( INLINEFORM18 ). It then returns the top scoring parse tree (Tree INLINEFORM19 ). Algorithm SECREF13 outlines the details of the beam search algorithm used by the Mate parser. In order to learn the weight vector, the parser goes through the training set ( INLINEFORM0 ) for INLINEFORM1 iterations. The weight vector is updated for every sentence INLINEFORM2 when an incorrect parse is returned (i.e. the highest scoring parse INLINEFORM3 is different from the gold parse INLINEFORM4 ). More precisely, the passive-aggressive update of crammer2006online is used: INLINEFORM5 In this thesis, unless specified, we used the default settings of the parser: We use all the graph-based features of the completion model. We use the joint PoS-tagging with two-best tags for each token. We use a beam of 40. We use 25 iterations of training. We do not change the sentence order of the training data during training. Out-of-domain Parsing The release of the large manually annotated Penn Treebank (PTB) BIBREF36 and the development of the supervised learning techniques enable researchers to work on the supervised learning based parsing systems. Over the last two decades, the parsing accuracy has been significantly improved. 
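As a simplified illustration of the beam search described in the previous section, the sketch below keeps the n best hypotheses per step until every hypothesis reaches a terminal configuration; `score_transition`, `permissible_transitions`, `apply_transition` and `is_terminal` are hypothetical stand-ins for the parser's scoring model (including the completion model) and its transition system.

```python
def beam_search(initial_config, score_transition, permissible_transitions,
                apply_transition, is_terminal, beam_size=40):
    """Return the highest-scoring terminal configuration."""
    beam = [(0.0, initial_config)]
    while not all(is_terminal(config) for _, config in beam):
        candidates = []
        for score, config in beam:
            if is_terminal(config):
                candidates.append((score, config))  # keep finished hypotheses
                continue
            for transition in permissible_transitions(config):
                new_score = score + score_transition(config, transition)
                candidates.append((new_score, apply_transition(config, transition)))
        # Keep only the beam_size best partial parses.
        candidates.sort(key=lambda item: item[0], reverse=True)
        beam = candidates[:beam_size]
    return max(beam, key=lambda item: item[0])[1]
```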
A number of strong parsing systems for both constituency and dependency families have been developed BIBREF6 , BIBREF8 , BIBREF12 , BIBREF13 , BIBREF25 , BIBREF33 . The parsers based on supervised learning techniques capture statistics from labelled corpora to enable the systems to correctly predict parse trees when input the corresponding sentences. Since the PTB corpus contains mainly texts from news domain, the supervised learning based parsers trained on PTB corpus are sensitive to domain shifting. Those systems are able to achieve high accuracies when tested on the PTB test set (i.e. in-domain parsing). However, when applying them on data from different sources (i.e. out-of-domain parsing), such as web domain BIBREF15 and chemical text BIBREF14 , the accuracy drops significantly. Table TABREF27 shows a comparison of the in-domain and out-of-domain parsing performance of three parsers that have been frequently used by researchers (i.e. MST BIBREF9 , Malt BIBREF10 , and Mate parser BIBREF12 ). Those parsers are trained on the training data from the major shared task on dependency parsing (i.e. CoNLL 2009 BIBREF1 ). The training set contains mainly the news domain data from the Penn Treebank. In our evaluation, we first test them on the CoNLL test set which denotes our in-domain examples; for our out-of-domain examples we test the parsers on a number of different domains from the OntoNotes v5.0 corpus. As we can see from the results, the accuracies on out-of-domain texts are much lower than that of in-domain texts, with the largest accuracy difference of more than 15% (i.e. Mate parser has an accuracy of 90.1% on in-domain texts and an accuracy of 74.4% on texts from broadcast conversations). How can we reduce the accuracy gap between the in-domain and the out-of-domain parsing? The most straightforward way would be annotating more text for the target domain, however, this approach is very expensive and time-consuming. There are only very limited manually annotated corpora available, which confirms the high costs of the annotation process. Domain adaptation is a task focused on solving the out-of-domain problems but without the need for manual annotation. There are a number of directions to work on the domain adaptation task, each of them focusing on a different aspect. These directions include semi-supervised techniques, domain specific training data selection, external lexicon resources and parser ensembles. Each direction has its own advantages and disadvantages, we briefly discuss in Section SECREF28 . In this thesis, we mainly focus on one direction that improves the out-of-domain accuracy by using unlabelled data (Semi-supervised approaches). Similar to other domain adaptation approaches, semi-supervised approaches do not require to manually annotate new data, but instead, they use the widely available unlabelled data. Some semi-supervised approaches focus on boosting the training data by unlabelled data that is automatically annotated by the base models, others aid the parsers by incorporating features extracted from the large unlabelled data. In Section SECREF29 we discuss both approaches in detail. Approaches to Out-of-Domain Parsing As stated above, the domain adaptation techniques are designed to fill the accuracy gaps between the source domain and the target domain. 
Previous work on domain adaptation tasks is mainly focused on four directions: semi-supervised techniques BIBREF16 , BIBREF19 , BIBREF20 , BIBREF17 , BIBREF37 , BIBREF21 , BIBREF22 , BIBREF18 , target domain training data selection BIBREF38 , BIBREF39 , BIBREF40 , external lexicon resources BIBREF41 , BIBREF42 , BIBREF23 and parser ensembles BIBREF14 , BIBREF43 , BIBREF18 , BIBREF15 . The semi-supervised techniques focus on exploring the largely available unlabelled data. There are two major ways to use the unlabelled data. The first family aims to boost the training data. Data that has been automatically annotated by the base models are used directly in re-training as the additional training set, up-training, self-training and co-training are techniques of this family. The other family uses the features extracted from unlabelled data to aid the base model, this type of techniques include word embeddings, word clusters and dependency language models. In this thesis, we use semi-supervised techniques from both families and we will discuss them in detail in Section SECREF29 . Domain specific training data selection is a technique based on the assumption that similarity methods are able to derive a subset of the source domain training data that fits an individual test domain. plank2011effective investigated several similarity methods to automatically select sentences from training data for the target domain, which gain significant improvements when comparing with random selection. Positive impacts are also found by khan13towards when they experimented with training data selection on parsing five sub-genres of web data. The advantage of this technique is that it does not need any extra data, however, it is also restricted to learn only from the source domain training set. Lack of the knowledge of the unknown words is one of the well-known problems faced by domain adaptation tasks, i.e. target domain test sets usually contain more unknown words (vocabularies which did not appear in the training data) than source domain test sets BIBREF14 , BIBREF15 . One way to solve this problem is to use the external lexicon resources created by the linguistics. External lexicons provide additional information for tokens, such as word lemma, part-of-speech tags, morphological information and so on. This information can be used by parsers directly to help making the decision. Previously, lexicons have been used by szolovits2003adding and pyysalo2006 to improve the link grammar parser on the medical domain. Both approaches showed large improvements on parsing accuracy. Recently, pekar2014exploring extracted a lexicon from a crowd-sourced online dictionary (Wiktionary) and applied it to a strong dependency parser. Unfortunately, in their approach, the dictionary achieved a moderate improvement only. The fourth direction of domain adaptation is parser ensembles, it becomes more noticeable, due to its good performance in shared tasks. For example, in the first workshop on syntactic analysis of non-canonical language (SANCL), the ensemble-based systems on average produced much better results than that of single parsers BIBREF43 , BIBREF18 , BIBREF15 . However, those ensemble-based systems are not used in real-world tasks, due to the complex architectures and high running time. Semi-Supervised Approaches Semi-supervised approaches use unlabelled data to bridge the accuracy gap between in-domain and out-of-domain. 
In recent years, unlabelled data has gained large popularity in syntactic parsing tasks, as it can easily and inexpensively be obtained, cf. BIBREF16 , BIBREF44 , BIBREF45 , BIBREF37 , BIBREF46 , BIBREF15 , BIBREF47 , BIBREF25 . This is in stark contrast to the high costs of manually labelling new data. Some techniques such as self-training BIBREF45 and co-training BIBREF16 use auto-parsed data as additional training data. This enables the parser to learn from its own or other parsers' annotations. Other techniques include word clustering BIBREF37 and word embedding BIBREF48 which are generated from a large amount of unlabelled data. The outputs can be used as features or inputs for parsers. Both groups of techniques have been shown effective on syntactic parsing tasks BIBREF49 , BIBREF20 , BIBREF21 , BIBREF46 , BIBREF50 , BIBREF25 . The first group uses unlabelled data (usually parsed data) directly in the training process as additional training data. The most common approaches in this group are co-training and self-training. Co-training is a technique, that has been frequently used by domain adaptation for parsers BIBREF16 , BIBREF17 , BIBREF18 , BIBREF15 . The early version of co-training uses two different 'views' of the classifier, each 'view' has a distinct feature set. Two 'views' are used to annotate unlabelled set after trained on the same training set. Then both classifiers are retrained on the newly annotated data and the initial training set BIBREF51 . blum98 first applied a multi-iteration co-training on classifying web pages. Then it was extended by collins99 to investigate named entity classification. At that stage, co-training strongly depended on the splitting of features BIBREF52 . One year after, goldman00 introduced a new variant of co-training which used two different learners, but both of them took the whole feature sets. One learner's high confidence data are used to teach the other learner. After that, zhou2005tri proposed another variant of co-training (tri-training). Tri-training used three learners, each learner is designed to learn from data on which the other two learners have agreed. In terms of the use of co-training in the syntactic analysis area, sarkar01 first applied the co-training to a phrase structure parser. He used a subset (9695 sentences) of labelled Wall Street Journal data as initial training set and a larger pool of unlabelled data (about 30K sentences). In each iteration of co-training, the most probable INLINEFORM0 sentences from two views are added to the training set of the next iteration. In their experiments, the parser achieved significant improvements in both precision and recall (7.79% and 10.52% respectively) after 12 iterations of co-training. The work most close to ours was presented by BIBREF17 in the shared task of the conference on computational natural language learning (CoNLL). They used two different settings of a shift-reduce parser to complete a one iteration co-training, and their approach successfully achieved improvements of approximately 2-3%. Their outputs have also scored the best in the out-of-domain track BIBREF14 . The two settings they used in their experiments are distinguished from each other in three ways. Firstly, they parse the sentences in reverse directions (forward vs backward). Secondly, the search strategies are also not the same (best-first vs deterministic). Finally, they use different learners (maximum entropy classifier vs support vector machine). 
The maximum entropy classifier learns a conditional model INLINEFORM0 by maximising the conditional entropy ( INLINEFORM1 ), while the support vector machines (SVMs) are linear classifiers trained to maximise the margin between different classes. In order to enable the multi-class classification, they used the all-versus-all strategy to train multiple SVMs for predicting the next transition. In addition, a polynomial kernel with degree 2 is used to make the data linearly separable. sagae07 proved their assumptions in their experiments. Firstly, the two settings they used are different enough to produce distinct results. Secondly, the perfect agreement between two learners is an indication of correctness. They reported that the labelled attachment score could be above 90% when the two views agreed. By contrast, the labelled attachment scores of the individual view were only between 78% and 79%. Tri-training is a variant of co-training. A tri-training approach uses three learners, in which the source learner is retrained on the data produced by the other two learners. This allows the source learner to explore additional annotations that are not predicted by its own, thus it has a potential to be more effective than the co-training. Tri-training is used by BIBREF18 in the first workshop on syntactic analysis of non-canonical language (SANCL) BIBREF15 . They add the sentences which the two parsers agreed on into the third parser's training set, then retrain the third parser on the new training set. However, in their experiments, tri-training did not significantly affect their results. Recently, weiss2015neural used normal agreement based co-training and tri-training in their evaluation of a state-of-the-art neural network parser. Their evaluation is similar to the Chapter SECREF14 of this thesis, although they used different parsers. Please note their paper is published after our evaluation on co-training BIBREF23 . In their work, the annotations agreed by a conventional transition-based parser (zPar) BIBREF53 and the Berkeley constituency parser BIBREF8 have been used as additional training data. They retrained their neural network parser and the zPar parser on the extended training data. The neural network parser gained around 0.3% from the tri-training, and it outperforms the state-of-the-art accuracy by a large 1%. By contrast, their co-training evaluation on the zPar parser found only negative effects. Self-training is another semi-supervised technique that only involves one learner. In a typical self-training iteration, a learner is firstly trained on the labelled data, and then the trained learner is used to label some unlabelled data. After that, the unlabelled data with the predictions (usually the high confident predictions of the model) are added to the training data to re-train the learner. The self-training iteration can also be repeated to do a multi-iteration self-training. When compared with co-training, self-training has a number of advantages. Firstly unlike the co-training that requires two to three learners, the self-training only requires one learner, thus it is more likely we can use the self-training than co-training in an under resourced scenario. Secondly, to generate the additional training data, co-training requires the unlabelled data to be double annotated by different learners, this is more time-consuming than self-training's single annotation requirement. 
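The agreement-based selection underlying the co-training variants discussed above can be summarised in a short sketch. The parser wrappers `parse_with_mst` and `parse_with_malt` below are hypothetical stand-ins for two off-the-shelf source parsers, and the size and length thresholds are illustrative; in the tri-training setting, the sentences on which the two source parsers produce identical trees are added to the training data of a third (evaluation) parser before retraining.

```python
def agreement_based_selection(unlabelled_sentences, parse_with_mst, parse_with_malt,
                              max_sentences=20000, min_length=10):
    """Select auto-parsed sentences on which two source parsers fully agree.

    parse_with_mst / parse_with_malt are hypothetical callables returning a
    dependency tree as a list of (head_index, label) pairs, one per token.
    Short sentences can be excluded, mirroring the filtering discussed earlier.
    """
    additional_training_data = []
    for sentence in unlabelled_sentences:
        if len(sentence) < min_length:
            continue  # skip short sentences
        tree_a = parse_with_mst(sentence)
        tree_b = parse_with_malt(sentence)
        if tree_a == tree_b:  # perfect agreement taken as an indication of correctness
            additional_training_data.append((sentence, tree_a))
        if len(additional_training_data) >= max_sentences:
            break
    return additional_training_data

# The evaluation parser (e.g. the Mate parser) is then retrained on its original
# training set plus the selected (sentence, tree) pairs.
```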
In term of the previous work on parsing via self-training, charniak1997statistical first applied self-training to a PCFG parser, but this first attempt of using self-training for parsing failed. steedman2003semi implemented self-training and evaluated it using several settings. They used a 500 sentences training data and parsed only 30 sentences in each self-training iteration. After multiple self-training iterations, it only achieved moderate improvements. This is caused probably by the small number of additional sentences used for self-training. mcclosky06naacl reported strong self-training results with an improvement of 1.1% f-score by using the Charniak-parser, cf. BIBREF7 . The Charniak-parser is a two stage parser that contains a lexicalized context-free parser and a discriminative reranker. They evaluated on two different settings. In the first setting, they add the data annotated by both stages and retrain the first stage parser on the new training set, this results in a large improvement of 1.1%. In the second setting, they retrain the first stage parser on its own annotations, the result shows no improvement. Their first setting is similar to the co-training as the first stage parser is retrained on the annotation co-selected by the second stage reranker, in which the additional training data is more accurate than the predictions of first stage parser. mcclosky2006reranking applied the same method later on out-of-domain texts which show good accuracy gains too. reichart2007self showed that self-training can improve the performance of a constituency parser without a reranker for the in-domain parsing. However, their approach used only a rather small training set when compared to that of mcclosky06naacl. sagae2010self investigated the contribution of the reranker for a constituency parser in a domain adaptation setting. Their results suggest that constituency parsers without a reranker can achieve statistically significant improvements in the out-of-domain parsing, but the improvement is still larger when the reranker is used. In the workshop on syntactic analysis of non-canonical language (SANCL) 2012 shared task, self-training was used by most of the constituency-based systems, cf. BIBREF15 . The top ranked system is also enhanced by self-training, this indicates that self-training is probably an established technique to improve the accuracy of constituency parsing on out-of-domain data, cf. BIBREF43 . However, none of the dependency-based systems used self-training in the SANCL 2012 shared task. One of the few successful approaches to self-training for dependency parsing was introduced by chen2008learning. They improved the unlabelled attachment score by about one percentage point for Chinese.chen2008learning added parsed sentences that have a high ratio of dependency edges that span only a short distance, i.e. the head and dependent are close together. The rationale for this procedure is the observation that short dependency edges show a higher accuracy than longer edges. kawahara2008learning used a separately trained binary classifier to select reliable sentences as additional training data. Their approach improved the unlabelled accuracy of texts from a chemical domain by about 0.5%. goutam2011exploring applied a multi-iteration self-training approach on Hindi to improve parsing accuracy within the training domain. 
In each iteration, they add a small number (1,000) of additional sentences to a small initial training set of 2,972 sentences, the additional sentences were selected due to their parse scores. They improved upon the baseline by up to 0.7% and 0.4% for labelled and unlabelled attachment scores after 23 self-training iterations. While many other evaluations on self-training for dependency parsing are found unhelpful or even have negative effects on results. bplank2011phd applied self-training with single and multiple iterations for parsing of Dutch using the Alpino parser BIBREF54 , which was modified to produce dependency trees. She found self-training produces only a slight improvement in some cases but worsened when more unlabelled data is added. plank2013experiments used self-training in conjunction with dependency triplets statistics and the similarity-based sentence selection for Italian out-of-domain parsing. They found the effects of self-training are unstable and does not lead to an improvement. cerisara2014spmrl and bjorkelund2014spmrl applied self-training to dependency parsing on nine languages. cerisara2014spmrl could only report negative results in their self-training evaluations for dependency parsing. Similarly, bjorkelund2014spmrl could observe only on Swedish a positive effect. The second group uses the unlabelled data indirectly. Instead of using the unlabelled data as training data, they incorporate the information extracted from large unlabelled data as features to the parser. Word clusters BIBREF37 , BIBREF55 and word embeddings BIBREF24 , BIBREF25 are most well-known approaches of this family. However, other attempts have also been evaluated, such as dependency language models (DLM) BIBREF56 . Word Clustering is an unsupervised algorithm that is able to group the similar words into the same classes by analysing the co-occurrence of the words in a large unlabelled corpus. The popular clustering algorithm includes Brown BIBREF57 , BIBREF58 and the Latent dirichlet allocation (LDA) BIBREF59 clusters. koo08 first employed a set of features based on brown clusters to a second-order graph-based dependency parser. They evaluated on two languages (English and Czech) and yield about one percentage improvements for both languages. The similar features have been adapted to a transition-based parser of bohnet2012emnlp. The LDA clusters have been used by cerisara2014spmrl in the workshop on statistical parsing of morphologically rich languages (SPMRL) 2014 shared tasks BIBREF60 on parsing nine different languages, their system achieved the best average results across all non-ensemble parsers. Word embeddings is another approach that relies on the co-occurrence of the words. Instead of assigning the words into clusters, word embedding represent words as a low dimensional vector (such as 50 or 300 dimensional vector), popular word embedding algorithms include word2vec BIBREF61 and global vectors for word representation (GloVe) BIBREF62 . Due to the nature of the neural networks, word embeddings can be effectively used in the parsers based on neural networks. By using pre-trained word embeddings the neural network-based parsers can usually achieve a higher accuracy compared with those who used randomly initialised embeddings BIBREF24 , BIBREF25 , BIBREF33 . Other Approaches that use different ways to extract features from unlabelled data have also been reported. mirroshandel12 used lexical affinities to rescore the n-best parses. 
They extract the lexical affinities from parsed French corpora by calculating the relative frequencies of head-dependent pairs for nine manually selected patterns. Their approach gained a labelled improvement of 0.8% over the baseline. chen2012utilizing applied high-order DLMs to a second-order graph-based parser. This approach is most close to the Chapter SECREF32 of this thesis. The DLMs allow the new parser to explore higher-order features without increasing the time complexity. The DLMs are extracted from a 43 million words English corpus BIBREF63 and a 311 million words corpus of Chinese BIBREF64 parsed by the baseline parser. Features based on the DLMs are used in the parser. They gained 0.66% UAS for English and an impressive 2.93% for Chinese. chen2013feature combined the basic first- and second-order features with meta features based on frequencies. The meta features are extracted from auto-parsed annotations by counting the frequencies of basic feature representations in a large corpus. With the help of meta features, the parser achieved the state-of-the-art accuracy on Chinese. Corpora As mentioned previously, one contribution of this thesis is evaluating major semi-supervised techniques in a unified framework. For our main evaluation, we used English data from the conference on computational natural language learning (Conll) 2009 shared task BIBREF1 as our source of in-domain evaluation. For out-of-domain evaluation, we used weblogs portion of OntoNotes v5.0 corpus (Weblogs) and the first workshop on syntactic analysis of non-canonical language shared task data (Newsgroups,Reviews,Answers) BIBREF15 . Section SECREF35 introduces our main evaluation corpora in detail. For comparison and multi-lingual evaluation, we also evaluated some of our approaches in various additional corpora. Our self-training approach has been evaluated on chemical domain data (Chemical) from the conference on computational natural language learning 2007 shared task BIBREF14 and nine languages datasets from the workshop on statistical parsing of morphologically rich languages (Spmrl) 2014 shared task BIBREF60 . Our dependency language models approach has been evaluated in addition on Wall Street Journal portion of Penn English Treebank 3 (Wsj) BIBREF36 and Chinese Treebank 5 (Ctb) BIBREF65 . As both treebanks do not contain unlabelled data, we used the data of chelba13onebillion and the Xinhua portion of Chinese Gigaword Version 5.0 for our English and Chinese tests respectively. We introduce those corpora in the experiment set-up section of the relevant chapters. The Main Evaluation Corpora In this section, we introduce our main evaluation corpora that have been used in all of the semi-supervised approaches evaluated in this thesis. The Conll English corpus built on the Penn English Treebank 3 BIBREF36 which contains mainly Wall Street Journals but also included a small portion of Brown corpus BIBREF66 . The training set contains only Wall Street Journals, the small subset of the Brown corpus has been included in the test set. The constituency trees from Penn English Treebank are converted to dependency representation by the LTH constituent-to-dependency conversion tool, cf. BIBREF67 . A basic statistic of the corpus can be found in Table TABREF36 . For our Weblogs domain test we used the Ontonotes v5.0 corpus. The Ontonotes corpus contains various domains of text such as weblogs, broadcasts, talk shows and pivot texts. 
We used the last 20% of the weblogs portion of the OntoNotes v5.0 corpus as our target-domain development and main test sets. This subset allows us to build datasets of a size similar to the source-domain test set. More precisely, the first half of the selected corpus is used as the test set and the second half as the development set. Table TABREF37 shows basic statistics of these datasets. The Newsgroups, Reviews and Answers domains are used as additional test sets in our evaluation. These additional test domains are provided by the first workshop on syntactic analysis of non-canonical language (SANCL) shared task BIBREF15. The shared task focused on parsing English web text; in total, five web-domain datasets were prepared, of which two are development sets (Email, Weblogs) and the other three (Newsgroups, Reviews and Answers) are test sets. For each domain, a small labelled set and a large unlabelled set are provided. In this thesis, we use all three test domains (both labelled and unlabelled data), and in addition the unlabelled Weblogs text from the development portion of the shared task. We use unlabelled datasets of a similar size for each domain to keep the evaluation uniform; the only exception is the Answers domain, whose unlabelled dataset is much smaller than those of the other three domains, so we use all of the data provided. Basic statistics of the labelled test sets and the unlabelled data can be found in Table TABREF37 and Table TABREF38 respectively. In terms of the dependency representation, we use the LTH conversion for our main evaluation corpora: as in the CoNLL 2009 shared task, all labelled data are converted from constituent trees to dependency representation with the LTH constituent-to-dependency conversion tool BIBREF67 when needed.

Evaluation Methods

To measure the parser's performance, we report labelled attachment scores (LAS) and unlabelled attachment scores (UAS). For our evaluation on the main corpora, we use the official evaluation script of the CoNLL 2009 shared task, in which all punctuation marks are included in the evaluation. LAS and UAS are the standard ways to evaluate the accuracy of a dependency parser. Due to the single-head property of dependency trees, dependency parsing can be viewed as a tagging-like task (each token receives exactly one head), so a single accuracy metric is well suited for the evaluation. Both LAS and UAS measure accuracy as the percentage of dependency edges that are correctly attached: UAS counts an edge as correct if the attachment is correct, regardless of the label, whereas LAS counts only edges that are both correctly attached and correctly labelled. LAS is stricter than UAS, so we mainly focus on LAS in our evaluation. Let $c_u$ be the number of edges that are correctly attached, $c_l$ the number of edges that are both correctly attached and correctly labelled, and $n$ the total number of edges; we compute:

$UAS = \frac{c_u}{n}, \qquad LAS = \frac{c_l}{n}$
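As a concrete illustration of the two metrics, the following is a small sketch (not the official CoNLL 2009 evaluation script) that computes UAS and LAS from aligned gold and predicted (head, label) pairs; punctuation handling and file parsing are omitted.

```python
def attachment_scores(gold, pred):
    """Compute UAS and LAS as percentages.

    `gold` and `pred` are lists of (head, label) tuples, one per token,
    aligned with each other.
    """
    assert len(gold) == len(pred) and gold, "inputs must be aligned and non-empty"
    n = len(gold)
    correct_unlab = sum(1 for (gh, _), (ph, _) in zip(gold, pred) if gh == ph)
    correct_lab = sum(1 for (gh, gl), (ph, pl) in zip(gold, pred)
                      if gh == ph and gl == pl)
    return 100.0 * correct_unlab / n, 100.0 * correct_lab / n


# Example: a three-token sentence where one label is wrong.
gold = [(2, "SBJ"), (0, "ROOT"), (2, "OBJ")]
pred = [(2, "SBJ"), (0, "ROOT"), (2, "NMOD")]
print(attachment_scores(gold, pred))  # (100.0, 66.66...)
```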
For significance testing, we use the randomised parsing evaluation comparator from a major shared task on dependency parsing BIBREF14. The script takes the predictions of two different models on the same dataset; let the first input be the one with the higher overall accuracy. The null hypothesis of the script is that the accuracy difference between the first and the second input is not statistically significant, and the p-value represents the probability that the null hypothesis is correct. We use the script's default setting of 10,000 iterations ($k = 10{,}000$); in each iteration, the comparator randomly selects one sentence from the dataset and compares the accuracies of that sentence in the two inputs. Let $m$ be the number of randomly selected instances that are predicted less accurately in the first input than in the second. The p-value is then estimated as

$p = \frac{m}{k}$

We mark significance levels according to the p-values: * for $p < 0.05$ and ** for $p < 0.01$.

Analysis Techniques

To understand the behaviour of our methods, we assess our results with a number of tests, on both the token level and the sentence level. At the token level, we focus on the accuracies of individual syntactic labels and on known/unknown word accuracies. At the sentence level, we use the method of mcclosky06naacl to evaluate sentences along four factors: sentence length, the number of unknown words, the number of prepositions and the number of conjunctions.

Token Level Analysis. Our token-level analysis consists of two tests. The first test assesses the accuracy changes for individual labels; its goal is to find out how our semi-supervised methods affect different labels. For an individual label $l$, we calculate recall, precision and f-score. Let $n_p$ be the number of tokens to which the parser assigns label $l$, $n_g$ the count of label $l$ in the gold data, and $n_c$ the number of tokens labelled correctly. The precision ($P$), recall ($R$) and f-score ($F$) are calculated as follows:

$P = \frac{n_c}{n_p}, \qquad R = \frac{n_c}{n_g}, \qquad F = \frac{2PR}{P+R}$

For each label, we compute the score differences between our enhanced model and the base model. The results for the most frequent labels are visualised as bar charts. Figure FIGREF45 is an example of the bar charts we use: the x-axis shows the names of the relevant labels, and the y-axis shows the accuracy changes in percentage points. For each label, we report the changes of all three scores; the left (blue) bar represents recall, the middle (red) bar precision, and the right (brown) bar f-score. The second test assesses the overall accuracy of known and unknown words. Unknown words are defined as words that do not occur in the initial training set, i.e. the set used to train the base model. To compute these accuracies, we first assign all tokens in the dataset to two groups (known and unknown) and then calculate the labelled and unlabelled accuracies for each group separately. We compare the improvements achieved by our enhanced model on known and unknown words to understand how well the model handles unknown words.

Sentence Level Analysis. For our sentence-level analysis, we evaluate four factors (sentence length, the number of unknown words, the number of prepositions and the number of conjunctions) that are known to be problematic for parsing. We use a method similar to that of mcclosky06naacl: for each factor, we assign sentences to classes according to the property in question, so that sentences sharing the same property value fall into the same class. Taking the unknown-words factor as an example, sentences containing the same number of unknown words are grouped together. For each class, we calculate the percentage of sentences whose accuracy is improved, worsened or unchanged by our enhanced model. We use percentages rather than the raw sentence counts used by mcclosky06naacl mainly because the absolute numbers vary greatly both within and between factors and are therefore unsuitable for comparison, whereas percentages can be compared directly. In addition to these values, we also report the number of sentences in each class. Figure FIGREF46 shows an example of our sentence-level analysis for different numbers of unknown words per sentence. The x-axis shows the class conditions, in this example the number of unknown words in a sentence. The left y-axis shows the percentage and the right y-axis the number of sentences. The blue dashed line represents the percentage of sentences parsed better by our enhanced model, the red dotted line the portion parsed less accurately, and the black dash-dotted line the portion whose accuracy is unchanged. The black solid line gives the number of sentences in each class.
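This grouping procedure is the same for all four factors; the following is a small generic sketch, with the grouping property supplied as a function (the names and data layout are illustrative, not taken from our actual analysis scripts).

```python
from collections import defaultdict

def sentence_level_analysis(sentences, property_fn):
    """Percentage of sentences parsed better/worse/unchanged per class.

    `sentences` is a list of (tokens, baseline_accuracy, enhanced_accuracy)
    triples and `property_fn` maps the token list to a class key (e.g. its
    length bucket or the number of unknown words it contains).
    """
    groups = defaultdict(lambda: {"better": 0, "worse": 0, "same": 0})
    for tokens, base_acc, new_acc in sentences:
        key = property_fn(tokens)
        if new_acc > base_acc:
            groups[key]["better"] += 1
        elif new_acc < base_acc:
            groups[key]["worse"] += 1
        else:
            groups[key]["same"] += 1

    result = {}
    for key, counts in groups.items():
        total = sum(counts.values())
        result[key] = {k: 100.0 * v / total for k, v in counts.items()}
        result[key]["n_sentences"] = total
    return result
```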
Co-training

In this chapter, we introduce our co-training approach. Co-training is a popular semi-supervised technique that has been applied to many natural language processing tasks, such as named entity recognition BIBREF68, constituency parsing BIBREF16 and dependency parsing BIBREF17, BIBREF15. Although co-training approaches are popular, they do not always bring positive effects BIBREF18, BIBREF25. Improvements are usually reported for learners that are carefully designed to be as different as possible; in sagae07's approach, for example, the co-training is formed from parsers with different learning algorithms and search strategies. Off-the-shelf parsers, however, share many similar features, so their outputs are more likely to agree with each other, and it is therefore unclear whether off-the-shelf parsers are suitable for co-training. In this work we evaluate co-training with a number of off-the-shelf parsers that are freely available to the research community, namely the Malt parser BIBREF10, the MST parser BIBREF9, the Mate parser BIBREF12, and the Turbo parser BIBREF11. We evaluate these parsers with agreement-based co-training algorithms: the evaluation learner is retrained on a training set extended with automatically annotated sentences on which two source learners agree. We investigate both normal agreement-based co-training and a variant called tri-training. In the normal co-training setting the evaluation learner is also one of the source learners, whereas in the tri-training scenario the source learners are distinct from the evaluation learner. In the following sections we introduce our approaches in Section SECREF15, then describe our experimental settings and results in Section SECREF16 and Section SECREF17 respectively. After that, in Section SECREF18, we analyse the results to understand how co-training helps, and in the last section (Section SECREF19) we summarise our findings.

Agreement Based Co-training

In this work, we apply agreement-based co-training to out-of-domain dependency parsing.
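The data-selection step shared by the co-training and tri-training settings can be sketched as follows; the `parse_a`/`parse_b` functions and the tree representation are hypothetical placeholders, and exact tree equality stands in for the "identical annotation" criterion.

```python
def agreed_sentences(unlabelled, parse_a, parse_b, max_sentences=20_000):
    """Collect sentences on which two source parsers produce identical trees.

    `parse_a` and `parse_b` map a raw sentence to a dependency tree, here
    represented as a list of (head, label) pairs; the agreed trees are later
    added to the evaluation parser's training set.
    """
    agreed = []
    for sentence in unlabelled:
        tree_a = parse_a(sentence)
        tree_b = parse_b(sentence)
        if tree_a == tree_b:                  # whole-sentence agreement
            agreed.append((sentence, tree_a))
        if len(agreed) >= max_sentences:
            break
    return agreed
```

Whole-sentence agreement is deliberately strict: as shown later, agreed parses are markedly more accurate than those of either parser alone, at the cost of being biased towards shorter sentences.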
Our agreement based co-training is inspired by the observation from sagae07 in which the two parsers agreeing on an annotation is an indication of a higher accuracy. We proposed two types of agreement based approaches: one uses parser pairs (normal co-training), the other uses three parsers which is also known as tri-training. Two approaches use a similar algorithm, which involves two source learners and one evaluation learner. Two source learners are used to produce additional training data for retraining the evaluation learner. More precisely, our algorithm is as follows: Although both approaches share the similar algorithm, the major differences between them are: both parsers involved by normal co-training are used as the source learners, in which one of them is also used as the evaluation learner; by contrast, tri-training uses three parsers in total, in which two of them are used as the source learners and the third one is used as the evaluation learner. In terms of parsers selection, we selected four public available dependency parsers, which include two benchmark parsers (Malt parser BIBREF10 and MST parser BIBREF9 ), one transition-based Mate parser BIBREF12 , and one graph-based Turbo parser BIBREF11 . These parsers have been widely used by researchers. A more detailed discussion of the dependency parser can be found in section SECREF8 . The agreement based co-training depends on the assumption that identical annotations between two learners indicate the correctness. To confirm the suitability of selected parsers, in the preliminary evaluation we assessed the accuracy of identical analysis generated by parser pairs. Because we intend to use the Mate parser as our evaluation parser, we paired each of the other three parsers with Mate parser to create three co-training pairs. We assess our assumption by annotating our Weblogs development set, the development set is parsed by all four parsers. We then extract the identical annotations (whole sentence) from parser pairs. We show the accuracy of individual parsers and the accuracy of identical annotations in Table TABREF51 . The second row shows the labelled accuracy of each parser on the Weblogs development set. The third row shows the labelled accuracy of the identical annotations between the named parser and Mate parser. The fourth row shows the agreement rate of the parser pairs. The last row shows the average sentence length of the identical annotations. As we can see from the table, our assumption is correct on all the parser pairs. Actually, when they agreed on the annotations, the accuracies can be 16% higher than that of individual parsers. However, we also noticed that the average sentence length of the identical annotations is in stark contrast with that of the entire development set (19.6 tokens/sentence). We will discuss this potential conflict in the later section. Experiment Set-up In our evaluation on co-training we use our main evaluation corpora that consists of a source domain training set (Conll), a Weblogs domain development set, a in-domain test set (Conll) and four out-of-domain test sets (Weblogs, Newsgroups, Reviews and Answers). For each target domains, we used in addition a large unlabelled dataset to supply the additional training set. We evaluate various different settings on the development set to tune the best configuration, after that, we apply the best setting to all the test domains. As mentioned before, we used four parsers in our experiments. cf. 
the Malt parser BIBREF10 , MST parser BIBREF9 , Mate parser BIBREF12 , and the Turbo parser BIBREF11 . We use the default settings for all the parsers. The part-of-speech tags is annotated by Mate parser's internal tagger. To create the additional training corpus, the unlabelled datasets are annotated by all the parsers which are trained on the Conll source domain training set. The Mate parser is used as our evaluation learner, the baseline for all the domains are generated by Mate parser trained on the same Conll training set and applied directly to target domains. We mainly report the labelled attachment scores (LAS), but also include the unlabelled attachment scores (UAS) for our evaluations on test sets. We mark the significance levels according to the p-values, * indicates significance at the p < 0.05 level, ** for the p < 0.01 level. For our evaluation on self-training, we used our main evaluation corpora and the Chemical domain text from the domain adaptation track of CoNLL 2007 shared task. We mainly evaluated on our main evaluation corpora and the best setting is also tuned on the development set of the main evaluation corpora. The Chemical domain evaluation is only used for comparison with previous work, we do not optimise our approaches specifically for this domain. For the main evaluation corpora, we used the Conll source domain training set, the Weblogs domain development set, the Conll source domain test set and Weblogs, Newsgroups, Reviews domain test sets. We do not evaluate our approach on the Answers domain as the unlabelled data for this domain is not large enough for our self-training. The evaluation corpus for Chemical domain is taken from the domain adaptation track of the CoNLL 2007 shared task BIBREF14 . The shared task is the second year running for the dependency parsing task. Besides the multi-lingual parsing track introduced from the previous year, the 2007 shared task also included a track on domain adaptation task. The domain adaptation track provided mainly two domains (Biomedical and Chemical), in which the biomedical domain is used as development set and the chemical domain is used as evaluation set. The source domain training set consists of sections 2-11 of the Wall Street Journal section of the Penn Treebank BIBREF36 . A sufficient size of unlabelled data are also provided by the organiser, we used the first 256k sentences in our work. The labelled data are converted to dependency relations by the LTH constituent-to-dependency conversion tool BIBREF67 . Table TABREF79 shows the basic statistics of the training, development and the test set. For the Chemical domain test we used only the data from the CoNLL 2007 shared task to make a fair comparison with kawahara2008learning's results. We use the Mate transition-based parser in our experiments. The parser is modified to output the confidence scores, other than that we used its default settings. For part-of-speech tagging, we use predicted tags from Mate's internal tagger for all the evaluated domains. For Chemical domain we evaluated additionally on gold tags as they are used by previous work. The baselines are trained only on the respective source domain training data. For the evaluation of the parser's accuracy, we report both labelled (LAS) and unlabelled (UAS) attachment scores, but mainly focus on the labelled version. We included all punctuation marks in the evaluation. 
The significance levels are marked according to the p-values, * and ** are used to represent the p-value of 0.05 and 0.01 levels respectively. We evaluate our adjusted parse score-based self-training approach with the Spmrl multi-lingual corpora. The Spmrl multi-lingual corpora consist of nine languages (Arabic, Basque, French, German, Hebrew, Hungarian, Korean, Polish, Swedish) in-domain datasets available from 2014 Shared Task at the workshop on statistical parsing of morphologically rich languages (SPMRL), cf. BIBREF60 . We have chosen the datasets as there are no multi-lingual out-of-domain corpora available. Actually, even the in-domain corpora for many languages are rather small. We used the 5k smaller training set from the shared task, to make the scenario similar to the domain adaptation task that assumes a small number of target domain data is available. This setting is also a good basis for exploration for improving parsing accuracy of under-resourced languages. For each language, the shared task also provided a sufficient unlabelled data which is required by our evaluation. We evaluate nine languages in a unified setting, in which the 5k training set and a 100k unlabelled dataset are used for all the languages. For additional training set, we parse all 100k sentences for each of the languages and use 50k of them as the additional training set. For tuning the INLINEFORM0 value of our adjusted parse score-based method, we used only the German development set, as we intend to use a unified setting for all languages and the German development set is the largest in size. Table TABREF103 shows statistics about the corpora that we used in our experiments. We evaluate all nine languages on the Mate parser BIBREF12 , the default settings are used in all the experiments. To output the confidence scores we slightly modified the parser, however, this does not affect the parser's accuracy. For part-of-speech tagging, we use the Mate parser's internal tagger for all the evaluations. The baselines are obtained from models trained only on the 5k initial training data. We report both labelled (LAS) and unlabelled (UAS) attachment scores, and mainly focus on the labelled accuracy. In line with the shared task official evaluation method, we include all the punctuations in our evaluation. The statistically significance levels are marked according to their p-values, (*) p-value < 0.05, (**) p-value < 0.01. For our experiments on English in-domain text, we used the Wall Street Journal portion (Wsj) of the Penn English Treebank BIBREF36 . The constituency trees are converted to the Stanford style dependency relations. The Stanford conversion attracts more attention during the recent years, it has been used in the SANCL 2012 shared tasks BIBREF15 and many state-of-the-art results were also reported using this conversion BIBREF25 , BIBREF78 , BIBREF33 . We follow the standard splits of the corpus, section 2-21 are used for training, section 22 and 23 are used as the development set and the test set respectively. We used the Stanford parser v3.3.0 to convert the constituency trees into Stanford style dependencies BIBREF79 . For unlabelled data, we used the data of chelba13onebillion which contains around 30 million sentences (800 million words) from the news domain. Table TABREF126 shows the basic statistics about the corpus; In addition to the Wsj corpus, we also evaluate our approach on the main evaluation corpus of this thesis. 
Our main evaluation corpus consists of a Conll source domain training set, a source domain test set and four target domain test sets (Weblogs, Newsgroups, Reviews and Answers). Unlike our Wsj corpus that uses Stanford dependencies, the main evaluation corpus is based on the LTH conversion BIBREF67 . Experimenting on different conversions and domains allow us to evaluate our method's robustness. For unlabelled data, we use the same dataset as in our Wsj evaluation. For Chinese, we evaluate our approach only on the in-domain scenario, this is due to the lack of out-of-domain corpus. We use Chinese Treebank 5 (CTB5) BIBREF65 as the source of our gold standard data. The Chinese Treebank 5 corpus mainly consists of articles from Xinhua news agency but also contains some articles from Sinorama magazine and information services department of HKSAR. We follow the splits of zhang11, the constituency trees are converted to dependency relations by the Penn2Malt tool using head rules of zhang08. We use the Xinhua portion of Chinese Gigaword Version 5.0 as our source for unlabelled data. We noticed that the unlabelled data we used actually contains the Xinhua portion of the CTB5; to avoid potential conflict we removed them from the unlabelled data. After the pre-processing, our Chinese unlabelled data consists of 20 million sentences which are roughly 450 million words. We use ZPar v0.7.5 as our pre-processing tool. The word segmentor of ZPar is trained on the CTB5 training set. Table TABREF128 gives some statistics about the corpus. We use a modified version of the Mate transition-based parser in our experiments. We enhance the parser with our DLM-based features; other than this we used the parser's default setting. The part-of-speech tags are supplied by Mate parser's internal tagger. The baselines are trained only on the initial training set. In most of our experiments, DLMs are extracted from data annotated by the base model of Mate parser. For the evaluation on higher quality DLMs, the unlabelled data is additionally tagged and parsed by Berkeley parser BIBREF8 and is converted to dependency trees with the same tools as for gold data. We report both labelled (LAS) and unlabelled (UAS) attachment scores for our evaluation. The punctuation marks are excluded for our English and Chinese in-domain evaluations. For English evaluation on our main evaluation corpus we include the punctuations. The significance levels are marked due to their p-values, we use * and ** to represent the p-value of 0.05 and 0.01 levels respectively. Empirical Results Agreement based co-training. We first evaluate the parser pairs on the normal agreement based co-training. Each of the other three parsers is paired with Mate parser to be the source learners of our co-training. For each pairwise parser combinations, the unlabelled Weblogs text is double parsed by the parser pairs. The sentences that are annotated identically by both parsers are used as candidates for the additional training set. We take different amount of additional training sentences from the candidates pool to retrain the Mate parser. Figure FIGREF52 shows the co-training results of adding 10k to 30k additional training data for all three parser pairs. As we can see from the figure, all the co-training results achieved improvements when compared with the Mate baseline. The largest improvement of one percentage point is achieved by Mate-Malt parser pair when adding 20k or 30k additional training data. 
We also notice a negative correlation between the improvement and the identical rate mentioned previously in Table TABREF51 . The Turbo parser has the highest identical rate, in which it annotated 479 out of 2150 sentences (22.28%) exactly the same as Mate parser when evaluated on the development set. This is 2% higher than that of MST parser and 2.5% higher than the Malt parser. However, the improvements achieved by the pairs are shown to be negatively correlated, i.e. the Mate-Malt pair gains the largest improvement, the Mate-Turbo pair achieved the lowest gain. This finding is in-line with the fundamental of co-training that requires the learners to be as different as possible. Removing short sentences from identical data. The identical annotations between the parsers are like a double-edged sword, they consist of a higher accuracy but in the same time shorter in average sentence length. Take our Mate-Malt pair as an example, the average sentence length of the identical annotations is only 8 tokens, this is much lower than the development set's 19.6 tokens/sentence and the Conll training set's 24.4 tokens/sentence. To make the additional training data more similar to the manually annotated data, we exclude the extremely short sentences from the pool. More precisely we set three minimal sentence length thresholds (4, 5 and 6 tokens), sentences shorter than the thresholds are removed from the pool. We then take 30k sentences from the remaining pool as the additional training data. By taking out the short sentences the average sentence length of the selected sentences is closer to that of the development set. As shown in Table TABREF53 , the average sentence length reached 13 tokens/sentence. One of the major concerns when we exclude the short sentences from the pool is that the accuracy of the remaining pool might drop. The short sentences are easier to parse, thus they usually have a higher accuracy. However, an evaluation on the development set shows that there is almost no effect on the accuracies (see Table TABREF53 ). In term of the results, we gained a 0.27% additional improvement when discarding short sentences (Figure FIGREF54 ). Three learners co-training. In the normal co-training setting, the Mate parser is used as one of the source learners to provide additional training data for retraining itself. Based on this setting the Mate parser can learn only from the annotations it has already known. The tri-training algorithm is on the other hand designed to allow the evaluation learner to learn from sources other than itself. This gives the Mate parser the potential to explore novel examples from other parsers. In our tri-training experiments, we used the Malt parser and the MST parser as our source learners. The sentences that are annotated identically by these parsers are added to the pool for retraining the Mate parser. To assess the quality of the identical annotations between Malt and MST parsers we apply them to our development set. We also assessed the sentences that are annotated identically by Malt and MST parsers but different to Mate parser's annotation, this allows us to know the scale of the novel examples. As shown in Table TABREF55 , the accuracy of the sentences agreed by Malt and MST parsers is even slightly higher than that of Mate and Malt parsers, this is surprising as MST parser is less accurate than Mate parser. The analysis also showed that half of the identical annotations from Malt and MST parsers are actually novel to Mate parser. 
We compared our tri-training and co-training results in Figure FIGREF56 , the tri-training results constantly outperform the normal co-training. The best result of 79.12% is achieved by retraining the Mate parser with 20k additional training data agreed by Malt-MST parsers (tri-training). The best tri-training result is 0.24% higher than that of co-training and nearly 1.6% higher than the Mate baseline. Evaluating on test domains. We then evaluated our best configuration (tri-training) on our four test domains. Under the tri-training setting, the unlabelled datasets of each domain are double parsed by Malt-MST pairs, the first 20k identical annotations are used as additional training data to retrain the Mate parser. The only exception is for answers domain. Due to the lack of unlabelled data the additional training data is much smaller, we used all 3k identical sentences for retraining. Table TABREF57 shows our tri-training results accompanied by the baselines. The tri-training setting achieved large labelled improvements up to 1.8 percentage points. For unlabelled attachment scores, the models gained up to 0.59% absolute improvements. We also tested the retrained Weblogs domain model on the in-domain test set. The results show the tri-trained model does not affect the in-domain accuracy. Random Selection-based Self-training. To have an idea of the performance of basic self-training, we first evaluated with randomly selected additional training data. The triangle marked curve in Figure FIGREF80 shows the accuracy of the random selection-based self-training. We used from 50k to 200k randomly selected additional training data to retrain the Mate parser. The retrained models obtain some small improvements when compared with the baseline. The improvements achieved by the different number of additional training data are very similar: they all around 0.2%. Those small improvements obtained by the basic self-training are not statistically significant. This finding is in line with previous work of applying non-confidence-based self-training approaches to dependency parsing, cf. BIBREF55 , BIBREF70 . Parse Score-based Self-training. We then evaluate with our first confidence-based method, that uses parse scores. As proposed the automatically annotated sentences are ranked in descending order by the adjusted parse scores before they are used as additional training data. As shown in Figure FIGREF80 , we add between 50k to 300k top ranked sentences from the Weblogs auto-annotated dataset. The method achieved 0.52% improvement when we use 50k additional training data and the improvement increased to 0.66% when 250k sentences are used. After that, the improvement decreased. We use an auto-labelled dataset of 500k sentences. After we rank the sentences by our confidence-based methods, the first half is expected to have an accuracy higher than the average, and the second half is expected to have one lower than average. Thus we should avoid using sentences from the second half of the ranked dataset. Delta-based self-training. For our Delta-based approach, we select additional training data with the Delta method. We train the parser by adding between 50k to 300k sentences from the target domain. Same as the parse score-based method, we gain the largest improvement when 250k sentences are used, which improves the baseline by 0.73% (cf. Figure FIGREF80 ). 
Although this improvement is slightly higher than that of the parse score-based method, the accuracies are lower than the baseline when we use 50k and 100k ranked sentences from Delta based method. Our error analysis shows that these parse trees are mainly short sentences consisting of only three words. These sentences contribute probably no additional information that the parser can exploit. Evaluating on test domains. We adapt our best settings of 250k additional sentences for both approaches and apply them to three test sets (Weblogs, Newsgroups and Reviews). As illustrated in Table TABREF81 , nearly all the results produced by both approaches are statistically significant improvements when compared to the baselines. The only exception is the unlabelled improvement of the parse score approach on Reviews domain which has a p-value of 0.08. Both approaches achieved the largest improvements on Weblogs domain. The largest labelled improvement of 0.81% is achieved by the parse score-based method, while the largest unlabelled improvement of 0.77% is achieved by the Delta method. For Newsgroups domain both approaches gained the similar labelled and unlabelled improvements of 0.6%. For Reviews domain the Delta method achieved 0.4 - 0.5% improvements on labelled and unlabelled accuracies. The parse score-based approach achieved lower improvements of 0.3%. In terms of the in-domain evaluation, the accuracies of both approaches are lower than the baseline. We further evaluate our best settings on Chemical texts provided by the CoNLL 2007 shared task. We adapt the best settings of the main evaluation corpora and apply both confidence-based approaches to the Chemical domain. For the constant INLINEFORM0 , we use 0.015 and we use 125k additional training data out of the 256k from the unlabelled data of the Chemical domain. We evaluate our confidence-based methods on both predicted and gold part-of-speech tags. After retraining, both confidence-based methods achieve significant improvements in all experiments. Table TABREF82 shows the results for the Chemical domain. When we use predicted part-of-speech tags, the Delta-based method gains a labelled improvement of 1.42%, while the parse score-based approach gains 1.12%. For the experiments based on gold tags, we achieved larger labelled improvements of 1.62% for the Delta-based and 1.48% for the parse score-based methods. For all experiments, the unlabelled improvements are similar to that of labelled ones. Table TABREF82 compares our results with that of kawahara2008learning. We added also the results of sagae07 but those are not directly comparable since they were gained with co-training. sagae07 gained additional training data by parsing the unlabelled data with two parsers and then they select those sentences where the parsers agree on. kawahara2008learning reported positive results for self-training. They used a separately trained binary classifier to select additional training data and are evaluated only on gold tags. Our baseline is higher than kawahara2008learning's self-training result. Starting from this strong baseline, we could improve by 1.62% LAS and 1.52% UAS which is an error reduction of 9.6% on the UAS (cf. Table TABREF82 ). The largest improvement of 1.52% compared to that of kawahara2008learning (0.54% UAS) is substantially larger. We obtained the result by a simple method, and we do not need a separately trained classifier. 
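The "simple method" referred to above is essentially a ranking-and-selection step over the auto-parsed sentences. The sketch below leaves the scoring function abstract, since the adjusted parse score and the Delta score are defined in the corresponding method sections; the names used here are illustrative only.

```python
def rank_and_select(auto_parsed, score_fn, n_added=250_000):
    """Rank automatically parsed sentences by a confidence score.

    `auto_parsed` is a list of (sentence, tree, raw_parse_score) triples and
    `score_fn` maps one triple to a confidence value (e.g. an adjusted parse
    score); the top `n_added` trees are returned as additional training data.
    """
    ranked = sorted(auto_parsed, key=score_fn, reverse=True)
    return [(sent, tree) for sent, tree, _ in ranked[:n_added]]
```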
In this section, we report our results of the adjusted parse score-based self-training approach on the test sets of nine languages. To obtain the increased training data for our self-trained model, the unlabelled data is parsed and ranked by their confidence scores. The 50% (50k) top ranked sentences are added to the initial training set. We retrain the Mate parser on the new training set. The empirical results on nine languages show that our approach worked for five languages which are Basque, German, Hungarian, Korean and Swedish. Moreover, the self-trained model achieved on average (nine languages) 0.4% gains for both labelled and unlabelled accuracies. These improvements are achieved only by a unified experiment setting, we do not tune parameters for individual language. Our self-training approach has the potential to achieve even better performances if we treat each of the languages separately, however, this is beyond the scope of this work. More precisely, our self-training method achieved the largest labelled and unlabelled improvements on Korean with absolute gains of 2.14 and 1.79 percentage points respectively. Other than Korean, we also gain statistically significant improvements on Basque, German, Hungarian and Swedish. For Basque, the method achieved 0.87% gain for labelled accuracy and the improvement for unlabelled accuracy is 0.81%. For German, improvements of 0.33% and 0.46% are gained by our self-trained model for labelled and unlabelled scores respectively. For Hungarian, we achieved a 0.42% gain on labelled accuracy, the unlabelled improvement is smaller (0.17%) thus not statistically significant. For Swedish, improvements of 0.59% and 0.68% are achieved for labelled and unlabelled accuracies. The unlabelled gain is statistically significant, while the labelled gain is not a statistically significant improvement which has a p-value of 0.067. As the improvements on Swedish are large but the test set is small (only contains 666 sentences), we decided to enlarge the test set by the Swedish development set. The Swedish development set contains 494 sentences and is not used for tuning in our experiments. The evaluation on the combined set showed 0.7% and 0.6% statistically significant (p <0.01) improvements for labelled and unlabelled scores. This confirms the effectiveness of our self-training method on Swedish. In terms of the effects of our method on other languages, our method gains moderate improvements on Arabic and Hebrew but these are statistically insignificant accuracy gains. We find negative results for French and Polish. Table TABREF104 shows detailed results of our self-training experiments. We compare our self-training results with the best non-ensemble parsing system of the SPMRL shared tasks BIBREF77 , BIBREF60 . The best results of the non-ensemble system are achieved by cerisara2014spmrl. Their system is also based on the semi-supervised learning, the LDA clusters BIBREF59 are used to explore the unlabelled data. The average labelled accuracy of our baseline on nine languages is same as the one achieved by cerisara2014spmrl and our self-trained results are 0.41% higher than their results. The average unlabelled accuracy of our self-trained model also surpasses that of cerisara2014spmrl but with a smaller margin of 0.18%. Overall, our self-trained models perform better in six languages (Arabic, Hebrew, Hungarian, Korean, Polish and Swedish) compared to the best non-ensemble system of cerisara2014spmrl. Parsing with Single DLM. 
We first evaluate the effect of the single DLM for both English and Chinese. We generate the unigram, bigram and trigram DLMs from 5 million auto-annotated sentences of the individual language. We then retrain the parser by providing different DLMs to generate new models. The lines marked with triangles in Figure FIGREF132 shows the results of our new models. Unigram DLM achieved the largest improvements for both English and Chinese. The unigram model achieved 0.38% labelled improvement for English and the improvement for Chinese is 0.9%. Parsing with Multiple DLMs. We then evaluate the parser with multiple DLMs. We use DLMs up to N-gram to retrain the parser. Take N=2 as an example, we use both unigram and bigram DLMs for retraining. This setting allows the parser to explore multiple DLMs at the same time. We plot our multi-DLM results by lines marked with the circle in Figure FIGREF132 a) and b) for English and Chinese respectively. As we can see from the figures, the best setting for English remains the same, the parser does not gain additional improvement from the bigram and trigram. For Chinese, the improvement increased when more DLMs are used. We achieved the largest improvement by using unigram, bigram and trigram DLMs at the same time (N=3). By using multiple DLMs we achieved a 1.16% gain on Chinese. Extracting DLMs from Larger datasets. To determine the optimal corpus size to build DLMs we extract DLMs from different size corpora. We start with 10 million sentences and increase the size in steps until all the unlabelled data (30 million for English and 20 million for Chinese) are used. We compare our results with the best result achieved by the DLMs extracted from 5 million annotations in Figure FIGREF133 . The results on English data suggest that the DLMs generated from larger corpora do not gain additional improvement when compared to the one that used 5 million sentences. The Chinese results show a moderate additional gain of 0.04% when compared to the previous best result. The effects indicate that 5 million sentences might already be enough for generating reasonably good DLMs. Extracting DLMs from High Quality Data. To evaluate the influence of the quality of the input corpus for building the DLMs, we experiment in addition with DLMs extracted from high-quality corpora. The higher quality corpora are prepared by parsing unlabelled sentences with the Mate parser and the Berkeley parser. We add only the sentences that are parsed identically by both parsers to the high-quality corpus. For Chinese, only 1 million sentences that consist of 5 tokens in average have the same syntactic structures assigned by the two parsers. Unfortunately, this amount is not sufficient for the experiments as their average sentence length is in stark contrast with the training data (27.1 tokens). For English, we obtained 7 million sentences with an average sentence length of 16.9 tokens. To get an impression of the quality, we parse the development set with those parsers. When the parsers agree, the parse trees have an accuracy of 97% (LAS), while the labelled scores of both parsers are around 91%. This indicates that parse trees where both parsers return the same tree have a higher accuracy. The DLMs extracted from 7 million higher quality sentences achieved a labelled accuracy of 91.56% which is 0.13% higher than the best result achieved by DLMs extracted from single parsed sentences. In total, the new model outperforms the baseline by 0.51%, with an error reduction rate of 5.7%. 
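As a rough illustration of how such a model can be derived from auto-parsed text, the sketch below merely counts (head word, dependent word) pairs in an automatically annotated corpus; the actual DLMs used here (including the bigram and trigram variants and the mapping of counts into feature classes) are defined in the corresponding chapter, so this is a simplification under stated assumptions.

```python
from collections import Counter

def unigram_dlm(parsed_corpus):
    """Count (head word, dependent word) pairs in an auto-parsed corpus.

    `parsed_corpus` is an iterable of sentences, each a list of
    (word, head_index) pairs with head_index == 0 denoting the root.
    The resulting counts can be bucketed into frequency classes and used
    as additional parser features.
    """
    counts = Counter()
    for sentence in parsed_corpus:
        words = [w for w, _ in sentence]
        for word, head in sentence:
            head_word = "<ROOT>" if head == 0 else words[head - 1]
            counts[(head_word, word)] += 1
    return counts
```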
Evaluating on Test Sets. We apply the best settings tuned on the development sets to the test sets. The best setting for English is the unigram DLM derived from the double parsed sentences. Table TABREF134 presents our results and top performing dependency parsers which were evaluated on the same English dataset. Our approach surpasses our baseline by 0.46/0.51% (LAS/UAS) and is only lower than the three best neural network systems. When using a larger beam of 150, our system achieved a more competitive result. To have an idea of the performance difference between our baseline and that of chen2012utilizing, we include the accuracy of Mate parser on the same yamada03 conversion used by chen2012utilizing. Our baseline is 0.64% higher than their enhanced result and is 1.28% higher than their baseline. This confirms that our approach is evaluated on a much stronger parser. For Chinese, we extracted the DLMs from 10 million sentences parsed by the Mate parser and using the unigram, bigram and the trigram DLMs together. Table TABREF135 shows the results of our approach and a number of the best Chinese parsers. Our system gained a large improvement of 0.93/0.98% for labelled and unlabelled attachment scores. Our scores with the default beam size (40) are competitive and are 0.2% higher than the best reported result BIBREF47 when increasing the beam size to 150. Moreover, we gained improvements up to 0.42% for part-of-speech tagging on Chinese tests, and our tagging accuracies for English are constantly higher than the baselines. Results on English Main Evaluation Corpus. Finally, we apply our best English setting to our main evaluation corpus. We first extract new DLMs from the double parsed annotations of the LTH conversion, as LTH conversion is used in our main evaluation corpus. We then retain the parser with newly generated DLMs and apply the model to all five test domains (Conll, Weblogs, Newsgroups, Reviews and Answers). Table TABREF136 shows the results of our best model and the baselines. Our newly trained model outperforms the baseline in all of the domains for both labelled and unlabelled accuracies. The largest improvements of 0.91% and 0.82% is achieved on Newsgroups domain for labelled and unlabelled accuracy respectively. On average our approach achieved 0.6% labelled and unlabelled improvements for four target domains. The enhanced model also improved the source domain accuracy by 0.36% and 0.4% for labelled and unlabelled scores respectively. Analysis From the above experiments, we demonstrated the effect of co-/tri-training on parsing out-of-domain text with the off-the-shelf parsers. It remains unclear how the additional training data helps the target domain parsing. To understand where the improvements come from, in this section we give a detailed study on the results. We compare the annotations produced by our tri-training approach and the baseline and evaluate the changes on both token level and sentence level. For our analysis, we treat all the target domain as the same, the Weblogs, Newsgroups, Reviews and Answers domain test sets are used as a single set. Our self-training approaches demonstrated their merit in the above experiments, two confidence-based methods work equally well on most of the domains. This suggests self-training can be used for out-of-domain dependency parsing when there is a reasonably good confidence-based method available. 
As two confidence-based methods showed similar performances on our tested domains, the first guess would be they might consist of a large portion of identical additional training data. We assess our assumption on the development set. We first rank the dataset by different methods. Let INLINEFORM0 and INLINEFORM1 be the top ranked INLINEFORM2 % sentences of the development set by their Delta and adjusted parse scores. The identical rate is defined as the percentage of sentences that are presented in both INLINEFORM3 and INLINEFORM4 . Figure FIGREF83 shows the identical rate of our methods. The identical rates are lower than we expected, for top ranked 10% sentences only 5% of them are identical, and the identical rate is 56% for the first half of the ranked list. As the additional training data from Delta and adjusted parse scores can consist of more than 40 percent different sentences, we suspect there might be some behaviour difference between two methods. In order to have a more clear picture about the behaviours of our confidence-based methods, we applied both token level and sentence level analysis to those methods. This allows us to have an in-depth comparison between our confidence-based methods. In the same way as we did in our analysis for co-training, we plot the accuracy changes of major syntactic labels and compute improvements different on unknown/known words in our token level analysis. For sentence level analysis, we evaluate all four factors on both confidence-based methods, cf. sentence length, the number of unknown words, the number of prepositions and the number of conjunctions. For our analysis, three target domain test sets are used as a single set. In this section, we analyse the results achieved by our self-training approach. Our approach achieved improvements on most of the languages, but also showed negative effects on two languages. Thus, we analyse both positive and negative effects introduced by our self-training approach. For the analysis on positive effects, we choose the Korean dataset, as our self-training method achieved the largest improvement on it. The goal for our analysis on Korean is to find out where the improvement comes from. We apply our token and sentence level analysis to Korean. We evaluate for the token level the accuracy changes of individual labels and compare the improvements of unknown and known words. For our sentence level evaluation, we evaluate the performances on different sentence length and the number of unknown words per sentence. We do not evaluate on the number of subjects, the number of prepositions and number of conjunctions as those factors are language specific, thus they might not suitable for Korean. For the analysis of negative effects, we analyse the French dataset as the French test set is larger than that of Polish. We aim to have an idea why our self-training approach has a negative effect on results. Our analysis focuses on two directions, firstly, we check the correlation between the quality of French data and our confidence scores, as the correlation is the pre-condition of the successful use of our self-training approach; secondly, we check the similarity between the test set and the unlabelled set to assess the suitability of unlabelled data. In this section, we analyse the improvements achieved by our DLM-enhanced models. We analyse both English and Chinese results. For English, we analyse the results of our main evaluation corpus, as the corpus contains both in-domain and out-of-domain data. 
This allows us to compare the source domain and target domain results in a unified framework. We analyse the Conll in-domain test set and a combined out-of-domain dataset which consists of the Weblogs, Newsgroups, Reviews and Answers domain test sets. For Chinese, we analyse the in-domain test set to find out the sources of the improvements. We apply the token and sentence level analysis for both languages. The token level analysis includes the accuracy assessment of individual labels and the improvements comparison of known and unknown words. The sentence level analysis consists of assessments on four factors: sentence lengths, the number of unknown words, the number of prepositions and the number of conjunctions. For each of the factors, we group the sentences based on their properties assessed by each factor, we then calculate for each group the percentage of sentences that are improved, worsened and unchanged in accuracy. The improvements of each group can then be visualised by the gaps between improved and worsened sentences. Token Level Analysis Individual Label Accuracy. We first compared the individual label accuracies of the tri-trained model and the baseline. For each of the label we calculate recalls, precisions and f-scores, we then compute the score differences between the tri-trained model and the baseline model. Table FIGREF59 shows the score changes of the most frequent labels. All the f-scores of our tri-trained model outperform the baseline, the only exception is the P (punctuations) which drops slightly by 0.1%. Eight labels achieved around 0.5% improvements which include ROOT (root of the sentence), SBJ (subject), COORD (coordination), CONJ (conjunct), modifiers (NMOD (modifier of nominal), PMOD (modifier of preposition), AMOD (modifier of adjective or adverbial)) and DEP (unclassified relations). ADV (adverbial), VC (verb chain) and TMP (temporal adverbial or nominal modifier) are labels that have improvements between 1% and 2%. The accuracy changes are much larger for label OBJ and PRD, thus we used a secondary y-axis for them. More precisely, an improvement of 5.9% is found on OBJ (object), a much better precision of 10% suggests this improvement is mainly contributed by the reduced false positive. The largest improvement of 15% comes from label PRD (predicative complement), the improvement is as a result of significant recall change. The baseline parser can only recall 43% of the label, it has been improved significantly (34%) by the tri-trained model. Table TABREF60 shows the confusion matrix of dependency labels. As we can see from the table, the PRD has been frequently labeled as OBJ by the baseline, but this has been largely corrected by our tri-training model. Unknown Words Accuracy. We then evaluate unknown words at the token level, by comparing the labelled and unlabelled accuracy scores between words that presented in the source domain training data (Known) and words that are unseen from training sets (Unknown). We present the accuracy comparison of known/unknown words together with that of all tokens in Table TABREF61 . The tri-trained model achieved better gains on unknown words for both labelled and unlabelled accuracies. The labelled gains of the tri-trained model on unknown words are 1.8%, which is 0.2% higher than that of known words (1.6%). The unlabelled improvements on unknown words (0.7%) is 0.3% higher than known words (0.4%). 
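The known/unknown split behind these numbers is determined purely by the vocabulary of the initial training set; a minimal sketch of the labelled-accuracy computation is shown below (the unlabelled variant simply drops the label comparison).

```python
def known_unknown_las(tokens, train_vocab):
    """Labelled accuracy for known vs. unknown words.

    `tokens` is a list of (word, gold_head, gold_label, pred_head, pred_label)
    tuples and `train_vocab` is the set of word forms seen in the initial
    training set.
    """
    stats = {"known": [0, 0], "unknown": [0, 0]}   # [correct, total]
    for word, gh, gl, ph, pl in tokens:
        group = "known" if word in train_vocab else "unknown"
        stats[group][1] += 1
        if gh == ph and gl == pl:
            stats[group][0] += 1
    return {g: 100.0 * c / t if t else 0.0 for g, (c, t) in stats.items()}
```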
Although the absolute gains for unknown words are larger, known words still fare better in terms of the error reduction rate. For known words, the tri-trained model reduces labelled-accuracy errors by 7%, which is 2.4% better than for unknown words. The error reduction in unlabelled accuracy is the same (2.5%) for both known and unknown words. Individual Label Accuracy. Figure FIGREF85 compares the accuracy changes of our adjusted parse score-based approach and the Delta-based approach. The two approaches show similar patterns on the individual labels: neither has an effect on labels such as P (punctuation), CONJ (conjunct) and PRD (predicative complement), and both gain more than 0.5% f-score on ROOT (root of the sentence), COORD (coordination), some modifiers (PMOD, AMOD) and unclassified relations (DEP). Beyond these shared improvements, the Delta method also gains 0.9% on VC (verb chain), and the parse score method gains 0.5% on SBJ (subject). Table TABREF86 shows the confusion matrix of our self-training methods compared with the baseline. Unknown Words Accuracy. For unlabelled accuracy, both methods show a large gap between known and unknown words: the improvements on unknown words are at least double those on known words. The differences are smaller for the labelled accuracies, where the value for unknown words is only 0.2% higher than that for known words. This indicates that self-training can improve the attachment of unknown words but still lacks sufficient information to make label decisions. The improvements over the entire set are the same as those for known words and are not strongly affected by the unknown words, because unknown words make up only 5% of the dataset. Sentence Level Analysis We then carry out our sentence level analysis. The sentence level analysis treats each sentence as a whole, so all tokens of the same sentence are always assigned to the same class. In total, we analyse four sentence factors; our goal is to obtain a clearer picture of the improvements for different types of sentences. Sentence Length. Figure FIGREF63 shows the performance changes for sentences of different lengths, comparing the tri-trained model with the baseline. As the figure shows, the percentage of sentences whose accuracy remains unchanged decreases continuously as sentence length increases. We suggest this is mainly because longer sentences are harder to parse and are therefore less likely to retain the same accuracy. The rate of sentences parsed better is consistently larger than the rate parsed worse. The gap widens as sentence length increases until it peaks at a length of 30 tokens, after which it narrows and becomes very small at 40 tokens. However, there are fewer than 200 sentences in the classes with a sentence length of more than 35 tokens, so the results for those classes are less reliable. Overall, the analysis suggests that the major improvements come from sentences of between 15 and 30 tokens. Unknown Words. Unknown words are hard to parse, as a model trained on the training data does not have sufficient information to annotate them; a large number of unknown words in a sentence therefore usually results in poor accuracy. We group sentences by the number of unknown words they contain and then apply our analysis method to each class, as sketched below.
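The sentence-level analysis follows the same pattern for every factor: group sentences by a property, then count how many are parsed better, worse, or the same as the baseline. A minimal sketch of that procedure is given below; the tree representation and the grouping function are assumptions, and the per-sentence accuracy here is simply labelled accuracy.

```python
from collections import defaultdict

def sentence_accuracy(pred, gold):
    # labelled accuracy of one sentence; pred/gold are aligned lists of
    # (head, label) tuples, one per token
    correct = sum(1 for p, g in zip(pred, gold) if p == g)
    return correct / len(gold)

def sentence_level_analysis(gold_trees, baseline_trees, new_trees, key_fn):
    """Group sentences by key_fn (length, #unknown words, #prepositions, ...)
    and report the share of sentences improved, worsened or unchanged."""
    groups = defaultdict(lambda: {"better": 0, "worse": 0, "same": 0})
    for gold, base, new in zip(gold_trees, baseline_trees, new_trees):
        acc_base = sentence_accuracy(base, gold)
        acc_new = sentence_accuracy(new, gold)
        key = key_fn(gold)
        if acc_new > acc_base:
            groups[key]["better"] += 1
        elif acc_new < acc_base:
            groups[key]["worse"] += 1
        else:
            groups[key]["same"] += 1
    report = {}
    for key, counts in groups.items():
        total = sum(counts.values())
        report[key] = {k: 100.0 * v / total for k, v in counts.items()}
    return report
```

For the sentence-length factor, for example, `key_fn` would simply return `len(gold)`; for the unknown-word factor it would count tokens absent from the training vocabulary.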
We noted that 50% of the sentences do not contain unknown words, 30% of them contain one unseen word, 12% of which contain 2 such words, the rest 8% contain 3 or 4 unknown words. For the sentences that do not contain unknown words, about 60% of them remain the same accuracy, 25% of them have a higher accuracy and 15% of them are pared worse. This gap widened slowly until 3 unknown words per sentence, after that the gap narrowed for sentences have 4 unknown words. Overall, the gains on sentences with unknown words are slightly better than that of sentences contain only known words. This is in line with our finding in the token level analysis. Prepositions. The attachment of prepositions is one of the complex problems that are difficult for parsing. It can be found even harder when going out-of-domain, as their behaviour might change. To address those changes we looked at the labels assigned to the prepositions. For both source and target domain we find NMOD (Modifier of nominal), ADV (General adverbial), LOC (Locative adverbial or nominal modifier) and TMP (Temporal adverbial or nominal modifier) are the most frequently assigned labels, those labels covering 80% of the total prepositions. However, the percentages for the source domain and the target domain are very different. In the source domain 35% of the prepositions are labelled as NMOD and 19% of them are labelled as ADV, while, in the target domain, the rate for NMOD and ADV are very close, both labels contribute around 28%. In terms of our sentence level analysis on the number of prepositions, Figure FIGREF65 illustrates the performance changes when the number of prepositions increases in sentences. The percentages of sentences parsed better and worse increased smoothly when the number of preposition increases, the tri-training gains at least 10% for all the cases. Generally speaking, tri-training works better for sentences that have prepositions, the average gain for sentences that have prepositions is 15% and this is 5% more than that of sentences that do not have a proposition. Conjunctions. The annotation of conjunctions is another well-known problem for parsing. More conjunction usually results in a longer sentence and are more complex as well. Figure FIGREF66 shows the analysis on conjunctions. The figure is similar to that of prepositions, the tri-training model gained more than 11% for all the classes and have higher gains for sentences containing conjunctions. Example Sentences. Table TABREF67 shows some example sentences that have been improved largely by our tri-training approach. Sentence Length. For the sentence level analysis we first evaluate the performance of our self-training approaches on the different sentence lengths. The sentences that have the same length are grouped into classes. For each class, the sentences are further classified into three subclasses (better, worse and no change) according to their accuracies when compared with the baseline. We plot them together with the number of sentences in individual classes in Figure FIGREF89 . The left-hand side is the figure for the parse score-based method, while the right-hand side is that of the Delta-based method. At a first glance, both methods show similar behaviours, they both do not help the very short sentences. The percentages for sentences longer than 30 tokens are varied. 
More precisely, the parse score-based method helps most on the sentences containing between 10 and 35 tokens, and the Delta-based method is most productive on sentences which have a length between 15 and 30 tokens. Unknown Words. For the sentence level analysis of unknown words, we evaluate on both labelled and unlabelled accuracy scores. This is mainly because according to our token level analysis our self-training gained much larger unlabelled improvements on the unknown words than that of known words. Figure FIGREF90 shows our analysis of unknown words, the upper figures are the analysis of labelled accuracies and the lower two are that of unlabelled accuracies. As we can see from the above two figures, the gap between sentences that have a better labelled accuracy and sentences worsened in accuracy are not affected by the increasing number of unknown words in sentences. The gap on unlabelled accuracies shows a clear increasement when more than two unknown words are found in the sentence. This is in line with our finding in the token level analysis that self-training could improve more on unknown words attachment. Prepositions. The preposition analysis of our confidence-based self-training is shown in Figure FIGREF91 . Both methods show very similar curves, they gain small improvements around 1% on sentences that have up to one preposition, but they achieved larger improvements on sentences that have at least 2 prepositions. Although the differences between sentences that are parsed better and those parsed worse varies for the different number of prepositions, most of the gains are larger than 6% and the largest gain is around 14%. Overall, the confidence-based self-training methods show clear better performances on sentences that have multiple prepositions. Conjunctions. In terms of conjunctions, both methods show similar figures, cf. Figure FIGREF92 . They both show gains for most of the cases, except that the parse score-based method shows no effect on sentences that have 3 conjunctions. They both start with a small gain of 2-3% when there is no conjunction in the sentence and the improvement widened to 7-10% for sentences have more conjunctions. There are only 100 sentences in the class of 3 conjunctions, thus the numbers of this class are less reliable. Generally speaking, the self-training approaches work slightly better on the sentences that have more conjunctions. Example Sentences. Table TABREF93 and table TABREF94 present example sentences that have been improved by the parse score-based and the Delta-based self-training approaches respectively. We choose four sentences (the first four sentences) that have been largely improved by both approaches, as we can see from table the improvements achieved by both models are very similar, some are even identical. Self-training In this chapter, we introduce our self-training approach for English out-of-domain text. Self-training is one of the semi-supervised techniques that improves the learner's performance by its own annotations. Taking parsing as an example, a basic self-training iteration usually consists of three steps: firstly a base model is trained on the original manually annotated training data, then the base model is used to annotate unlabelled sentences (usually much larger than the original training set), finally the parser is retrained on the new training set, which consists of both manually and automatically annotated data. The self-training iteration can also be repeated to conduct a multi-iteration approach. 
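As a rough illustration of the basic iteration just described, the sketch below shows how a single self-training round might look. The parser interface (`train`, `parse`) is an assumption standing in for whatever toolkit is used, not the actual implementation in this work.

```python
# Sketch of one basic self-training iteration for a dependency parser.
# Assumed interface: parser.train(treebank) and parser.parse(sentences),
# where parse() returns automatically annotated dependency trees.

def self_training_iteration(parser, labelled_treebank, unlabelled_sentences):
    # 1) train a base model on the manually annotated training data
    parser.train(labelled_treebank)

    # 2) annotate a (usually much larger) pool of unlabelled sentences
    auto_trees = parser.parse(unlabelled_sentences)

    # 3) retrain on the union of manually and automatically annotated data
    parser.train(labelled_treebank + auto_trees)
    return parser
```

The confidence-based variant introduced in this chapter differs only in the second step, where just the auto-annotated sentences with high confidence scores are kept as additional training data.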
Self-training has been adapted first to constituency parsers and achieved reasonably good gains for both in- and out-of-domain parsing BIBREF19 , BIBREF45 , BIBREF20 , BIBREF21 , BIBREF15 . While self-training approaches for dependency parsing are less successful, the evaluations usually found no impact or even negative effects on accuracy BIBREF38 , BIBREF69 , BIBREF55 , BIBREF70 . There are only a few successful self-training approaches reported on the dependency parsing, but those approaches are usually more complex than the basic self-training iterations. kawahara2008learning's approach needs a separately trained classifier to select additional training data, chen2008learning used only partial parse trees and goutam2011exploring's approach conditions on a small initial training set. In this work, we introduce a novel confidence-based self-training approach to out-of-domain dependency parsing. Our approach uses confidence-based methods to select training sentences for self-training. The confidence scores are generated during the parsing thus we do not need to train a separate classifier. Our self-training approach employs a single basic self-training iteration, except for the second step we add only sentences that have higher confidence scores to the training set. Overall, we present a simple but effective confidence-based self-training approach for English out-of-domain dependency parsing. We compare two confidence-based methods to select training data for our self-training. We evaluate our approaches on the main evaluation corpora as well as the Chemical domain text from the domain adaptation track of CoNLL 2007 shared task. The remaining parts of this chapter are organised as follows. Section SECREF21 shows the detail of our self-training approaches. Section SECREF22 introduces the experiment set-up of our evaluation. We then discuss and analyse the results in Section SECREF23 and SECREF24 respectively. The last section (Section SECREF25 ) summarises the chapter. Confidence-based Self-training The confidence-based self-training approach is inspired by the successful use of the high-quality dependency trees in our agreement based co-training and the correlation between the prediction quality and the confidence-based methods BIBREF71 , BIBREF72 , BIBREF73 . The confidence-besed methods were previously used by mejer2012 to assess the parsing quality of a graph-based parser, but they haven't been used in self-training or transition-based parser before this work. Based on our experience on co-training and the results of the previous work on self-training, we believe the selection of high-quality dependency trees is a crucial precondition for the successful application of self-training to dependency parsing. Therefore, we explore two confidence-based methods to select such dependency trees from newly parsed sentences. More precisely, our self-training approach consists of the following steps: We test two methods to gain confidence scores for a dependency tree. The first method uses the parse scores, which is based on the observation that a higher parse score is correlated with a higher parsing quality. The second method uses the method of mejer2012 to compute the Delta score. mejer2012 compute a confidence score for each edge. The algorithm attaches each edge to an alternative head. The Delta is the score difference between the original dependency tree and the tree with the changed edge. This method provides a per-edge confidence score. 
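To make the per-edge idea concrete, the following is a minimal sketch of our reading of mejer2012's Delta score; the constrained re-parsing call is assumed to be supplied by the parser, and the function names are placeholders rather than an actual API.

```python
# Sketch of the per-edge Delta confidence in the spirit of mejer2012:
# for every edge of the best parse, compare the score of the best tree with
# the score of the best tree in which that edge is changed.
# Assumptions: parse_best(sentence) returns (tree, score), and
# parse_without(sentence, edge) returns the score of the highest-scoring
# tree that does not contain `edge`.

def per_edge_deltas(sentence, parse_best, parse_without):
    best_tree, best_score = parse_best(sentence)
    deltas = {}
    for edge in best_tree.edges:               # one (dependent, head) edge per token
        alt_score = parse_without(sentence, edge)
        deltas[edge] = best_score - alt_score  # small Delta = low confidence
    return deltas
```

The labelled-edge constraint and the per-tree averaging used in this work are described next.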
Note that the scores are real numbers and might be greater than 1. We changed the Delta-approach in two aspects from that of mejer2012. We request that the new parse tree contains a node that has either a different head or might have a different edge label or both, since we use labelled dependency trees in contrast to mejer2012. To obtain a single score for a tree, we use the averaged score of scores computed for the individual edge by the Delta function. We use our main evaluation parser (Mate parser BIBREF12 ) to implement our self-training approach. Mate is an arc-standard transition-based parser which employs beam search and a graph-based rescoring model. This parser computes a score for each dependency tree by summing up the scores for each transition and dividing the score by the total number of transitions. Due to the swap-operation (used for non-projective parsing), the number of transitions can vary, cf. BIBREF74 , BIBREF75 . Our second confidence-based method requires the computation of the score differences between the best tree and alternative trees. To compute the smallest difference (Delta), we modified the parser to derive the highest scoring alternative parse tree that replaces a given edge with an alternative one. This means either that the dependent is attached to another node or the edge label is changed, or both the dependent is attached to another node and the edge is relabelled. More precisely, during the parsing for alternative trees, beam candidates that contain the specified labelled edge will be removed from the beam at the end of each transition. Let INLINEFORM0 be the score of the best tree, INLINEFORM1 be the score of the alternative tree for the INLINEFORM2 labelled edge and INLINEFORM3 be the length of the sentence, the Delta ( INLINEFORM4 ) for a parse tree is then calculated as follows: DISPLAYFORM0 To obtain high-accuracy dependency trees is crucial for our self-training approaches, thus we first assess the performance of the confidence-based methods on the development set for selecting high-quality dependency trees. We rank the parsed sentences by their confidence scores in a descending order. Figure FIGREF73 shows the accuracy scores when selecting 10-100% of sentences with an increment of 10%. The Delta method shows the best performance for detecting high-quality parse trees. We observed that when inspecting 10% of sentences, the accuracy score difference between the Delta method and the average score of the entire set is nearly 14%. The method using the parse score does not show such a high accuracy difference. The accuracy of the 10% top ranked sentences are lower. We observed that despite that the parse score is the averaged value of the transitions, long sentences generally exhibit a higher score. Thus, short sentences tend to be ranked at the bottom, regardless of the accuracy. To give a more clear view, we plot the relations between the sentence lengths, parse scores and the accuracies in figure FIGREF74 . The sentences of the Weblogs development set are represented by dots in the figure based on their properties. To soften the sentences proportional to their length, we penalise the original parser score according to the sentence length, i.e. longer sentences are penalised more. The penalisation is done assuming a subtractive relationship between the original score and the length of the sentences ( INLINEFORM0 ) weighted by a constant ( INLINEFORM1 ) which we fit on the development set. 
The new parse scores are calculated as follows: DISPLAYFORM0 To obtain the constant INLINEFORM0 , we apply the defined equation to all sentences of the development set and rank the sentences according to their adjusted scores in a descending order. The value of INLINEFORM1 is selected to minimise the root mean square-error ( INLINEFORM2 ) of the ranked sentences. Following mejer2012 we compute the INLINEFORM3 by: DISPLAYFORM0 We use 100 bins to divide the accuracy into ranges of one percent. As the parse scores computed by the parser are generally in the range of [0,3], the parse scores in the range of INLINEFORM0 are assigned to the INLINEFORM1 bin. Let INLINEFORM2 be the number of sentences in INLINEFORM3 bin, INLINEFORM4 be the estimated accuracy of the bin calculated by INLINEFORM5 and INLINEFORM6 be the actual accuracy of the bin. We calculate INLINEFORM7 by iterating stepwise over INLINEFORM8 from 0 to 0.05 with an increment of 0.005. Figure FIGREF78 shows the INLINEFORM9 for the adjusted parse scores with different values of INLINEFORM10 . The lowest INLINEFORM11 is achieved when INLINEFORM12 , this reduces the INLINEFORM13 from 0.15 to 0.06 when compared to the parse score method without adjustment ( INLINEFORM14 ). In contrast to the INLINEFORM15 calculated when INLINEFORM16 is set to 0.015, the unranked sentences have a INLINEFORM17 of 0.38, which is six times larger than that of the adjusted one. The reduction on INLINEFORM18 achieved by our adjustment indicates that the adjusted parse scores have a higher correlation to the accuracy when compared to the ones without the adjustment. Figure FIGREF73 shows the performance of the adjusted parse scores for finding high accuracy parse trees in relation to the original parse score and the Delta-based method. The adjusted parse score-based method performs significantly better than that of the original score with a performance similar to the Delta method. The method based on the parse scores is faster as we do not need to apply the parser to find alternatives for each edge of a dependency tree. Multi-lingual Self-training Self-training approaches have previously been used mainly for English parsing BIBREF45 , BIBREF19 , BIBREF20 , BIBREF26 , BIBREF21 , BIBREF15 . The few successful attempts of using self-training for languages other than English were limited only to a single language BIBREF27 , BIBREF76 . The evaluations of using self-training for multiple languages are still found no improvements on accuracies BIBREF55 , BIBREF70 . In the previous chapter we demonstrated the power of the confidence-based self-training on English out-of-domain parsing, the evaluation on four different domains showed large gains. We wonder if the self-training methods could be adapted to other languages. The first problem with going beyond English is the lack of resources. To the best of our knowledge, there is no out-of-domain corpus available for languages other than English. In fact, even for English, the out-of-domain dataset is very limited. Thus, we are not able to evaluate on the same domain adaptation scenario as we did for English. In English evaluation, we do not use any target domain manually annotated data for training, which is a typical domain adaptation scenario that assume no target domain training data is annotated. The other common domain adaptation scenario assumes that there is a small number of target domain training data available. In this chapter, we use a small training set (5,000 sentences) to simulate the latter scenario. 
The same domain unlabelled set is annotated by the base model to enlarge the training data. Strictly speaking, this is an under-resourced in-domain parsing setting as in the 2014 shared task at the workshop on statistical parsing of morphologically rich language (SPMRL) BIBREF60 . More precisely, in this chapter, we evaluate with the adjusted parse score-based method, as both methods have very similar performances and the adjusted parse scores are fast to compute. We evaluate this method on nine languages (Arabic, Basque, French, German, Hebrew, Hungarian, Korean, Polish, Swedish) corpora of the SPMRL shared task BIBREF60 . The rest of the chapter are organized as follows: We introduce our approach and experiment settings in Section SECREF27 and SECREF28 respectively. Section SECREF29 and SECREF30 discusses and analyses the results. We summarise the chapter in Section SECREF31 . Multi-lingual Confidence-based Self-training Our goal for the multi-lingual experiments is to evaluate the performance of our confidence-based method on more languages. Our previous evaluations on multiple web domains and the Chemical domain showed that our configuration is robust and can be directly used across domains. Thus, in our multi-lingual evaluation we again directly adapt our best configuration from our English evaluation, in which the first half of the ranked auto-annotated dataset is used as additional training data for all the languages. We also do not tune different configurations for individual language, as we want to evaluate the confidence-based self-training in a unified framework. More precisely, our multi-lingual self-training approach consists of a single iteration with the following steps: Here we give a recap of our adjusted parse score method and confirm the correlation between accuracy and the adjusted parse scores on the multi-lingual development set. The adjusted parse score method which we proposed in the previous chapter is mainly based on the observation that the parse scores of sentences are correlated with their accuracies. However, the original parse scores are sensitive to sentence length, in which longer sentences usually have higher scores. To tackle this problem, we introduce a simple but effective adjustment on the scores. The original parse score of an auto-parsed sentence ( INLINEFORM0 ) is subtracted by its sentence length ( INLINEFORM1 ) multiplied by a fixed number INLINEFORM2 . More precisely, the adjusted parse scores are calculated as follows: DISPLAYFORM0 To obtain the constant INLINEFORM0 , we apply the defined equation with different values of INLINEFORM1 to all sentences of the development set and rank the sentences by their adjusted scores in a descending order. Let INLINEFORM2 be the position number of the INLINEFORM3 sentence after ranking them by the adjusted scores. The value of INLINEFORM4 is selected to maximize the accuracy of sentences that have a INLINEFORM5 within the top 50%. We evaluate stepwise different values of INLINEFORM6 from 0 to 0.05 with an increment of 0.005. The highest accuracy of the top ranked sentences is achieved when INLINEFORM7 (see Figure FIGREF100 ), thus INLINEFORM8 is set to 0.015 in our experiments. The INLINEFORM9 value used in our English evaluations is the same 0.015, this shows a stability of our equation. Figure FIGREF101 shows the accuracies when inspecting 10 -100% of sentences ranked by adjusted and original parse scores. 
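A minimal sketch of the length-adjusted parse score and the resulting ranking is given below. The constant c = 0.015 is the value fitted on the development sets as described above; the `(sentence, tree, parse_score)` representation is an assumption for illustration.

```python
# Sketch: length-adjusted parse scores and confidence ranking.
# adjusted = original_score - c * sentence_length, with c fitted on the
# development set (0.015 in both the English and multi-lingual experiments).

def adjusted_score(parse_score, sentence_length, c=0.015):
    return parse_score - c * sentence_length

def rank_by_confidence(parsed, c=0.015):
    """parsed: list of (sentence, tree, parse_score); returns the list
    sorted by adjusted score in descending order."""
    return sorted(
        parsed,
        key=lambda item: adjusted_score(item[2], len(item[0]), c),
        reverse=True,
    )

def top_half(parsed, c=0.015):
    """The first half of the ranked list is used as additional training data."""
    ranked = rank_by_confidence(parsed, c)
    return ranked[: len(ranked) // 2]
```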
We found that adjusted parse scores lead to a higher correlation with accuracies compared to original parse scores. This is in line with our finding in previous evaluation on English out-of-domain data. Positive Effects Analysis Individual Label Accuracy. The Korean syntactic labels set used in the shared task contains 22 labels BIBREF60 . We listed the 12 most frequently used labels in our analysis. Those labels are presented in the Korean test set for at least 1,000 times. As we can see from the Figure FIGREF108 , the largest f-score improvement of 5.6% is achieved on conjuncts (conj). Large gains of more than 0.4% are achieved on nearly all the labels, the only exception is punctuations (p), for punctuations our self-training approach only achieved a moderate improvement of 0.1%. The adverbial modifier (adv), topic (tpc), subordination (sub), auxiliary verb (aux) and modifier of predicate (vmod) have improvements between 0.4% and 0.9%. The other five labels, adnominal modifier (adn), modifier of nominal (nmod), root of the sentence (root), object (obj), subject (sbj) are improved by more than 1%. Table TABREF107 shows the confusion matrix of the dependency labels. Unknown Words Accuracy. Table TABREF109 shows our analysis of the unknown words. The unknown words rate for the Korean test is surprisingly higher than expected, more than 45% of the words in the test set are not presented in the training set. This might due to two reasons: firstly the training set is very small only contains 5k sentences thus have a less coverage of vocabulary; secondly and the main reason is the Korean tokens used in the shared task are combinations of the word form and the grammatical affixes. The latter creates much more unique tokens. The vocabulary of the training set is 29,715, but the total number of tokens is only 68,336, which means each token only shows less than 2.3 times on average. Despite the high unknown words rate, our self-training approach showed a better labelled improvement (2.4%) on unknown words than that of known words (1.9%). While the unlabelled improvement (1.8%) is exactly the same for both known and unknown words. Sentence Length. We then apply the sentence level analysis for Korean test set. We first evaluate on the different sentence length, sentences that have the same length are assigned into the same group. We then calculate the percentage of sentences that are improved, decreased or unchanged in accuracy for each group. We plot the results along with the number of sentences in each of the groups in Figure FIGREF111 . As we can see from the figure, the gap between the improved and decreased sentences are smaller (about 3%) on short sentences that contain less than 10 tokens. The gap significantly widens when the sentence length grows. The gap increased to 30% for sentences containing more than 20 tokens. This is a clear indication that our self-training yielded stronger enhancements on longer sentences. Unknown Words. As we found in the token level analysis, the unknown words rate is very high for Korean test set. In the extreme case, there could be more than 20 unknown words in a single sentence. The curve shows an overall increased gap between the sentences improved by the self-trained model and those worsened when the number of unknown words per sentence increases. However, the gains sometimes drop, the most notable group is the one for sentences containing 7 unknown words. The percentage of worsened sentences are even 0.5% higher than that of improved ones. 
It is unclear why the behaviour changes here, but since the group is small (only 200 sentences) we suggest this might be due to chance. Negative Effects Analysis Our confidence-based self-training rests on the hypothesis that the confidence scores are able to indicate the quality of the annotations. Thus, when our self-training approach shows a negative effect on accuracy, the first thing to check is the correlation between confidence scores and accuracies. We analyse this correlation on the French test set by ranking the sentences in the dataset according to their confidence scores. We assess the accuracy of the top ranked INLINEFORM0 percent of sentences, setting INLINEFORM1 to 10% and increasing it by 10% in each step until all sentences are included. We show the analysis in Figure FIGREF114. The analysis suggests that there is a reasonably high correlation between sentence quality and our confidence-based method. The top ranked 10% of sentences have an accuracy of 89.99%, which is 8% higher than the average, and the accuracy of the top ranked 50% is 86.77%, which surpasses the average by 5%. The quality of the unlabelled data is another issue that might affect the results. We first compute basic statistics of the training, test and unlabelled datasets for a surface-level comparison. As shown in Table TABREF116, the unlabelled data is very different from the training and test sets. More precisely, the average sentence length of the unlabelled data is much shorter, and the unknown word rate of the unlabelled dataset (16.82%) is three times higher than that of the test set (5.91%). We further calculate the cosine similarity between the training set and the test/unlabelled datasets. The test set is highly similar to the training set, with a similarity of 99.74%, whereas the similarity score of the unlabelled data is more than 4% lower, which suggests the unlabelled data diverges more from the training data. Dependency Language Models In this chapter, we introduce our dependency language model (DLM) approach for both in-domain and out-of-domain dependency parsing. The co-training and self-training approaches evaluated in the previous chapters have demonstrated their effectiveness on out-of-domain parsing; however, neither approach gained large improvements in source domain accuracy, and in some cases they even had a negative effect on the in-domain results. Another disadvantage of co-/self-training is that they can use only a relatively small additional training dataset, as training parsers on a large corpus can be time-consuming or even intractable for millions of sentences. The goal of our DLM approach is to create a robust model that improves both in-domain and out-of-domain accuracies. Unlike co-/self-training, the DLM approach does not use the unlabelled data directly for retraining. Instead, a small number of DLM-based features are integrated into the parser, which allows us to exploit much larger unlabelled datasets. Other semi-supervised techniques that use unlabelled data indirectly include word clustering BIBREF57 , BIBREF59 and word embeddings BIBREF48 , BIBREF61 , BIBREF62 . However, both word clusters and word embeddings are generated from unannotated data and thus do not take syntactic structure into account. The DLMs used in this work are generated from an automatically annotated dataset, and can therefore additionally benefit from the syntactic annotations.
Dependency language models are variants of language models based on dependency structures. An N-gram DLM is able to predict the next child when given N-1 immediate previous children and their head. DLMs were first introduced by shen2008new and were later adapted to dependency parsing by chen2012utilizing. chen2012utilizing integrated DLMs extracted from large auto-parsed corpora into a second-order graph-based parser. DLMs allow the parser to explore higher order features but without increasing the time complexity. We use a similar approach as chen2012utilizing, but our approach is different in six important aspects: In the rest of this chapter, we introduce our approaches in Section SECREF33 , we present our experiment set-up in Section SECREF34 . In Section SECREF35 and SECREF36 we discuss and analyse the results. In the final section (Section SECREF37 ) we summarise the chapter. Dependency Language Models for Transition-based System Dependency language models were introduced by shen2008new to capture long distance relations in syntactic structures. An N-gram DLM predicts the next child based on N-1 immediate previous children and their head. We integrate DLMs extracted from a large parsed corpus into the Mate parser BIBREF12 . We first train a base model with the manually annotated training set. The base model is then used to annotate a large number of unlabelled sentences. After that, we extract DLMs from the auto-annotated corpus. Finally, we retrain the parser with additional DLM-based features. Further, we experimented with techniques to improve the quality of the syntactic annotations which we use to build the DLMs. We parse the unlabelled data with two different parsers and then select the annotations on which both parsers agree on. The method is similar to co-training except that we do not train the parser directly on these auto-labelled sentences. We build the DLMs with the method of chen2012utilizing. For each child INLINEFORM0 , we gain the probability distribution INLINEFORM1 , where INLINEFORM2 refers to INLINEFORM3 immediate previous children and their head INLINEFORM4 . The previous children for INLINEFORM5 are those who share the same head with INLINEFORM6 but are closer to the head word according to the word sequence in the sentence. Consider the left side child INLINEFORM7 in the dependency relations INLINEFORM8 as an example; the N-1 immediate previous children for INLINEFORM9 are INLINEFORM10 . In our approach, we estimate INLINEFORM11 by the relative frequency: DISPLAYFORM0 By their probabilities, the N-grams are sorted in a descending order. We then used the thresholds of chen2012utilizing to replace the probabilities with one of the three classes ( INLINEFORM0 ) according to their position in the sorted list, i.e. the probabilities having an index in the first 10% of the sorted list are replaced with INLINEFORM1 , INLINEFORM2 refers to probabilities ranked between 10% and 30%, probabilities that are ranked below 30% are replaced with INLINEFORM3 . During parsing, we use an additional class INLINEFORM4 for relations not presented in DLMs. We use the classes instead of the probability is because our baseline parser uses the binary feature representations, classes are required to map the features into the binary feature representations. As a result, the real number features are hard to be integrated into the existing system. In the preliminary experiments, the INLINEFORM5 class is mainly filled by unusual relations that only appeared a few times in the parsed text. 
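To make the construction concrete, the sketch below builds a small DLM from auto-parsed trees and maps each entry to a coarse class by its position in the probability-sorted list, following the 10%/30% split described above. The class names HIGH/MID/LOW/NONE stand in for the class symbols used in this work (which are elided in the text), the tree representation is an assumption, and the sketch keeps a single surface-ordered child list per head rather than distinguishing left and right children. The minimum-frequency cut-off anticipates the configuration mentioned next.

```python
from collections import Counter

# Sketch: building a DLM from auto-parsed trees.
# Assumption: each tree is a list of (head_form, children_forms) pairs, one
# entry per head token, with children ordered by surface position;
# context_size = N-1 immediate previous children.

def build_dlm(trees, context_size=1, min_count=3):
    ngram_counts, context_counts = Counter(), Counter()
    for tree in trees:
        for head, children in tree:
            for i, child in enumerate(children):
                prev = tuple(children[max(0, i - context_size):i])
                context = (head, prev)
                ngram_counts[(context, child)] += 1
                context_counts[context] += 1

    # relative-frequency estimates, dropping low-frequency entries
    probs = {k: c / context_counts[k[0]]
             for k, c in ngram_counts.items() if c >= min_count}

    # sort descending and assign coarse classes: top 10% -> HIGH,
    # 10-30% -> MID, the rest -> LOW
    ranked = sorted(probs, key=probs.get, reverse=True)
    dlm = {}
    for rank, key in enumerate(ranked):
        frac = rank / max(1, len(ranked))
        dlm[key] = "HIGH" if frac < 0.10 else ("MID" if frac < 0.30 else "LOW")
    return dlm

def dlm_class(dlm, head, prev_children, child):
    # NONE stands in for the extra class used for relations not in the DLMs
    return dlm.get(((head, tuple(prev_children)), child), "NONE")
```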
To avoid this we configured the DLMs to only use elements which have a minimum frequency of three, i.e. INLINEFORM6 . Table TABREF125 shows our feature templates, where INLINEFORM7 is an index which allows DLMs to be distinguished from each other, INLINEFORM8 , INLINEFORM9 are the top and the second top of the stack, INLINEFORM10 refers the coarse label of probabilities INLINEFORM11 (one of the INLINEFORM12 ), INLINEFORM13 refer to part-of-speech tags, word forms of INLINEFORM14 , and INLINEFORM15 is the dependency label between INLINEFORM16 and INLINEFORM17 . English Analysis Individual Label Accuracy. We first analyse accuracy changes of most frequent labels of our in-domain and out-of-domain test sets. As we can see from Figure FIGREF139 the most frequent labels of in-domain data are slightly different from that of out-of-domain data. Label NAME (name-internal link) and LOC (locative adverbial) that frequently showed in the in-domain set is less frequent in out-of-domain data. Instead, the out-of-domain data have more PRD (predicative complement) and AMOD (modifier of adjective or adverbial) than in-domain data. In term of the improvements of individual labels, they both show improvements on most of the labels. They achieved improvements of at least 0.4% on label OBJ (object), COORD (coordination), CONJ (conjunct). More precisely, the DLM model achieved large improvements of more than 1% for in-domain data on CONJ (conjunct) and LOC (locative adverbial) and gained moderate improvements of more than 0.4% on OBJ (object), COORD (coordination) and ADV (adverbial). While for out-of-domain data, our approach gained more than 1% f-scores on OBJ (object) and PRD (predicative complement), and improved three major modifiers (NMOD, PMOD and AMOD), VC (verb chain), COORD (coordination), CONJ (conjunct) and DEP (unclassified) for more than 0.4%. Table TABREF140 and table TABREF141 show the confusion matrices of the dependency labels on in-domain and out-of-domain test sets respectively. Unknown Words Accuracy. The unknown words rate for the in-domain test set is much lower than that of the out-of-domain one. For the in-domain test set, only 1,000 tokens are unknown and surprisingly both the DLM model and the base model have a better accuracy on the unknown words. Our DLM model achieved labelled improvement of 1% on the unknown words which is 3 times than the gain for that of known words (0.3%). While the unlabelled improvement for both known and unknown words are exactly the same 0.4%. The larger improvement on out-of-domain data is achieved on the known words, with a 0.1%-0.2% small difference when compared to that of unknown words. A detailed comparison can be found in Table TABREF142 . Sentence Length. Figure FIGREF143 shows our analysis on sentence length. The analysis of in-domain data shows the DLM model mostly helped the sentences consisting of 10-20 tokens. For sentences shorter than 10 tokens the DLM model even shows some negative effects. We suggest this might because for in-domain parsing the base model is already able to achieve a high accuracy on short sentences thus they are harder to improve. When sentences are longer than 20 tokens, the rates for both improved and worsened sentences varies, but the overall positive and negative effects are similar. In terms of the analysis on out-of-domain set, positive effects of more than 4.5% can be found in sentences that have a length of 10-35 tokens, but not in sentences shorter than 10 tokens. Unknown Words. 
As stated before, the in-domain test set contains fewer unknown words. In fact, most of the sentences do not contain unknown words or only have one unknown word. The DLM model achieved 3% gain for the former and 3.9% gain for the latter. For analysis of the out-of-domain data, our DLM model showed similar gains of around 5% for all the classes. Figure FIGREF145 shows our analysis on unknown words. Prepositions. The number of prepositions analysis for in-domain data does not show a clear picture of where the improvement comes from. The rates of sentences parsed better and sentences parsed worse varies, cf. Figure FIGREF146 . While the analysis for out-of-domain showed a clear increased gap between sentences have better accuracies and the sentences have lowered accuracies when the number of prepositions increases. The largest gap of 10% is achieved on sentences that have at least 5 prepositions. Conjunctions. Figure FIGREF147 shows our analysis of the different number of conjunctions. For in-domain test set, the DLM model gained 4% for sentences do not have conjunctions and the number decreased when the number of conjunctions increases. For the out-of-domain test set the enhanced model gained around 4% for sentences have up to 2 conjunctions, after that, the gap increased to 13% for sentences have 3 conjunctions. Example Sentences. Table TABREF148 and table TABREF149 show some example sentences that have been improved largely by our DLM-based approaches on the English in-domain and out-of-domain test sets respectively. Analysis for Chinese Individual Label Accuracy. The Chinese dataset has a smaller label set than that of English, the 10 most frequent labels already cover 97% of the test set. We illustrate accuracy changes of individual labels in Figure FIGREF152 . Our DLM model improved all major labels, the only exception is the label M (dependent of measure word, such as in words “ " (19 years),“ " is the dependent of the measure word “ ") which showed a 1% decreasement in f-score. Our model achieved the largest improvement of 1.9% on POBJ (object of preposition), large improvements of more than 1% can be also found for label OBJ (object), DEG (dependent of associative DE), DEC (dependent of DE in a relative-clause) and LC (Child of localizer). For all other labels, moderate improvements of 0.2%-0.3% are achieved by our method. Table TABREF153 shows the confusion matrix of the dependency labels on the Chinese test set. Unknown Words Accuracy. Table TABREF154 shows our analysis of the unknown words accuracies. Our DLM model improved mainly the known words, with 1% large gains for both labelled and unlabelled accuracies. While our model did not improve the labelled accuracy of the unknown words, the model only achieved a small 0.2% improvement on the unlabelled score. This is an indication that the Chinese unknown words are very hard to improve without the manually annotated examples. Sentence Length. As shown in Figure FIGREF156 , the Chinese sentences are evenly distributed in the classes of different sentence length. Our model had limited effects on sentences less than 20 tokens but showed large gains on sentences longer than that. The enhanced model achieved a gain of 5% on sentences of 20 tokens and the improvement increases until reaching the largest gain (24%) at the class of 35 tokens/sentence. Overall the major improvements of Chinese data were achieved on sentences that have at least 20 tokens. Unknown Words. 
We skip the unknown words factor for our Chinese sentence level analysis. This is due to the finding from our token level analysis, which suggests our model did not improve the accuracy of the unknown words. Thus it is not necessary for us to conduct further evaluation of this factor. Prepositions. As shown in Figure FIGREF157 most Chinese sentences have no or only single prepositions. The DLM model achieved an improvement of 3.6% for sentences do not contain a preposition. For sentences that contain single preposition, our model achieved 10.4% gain. The gain decreased largely when more prepositions are found in the sentences. Conjunctions. The curves of our analysis on the different number of conjunctions (Figure FIGREF158 ) are nearly identical to that of prepositions. For sentences that do not have conjunction a gain of 5.5% is achieved and the improvement for sentences containing a single conjunction is much larger (9.8%). The improvement dropped for sentences containing 2 conjunctions. Conclusions In this last chapter, we summarise the work of this thesis. In this thesis, we evaluated three semi-supervised techniques (co-training, self-training and dependency language models) on out-of-domain dependency parsing. The evaluations on various domains and languages demonstrated the effectiveness and robustness of all three techniques. We believe we have achieved the initial goals of this thesis. As introduced in Chapter SECREF2 , our goals for this thesis are to answer the following research questions: In the following sections, we answer all the questions in turns. Section SECREF39 summarises our work on agreement based co-training and tri-training, we answer questions 1 and 2 in this section. In Section SECREF40 we conclude our evaluations on English and multi-lingual confidence-based self-training; questions 3 and 4 are answered in this section. We discuss our work on dependency language models in Section SECREF41 and answer the last three questions. Conclusions on Co-training In this section, we discuss our work on agreement based co-training (Chapter SECREF14 ) and answer two research questions related to our co-training evaluation. Could the off-the-shelf dependency parsers be successfully used in co-training for domain adaptation? To answer this question we evaluated the agreement based co-training approach with four popular off-the-shelf parsers (Malt parser BIBREF10 , MST parser BIBREF9 , Mate parser BIBREF12 and Turbo parser BIBREF11 ). We pair the Mate parser with the rest of three parsers to create three co-training settings. The unlabelled data is double parsed by the parser pairs and the sentences that are annotated the same by both parsers are used as additional training data. New models are created by retraining the Mate parser on training data boosted by different parser pairs. All the enhanced models achieved large gains when compared to the baselines. The largest improvement of 1.1% is achieved by the Mate and Malt parsers. An additional 0.27% is achieved when we omit the short sentences from the additional training data. Our results demonstrated the effectiveness of the agreement-based co-training on out-of-domain parsing. The off-the-shelf parsers have proved their suitability on this task. Would tri-training be more effective for out-of-domain parsing when off-the-shelf dependency parsers are used? The tri-training different from the normal co-training by retraining the evaluation learner on additional training data agreed by other two learners. 
In total, three learners are required, to form the tri-training we used the Malt, MST parsers as the source learners and the Mate parser is used as the evaluation learner. The tri-trained model outperforms the best normal co-training setting on all the experiments, thus is more effective. A large 1.6% improvement is achieved on the development set when compared to the baseline. We further evaluate the tri-training approach on four test domains. It achieved largest labelled and unlabelled improvements of 1.8% and 0.58% respectively. On average it achieved 1.5% (LAS) and 0.4% (UAS) for all four test domains. Our results not only confirmed the tri-training is more effective than normal co-training but also demonstrated the merit of tri-training on multiple tested domains. Conclusions on Self-training In this section, we discuss our work on confidence-based self-training (Chapter SECREF20 and SECREF26 ) and answer two relevant questions. How could self-training be effectively used in out-of-domain dependency parsing? We start with the hypothesis that the selection of high-quality auto-annotated data is the pre-condition of the successful use of self-training on dependency parsing. To obtain the high-quality additional training data we introduced two confidence-based methods that are able to detect high accuracy annotations. We compared our confidence-based self-training with the random selection-based self-training and the baseline. The random selection-based self-training is not able to gain statistically significant improvement which is in line with previous work. Both confidence-based methods achieved large improvements on all three web domain test sets and the additional Chemical domain evaluation. For web domain, our method achieved up to 0.8% gains for both labelled and unlabelled scores. On average both methods improved the baseline by 0.6% (LAS and UAS). The evaluation on the Chemical domain resulted in larger improvements of up to 1.4% (LAS) and 1.2% (UAS). The evaluation on different domains confirmed our hypothesis. If self-training works for English dependency parsing, can it be adapted to other languages? We demonstrated the effectiveness of our confidence-based self-training for English dependency parsing in the last question, cf. Section SECREF168 . To assess the multi-lingual capacity of our confidence-based self-training, we evaluated it on nine languages (Arabic, Basque, French, German, Hebrew, Hungarian, Korean, Polish, Swedish) corpora. We evaluated on a unified setting for all the languages, the results show our method is able to achieve statistically significant improvements on five languages (Basque, German, Hungarian, Korean and Swedish). Our self-training approach achieved the largest labelled and unlabelled accuracy gain of 2.14% and 1.79% on Korean. The average improvements achieved by our method on five languages are 0.87% (LAS) and 0.78% (UAS). We further analyse the result of a negative effect (French) introduced by our method to assess the reason why self-training did not work. The analysis suggests the large difference between unlabelled data and the training data is likely to be the main reason disqualifies the self-training. Overall, our evaluations show that confidence-based self-training can be successfully applied to multi-lingual dependency parsing. Conclusions on Dependency Language Models In this section, we discuss our findings on dependency language models (Chapter SECREF32 ) and answer the last three research questions. 
Can dependency language models be adapted to strong transition-based parsers? To answer this question, we applied dependency language models (DLMs) to the Mate transition-based parser. We successfully integrated the DLM-based features into the transition-based parser by using a modified version of chen2012utilizing's original templates for the graph-based parser. The evaluations on English and Chinese in-domain parsing confirmed the effectiveness of dependency language models on the Mate parser. We improved a strong English baseline by 0.46% and 0.51% for labelled and unlabelled accuracies respectively. For Chinese, we achieved state-of-the-art accuracy with large improvements of 0.93% (LAS) and 0.98% (UAS). These results are strong evidence that dependency language models can be adapted successfully to a strong transition-based parser. Can dependency language models be used for out-of-domain parsing? To address this question, we applied our approach to four web domain texts (Weblogs, Newsgroups, Reviews, Answers). We achieved the largest labelled and unlabelled improvements of 0.91% and 0.82% on the Newsgroups domain, and on average gains of 0.6% for both labelled and unlabelled scores. The evaluations on multiple domains suggest that the DLM-based approach is an effective technique for domain adaptation tasks. Quality or quantity of the auto-parsed data: which one is more important for the successful use of dependency language models? The evaluations on both English and Chinese suggest that no large additional gains are achieved by using DLMs extracted from corpora larger than 5 million sentences; in fact, in most cases the best model is obtained with DLMs extracted from 5 million sentences. Using DLMs extracted from high-quality data, on the other hand, surpasses the best results achieved with normal-quality DLMs. Overall, the quality of the auto-labelled data used to generate DLMs is more important than the quantity.
Conll, Weblogs, Newsgroups, Reviews, Answers
5450f27ccc0406d3bffd08772d8b59004c2716da
5450f27ccc0406d3bffd08772d8b59004c2716da_0
Q: What is the road exam metric? Text: Introduction Likelihood-based language models with deep neural networks have been widely adopted to tackle language tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. By far, one of the most popular training strategies is teacher forcing, which derives from the general maximum likelihood estimation (MLE) principle BIBREF4. Under the teacher forcing schema, a model is trained to make predictions conditioned on ground-truth inputs. Although this strategy enables effective training of large neural networks, it is susceptible to aggravate exposure bias: a model may perform poorly at the inference stage, once its self-generated prefix diverges from the previously learned ground-truth data BIBREF5. A common approach to mitigate this problem is to impose supervision upon the model's own exploration. To this objective, existing literature have introduced REINFORCE BIBREF6 and actor-critic (AC) methods BIBREF7 (including language GANs BIBREF8), which offer direct feedback on a model's self-generated sequences, so the model can later, at the inference stage, deal with previously unseen exploratory paths. However, due to the well-known issue of reward sparseness and the potential noises in the critic's feedback, these methods are reported to risk compromising the generation quality, specifically in terms of precision. In this paper, we adopt two simple strategies, multi-range reinforcing and multi-entropy sampling to overcome the reward sparseness during training. With the tricks applied, our model demonstrates a significant improvement over competing models. In addition, we propose road exam as a new metric to reveal a model's robustness against exposure bias. Related Works As an early work to address exposure bias, BIBREF5 proposed a curriculum learning approach called scheduled sampling, which gradually replaces the ground-truth tokens with the model's own predictions while training. Later, BIBREF9 criticized this approach for pushing the model towards overfitting onto the corpus distribution based on the position of each token in the sequence, instead of learning about the prefix. In recent RL-inspired works, BIBREF10 built on the REINFORCE algorithm to directly optimize the test-time evaluation metric score. BIBREF11 employed a similar approach by training a critic network to predict the metric score that the actor's generated sequence of tokens would obtain. In both cases, the reliance on a metric to accurately reflect the quality of generated samples becomes a major limitation. Such metrics are often unavailable and difficult to design by nature. In parallel, adversarial training was introduced into language modeling by SeqGAN BIBREF8. This model consists of a generator pre-trained under MLE and a discriminator pre-trained to discern the generator's distribution from the real data. Follow-up works based on SeqGAN alter their training objectives or model architectures to enhance the guidance signal's informativeness. RankGAN replaces the absolute binary reward with a relative ranking score BIBREF12. LeakGAN allows the discriminator to “leak” its internal states to the generator at intermediate steps BIBREF13. BIBREF14 models a reward function using inverse reinforcement learning (IRL). While much progress have been made, we surprisingly observed that SeqGAN BIBREF8 shows more stable results in road exam in Section SECREF20. Therefore, we aim to amplify and denoise the reward signal in a direct and simple fashion. 
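As a small illustration of the training/inference mismatch discussed in the introduction, the sketch below contrasts teacher forcing with free-running generation; the `model.step` interface, the integer-token assumption, and the helper names are purely illustrative and are not drawn from the paper.

```python
# Sketch: teacher forcing vs. free-running generation, to illustrate where
# exposure bias comes from.  Assumed interface: model.step(token, state)
# returns (log_probs, new_state); tokens are integer ids; sample(log_probs)
# draws the next token from the predicted distribution.

def teacher_forcing_loss(model, sentence, start_state):
    """Training: every prediction is conditioned on the ground-truth prefix."""
    state, loss = start_state, 0.0
    for prev, target in zip(sentence[:-1], sentence[1:]):
        log_probs, state = model.step(prev, state)   # ground-truth input
        loss -= log_probs[target]                    # negative log-likelihood
    return loss

def free_running_generate(model, start_token, start_state, max_len, sample):
    """Inference: each input is the model's own previous prediction, so the
    prefix may drift away from anything seen during training."""
    state, token, output = start_state, start_token, []
    for _ in range(max_len):
        log_probs, state = model.step(token, state)  # self-generated input
        token = sample(log_probs)
        output.append(token)
    return output
```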
Model Description Problem Re-Formulation: Actor-Critic methods (ACs) consider language modeling as a generalized Markov Decision Process (MDP) problem, where the actor learns to optimize its policy guided by the critic, while the critic learns to optimize its value function based on the actor's output and external reward information. As BIBREF15 points out, GAN methods can be seen as a special case of AC where the critic aims to distinguish the actor's generation from real data and the actor is optimized in an opposite direction to the critic. Actor-Critic Training: In this work, we use a standard single-layer LSTM as the actor network. The training objective is to maximize the model's expected end rewards with policy gradient BIBREF16: Then, We use a CNN as the critic to predict the expected rewards for current generated prefix: In practice, we perform a Monte-Carlo (MC) search with roll-out policy following BIBREF8 to sample complete sentences starting from each location in a predicted sequence and compute their end rewards. Empirically, we found out that the maximum, instead of average, of rewards in the MC search better represents each token's actor value and yields better results during training. Therefore, we compute the action value by: In RL and GANs training, two major factors behind the unstable performance are the large variance and the update correlation during the sampling process BIBREF17, BIBREF18. We address these problems using the following strategies: Multi-Range Reinforcing: Our idea of multi-range supervision takes inspiration from deeply-supervised nets (DSNs) BIBREF19. Under deep supervision, intermediate layers of a deep neural network have their own training objectives and receive direct supervision simultaneously with the final decision layer. By design, lower layers in a CNN have smaller receptive fields, allowing them to make better use of local patterns. Our “multi-range" modification enables the critic to focus on local n-gram information in the lower layers while attending to global structural information in the higher layers. This is a solution to the high variance problem, as the actor can receive amplified reward with more local information compared to BIBREF8. Multi-Entropy Sampling: Language GANs can be seen an online RL methods, where the actor is updated from data generated by its own policy with strong correlation. Inspired by BIBREF20, we empirically find that altering the entropy of the actor's sample distribution during training is beneficial to the AC network's robust performance. In specific, we alternate the temperature $\tau $ to generate samples under different behavior policies. During the critic's training, the ground-truth sequences are assigned a perfect target value of 1. The samples obtained with $\tau < 1$ are supposed to contain lower entropy and to diverge less from the real data, that they receive a higher target value close to 1. Those obtained with $\tau > 1$ contain higher entropy and more errors that their target values are lower and closer to 0. This mechanism decorrelates updates during sequential sampling by sampling multiple diverse entropy distributions from actor synchronously. Model Description ::: Effectiveness of Multi-Range Reinforcing and Multi-Entropy Sampling Table TABREF5 demonstrates an ablation study on the effectiveness of multi-range reinforcing (MR) and multi-entropy sampling (ME). 
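Before turning to the ablation numbers, the two strategies can be summarised in a short sketch. The rollout and critic interfaces are assumptions, and the pairing of temperatures with target values follows the principle stated above (lower temperature, higher target) using the values reported in the training settings; the exact pairing is our assumption.

```python
import random

# Sketch of the two training tricks, with assumed helpers:
# rollout(prefix, tau) completes a prefix by sampling from the actor at
# softmax temperature tau; critic_reward(sequence) returns the critic's
# end reward for a complete sequence.

def action_value(prefix, rollout, critic_reward, n_rollouts=16, tau=1.0):
    """Monte-Carlo action value for a generated prefix; the maximum of the
    rollout rewards is used instead of the average, as described above."""
    rewards = [critic_reward(rollout(prefix, tau)) for _ in range(n_rollouts)]
    return max(rewards)

# Multi-entropy sampling: samples drawn at different temperatures receive
# different critic targets (ground truth gets 1.0).  The pairing below is an
# assumed alignment of the temperatures and targets given in the settings.
TEMPERATURE_TARGETS = [(0.5, 0.8), (0.75, 0.6), (1.0, 0.4), (1.25, 0.2), (1.5, 0.0)]

def multi_entropy_batch(prefixes, rollout):
    batch = []
    for prefix in prefixes:
        tau, target = random.choice(TEMPERATURE_TARGETS)
        batch.append((rollout(prefix, tau), target))   # (sample, critic target)
    return batch
```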
We observe that ME improves $\text{BLEU}_{\text{F5}}$ (precision) significantly while MR further enhances $\text{BLEU}_{\text{F5}}$ (precision) and $\text{BLEU}_{\text{F5}}$ (recall). Detailed explanations of these metrics can be found in Section SECREF4. Model Evaluation ::: Modeling Capacity & Sentence Quality We adopt three variations of BLEU metric from BIBREF14 to reflect precision and recall. $\textbf {BLEU}_{\textbf {F}}$, or forward BLEU, is a metric for precision. It uses the real test dataset as references to calculate how many n-grams in the generated samples can be found in the real data. $\textbf {BLEU}_{\textbf {B}}$, or backward BLEU, is a metric for recall. This metric takes both diversity and quality into computation. A model with severe mode collapse or diverse but incorrect outputs will receive poor scores in $\text{BLEU}_{\text{B}}$. $\textbf {BLEU}_{\textbf {HA}}$ is the harmonic mean of $\text{BLEU}_{\text{F}}$ and $\text{BLEU}_{\text{B}}$, given by: Model Evaluation ::: Exposure Bias Attacks Road Exam is a novel test we propose as a direct evaluation of exposure bias. In this test, a sentence prefix of length $K$, either taken from the training or testing dataset, is fed into the model under assessment to perform a sentence completion task. Thereby, the model is directed onto either a seen or an unseen “road" to begin its generation. Because precision is the primary concern, we set $\tau =0.5$ to sample high-confidence sentences from each model's distribution. We compare $\text{BLEU}_{\text{F}}$ of each model on both seen and unseen completion tasks and over a range of prefix lengths. By definition, a model with exposure bias should perform worse in completing sentences with unfamiliar prefix. The sentence completion quality should decay more drastically as the the unfamiliar prefix grows longer. Experiment ::: Datasets We evaluate on two datasets: EMNLP2017 WMT News and Google-small, a subset of Google One Billion Words . EMNLP2017 WMT News is provided in BIBREF21, a benchmarking platform for text generation models. We split the entire dataset into a training set of 195,010 sentences, a validation set of 83,576 sentences, and a test set of 10,000 sentences. The vocabulary size is 5,254 and the average sentence length is 27. Google-small is sampled and pre-processed from its the Google One Billion Words. It contains a training set of 699,967 sentences, a validation set of 200,000 sentences, and a test set of 99,985 sentences. The vocabulary size is 61,458 and the average sentence length is 29. Experiment ::: Implementation Details Experiment ::: Implementation Details ::: Network Architecture: We implement a standard single-layer LSTM as the generator (actor) and a eight-layer CNN as the discriminator (critic). The LSTM has embedding dimension 32 and hidden dimension 256. The CNN consists of 8 layers with filter size 3, where the 3rd, 5th, and 8th layers are directly connected to the output layer for multi-range supervision. Other parameters are consistent with BIBREF21. Experiment ::: Implementation Details ::: Training Settings: Adam optimizer is deployed for both critic and actor with learning rate $10^{-4}$ and $5 \cdot 10^{-3}$ respectively. The target values for the critic network are set to [0, 0.2, 0.4, 0.6, 0.8] for samples generated by the RNN with softmax temperatures [0.5, 0.75, 1.0, 1.25, 1.5]. Experiment ::: Discussion Table TABREF9 and Table TABREF10 compare models on EMNLP2017 WMT News and Google-small. 
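The forward/backward BLEU metrics and their harmonic mean can be sketched with NLTK's corpus_bleu. This is a hedged approximation of the metrics adopted from BIBREF14: the smoothing function, the whitespace tokenisation, and the toy sentences are assumptions, and the original implementation may subsample references differently.

from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def forward_bleu(generated, real, n=5):
    # precision: score generated samples against the real corpus as references
    weights = tuple([1.0 / n] * n)
    refs = [real] * len(generated)          # every sample shares the same reference set
    return corpus_bleu(refs, generated, weights=weights,
                       smoothing_function=SmoothingFunction().method1)

def backward_bleu(generated, real, n=5):
    # recall: score real test sentences against the generated corpus as references
    weights = tuple([1.0 / n] * n)
    refs = [generated] * len(real)
    return corpus_bleu(refs, real, weights=weights,
                       smoothing_function=SmoothingFunction().method1)

def harmonic_bleu(bf, bb):
    return 2.0 * bf * bb / (bf + bb) if (bf + bb) > 0 else 0.0

# toy usage with whitespace-tokenised sentences
real = [s.split() for s in ["the markets closed higher on friday",
                            "officials said the talks will continue"]]
gen  = [s.split() for s in ["the markets closed lower on monday",
                            "officials said talks may continue next week"]]
bf, bb = forward_bleu(gen, real), backward_bleu(gen, real)
print(bf, bb, harmonic_bleu(bf, bb))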
Our model outperforms the others in $\text{BLEU}_{\text{F5}}$, $\text{BLEU}_{\text{B5}}$, and $\text{BLEU}_{\text{HA5}}$, indicating a high diversity and quality in its sample distribution. It is noteworthy that, LeakGAN and our model are the only two models to demonstrate improvements on $\text{BLEU}_{\text{B5}}$ over the teacher forcing baseline. The distinctive increment in recall indicates less mode collapse, which is a common problem in language GANs and ACs. Figure FIGREF16 demonstrates the road exam results on EMWT News. All models decrease in sampling precision (reflected via $\text{BLEU}_{\text{F4}}$) as the fed-in prefix length ($K$) increases, but the effect is stronger on the unseen test data, revealing the existence of exposure bias. Nonetheless, our model trained under ME and MR yields the best sentence quality and a relatively moderate performance decline. Although TF and SS demonstrate higher $\text{BLEU}_{\text{F5}}$ performance with shorter prefixes, their sentence qualities drop drastically on the test dataset with longer prefixes. On the other hand, GANs begin with lower $\text{BLEU}_{\text{F4}}$ precision scores but demonstrate less performance decay as the prefix grows longer and gradually out-perform TF. This robustness against unseen prefixes exhibits that supervision from a learned critic can boost a model's stability in completing unseen sequences. The better generative quality in TF and the stronger robustness against exposure bias in GANs are two different objectives in language modeling, but they can be pursued at the same time. Our model's improvement in both perspectives exhibit one possibility to achieve the goal. Conclusion We have presented multi-range reinforcing and multi-entropy sampling as two training strategies built upon deeply supervised nets BIBREF19 and multi-entropy samplingBIBREF20. The two easy-to-implement strategies help alleviate the reward sparseness in RL training and tackle the exposure bias problem. Conclusion ::: Acknowledgments The authors are grateful for the supports by NSF IIS-1618477, NSF IIS-1717431, and a grant from Samsung Research America.
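A rough sketch of the road exam protocol discussed above is given below. The complete(model, prefix, tau) interface is hypothetical, and using the corresponding corpus as the BLEU-4 reference set is an assumption; the point is only to show the seen/unseen prefix-completion loop with the sampling temperature fixed at 0.5.

from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_f(samples, refs, n=4):
    w = tuple([1.0 / n] * n)
    return corpus_bleu([refs] * len(samples), samples, weights=w,
                       smoothing_function=SmoothingFunction().method1)

def road_exam(model, complete, train_sents, test_sents, k_values=(4, 8, 12, 16)):
    # complete(model, prefix_tokens, tau) -> completed token list  (assumed interface)
    scores = {}
    for split, corpus in (("seen", train_sents), ("unseen", test_sents)):
        for k in k_values:
            prefixes = [s[:k] for s in corpus if len(s) > k]
            completions = [complete(model, p, tau=0.5) for p in prefixes]
            scores[(split, k)] = bleu_f(completions, corpus)
    return scores

# exposure bias shows up as a steeper score drop on the "unseen" split as k grows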
a new metric to reveal a model's robustness against exposure bias
12ac76b77f22ed3bcb6430bcd0b909441d79751b
12ac76b77f22ed3bcb6430bcd0b909441d79751b_0
Q: What are the competing models? Text: Introduction Likelihood-based language models with deep neural networks have been widely adopted to tackle language tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. By far, one of the most popular training strategies is teacher forcing, which derives from the general maximum likelihood estimation (MLE) principle BIBREF4. Under the teacher forcing schema, a model is trained to make predictions conditioned on ground-truth inputs. Although this strategy enables effective training of large neural networks, it is susceptible to aggravate exposure bias: a model may perform poorly at the inference stage, once its self-generated prefix diverges from the previously learned ground-truth data BIBREF5. A common approach to mitigate this problem is to impose supervision upon the model's own exploration. To this objective, existing literature have introduced REINFORCE BIBREF6 and actor-critic (AC) methods BIBREF7 (including language GANs BIBREF8), which offer direct feedback on a model's self-generated sequences, so the model can later, at the inference stage, deal with previously unseen exploratory paths. However, due to the well-known issue of reward sparseness and the potential noises in the critic's feedback, these methods are reported to risk compromising the generation quality, specifically in terms of precision. In this paper, we adopt two simple strategies, multi-range reinforcing and multi-entropy sampling to overcome the reward sparseness during training. With the tricks applied, our model demonstrates a significant improvement over competing models. In addition, we propose road exam as a new metric to reveal a model's robustness against exposure bias. Related Works As an early work to address exposure bias, BIBREF5 proposed a curriculum learning approach called scheduled sampling, which gradually replaces the ground-truth tokens with the model's own predictions while training. Later, BIBREF9 criticized this approach for pushing the model towards overfitting onto the corpus distribution based on the position of each token in the sequence, instead of learning about the prefix. In recent RL-inspired works, BIBREF10 built on the REINFORCE algorithm to directly optimize the test-time evaluation metric score. BIBREF11 employed a similar approach by training a critic network to predict the metric score that the actor's generated sequence of tokens would obtain. In both cases, the reliance on a metric to accurately reflect the quality of generated samples becomes a major limitation. Such metrics are often unavailable and difficult to design by nature. In parallel, adversarial training was introduced into language modeling by SeqGAN BIBREF8. This model consists of a generator pre-trained under MLE and a discriminator pre-trained to discern the generator's distribution from the real data. Follow-up works based on SeqGAN alter their training objectives or model architectures to enhance the guidance signal's informativeness. RankGAN replaces the absolute binary reward with a relative ranking score BIBREF12. LeakGAN allows the discriminator to “leak” its internal states to the generator at intermediate steps BIBREF13. BIBREF14 models a reward function using inverse reinforcement learning (IRL). While much progress have been made, we surprisingly observed that SeqGAN BIBREF8 shows more stable results in road exam in Section SECREF20. Therefore, we aim to amplify and denoise the reward signal in a direct and simple fashion. 
Model Description Problem Re-Formulation: Actor-Critic methods (ACs) consider language modeling as a generalized Markov Decision Process (MDP) problem, where the actor learns to optimize its policy guided by the critic, while the critic learns to optimize its value function based on the actor's output and external reward information. As BIBREF15 points out, GAN methods can be seen as a special case of AC where the critic aims to distinguish the actor's generation from real data and the actor is optimized in an opposite direction to the critic. Actor-Critic Training: In this work, we use a standard single-layer LSTM as the actor network. The training objective is to maximize the model's expected end rewards with policy gradient BIBREF16: Then, We use a CNN as the critic to predict the expected rewards for current generated prefix: In practice, we perform a Monte-Carlo (MC) search with roll-out policy following BIBREF8 to sample complete sentences starting from each location in a predicted sequence and compute their end rewards. Empirically, we found out that the maximum, instead of average, of rewards in the MC search better represents each token's actor value and yields better results during training. Therefore, we compute the action value by: In RL and GANs training, two major factors behind the unstable performance are the large variance and the update correlation during the sampling process BIBREF17, BIBREF18. We address these problems using the following strategies: Multi-Range Reinforcing: Our idea of multi-range supervision takes inspiration from deeply-supervised nets (DSNs) BIBREF19. Under deep supervision, intermediate layers of a deep neural network have their own training objectives and receive direct supervision simultaneously with the final decision layer. By design, lower layers in a CNN have smaller receptive fields, allowing them to make better use of local patterns. Our “multi-range" modification enables the critic to focus on local n-gram information in the lower layers while attending to global structural information in the higher layers. This is a solution to the high variance problem, as the actor can receive amplified reward with more local information compared to BIBREF8. Multi-Entropy Sampling: Language GANs can be seen an online RL methods, where the actor is updated from data generated by its own policy with strong correlation. Inspired by BIBREF20, we empirically find that altering the entropy of the actor's sample distribution during training is beneficial to the AC network's robust performance. In specific, we alternate the temperature $\tau $ to generate samples under different behavior policies. During the critic's training, the ground-truth sequences are assigned a perfect target value of 1. The samples obtained with $\tau < 1$ are supposed to contain lower entropy and to diverge less from the real data, that they receive a higher target value close to 1. Those obtained with $\tau > 1$ contain higher entropy and more errors that their target values are lower and closer to 0. This mechanism decorrelates updates during sequential sampling by sampling multiple diverse entropy distributions from actor synchronously. Model Description ::: Effectiveness of Multi-Range Reinforcing and Multi-Entropy Sampling Table TABREF5 demonstrates an ablation study on the effectiveness of multi-range reinforcing (MR) and multi-entropy sampling (ME). 
We observe that ME improves $\text{BLEU}_{\text{F5}}$ (precision) significantly while MR further enhances $\text{BLEU}_{\text{F5}}$ (precision) and $\text{BLEU}_{\text{F5}}$ (recall). Detailed explanations of these metrics can be found in Section SECREF4. Model Evaluation ::: Modeling Capacity & Sentence Quality We adopt three variations of BLEU metric from BIBREF14 to reflect precision and recall. $\textbf {BLEU}_{\textbf {F}}$, or forward BLEU, is a metric for precision. It uses the real test dataset as references to calculate how many n-grams in the generated samples can be found in the real data. $\textbf {BLEU}_{\textbf {B}}$, or backward BLEU, is a metric for recall. This metric takes both diversity and quality into computation. A model with severe mode collapse or diverse but incorrect outputs will receive poor scores in $\text{BLEU}_{\text{B}}$. $\textbf {BLEU}_{\textbf {HA}}$ is the harmonic mean of $\text{BLEU}_{\text{F}}$ and $\text{BLEU}_{\text{B}}$, given by: Model Evaluation ::: Exposure Bias Attacks Road Exam is a novel test we propose as a direct evaluation of exposure bias. In this test, a sentence prefix of length $K$, either taken from the training or testing dataset, is fed into the model under assessment to perform a sentence completion task. Thereby, the model is directed onto either a seen or an unseen “road" to begin its generation. Because precision is the primary concern, we set $\tau =0.5$ to sample high-confidence sentences from each model's distribution. We compare $\text{BLEU}_{\text{F}}$ of each model on both seen and unseen completion tasks and over a range of prefix lengths. By definition, a model with exposure bias should perform worse in completing sentences with unfamiliar prefix. The sentence completion quality should decay more drastically as the the unfamiliar prefix grows longer. Experiment ::: Datasets We evaluate on two datasets: EMNLP2017 WMT News and Google-small, a subset of Google One Billion Words . EMNLP2017 WMT News is provided in BIBREF21, a benchmarking platform for text generation models. We split the entire dataset into a training set of 195,010 sentences, a validation set of 83,576 sentences, and a test set of 10,000 sentences. The vocabulary size is 5,254 and the average sentence length is 27. Google-small is sampled and pre-processed from its the Google One Billion Words. It contains a training set of 699,967 sentences, a validation set of 200,000 sentences, and a test set of 99,985 sentences. The vocabulary size is 61,458 and the average sentence length is 29. Experiment ::: Implementation Details Experiment ::: Implementation Details ::: Network Architecture: We implement a standard single-layer LSTM as the generator (actor) and a eight-layer CNN as the discriminator (critic). The LSTM has embedding dimension 32 and hidden dimension 256. The CNN consists of 8 layers with filter size 3, where the 3rd, 5th, and 8th layers are directly connected to the output layer for multi-range supervision. Other parameters are consistent with BIBREF21. Experiment ::: Implementation Details ::: Training Settings: Adam optimizer is deployed for both critic and actor with learning rate $10^{-4}$ and $5 \cdot 10^{-3}$ respectively. The target values for the critic network are set to [0, 0.2, 0.4, 0.6, 0.8] for samples generated by the RNN with softmax temperatures [0.5, 0.75, 1.0, 1.25, 1.5]. Experiment ::: Discussion Table TABREF9 and Table TABREF10 compare models on EMNLP2017 WMT News and Google-small. 
Our model outperforms the others in $\text{BLEU}_{\text{F5}}$, $\text{BLEU}_{\text{B5}}$, and $\text{BLEU}_{\text{HA5}}$, indicating a high diversity and quality in its sample distribution. It is noteworthy that, LeakGAN and our model are the only two models to demonstrate improvements on $\text{BLEU}_{\text{B5}}$ over the teacher forcing baseline. The distinctive increment in recall indicates less mode collapse, which is a common problem in language GANs and ACs. Figure FIGREF16 demonstrates the road exam results on EMWT News. All models decrease in sampling precision (reflected via $\text{BLEU}_{\text{F4}}$) as the fed-in prefix length ($K$) increases, but the effect is stronger on the unseen test data, revealing the existence of exposure bias. Nonetheless, our model trained under ME and MR yields the best sentence quality and a relatively moderate performance decline. Although TF and SS demonstrate higher $\text{BLEU}_{\text{F5}}$ performance with shorter prefixes, their sentence qualities drop drastically on the test dataset with longer prefixes. On the other hand, GANs begin with lower $\text{BLEU}_{\text{F4}}$ precision scores but demonstrate less performance decay as the prefix grows longer and gradually out-perform TF. This robustness against unseen prefixes exhibits that supervision from a learned critic can boost a model's stability in completing unseen sequences. The better generative quality in TF and the stronger robustness against exposure bias in GANs are two different objectives in language modeling, but they can be pursued at the same time. Our model's improvement in both perspectives exhibit one possibility to achieve the goal. Conclusion We have presented multi-range reinforcing and multi-entropy sampling as two training strategies built upon deeply supervised nets BIBREF19 and multi-entropy samplingBIBREF20. The two easy-to-implement strategies help alleviate the reward sparseness in RL training and tackle the exposure bias problem. Conclusion ::: Acknowledgments The authors are grateful for the supports by NSF IIS-1618477, NSF IIS-1717431, and a grant from Samsung Research America.
Teacher forcing (TF), scheduled sampling (SS), SeqGAN, RankGAN, LeakGAN.
0038b073b7cca847033177024f9719c971692042
0038b073b7cca847033177024f9719c971692042_0
Q: How is the input triple translated to a slot-filling task? Text: Introduction Relation extraction systems populate knowledge bases with facts from an unstructured text corpus. When the type of facts (relations) are predefined, one can use crowdsourcing BIBREF0 or distant supervision BIBREF1 to collect examples and train an extraction model for each relation type. However, these approaches are incapable of extracting relations that were not specified in advance and observed during training. In this paper, we propose an alternative approach for relation extraction, which can potentially extract facts of new types that were neither specified nor observed a priori. We show that it is possible to reduce relation extraction to the problem of answering simple reading comprehension questions. We map each relation type $R(x,y)$ to at least one parametrized natural-language question $q_x$ whose answer is $y$ . For example, the relation $educated\_at(x,y)$ can be mapped to “Where did $x$ study?” and “Which university did $x$ graduate from?”. Given a particular entity $x$ (“Turing”) and a text that mentions $x$ (“Turing obtained his PhD from Princeton”), a non-null answer to any of these questions (“Princeton”) asserts the fact and also fills the slot $y$ . Figure 1 illustrates a few more examples. This reduction enables new ways of framing the learning problem. In particular, it allows us to perform zero-shot learning: define new relations “on the fly”, after the model has already been trained. More specifically, the zero-shot scenario assumes access to labeled data for $N$ relation types. This data is used to train a reading comprehension model through our reduction. However, at test time, we are asked about a previously unseen relation type $R_{N+1}$ . Rather than providing labeled data for the new relation, we simply list questions that define the relation's slot values. Assuming we learned a good reading comprehension model, the correct values should be extracted. Our zero-shot setup includes innovations both in data and models. We use distant supervision for a relatively large number of relations (120) from Wikidata BIBREF2 , which are easily gathered in practice via the WikiReading dataset BIBREF3 . We also introduce a crowdsourcing approach for gathering and verifying the questions for each relation. This process produced about 10 questions per relation on average, yielding a dataset of over 30,000,000 question-sentence-answer examples in total. Because questions are paired with relation types, not instances, this overall procedure has very modest costs. The key modeling challenge is that most existing reading-comprehension problem formulations assume the answer to the question is always present in the given text. However, for relation extraction, this premise does not hold, and the model needs to reliably determine when a question is not answerable. We show that a recent state-of-the-art neural approach for reading comprehension BIBREF4 can be directly extended to model answerability and trained on our new dataset. This modeling approach is another advantage of our reduction: as machine reading models improve with time, so should our ability to extract relations. Experiments demonstrate that our approach generalizes to new paraphrases of questions from the training set, while incurring only a minor loss in performance (4% relative F1 reduction). 
Furthermore, translating relation extraction to the realm of reading comprehension allows us to extract a significant portion of previously unseen relations, from virtually zero to an F1 of 41%. Our analysis suggests that our model is able to generalize to these cases by learning typing information that occurs across many relations (e.g. the answer to “Where” is a location), as well as detecting relation paraphrases to a certain extent. We also find that there are many feasible cases that our model does not quite master, providing an interesting challenge for future work. Related Work We are interested in a particularly harsh zero-shot learning scenario: given labeled examples for $N$ relation types during training, extract relations of a new type $R_{N+1}$ at test time. The only information we have about $R_{N+1}$ are parametrized questions. This setting differs from prior art in relation extraction. Bronstein2015 explore a similar zero-shot setting for event-trigger identification, in which $R_{N+1}$ is specified by a set of trigger words at test time. They generalize by measuring the similarity between potential triggers and the given seed set using unsupervised methods. We focus instead on slot filling, where questions are more suitable descriptions than trigger words. Open information extraction (open IE) BIBREF5 is a schemaless approach for extracting facts from text. While open IE systems need no relation-specific training data, they often treat different phrasings as different relations. In this work, we hope to extract a canonical slot value independent of how the original text is phrased. Universal schema BIBREF6 represents open IE extractions and knowledge-base facts in a single matrix, whose rows are entity pairs and columns are relations. The redundant schema (each knowledge-base relation may overlap with multiple natural-language relations) enables knowledge-base population via matrix completion techniques. Verga2017 predict facts for entity pairs that were not observed in the original matrix; this is equivalent to extracting seen relation types with unseen entities (see Section "Unseen Entities" ). Rocktaschel2015 and Demeester2016 use inference rules to predict hidden knowledge-base relations from observed natural-language relations. This setting is akin to generalizing across different manifestations of the same relation (see Section "Unseen Question Templates" ) since a natural-language description of each target relation appears in the training data. Moreover, the information about the unseen relations is a set of explicit inference rules, as opposed to implicit natural-language questions. Our zero-shot scenario, in which no manifestation of the test relation is observed during training, is substantially more challenging (see Section "Unseen Relations" ). In universal-schema terminology, we add a new empty column (the target knowledge-base relation), plus a few new columns with a single entry each (reflecting the textual relations in the sentence). These columns share no entities with existing columns, making the rest of the matrix irrelevant. To fill the empty column from the others, we match their descriptions. Toutanova2015 proposed a similar approach that decomposes natural-language relations and computes their similarity in a universal schema setting; however, they did not extend their method to knowledge-base relations, nor did they attempt to recover out-of-schema relations as we do. 
Approach We consider the slot-filling challenge in relation extraction, in which we are given a knowledge-base relation $R$ , an entity $e$ , and a sentence $s$ . For example, consider the relation $occupation$ , the entity “Steve Jobs”, and the sentence “Steve Jobs was an American businessman, inventor, and industrial designer”. Our goal is to find a set of text spans $A$ in $s$ for which $R(e,a)$ holds for each $a \in A$ . In our example, $A=\lbrace \textnormal {businessman},\textnormal {inventor}, \textnormal {industrial designer}\rbrace $ . The empty set is also a valid answer ( $A = \emptyset $ ) when $e$0 does not contain any phrase that satisfies $e$1 . We observe that given a natural-language question $e$2 that expresses $e$3 (e.g. “What did Steve Jobs do for a living?”), solving the reading comprehension problem of answering $e$4 from $e$5 is equivalent to solving the slot-filling challenge. The challenge now becomes one of querification: translating $R(e,?)$ into $q$ . Rather than querify $R(e,?)$ for every entity $e$ , we propose a method of querifying the relation $R$ . We treat $e$ as a variable $x$ , querify the parametrized query $R(x,?)$ (e.g. $occupation(x,?)$ ) as a question template $q_x$ (“What did $q$0 do for a living?”), and then instantiate this template with the relevant entities, creating a tailored natural-language question for each entity $q$1 (“What did Steve Jobs do for a living?”). This process, schema querification, is by an order of magnitude more efficient than querifying individual instances because annotating a relation type automatically annotates all of its instances. Applying schema querification to $N$ relations from a pre-existing relation-extraction dataset converts it into a reading-comprehension dataset. We then use this dataset to train a reading-comprehension model, which given a sentence $s$ and a question $q$ returns a set of text spans $A$ within $s$ that answer $q$ (to the best of its ability). In the zero-shot scenario, we are given a new relation $R_{N+1}(x,y)$ at test-time, which was neither specified nor observed beforehand. For example, the $deciphered(x,y)$ relation, as in “Turing and colleagues came up with a method for efficiently deciphering the Enigma”, is too domain-specific to exist in common knowledge-bases. We then querify $R_{N+1}(x,y)$ into $q_x$ (“Which code did $x$ break?”) or $q_y$ (“Who cracked $y$ ?”), and run our reading-comprehension model for each sentence in the document(s) of interest, while instantiating the question template with different entities that might participate in this relation. Each time the model returns a non-null answer $deciphered(x,y)$1 for a given question $deciphered(x,y)$2 , it extracts the relation $deciphered(x,y)$3 . Ultimately, all we need to do for a new relation is define our information need in the form of a question. Our approach provides a natural-language API for application developers who are interested in incorporating a relation-extraction component in their programs; no linguistic knowledge or pre-defined schema is needed. To implement our approach, we require two components: training data and a reading-comprehension model. In Section "Dataset" , we construct a large relation-extraction dataset and querify it using an efficient crowdsourcing procedure. We then adapt an existing state-of-the-art reading-comprehension model to suit our problem formulation (Section "Model" ). 
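A minimal sketch of schema querification at inference time follows. The TEMPLATES dictionary, the naive placeholder substitution, and the answer_fn reader interface (returning a span or None) are illustrative assumptions, not the paper's actual question set or model API.

TEMPLATES = {
    "occupation":  ["What did x do for a living?", "What is x's job?"],
    "educated_at": ["Where did x study?", "Which university did x graduate from?"],
}

def extract(relation, entity, sentence, answer_fn):
    # answer_fn(question, sentence) -> answer span or None  (assumed reader interface)
    spans = set()
    for template in TEMPLATES[relation]:
        question = template.replace("x", entity)   # naive placeholder substitution
        span = answer_fn(question, sentence)
        if span is not None:
            spans.add(span)
    return spans   # the empty set means R(entity, ?) is not expressed in the sentence

# usage with a stubbed reader
def dummy_reader(question, sentence):
    return "Princeton" if "Where did" in question and "Princeton" in sentence else None

print(extract("educated_at", "Turing",
              "Turing obtained his PhD from Princeton", dummy_reader))

Defining a new relation at test time then amounts to adding one more entry to TEMPLATES, with no retraining of the reader.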
Dataset To collect reading-comprehension examples as in Figure 2 , we first gather labeled examples for the task of relation-slot filling. Slot-filling examples are similar to reading-comprehension examples, but contain a knowledge-base query $R(e,?)$ instead of a natural-language question; e.g. $spouse(\textnormal {Angela Merkel}, ?)$ instead of “Who is Angela Merkel married to?”. We collect many slot-filling examples via distant supervision, and then convert their queries into natural language. Model Given a sentence $s$ and a question $q$ , our algorithm either returns an answer span $a$ within $s$ , or indicates that there is no answer. The task of obtaining answer spans to natural-language questions has been recently studied on the SQuAD dataset BIBREF8 , BIBREF12 , BIBREF13 , BIBREF14 . In SQuAD, every question is answerable from the text, which is why these models assume that there exists a correct answer span. Therefore, we modify an existing model in a way that allows it to decide whether an answer exists. We first give a high-level description of the original model, and then describe our modification. We start from the BiDAF model BIBREF4 , whose input is two sequences of words: a sentence $s$ and a question $q$ . The model predicts the start and end positions ${\bf y}^{start}, {\bf y}^{end}$ of the answer span in $s$ . BiDAF uses recurrent neural networks to encode contextual information within $s$ and $q$ alongside an attention mechanism to align parts of $q$ with $s$ and vice-versa. The outputs of the BiDAF model are the confidence scores of ${\bf y}^{start}$ and ${\bf y}^{end}$ , for each potential start and end. We denote these scores as ${\bf z}^{start}, {\bf z}^{end} \in \mathbb {R}^N$ , where $N$ is the number of words in the sentence $s$ . In other words, ${\bf z}^{start}_i$ indicates how likely the answer is to start at position $i$ of the sentence (the higher the more likely); similarly, ${\bf z}^{end}_i$ indicates how likely the answer is to end at that index. Assuming the answer exists, we can transform these confidence scores into pseudo-probability distributions ${\bf p}^{start}, {\bf p}^{end}$ via softmax. The probability of each $i$ -to- ${\bf y}^{end}$0 -span of the context can therefore be defined by: $$P(a = s_{i...j}) = {\bf p}^{start}_i {\bf p}^{end}_j$$ (Eq. 13) where ${\bf p}_i$ indicates the $i$ -th element of the vector ${\bf p}_i$ , i.e. the probability of the answer starting at $i$ . Seo:16 obtain the span with the highest probability during post-processing. To allow the model to signal that there is no answer, we concatenate a trainable bias $b$ to the end of both confidences score vectors ${\bf z}^{start}, {\bf z}^{end}$ . The new score vectors ${\tilde{\bf z}}^{start}, {\tilde{\bf z}}^{end} \in \mathbb {R}^{N+1}$ are defined as ${\tilde{\bf z}}^{start} = [{\bf z}^{start}; b]$ and similarly for ${\tilde{\bf z}}^{end}$ , where $[;]$ indicates row-wise concatenation. Hence, the last elements of $i$0 and $i$1 indicate the model's confidence that the answer has no start or end, respectively. We apply softmax to these augmented vectors to obtain pseudo-probability distributions, $i$2 . This means that the probability the model assigns to a null answer is: $$P(a = \emptyset ) = {\tilde{\bf p}}^{start}_{N+1} {\tilde{\bf p}}^{end}_{N+1}.$$ (Eq. 14) If $P(a = \emptyset )$ is higher than the probability of the best span, $\arg \max _{i,j \le N} P(a = s_{i...j})$ , then the model deems that the question cannot be answered from the sentence. 
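The span scoring with a no-answer bias (the two equations above) can be written out directly. The sketch below uses NumPy and toy confidence scores; the shared bias value, the maximum span length, and the scores themselves are assumptions for illustration.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def best_span_or_null(z_start, z_end, bias, max_len=15):
    # append the trainable no-answer bias to both confidence vectors
    zs = np.append(z_start, bias)
    ze = np.append(z_end, bias)
    ps, pe = softmax(zs), softmax(ze)
    n = len(z_start)
    p_null = ps[n] * pe[n]                       # P(a = empty)
    best, best_p = None, 0.0
    for i in range(n):
        for j in range(i, min(i + max_len, n)):  # P(a = s_{i..j}) = p_start_i * p_end_j
            if ps[i] * pe[j] > best_p:
                best, best_p = (i, j), ps[i] * pe[j]
    return None if p_null >= best_p else best

# toy scores for a 6-token sentence; a large bias pushes the model towards "no answer"
print(best_span_or_null(np.array([0.1, 2.0, 0.3, 0.2, 0.1, 0.0]),
                        np.array([0.0, 0.1, 2.2, 0.1, 0.1, 0.0]), bias=1.0))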
Conceptually, adding the bias enables the model to be sensitive to the absolute values of the raw confidence scores ${\bf z}^{start}, {\bf z}^{end}$ . We are essentially setting and learning a threshold $b$ that decides whether the model is sufficiently confident of the best candidate answer span. While this threshold provides us with a dynamic per-example decision of whether the instance is answerable, we can also set a global confidence threshold $p_{min}$ ; if the best answer's confidence is below that threshold, we infer that there is no answer. In Section "Unseen Relations" we use this global threshold to get a broader picture of the model's performance. Experiments To understand how well our method can generalize to unseen data, we design experiments for unseen entities (Section "Unseen Entities" ), unseen question templates (Section "Unseen Question Templates" ), and unseen relations (Section "Unseen Relations" ). Unseen Entities We show that our reading-comprehension approach works well in a typical relation-extraction setting by testing it on unseen entities and texts. We partitioned our dataset along entities in the question, and randomly clustered each entity into one of three groups: train, dev, or test. For instance, Alan Turing examples appear only in training, while Steve Jobs examples are exclusive to test. We then sampled 1,000,000 examples for train, 1,000 for dev, and 10,000 for test. This partition also ensures that the sentences at test time are different from those in train, since the sentences are gathered from each entity's Wikipedia article. Table 1 shows that our model generalizes well to new entities and texts, with little variance in performance between KB Relation, NL Relation, Multiple Templates, and Question Ensemble. Single Template performs significantly worse than these variants; we conjecture that simpler relation descriptions (KB Relation & NL Relation) allow for easier parameter tying across different examples, whereas learning from multiple questions allows the model to acquire important paraphrases. All variants of our model outperform off-the-shelf relation extraction systems (RNN Labeler and Miwa & Bansal) in this setting, demonstrating that reducing relation extraction to reading comprehension is indeed a viable approach for our Wikipedia slot-filling task. An analysis of 50 examples that Multiple Templates mispredicted shows that 36% of errors can be attributed to annotation errors (chiefly missing entries in Wikidata), and an additional 42% result from inaccurate span selection (e.g. “8 February 1985” instead of “1985”), for which our model is fully penalized. In total, only 18% of our sample were pure system errors, suggesting that our model is very close to the performance ceiling of this setting (slightly above 90% F1). Unseen Question Templates We test our method's ability to generalize to new descriptions of the same relation, by holding out a question template for each relation during training. We created 10 folds of train/dev/test samples of the data, in which one question template for each relation was held out for the test set, and another for the development set. For instance, “What did $x$ do for a living?” may appear only in the training set, while “What is $x$ 's job?” is exclusive to the test set. Each split was stratified by sampling $N$ examples per question template ( $N=1000,10,50$ for train, dev, test, respectively). 
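One simple way to realise the entity-partitioned split described above is a deterministic hash bucket per entity, so that all examples mentioning an entity land in the same subset. The hashing scheme and the split ratios below are assumptions; the paper only states that entities were randomly clustered into train, dev, and test.

import hashlib

def entity_split(entity, ratios=(0.8, 0.1, 0.1)):
    # deterministic bucket: every example of the same entity gets the same split
    h = int(hashlib.md5(entity.encode("utf-8")).hexdigest(), 16) % 1000 / 1000.0
    if h < ratios[0]:
        return "train"
    if h < ratios[0] + ratios[1]:
        return "dev"
    return "test"

examples = [("Alan Turing", "Where did Alan Turing study?", "Princeton"),
            ("Steve Jobs", "What did Steve Jobs do for a living?", "businessman")]
splits = {"train": [], "dev": [], "test": []}
for ex in examples:
    splits[entity_split(ex[0])].append(ex)
print({k: len(v) for k, v in splits.items()})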
This process created 10 training sets of 966,000 examples with matching development and test sets of 940 and 4,700 examples each. We trained and tested Multiple Templates on each one of the folds, yielding performance on unseen templates. We then replicated the existing test sets and replaced the unseen question templates with templates from the training set, yielding performance on seen templates. Revisiting our example, we convert test-set occurrences of “What is $x$ 's job?” to “What did $x$ do for a living?”. Table 2 shows that our approach is able to generalize to unseen question templates. Our system's performance on unseen questions is nearly as strong as for previously observed templates (losing roughly 3.5 points in F1). Unseen Relations We examine a pure zero-shot setting, where test-time relations are unobserved during training. We created 10 folds of train/dev/test samples, partitioned along relations: 84 relations for train, 12 dev, and 24 test. For example, when $educated\_at$ is allocated to test, no $educated\_at$ examples appear in train. Using stratified sampling of relations, we created 10 training sets of 840,000 examples each with matching dev and test sets of 600 and 12,000 examples per fold. Table 3 shows each system's performance; Figure 4 extends these results for variants of our model by applying a global threshold on the answers' confidence scores to generate precision/recall curves (see Section "Model" ). As expected, representing knowledge-base relations as indicators (KB Relation and Miwa & Bansal) is insufficient in a zero-shot setting; they must be interpreted as natural-language expressions to allow for some generalization. The difference between using a single question template (Single Template) and the relation's name (NL Relation) appears to be minor. However, training on a variety of question templates (Multiple Templates) substantially increases performance. We conjecture that multiple phrasings of the same relation allows our model to learn answer-type paraphrases that occur across many relations (see Section "Analysis" ). There is also some advantage to having multiple questions at test time (Question Ensemble). Analysis To understand how our method extracts unseen relations, we analyzed 100 random examples, of which 60 had answers in the sentence and 40 did not (negative examples). For negative examples, we checked whether a distractor – an incorrect answer of the correct answer type – appears in the sentence. For example, the question “Who is John McCain married to?” does not have an answer in “John McCain chose Sarah Palin as his running mate”, but “Sarah Palin” is of the correct answer type. We noticed that 14 negative examples (35%) contain distractors. When pairing these examples with the results from the unseen relations experiment in Section "Unseen Relations" , we found that our method answered 2/14 of the distractor examples incorrectly, compared to only 1/26 of the easier examples. It appears that while most of the negative examples are easy, a significant portion of them are not trivial. For positive examples, we observed that some instances can be solved by matching the relation in the sentence to that in the question, while others rely more on the answer's type. 
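The precision/recall curves obtained by sweeping a global confidence threshold (Figure 4) can be sketched as follows. This uses exact span match rather than the token-level scoring reported in the paper, and the toy predictions are invented for illustration.

def pr_curve(predictions, thresholds):
    # predictions: list of (confidence, predicted_span, gold_span);
    # gold_span is None for negative examples
    curve = []
    n_gold = sum(1 for _, _, g in predictions if g is not None)
    for t in thresholds:
        tp = fp = 0
        for conf, pred, gold in predictions:
            if conf < t:            # below the global threshold -> predict "no answer"
                continue
            if gold is not None and pred == gold:
                tp += 1
            else:
                fp += 1
        prec = tp / (tp + fp) if tp + fp else 1.0
        rec = tp / n_gold if n_gold else 0.0
        curve.append((t, prec, rec))
    return curve

preds = [(0.9, "Princeton", "Princeton"), (0.6, "Sarah Palin", None), (0.4, "1985", "1985")]
print(pr_curve(preds, thresholds=[0.3, 0.5, 0.7]))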
Moreover, we notice that each cue can be further categorized according to the type of information needed to detect it: (1) when part of the question appears verbatim in the text, (2) when the phrasing in the text deviates from the question in a way that is typical of other relations as well (e.g. syntactic variability), (3) when the phrasing in the text deviates from the question in a way that is unique to this relation (e.g. lexical variability). We name these categories verbatim, global, and specific, respectively. Figure 5 illustrates all the different types of cues we discuss in our analysis. We selected the most important cue for solving each instance. If there were two important cues, each one was counted as half. Table 4 shows their distribution. Type cues appear to be somewhat more dominant than relation cues (58% vs. 42%). Half of the cues are relation-specific, whereas global cues account for one third of the cases and verbatim cues for one sixth. This is an encouraging result, because we can potentially learn to accurately recognize verbatim and global cues from other relations. However, our method was only able to exploit these cues partially. We paired these examples with the results from the unseen relations experiment in Section "Unseen Relations" to see how well our method performs in each category. Table 5 shows the results for the Multiple Templates setting. On one hand, the model appears agnostic to whether the relation cue is verbatim, global, or specific, and is able to correctly answer these instances with similar accuracy (there is no clear trend due to the small sample size). For examples that rely on typing information, the trend is much clearer; our model is much better at detecting global type cues than specific ones. Based on these observations, we think that the primary sources of our model's ability to generalize to new relations are: global type detection, which is acquired from training on many different relations, and relation paraphrase detection (of all types), which probably relies on its pre-trained word embeddings. Conclusion We showed that relation extraction can be reduced to a reading comprehension problem, allowing us to generalize to unseen relations that are defined on-the-fly in natural language. However, the problem of zero-shot relation extraction is far from solved, and poses an interesting challenge to both the information extraction and machine reading communities. As research into machine reading progresses, we may find that more tasks can benefit from a similar approach. To support future work in this avenue, we make our code and data publicly available. Acknowledgements The research was supported in part by DARPA under the DEFT program (FA8750-13-2-0019), the ARO (W911NF-16-1-0121), the NSF (IIS-1252835, IIS-1562364), gifts from Google, Tencent, and Nvidia, and an Allen Distinguished Investigator Award. We also thank Mandar Joshi, Victoria Lin, and the UW NLP group for helpful conversations and comments on the work.
Each relation type R(x,y) is mapped to at least one parametrized question template q_x; at test time the template is instantiated with a concrete entity x, and a non-null answer returned by the reading-comprehension model fills the slot y.
ad6415f4351c44ffae237524696a3f76f383bfd5
ad6415f4351c44ffae237524696a3f76f383bfd5_0
Q: Is model compared against state of the art models on these datasets? Text: Introduction Deep convolutional neural networks (CNNs) with 2D convolutions and small kernels BIBREF1, have achieved state-of-the-art results for several speech recognition tasks BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. The accuracy of those models grows with their complexity, leading to redundant latent representations. Several approaches have been proposed in the literature to reduce this redundancy BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, and therefore to improve their efficiency. Octave convolutional layers BIBREF0 address the problem of spatial redundancy in feature maps by learning feature representations at high and low resolutions. The low resolution processing path increases the size of the receptive field in the original input space, which is a plausible explanation of the improved performance for image classification. We extend the octave convolution concept to multi-scale octave convolutional layers, which include lower resolution feature maps with a higher compression rate (reduction by more than one octave), and the use of more than two feature map tensor groups in order to be learn representations at multiple scales. Multi-scale processing have been previously proposed for a variety of speech recognition tasks BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. In deep CNN acoustic models, some of the feature maps may need to represent information which varies at a lower rate, such as the characteristics of the speaker or background noise, compared to the information necessary for phonetic discrimination. Spatial average pooling in a low resolution group of feature maps can be interpreted as a form of low-pass filtering, providing smoothed representations of the observed data, potentially leading to improved performance. We investigate the use of multi-scale octave convolutional layers for robust speech recognition, and attempt to shed more light on the explainability of the models by evaluating the robustness of the learned representations using an affine transformation loss to measure the similarity between clean and noisy encodings. Multi-scale octave convolutions An octave convolutional layer BIBREF0 factorizes the output feature maps of a convolutional layer into two groups. The resolution of the low-frequency feature maps is reduced by an octave – height and width dimensions are divided by 2. In this work, we explore spatial reduction by up to 3 octaves – dividing by $2^t$, where $t=1,2,3$ – and for up to 4 groups. We refer to such a layer as a multi-octave convolutional (MultiOctConv) layer, and an example with three groups and reductions of one and two octaves is depicted in Fig. FIGREF1. In a vanilla CNN the convolutions have the same spatial resolution throughout the network. An octave convolutional (OctConv) layer is divided into high- and low-frequency feature maps and a multi-octave convolutional (MultiOctConv) layer has feature maps reduced by multiple octaves. Let the input feature tensor be $X \in \mathbb {R}^{c_{in} \times h \times w}$, where $c_{in}$ denotes the number of input channels and $h$ and $w$ correspond to the spatial dimensions. In a MultiOctConv layer working at 3 resolutions, $X$ is factorized along the channel dimension into $X = \lbrace X^1, X^2, X^3\rbrace $. The first tensor group tensor, $X^1$, is a representation at the same spatial scale as $X$. 
The spatial dimensions of the second and third group tensors, $X^2$ and $X^3$, are reduced by one and two octaves respectively. The dimensions of the input tensors $X^1$, $X^2$ and $X^3$ are described in Fig. FIGREF1. The fraction of the channels for each group is denoted with $\alpha _{n} \in [0,1]$, where $\sum _{n=1}^{N} \alpha _{n} = 1$ for $N$ resolution groups in the MultiOctConv layer. For simplicity, we use the same $\alpha _{n}$ for input and output representations within the same scale group. Similarly, the output tensors are also factorized into $Y = \lbrace Y^1, Y^2, Y^3\rbrace $. Their dimensions are analogous to the dimensions of the input tensors and are described in Fig. FIGREF1. To compute $Y^1$, $Y^2$ and $Y^3$ we operate directly on the factorized input tensors $X^1$, $X^2$ and $X^3$. Inter-frequency information update is implemented as a sum of feature maps from different resolution groups. To be able to sum those representations for a desired output scale, the spatial dimensions of the input tensors must be the same. For this reason, two operations are employed: spatial average pooling pool($X, p$) and bilinear interpolation upsample($X, u$), where $p$ is the kernel size and stride for the the 2D pooling layer and $u$ is the upsampling factor. The output MultiOctConv representations are therefore computed as where $f(.)$ is the convolution function and $W^{n_{in}\rightarrow {n_{out}}}\in \mathbb {R}^{c_{in} \times k \times k \times c_{out}}$ is the convolution filter for a $k \times k$ kernel. We call the information update “intra-frequency” when $n_{in} = n_{out}$, and “inter-frequency” when $n_{in} \ne n_{out}$. It is important to note that the convolution $f(.)$ operates on the tensors compressed with average pooling and on the tensors before upsampling, making the design more efficient. The number of parameters in the MultiOctConv layer is the same as in a vanilla convolutional layer. Multi-scale octave convolutions ::: Robustness of learned representations To evaluate the robustness of learned representations, we compare the projections of clean and noisy Aurora-4 samples. The similarity between them is measured using the mean squared error (MSE) loss of an affine projection $y$ of $N$ clean to noisy samples (Eq. DISPLAY_FORM3), to take into account permutations of hidden representations and to ensure invariance of the metric to affine transformations of the encodings. The number of units in layer $y$ and the dimensionality $D$ of $\mathbf {x}_{h}$ is 1024. We use the Aurora-4 test sets and compare clean encodings $\mathbf {x}_{h,clean}$ with noisy encodings $\mathbf {x}_{h,noise}$, obtained as the activations from the last convolutional layer with a forward pass through a trained model. Both hidden representations were obtained for CNN and octave CNN (OctCNN) models in order to compare representations between the models. Also, for intra-model comparison, we evaluate the loss with the encodings from high and low-resolution groups (paths $Y^{1\rightarrow 1}$ and $Y^{2\rightarrow 1}$). This analysis aims to evaluate if the low-resolution groups for noisy samples are indeed more similar to the clean ones than the high-resolution encodings, suggesting more robust representations. We optimize the parameters of $y$ with back-propagation using a fixed number of 3 epochs and we report the validation loss for Aurora-4 test sets. 
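A compact PyTorch sketch of a three-resolution MultiOctConv layer is given below. It follows the description above (average pooling before convolution on downward paths, convolution before bilinear upsampling on upward paths, and summation across resolution groups), but the $\alpha _{n}$ split, channel counts, and the padded 40x12 toy input are assumptions for the example, and the original implementation's efficiency details may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiOctConv(nn.Module):
    # group 0 at full resolution, group 1 reduced by one octave, group 2 by two
    def __init__(self, c_in, c_out, alphas=(0.5, 0.25, 0.25), k=3):
        super().__init__()
        assert abs(sum(alphas) - 1.0) < 1e-6
        self.cin = [int(round(a * c_in)) for a in alphas]
        self.cout = [int(round(a * c_out)) for a in alphas]
        # one k x k convolution per (input group -> output group) path
        self.convs = nn.ModuleDict({
            f"{i}{j}": nn.Conv2d(self.cin[i], self.cout[j], k, padding=k // 2)
            for i in range(3) for j in range(3)
        })

    def forward(self, xs):
        # xs = [X^1, X^2, X^3]; X^n has its spatial dims divided by 2**n
        ys = []
        for j in range(3):                       # output scale
            acc = 0
            for i in range(3):                   # input scale
                x = xs[i]
                if i < j:                        # higher res -> pool down, then conv
                    x = F.avg_pool2d(x, kernel_size=2 ** (j - i))
                y = self.convs[f"{i}{j}"](x)
                if i > j:                        # lower res -> conv, then upsample
                    y = F.interpolate(y, scale_factor=2 ** (i - j),
                                      mode="bilinear", align_corners=False)
                acc = acc + y
            ys.append(acc)
        return ys

# a 40x11 FBANK map is not divisible by 4, so the toy input is padded to 40x12
x = torch.randn(8, 64, 40, 12)
splits = torch.split(x, [32, 16, 16], dim=1)
xs = [splits[0], F.avg_pool2d(splits[1], 2), F.avg_pool2d(splits[2], 4)]
ys = MultiOctConv(64, 128)(xs)
print([t.shape for t in ys])   # full, one-octave and two-octave resolutions

Because every path still uses a k x k kernel over the split channels, the parameter count matches a vanilla convolution with the same c_in and c_out, as noted above.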
Experimental setup Aurora-4 BIBREF17: We evaluate our models on the simulated multi-condition Aurora-4 dataset, consisting of $\sim $15h of audio for training and $\sim $9h for testing. The test set is divided into 4 subsets: A, B, C, and D. Subset A contains clean-condition recordings, subset B has 6 noise types added to the recordings (car, babble, restaurant, street, airport, train), subset C is recorded with a mismatched microphone, and subset D is recorded with a mismatched microphone and with noise added. In our experiments, we use multi-condition GMM-HMM forced alignments as targets for CNN training. The number of CD states for Aurora-4 is 3422. AMI BIBREF18: AMI contains $\sim $100h of meeting recordings, captured by an independent headset microphone (IHM), single distant microphone (SDM), and multiple distant microphones (MDM), where the mics are combined using the BeamformIt BIBREF19 toolkit. We train our models using the MDM data and evaluate the models for all 3 types of recordings to analyze the effect of mismatched training/testing conditions. We use the suggested train/dev/eval data split BIBREF20, and we evaluate the models on both dev and eval sets. The number of CD states for AMI is 3984. Features: In our experiments, we use 40-dimension mel-scaled filterbank (FBANK) features with {-5,..,5} context for splicing, resulting in a $40\times 11$ input feature map. Models: Our baseline CNN model BIBREF21 consists of 15 convolutional and one fully-connected layer. We use $3\times 3$ kernels throughout the network. We start with 64 output channels in the first layer and double them after 3 and 9 layers. We use batch normalization in every convolutional layer, and ReLU afterwards (unless a reverse order is noted). The initial learning rate is 0.001. We use early stopping for training. Results We present our results in terms of accuracy and robustness on Aurora-4 and AMI, as well as in terms of the computational cost, which is calculated as the number of multiply-accumulate operations (MACCs) performed for a single input feature map. The cost reduction when using octave convolutions stems from reduced dimensions $c_{in}$, $c_{out}$, $h$, and $w$ compared to a vanilla convolutional layer. Aurora-4: Results for Aurora-4 are presented in Table TABREF4. We replace vanilla convolutional layers of our baseline model (CNN) with OctConv and MultiOctConv layers. We first evaluate which layers can be replaced and find that all but the first layer, operating directly on the input representation, should be replaced for the best performance. This approach (L2-L15) is also the least costly. Reducing the ratio of low-resolution representations to 0.125 improves the WER for the mismatched microphone scenario C, but not for all test conditions. Applying batch normalization after ReLU is beneficial for test set C and D. For OctCNN models, the WER for test set D dropped by $\sim 0.4\%$ with a compression by one octave, and by another $\sim 0.4\%$ with a reversed batch normalization and ReLU order. The biggest differences between the MultiOctCNN models can be observed for test set D. The models with the lowest WERs are the ones with a spatial reduction by 2 or 3 octaves, and with 2 or 3 groups. This indicates that multi-scale octave convolutions seem to be an effective, as well as an efficient design for processing speech with background noise and channel mismatch. For MultiOctCNNs, batch normalization after ReLU also gives a performance boost for test set D, with a drop to $13.57\%$. 
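The $40\times 11$ input feature maps can be produced by splicing each FBANK frame with its {-5,..,5} context. The sketch below assumes edge frames are repeated for padding, which the paper does not specify.

import numpy as np

def splice(fbank, context=5):
    # fbank: (T, 40) mel filterbank frames -> (T, 40, 2*context+1) input maps,
    # padding the edges by repeating the first/last frame
    T, _ = fbank.shape
    padded = np.concatenate([np.repeat(fbank[:1], context, axis=0),
                             fbank,
                             np.repeat(fbank[-1:], context, axis=0)], axis=0)
    return np.stack([padded[t:t + 2 * context + 1].T for t in range(T)])

feats = np.random.randn(100, 40).astype(np.float32)
print(splice(feats).shape)   # (100, 40, 11): one 40 x 11 "image" per frame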
To further evaluate the robustness of the latent representations, we measured the MSE between the (projected) representations described above (Fig. FIGREF5). The loss for the activations at the output of Conv15 ("all") is similar for the CNN and OctCNN models on test sets B and C, but lower for OctCNN on test set D, indicating that its learned representations are more robust and contribute to lower WERs. As expected, the within-model comparison of the loss shows that the representations at low resolution are more similar to the clean encodings from test set A than the ones at high resolution. We believe that this effect improves the robustness of latent representations and results in a decreased WER. AMI: Results for AMI are presented in Table TABREF6. In contrast to the Aurora-4 findings, better performance was achieved with an all-OctCNN model (L1-L15). This is an interesting finding, and we believe that the multi-scale processing of the input feature space is beneficial for AMI MDM because of the reverberation in the data. The reverberated input time$\times $freq representation can be viewed as spatially redundant; therefore, an OctConv layer applied directly to the input representation is effective. Unfortunately, the only MultiOctConv model superior to the baseline CNN is the one with 3 groups and a spatial reduction by 1 and 2 octaves. This result indicates that, for this architecture on AMI MDM, spatial redundancy is not what degrades performance. However, in terms of computational cost, we can reduce the #MACCs by a factor of 1.8 with only a small WER increase for a model with 4 resolution groups. Conclusions We have presented multi-scale octave CNN models for robust and efficient speech recognition. We build on Chen et al. BIBREF0, applying the method to robust ASR and extending it to multiple resolution groups with a spatial reduction of more than one octave. Our experiments confirm that multi-scale processing of the hidden representations is not only more computationally efficient but also improves recognition. Similarity measures between clean and noisy encodings indicate that multi-scale processing in a deep CNN acoustic model improves the robustness of the learned representations, especially in the additive noise and mismatched microphone scenarios. The gain from octave convolutions was also observed for AMI MDM data with significant reverberation, when they were applied to the input feature space. However, performance on AMI MDM was not improved with multi-octave convolutions. More careful tuning of the $\alpha $ hyperparameter could improve the results, as it controls the ratio of multi-scale feature maps in the model, enabling both fine-grained representations that preserve the details necessary for phonetic discrimination and smoothed, more invariant representations that improve the robustness of the model. It would also be possible to set $\alpha $ layer-by-layer, so that the fractions of channels at different resolutions vary with the depth of the representation. We proposed a single-projection-layer MSE loss to measure the affine relationship between clean and noisy hidden representations. With this approach, we evaluated the robustness of the encodings and improved the explainability of our models. A more thorough analysis of the learned representations is an interesting future direction.
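The affine-projection MSE probe used to compare clean and noisy encodings can be sketched in a few lines. The optimiser, learning rate, and full-batch training below are assumptions; the paper only fixes the 1024-dimensional encodings, the affine map, and the 3-epoch budget, and reports the loss on held-out validation data rather than on the fitting data as done here.

import torch
import torch.nn as nn

def projection_mse(clean, noisy, epochs=3, lr=1e-3):
    # clean, noisy: (N, 1024) paired encodings from the last convolutional layer;
    # fitting an affine map makes the score invariant to affine transforms
    proj = nn.Linear(clean.size(1), noisy.size(1))
    opt = torch.optim.Adam(proj.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(proj(clean), noisy)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(proj(clean), noisy).item()

clean = torch.randn(256, 1024)
noisy = clean + 0.1 * torch.randn(256, 1024)   # stand-in for test-set B/C/D encodings
print(projection_mse(clean, noisy))

A lower probe loss for a given group of feature maps indicates that its noisy encodings stay closer (up to an affine map) to the clean ones, which is how the robustness comparison above is read.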
We confirmed that the noisy low-resolution representations are more similar to their clean counterparts than the high-resolution representations are, and are thus more robust. However, we did not investigate the reason for this increased similarity, leaving it to future work to ascertain whether the low-resolution group captures speaker or noise characteristics better, or yields more invariant phonetic representations.
Yes
e097c2ec6021b1c1195b953bf3e930374b74d8eb
e097c2ec6021b1c1195b953bf3e930374b74d8eb_0
Q: How is octave convolution concept extended to multiple resolutions and octaves? Text: Introduction Deep convolutional neural networks (CNNs) with 2D convolutions and small kernels BIBREF1, have achieved state-of-the-art results for several speech recognition tasks BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. The accuracy of those models grows with their complexity, leading to redundant latent representations. Several approaches have been proposed in the literature to reduce this redundancy BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, and therefore to improve their efficiency. Octave convolutional layers BIBREF0 address the problem of spatial redundancy in feature maps by learning feature representations at high and low resolutions. The low resolution processing path increases the size of the receptive field in the original input space, which is a plausible explanation of the improved performance for image classification. We extend the octave convolution concept to multi-scale octave convolutional layers, which include lower resolution feature maps with a higher compression rate (reduction by more than one octave), and the use of more than two feature map tensor groups in order to be learn representations at multiple scales. Multi-scale processing have been previously proposed for a variety of speech recognition tasks BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. In deep CNN acoustic models, some of the feature maps may need to represent information which varies at a lower rate, such as the characteristics of the speaker or background noise, compared to the information necessary for phonetic discrimination. Spatial average pooling in a low resolution group of feature maps can be interpreted as a form of low-pass filtering, providing smoothed representations of the observed data, potentially leading to improved performance. We investigate the use of multi-scale octave convolutional layers for robust speech recognition, and attempt to shed more light on the explainability of the models by evaluating the robustness of the learned representations using an affine transformation loss to measure the similarity between clean and noisy encodings. Multi-scale octave convolutions An octave convolutional layer BIBREF0 factorizes the output feature maps of a convolutional layer into two groups. The resolution of the low-frequency feature maps is reduced by an octave – height and width dimensions are divided by 2. In this work, we explore spatial reduction by up to 3 octaves – dividing by $2^t$, where $t=1,2,3$ – and for up to 4 groups. We refer to such a layer as a multi-octave convolutional (MultiOctConv) layer, and an example with three groups and reductions of one and two octaves is depicted in Fig. FIGREF1. In a vanilla CNN the convolutions have the same spatial resolution throughout the network. An octave convolutional (OctConv) layer is divided into high- and low-frequency feature maps and a multi-octave convolutional (MultiOctConv) layer has feature maps reduced by multiple octaves. Let the input feature tensor be $X \in \mathbb {R}^{c_{in} \times h \times w}$, where $c_{in}$ denotes the number of input channels and $h$ and $w$ correspond to the spatial dimensions. In a MultiOctConv layer working at 3 resolutions, $X$ is factorized along the channel dimension into $X = \lbrace X^1, X^2, X^3\rbrace $. The first tensor group tensor, $X^1$, is a representation at the same spatial scale as $X$. 
The spatial dimensions of the second and third group tensors, $X^2$ and $X^3$, are reduced by one and two octaves respectively. The dimensions of the input tensors $X^1$, $X^2$ and $X^3$ are described in Fig. FIGREF1. The fraction of the channels for each group is denoted with $\alpha _{n} \in [0,1]$, where $\sum _{n=1}^{N} \alpha _{n} = 1$ for $N$ resolution groups in the MultiOctConv layer. For simplicity, we use the same $\alpha _{n}$ for input and output representations within the same scale group. Similarly, the output tensors are also factorized into $Y = \lbrace Y^1, Y^2, Y^3\rbrace $. Their dimensions are analogous to the dimensions of the input tensors and are described in Fig. FIGREF1. To compute $Y^1$, $Y^2$ and $Y^3$ we operate directly on the factorized input tensors $X^1$, $X^2$ and $X^3$. Inter-frequency information update is implemented as a sum of feature maps from different resolution groups. To be able to sum those representations for a desired output scale, the spatial dimensions of the input tensors must be the same. For this reason, two operations are employed: spatial average pooling pool($X, p$) and bilinear interpolation upsample($X, u$), where $p$ is the kernel size and stride for the 2D pooling layer and $u$ is the upsampling factor. The output MultiOctConv representations are therefore computed as $Y^{n_{out}} = \sum _{n_{in}} Y^{n_{in}\rightarrow n_{out}}$, with $Y^{n_{in}\rightarrow n_{out}} = f(X^{n_{in}})$ for the intra-frequency update, $Y^{n_{in}\rightarrow n_{out}} = f(\text{pool}(X^{n_{in}}, p))$ when mapping to a lower resolution, and $Y^{n_{in}\rightarrow n_{out}} = \text{upsample}(f(X^{n_{in}}), u)$ when mapping to a higher resolution, where $f(.)$ is the convolution function and $W^{n_{in}\rightarrow {n_{out}}}\in \mathbb {R}^{c_{in} \times k \times k \times c_{out}}$ is the convolution filter for a $k \times k$ kernel. We call the information update “intra-frequency” when $n_{in} = n_{out}$, and “inter-frequency” when $n_{in} \ne n_{out}$. It is important to note that the convolution $f(.)$ operates on the tensors compressed with average pooling and on the tensors before upsampling, making the design more efficient. The number of parameters in the MultiOctConv layer is the same as in a vanilla convolutional layer. Multi-scale octave convolutions ::: Robustness of learned representations To evaluate the robustness of learned representations, we compare the projections of clean and noisy Aurora-4 samples. The similarity between them is measured using the mean squared error (MSE) loss of an affine projection $y$ of $N$ clean to noisy samples (Eq. DISPLAY_FORM3), to take into account permutations of hidden representations and to ensure invariance of the metric to affine transformations of the encodings. The number of units in layer $y$ and the dimensionality $D$ of $\mathbf {x}_{h}$ is 1024. We use the Aurora-4 test sets and compare clean encodings $\mathbf {x}_{h,clean}$ with noisy encodings $\mathbf {x}_{h,noise}$, obtained as the activations from the last convolutional layer with a forward pass through a trained model. Both hidden representations were obtained for CNN and octave CNN (OctCNN) models in order to compare representations between the models. Also, for intra-model comparison, we evaluate the loss with the encodings from high and low-resolution groups (paths $Y^{1\rightarrow 1}$ and $Y^{2\rightarrow 1}$). This analysis aims to evaluate if the low-resolution groups for noisy samples are indeed more similar to the clean ones than the high-resolution encodings, suggesting more robust representations. We optimize the parameters of $y$ with back-propagation using a fixed number of 3 epochs and we report the validation loss for Aurora-4 test sets.
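As an illustration of the computation just described, the following is a minimal sketch in Python of a MultiOctConv layer with several resolution groups, assuming a PyTorch-style implementation; the channel fractions, kernel size and number of groups shown are illustrative choices rather than the exact configuration used in the experiments, and the layers that map between a single-resolution tensor and the factorized groups are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiOctConv(nn.Module):
    """Sketch of a multi-octave convolution with N resolution groups.

    Group n has its spatial dimensions reduced by a factor of 2**n relative to
    the full-resolution feature map (n = 0 is full resolution). Assumes the
    input spatial dimensions are divisible by the largest reduction factor.
    """
    def __init__(self, c_in, c_out, alphas=(0.5, 0.25, 0.25), k=3):
        super().__init__()
        assert abs(sum(alphas) - 1.0) < 1e-6
        self.n_groups = len(alphas)
        c_ins = [max(1, int(round(a * c_in))) for a in alphas]    # illustrative split
        c_outs = [max(1, int(round(a * c_out))) for a in alphas]
        # One k x k convolution per (input group -> output group) path.
        self.convs = nn.ModuleDict({
            f"{i}to{j}": nn.Conv2d(c_ins[i], c_outs[j], k, padding=k // 2)
            for i in range(self.n_groups) for j in range(self.n_groups)
        })

    def forward(self, xs):
        # xs[i]: tensor of group i, spatial dims divided by 2**i.
        ys = []
        for j in range(self.n_groups):
            acc = 0
            for i, x in enumerate(xs):
                if j > i:      # to a lower resolution: pool first, then convolve
                    h = F.avg_pool2d(x, kernel_size=2 ** (j - i))
                    h = self.convs[f"{i}to{j}"](h)
                elif j < i:    # to a higher resolution: convolve, then upsample
                    h = self.convs[f"{i}to{j}"](x)
                    h = F.interpolate(h, scale_factor=2 ** (i - j), mode="bilinear",
                                      align_corners=False)
                else:          # intra-frequency update
                    h = self.convs[f"{i}to{j}"](x)
                acc = acc + h  # inter-frequency update as a sum of feature maps
            ys.append(acc)
        return ys

In this sketch the convolution is applied to the pooled tensors and before upsampling, mirroring the efficiency argument above.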
Experimental setup Aurora-4 BIBREF17: We evaluate our models on the simulated multi-condition Aurora-4 dataset, consisting of $\sim $15h of audio for training and $\sim $9h for testing. The test set is divided into 4 subsets: A, B, C, and D. Subset A contains clean-condition recordings, subset B has 6 noise types added to the recordings (car, babble, restaurant, street, airport, train), subset C is recorded with a mismatched microphone, and subset D is recorded with a mismatched microphone and with noise added. In our experiments, we use multi-condition GMM-HMM forced alignments as targets for CNN training. The number of CD states for Aurora-4 is 3422. AMI BIBREF18: AMI contains $\sim $100h of meeting recordings, captured by an independent headset microphone (IHM), single distant microphone (SDM), and multiple distant microphones (MDM), where the mics are combined using the BeamformIt BIBREF19 toolkit. We train our models using the MDM data and evaluate the models for all 3 types of recordings to analyze the effect of mismatched training/testing conditions. We use the suggested train/dev/eval data split BIBREF20, and we evaluate the models on both dev and eval sets. The number of CD states for AMI is 3984. Features: In our experiments, we use 40-dimension mel-scaled filterbank (FBANK) features with {-5,..,5} context for splicing, resulting in a $40\times 11$ input feature map. Models: Our baseline CNN model BIBREF21 consists of 15 convolutional and one fully-connected layer. We use $3\times 3$ kernels throughout the network. We start with 64 output channels in the first layer and double them after 3 and 9 layers. We use batch normalization in every convolutional layer, and ReLU afterwards (unless a reverse order is noted). The initial learning rate is 0.001. We use early stopping for training. Results We present our results in terms of accuracy and robustness on Aurora-4 and AMI, as well as in terms of the computational cost, which is calculated as the number of multiply-accumulate operations (MACCs) performed for a single input feature map. The cost reduction when using octave convolutions stems from reduced dimensions $c_{in}$, $c_{out}$, $h$, and $w$ compared to a vanilla convolutional layer. Aurora-4: Results for Aurora-4 are presented in Table TABREF4. We replace vanilla convolutional layers of our baseline model (CNN) with OctConv and MultiOctConv layers. We first evaluate which layers can be replaced and find that all but the first layer, operating directly on the input representation, should be replaced for the best performance. This approach (L2-L15) is also the least costly. Reducing the ratio of low-resolution representations to 0.125 improves the WER for the mismatched microphone scenario C, but not for all test conditions. Applying batch normalization after ReLU is beneficial for test set C and D. For OctCNN models, the WER for test set D dropped by $\sim 0.4\%$ with a compression by one octave, and by another $\sim 0.4\%$ with a reversed batch normalization and ReLU order. The biggest differences between the MultiOctCNN models can be observed for test set D. The models with the lowest WERs are the ones with a spatial reduction by 2 or 3 octaves, and with 2 or 3 groups. This indicates that multi-scale octave convolutions seem to be an effective, as well as an efficient design for processing speech with background noise and channel mismatch. For MultiOctCNNs, batch normalization after ReLU also gives a performance boost for test set D, with a drop to $13.57\%$. 
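Before turning to the representation analysis, the affine-projection probe from the robustness section can be sketched as follows (Python, assuming PyTorch; the single full-batch training loop, the Adam optimiser and the learning rate are simplifying assumptions rather than the exact recipe used here):

import torch
import torch.nn as nn

def affine_projection_mse(x_clean, x_noisy, epochs=3, lr=1e-3):
    """Fit an affine map from clean to noisy encodings and return the residual MSE.

    x_clean, x_noisy: (N, D) activations from the last convolutional layer (D = 1024).
    A lower residual MSE means the two sets of encodings are closer up to an
    affine transformation, i.e. the representation is more robust to the noise.
    """
    proj = nn.Linear(x_clean.shape[1], x_noisy.shape[1])
    opt = torch.optim.Adam(proj.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(proj(x_clean), x_noisy)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(proj(x_clean), x_noisy).item()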
To further evaluate the robustness of the latent representations we measured the MSE between the (projected) representations, described above (Fig. FIGREF5). The loss for the activations at the output of Conv15 ("all") is similar for CNN and OctCNN models for test sets B and C, but lower for test set D for OctCNN, indicating that the learned representations are more robust, contributing to lower WERs. As expected, within-model comparison of the loss shows that the representations at low resolution are more similar to the clean encodings from test set A than the ones at high resolution. We believe that this effect improves the robustness of latent representations and results in a decreased WER. AMI: Results for AMI are presented in Table TABREF6. In contrast to the Aurora-4 findings, better performance was achieved with an all OctCNN model (L1-L15). This is an interesting finding, and we believe that the multi-scale processing of the input feature space is beneficial for AMI MDM because of the reverberation in the data. The reverberated input time$\times $freq representation can be viewed as a spatially redundant one; therefore, the OctConv layer applied to the input representation is effective. Unfortunately, the only MultiOctConv model superior to the baseline CNN is the one with 3 groups with a spatial reduction by 1 and 2 octaves. This result indicates that the spatial redundancy for this architecture for AMI MDM is not degrading the performance. However, in terms of the computational cost, we can reduce the #MACCs by a factor of 1.8 with only a small WER increase for a model with 4 resolution groups. Conclusions We have presented multi-scale octave CNN models for robust and efficient speech recognition. We build on Chen et al. BIBREF0, applying the method to robust ASR and extending it to multiple resolution groups with a spatial reduction of more than one octave. Our experiments confirm that multi-scale processing of the hidden representations is not only more computationally efficient, but also improves recognition accuracy. Similarity measures between clean and noisy encodings indicate that multi-scale processing in a deep CNN acoustic model improves the robustness of learned representations, especially in the additive noise and mismatched microphone scenario. The gain of the octave convolutions was also observed for AMI MDM data with significant reverberation, when applied to the input feature space. However, the model performance for AMI MDM was not improved with multi-octave convolutions. More careful tuning of the $\alpha $ hyperparameter could improve the results, as it controls the ratio of multi-scale feature maps in the model, enabling both the learning of fine-grained representations that preserve the details necessary for phonetic discrimination and of smoothed, more invariant representations that improve the robustness of the model. It would also be possible to set $\alpha $ layer-by-layer to enable the fractions of channels at different resolutions to vary according to the depth of the representation. We proposed a single projection layer MSE loss to measure the affine relationship of clean and noisy hidden representations. With this approach, we evaluated the robustness of the encodings and improved the explainability of our models. More thorough analysis of the representations learned is an interesting future direction.
The resolution of the low-frequency feature maps is reduced by an octave – height and width dimensions are divided by 2. In this work, we explore spatial reduction by up to 3 octaves – dividing by $2^t$, where $t=1,2,3$ – and for up to 4 groups. We refer to such a layer as a multi-octave convolutional (MultiOctConv) layer,
320d72a9cd19b52c29dda9ddecd520c9938a717f
320d72a9cd19b52c29dda9ddecd520c9938a717f_0
Q: Does this paper address the variation among English dialects regarding these hedges? Text: Introduction The modelling of natural language relies on the idea that languages are compositional, i.e. that the meaning of a sentence is a function of the meanings of the words in the sentence, as proposed by BIBREF0 . Whether or not this principle tells the whole story, it is certainly important as we undoubtedly manage to create and understand novel combinations of words. Fuzzy set theory has long been considered a useful framework for the modelling of natural language expressions, as it provides a functional calculus for concept combination BIBREF1 , BIBREF2 . A simple example of compositionality is hedged concepts. Hedges are words such as `very', `quite', `more or less', `extremely'. They are usually modelled as transforming the membership function of a base concept to either narrow or broaden the extent of application of that concept. So, given a concept `short', the term `very short' applies to fewer objects than `short', and `quite short' to more. Modelling a hedge as a transformation of a concept allows us to determine membership of an object in the hedged concept as a function of its membership in the base concept, rather than building the hedged concept from scratch BIBREF3 . Linguistic hedges have been widely applied, including in fuzzy classifiers BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 and database queries BIBREF8 , BIBREF9 . Using linguistic hedges in these applications allows increased accuracy in rules or queries whilst maintaining human interpretability of results BIBREF10 , BIBREF11 . This motivates the need for a semantically grounded account of linguistic hedges: if hedged results are more interpretable then the hedges used must themselves be meaningful. In the following we provide an account of linguistic hedges that is both functional, and semantically grounded. In its most basic formulation, the operation requires no additional parameters, although we also show that the formulae can be generalised if necessary. Our account of linguistic hedges uses the label semantics framework to model concepts BIBREF12 . This is a random set approach which quantifies an agent's subjective uncertainty about the extent of application of a concept. We refer to this uncertainty as semantic uncertainty BIBREF13 to emphasise that it concerns the definition of concepts and categories, in contrast to stochastic uncertainty which concerns the state of the world. In BIBREF13 the label semantics approach is combined with conceptual spaces BIBREF14 and prototype theory BIBREF15 , to give a formalisation of concepts as based on a prototype and a threshold, located in a conceptual space. This approach is discussed in detail in section "Conceptual Spaces" . An outline of the paper is then as follows: section "Approaches to linguistic hedges" discusses different approaches to linguistic hedges from the literature, and compares these with our model. Subsequently, in section "Label semantics approach to linguistic hedges" , we give formulations of the hedges `very' and `quite'. These are formed by considering the dependence of the threshold of a hedged concept on the threshold of the original concept. We give a basic model and two generalisations, show that the models can be composed and investigate the behaviour in the limit of composition. Section "Discussion" compares our results to those in the literature and proposes further lines of research. 
Prototype theory and fuzzy set theory Prototype theory views concepts as being defined in terms of prototypes, rather than by a set of necessary and sufficient conditions. Elements from an underlying metric space then have graded membership in a concept depending on their similarity to a prototype for the concept. There is some evidence that humans use natural categories in this way, as shown in experiments reported in BIBREF15 . Fuzzy set theory BIBREF1 was proposed as a calculus for combining and modifying concepts with graded membership, and extended these ideas in BIBREF2 to linguistic variables as variables taking words as values, rather than numbers. For example, `height' can be viewed as a linguistic variable taking values `short,' `tall', `very tall', etc. The variable relates to an underlying universe of discourse $\Omega $ , which for the concept `tall' could be $\mathbb {R}^+$ . Then each value $L$ of the variable is associated with a fuzzy subset of $\Omega $ , and a function $\mu _L:\Omega \rightarrow [0,1]$ associates with each $x \in \Omega $ the value of its membership in $L$ . Prototype theory gives a semantic basis to fuzzy sets through the notion of similarity to a prototype, as described in BIBREF16 . In this context, concepts are represented by fuzzy sets and membership of an element in a concept is quantified by its similarity to the prototype. In this situation the fuzziness of the concept is seen as inherent to the concept. An alternative interpretation for fuzzy sets is random set theory, see BIBREF16 for an exposition. Here, the fuzziness of a set comes from uncertainty about a crisp set, i.e. semantic uncertainty, rather than fuzziness inherent in the world. This second approach is the stance taken by BIBREF13 , and which we now adopt in this paper. Conceptual Spaces Conceptual spaces are proposed by Gärdenfors in BIBREF14 as a framework for representing information at the conceptual level. Gärdenfors contrasts his theory with both a symbolic, logical approach to concepts, and an associationist approach where concepts are represented as associations between different kinds of basic information elements. Rather, conceptual spaces are geometrical structures based on quality dimensions such as weight, height, hue, brightness, etc. It is assumed that conceptual spaces are metric spaces, with an associated distance measure. This might be Euclidean distance, or any other appropriate metric. The distance measure can be used to formulate a measure of similarity, as needed for prototype theory - similar objects are close together in the conceptual space, very different objects are far apart. To develop the conceptual space framework, Gärdenfors also introduces the notion of integral and separable dimensions. Dimensions are integral if assignment of a value in one dimension implies assignment of a value in another, such as depth and breadth. Conversely, separable dimensions are those where there is no such implication, such as height and sweetness. A domain is then defined as a set of quality dimensions that are separable from all other dimensions, and a conceptual space is defined as a collection of one or more domains. Gärdenfors goes on to define a property as a convex region of a domain in a conceptual space. A concept is defined as a set of such regions that are related via a set of salience weights. This casting of (at least) properties as convex regions of a domain sits very well with prototype theory, as Gärdenfors points out. 
If properties are convex regions of a space, then it is possible to say that an object is more or less central to that region. Because the region is convex, its centroid will lie within the region, and this centroid can be seen as the prototype of the property. Label Semantics The label semantics framework was proposed by BIBREF12 and related to prototype theory and conceptual spaces in BIBREF13 . In this framework, agents use a set of labels $LA = \lbrace L_1, L_2, ..., L_n\rbrace $ to describe an underlying conceptual space $\Omega $ which has a distance metric $d(x,y)$ between points. In fact, it is sufficient that $d(x,y)$ be a pseudo-distance. When $x$ or $y$ is a set, say $Y$ , we take $d(x,Y) = \text{min}\lbrace d(x,y): y \in Y\rbrace $ . In this case, the set $Y$ is seen as an ontic set, i.e., a set where all elements are jointly prototypes, as opposed to an epistemic set describing a precise but unknown prototype, as described in BIBREF17 . Each label $L_i$ is associated with firstly a set of prototype values $P_i \subseteq \Omega $ , and secondly a threshold $\varepsilon _i$ , about which the agents are uncertain. The thresholds $\varepsilon _i$ are drawn from probability distributions $\delta _{\varepsilon _i}$ . Labels $L_i$ are associated with neighbourhoods $\mathcal {N}^{\varepsilon _i}_{L_i} = \lbrace x \in \Omega : d(x, P_i) \le \varepsilon _i\rbrace $ . The neighbourhood can be seen as the extension of the concept $L_i$ . The intuition here is that $\mathcal {N}^{\varepsilon _i}_{L_i}$ captures the idea of being sufficiently close to prototypes $P_i$ . In other words, $x$ is sufficiently close to $P_i$ to be appropriately labelled as $L_i$ providing that $d(x, P_i) \le \varepsilon _i$ . Given an element $x \in \Omega $ , we can ask how appropriate a given label is to describe it. This is quantified by an appropriateness measure, denoted $\mu _{L_i}(x)$ . We are intentionally using the same notation as for the membership function of a fuzzy set. This quantity is the probability that the distance from $x$ to $P_i$ , the prototype of $L_i$ , is less than the threshold $\varepsilon _i$ , as given by: $ \mu _{L_i}(x) = P(\varepsilon _i : x \in \mathcal {N}^{\varepsilon _i}_{L_i}) = P(\varepsilon _i : d(x, P_i) \le \varepsilon _i) = \int _{d(x, P_i)}^\infty \delta _{\varepsilon _i}(\varepsilon _i) \mathrm {d}\varepsilon _i $ We also use the notation $\int _{d}^\infty \delta _{\varepsilon _i} (\varepsilon _i)\mathrm {d}\varepsilon _i = \Delta _i(d)$ , according to which $\mu _{L_i}(x) = \Delta _i(d(x, P_i))$ . The above formulation provides a link to the random set interpretation of fuzzy sets. Random sets are random variables taking sets as values. If we view $\mathcal {N}^{\varepsilon _i}_{L_i}$ as a random set from $\mathbb {R}^+$ into $2^\Omega $ , then $\mu _{L_i}(x)$ is the single point coverage function of $\mathcal {N}^{\varepsilon _i}_{L_i}$ , as defined in BIBREF18 , and also commonly called a contour function BIBREF19 . Labels can often be semantically related to each other. For example, the label `pet fish' is semantically related to the labels `pet' and `fish', and the label `very tall' related to the label `tall'. This prompts two questions: firstly, how the prototypes of each concept are related to each other, and secondly, how the thresholds of each concept are related. Two simple models for the relationships between the thresholds are given in BIBREF13 . The consonant model takes all thresholds as being dependent on one common underlying threshold.
So, all thresholds have the same distance metric $d$ and are related to a base threshold $\varepsilon $ by the dependency that $\varepsilon _i = f_i(\varepsilon )$ for increasing functions $f_i$ . In contrast, the independence model takes all thresholds as being independent of each other. This might hold when labels are taken from different conceptual spaces. Between these two extremes, we model dependencies between thresholds as a Bayesian network - i.e., a directed acyclic graph whose edges encode conditional dependence between variables. The key property of this type of network is that the joint distribution of all variables can be broken into factors that depend only on each individual variable and its parents. So, for example, the network in figure 1 can be factorised as $\delta (\varepsilon _1,\varepsilon _2, \varepsilon _3, \varepsilon _4, \varepsilon _5) = \delta _{\varepsilon _1}(\varepsilon _1)\delta _{\varepsilon _2}(\varepsilon _2)\delta _{\varepsilon _3|\varepsilon _1, \varepsilon _2}(\varepsilon _3|\varepsilon _1,\varepsilon _2)\delta _{\varepsilon _4|\varepsilon _2}(\varepsilon _4|\varepsilon _2)\delta _{\varepsilon _5|\varepsilon _3}(\varepsilon _5|\varepsilon _3)$ . This enables calculation of the joint distribution and therefore marginal distributions in an efficient manner. One intuitively easy example is where the dependency of one threshold $\varepsilon _2$ on another $\varepsilon _1$ is that $\varepsilon _2 \le \varepsilon _1$ . This could be taken to model the dependency of the threshold of the concept `very tall' on the threshold of `tall'. The label `very tall' should be appropriate to describe fewer people than the label `tall'. Therefore, the threshold for describing someone as `very tall' will be narrower than the threshold for describing someone as `tall', i.e. $\varepsilon _{\text{very tall}} \le \varepsilon _{\text{tall}}$ . This simple model will form part of the approach to modelling linguistic hedges, as outlined in the sequel. Approaches to linguistic hedges Linguistic hedges have been given varying treatments in the literature. In this section we summarise these different approaches and state the approach that we wish to take, discussing properties that hedge modifiers may need. We give two specific approaches from the literature with which we will compare our results. In BIBREF3 the idea of linguistic hedges as operators modifying fuzzy sets was introduced, so that the membership function $\mu _{hL}(x)$ of a hedged concept, $hL$ , is a function of the membership of the base concept $L$ , i.e. $\mu _{hL}(x) = f(\mu _L(x))$ . Furthermore, truth can be considered as a linguistic variable and hence a fuzzy set BIBREF2 , so that the application of a hedge can be seen as modifying the truth value of a sentence using that concept BIBREF20 , BIBREF21 , BIBREF2 . This second view is useful in approximate reasoning, and allows for an algebraic approach to investigating the properties of linguistic hedges, as introduced in BIBREF2 , and expanded upon in BIBREF22 , BIBREF20 , BIBREF21 . The approach we take, however, is to view a hedge as modifying the fuzzy set associated with a concept directly, as taken by BIBREF23 , BIBREF10 , BIBREF24 , BIBREF25 . Rather than examining the algebraic properties of hedges or their role in reasoning, we look at how hedges are semantically grounded and argue that our approach provides a particularly clear semantics. We will propose a set of operations that may be used for both expansion and refinement of single concepts. 
This is in contrast to the work presented in BIBREF26 in which information coarsening is effected by taking disjunctions of labels. The idea of a hedged concept has some similarities to that of the bipolar model of concepts described in BIBREF27 , since if it is appropriate to describe someone as `very tall', it must be appropriate to describe them as `tall', and similarly describing someone as `quite tall' implies that it is not entirely inappropriate to describe them as `tall'. However, we see the concepts derived by application of hedges as labels in their own right which can be used to describe data or objects. Zadeh divides hedges into two types. A type 1 hedge can be seen as an operator acting on a single fuzzy set. Examples are `very', `more or less', `quite', or `extremely' BIBREF3 . Type 2 hedges are more complicated and include modifiers such as `technically' or `practically'. In BIBREF3 concepts are considered as made up of various different components, with the membership function a weighted sum of the memberships of the individual components. Type 1 hedges operate on all components equally, whereas type 2 hedges differentiate between components. For example, the hedge `essentially' might give more weight to the most important components in a concept. Type 2 hedges are further explored in BIBREF28 , BIBREF29 , where components of a concept are categorised as definitional, primary or secondary, and the hedges `technically', `strictly speaking' and `loosely speaking' are analysed in terms of these categories. Although in the following we restrict ourselves to consideration of type 1 hedges only, the treatment of concepts as having different components is mirrored by the conceptual spaces view, where each component might be seen as a dimension in the conceptual space. Further development of the framework may therefore allow a treatment of type 2 hedges. A further distinction between types of hedge lies in the difference between powering or shifting modifiers. Powering modifiers are of the form $\mu _{hL}(x) = (\mu _L(x))^k$ , where $hL$ refers to the hedged concept and $k$ is some real value, and shifting modifiers are of the form $\mu _{hL}(x) = (\mu _L(x-a))$ . Zadeh introduces both types of modifier in his discussion of type 1 hedges BIBREF3 , however his powering modifiers are most frequently cited. These are the concentration operator $CON(\mu _{\text{tall}}(x)) = (\mu _{\text{tall}}(x))^2$ , and the dilation operator $DIL(\mu _{\text{tall}}(x)) = (\mu _{\text{tall}}(x))^\frac{1}{2}$ , which are often taken to implement the hedges `very' and `quite', (alternatively `more or less'), respectively. The operators $CON$ and $DIL$ leave the core, $\lbrace x \in \Omega : \mu _L(x) = 1\rbrace $ , and support $\lbrace x \in \Omega : \mu _L(x) \ne 0\rbrace $ , of the fuzzy sets unchanged, which is often argued to be undesirable BIBREF9 , BIBREF23 , BIBREF25 , BIBREF30 . In particular, BIBREF9 argue that in a fuzzy database, if a concentrating hedge is being used to refine a query that is returning too many objects, the hedge needs to reduce the number of objects returned, and hence narrow down the core. Furthermore, BIBREF7 find that classifiers using the $CON$ and $DIL$ operators (classical hedges) do not perform as well as those with hedges that modify the core and support of the fuzzy sets. In contrast, Zadeh himself argues that the core should not be altered. 
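A small numerical sketch in plain Python (membership values chosen purely for illustration) makes explicit how the powering operators behave at the core and at the support boundary:

def con(mu):   # Zadeh's concentration operator, often read as `very'
    return mu ** 2

def dil(mu):   # Zadeh's dilation operator, often read as `quite' / `more or less'
    return mu ** 0.5

for mu in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"mu={mu:.2f}  very={con(mu):.3f}  quite={dil(mu):.3f}")
# mu=0.00  very=0.000  quite=0.000   <- support boundary unchanged
# mu=0.25  very=0.062  quite=0.500
# mu=0.50  very=0.250  quite=0.707
# mu=0.75  very=0.562  quite=0.866
# mu=1.00  very=1.000  quite=1.000   <- core unchanged

Both $\mu _L(x) = 1$ and $\mu _L(x) = 0$ are fixed points of $CON$ and $DIL$, which is exactly the behaviour at issue in this debate.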
The application of a modifier `very' to a property given by a crisp set should leave that property unchanged: `very square' is the same as `square'. A fuzzy set is made up of a non-fuzzy part, the core, and a fuzzy part, $\lbrace x \in \Omega : 0 < \mu _L(x) <1\rbrace $ . Since the core of a fuzzy set is a crisp set, it should be left unchanged. The use of classical hedges does improve performance over non-hedged fuzzy rules in expert systems BIBREF4 , BIBREF5 , BIBREF6 , so the argument against classical hedges is a matter of degree. The use of the $CON$ and $DIL$ operators to model the hedges `very' and `quite' is further criticised on the basis that the modifiers are arbitrary and semantically ungrounded. No justification is given for these modifiers other than that they have what seem to be intuitively the right properties BIBREF23 , BIBREF31 , BIBREF25 . Grounding hedges semantically is important for a theoretical account of what happens when we use terms like `very' and also for retaining interpretability in fuzzy systems. BIBREF23 , BIBREF31 both ground modifiers using a resemblance relation which takes into account how objects in the universe are similar to each other. BIBREF25 takes a horizon shifting approach. In BIBREF25 the class of finite numbers is used as an example of the horizon shifting approach. Some numbers are certainly finite, however as numbers get larger, finiteness becomes impossible to verify. Mapping this idea onto the concept `small', we can say that there is a class of numbers that are definitely small, say $[0, c]$ . As numbers get larger than $c$ we approach the horizon past which the concept `small' no longer applies, expressed as $1 - \epsilon (x)(x-c)$ . So: $ \mu _{\text{small}}(x) = {\left\lbrace \begin{array}{ll} 1 & \text{if } x \in [0, c] \\ 1 - \epsilon (x)(x-c) & \text{if } x \ge c \end{array}\right.} $ Now, to implement the hedge `very', the horizon $c$ is shifted by a factor $\sigma $ and the membership function altered thus: $ \mu _{\text{very small}}(x) = {\left\lbrace \begin{array}{ll} 1 & \text{if } x \in [0, \sigma c] \\ 1 - \epsilon (x, \sigma )(x- \sigma c) & \text{if } x \ge \sigma c \end{array}\right.} $ In BIBREF25 , examples of different kinds of membership functions that might be used to implement this idea are given. A linear membership function gives $\epsilon (x) = \frac{1}{a-c}$ where $a$ is the upper limit of the membership function. To implement the hedge, the function $\epsilon (x, \sigma ) = \frac{1}{\sigma (a-c)}$ is introduced, giving $ \mu _{\text{small}}(x) = {\left\lbrace \begin{array}{ll} 1 & \text{if } x \in [0, c] \\ 1 - \frac{x-c}{a-c} & \text{if } x \in [c, a]\\ 0 & \text{otherwise} \end{array}\right.} $ and $ \mu _{\text{very small}}(x) = {\left\lbrace \begin{array}{ll} 1 & \text{if } x \in [0, \sigma c] \\ 1 - \frac{x- \sigma c}{\sigma (a-c)} & \text{if } x \in [\sigma c, \sigma a]\\ 0 & \text{otherwise} \end{array}\right.} $ BIBREF23 , BIBREF31 both ground their approaches in the idea of looking at the elements near a fuzzy set in order to contract or dilate the set. The two approaches are similar, so we restrict ourselves to that of BIBREF23 . This approach introduces a fuzzy resemblance relation on the universe of discourse, and either a $T$ -norm in the case of dilation, or a fuzzy implicator for concentration. The modifier is then implemented as follows. Consider a fuzzy set $F$ and a proximity relation $E^Z$ which is approximate equality, parametrised by a fuzzy set $Z$ . 
As described in BIBREF23 , $E$ is modelled by $(u,v) \rightarrow E(u,v) = Z(u - v)$ , where $Z$ is a fuzzy interval centred on 0 with finite support. In terms of a trapezoidal membership function, $Z$ can be expressed as $(-z - a, -z, z, z+a)$ . Therefore, if $|u -v| \le z$ , $u$ and $v$ are judged to be approximately equal, i.e. $E(u,v) = 1$ . The set $F$ is dilated by $E^Z(F)(s) = \text{sup}_{r \in \Omega } T(F(r), E^Z(s,r))$ , where $T$ is any $T$ -norm, $min$ being the standard. To understand the effect that this has on a fuzzy set $F$ , suppose that $F$ has a trapezoidal membership function $(A, B, C, D)$ where $[B, C]$ is the core of $F$ and $[A,B]$ , $[C,D]$ the support, and that $Z$ similarly is $(-z - a, -z, z, z+a)$ , with the T-norm $min$ used. Then $E^Z(F) = (A - z - a, B - z, C + z, D + z + a)$ . Concentration is effected in a similar way: $E_Z(F)(s) = \text{inf}_{r \in \Omega } I(F(r), E^Z(s,r))$ , where I is a fuzzy implication. If $F$ and $Z$ are as above with the condition that $C - B \ge 2z$ , and $I$ is the Gödel implication, then $E_Z(F)$ = $(A + z + a, B + z, C - z, D - z - a)$ . For example, suppose we start with a set $F$ described in trapezoidal notation as $F = (A, B, C, D) = (2,4,6,8)$ , and an approximate equality function parametrised by $Z =(-z-a, -z, z, z + a) = (-1, -0.5, 0.5, 1)$ . The dilation of the set $F$ using T-norm $min$ is then: $ E^Z(F) = (A - z - a, B - z, C +z, D +z +a) = (1, 3.5, 6.5, 9) $ The concentration of the set $F$ using the Gödel implication is: $ E_Z(F) = (A + z + a, B + z, C - z, D - z - a) = (3, 4.5, 5.5, 7) $ These effects are illustrated in figure 2 . The intuitive idea behind this approach is that if an object $x_1$ resembles another object $x_2$ that is $L$ , then $x_1$ can be said to be `quite $L$ '. Conversely, object $x_2$ that is $L$ can be said to be `very $L$ ' only if all the objects $x$ that resemble it can be said to be $L$ . This formulation alters both the core and support of the fuzzy set $F$ , which has been argued to be a desirable effect. Following BIBREF23 , BIBREF31 , BIBREF25 , we will propose linguistic modifiers that are semantically grounded rather than attempting to show their utility in classifiers, reasoning or to examine the algebra of modifiers. Our approach to linguistic modifiers arises very naturally from the label semantics framework, and the primary result does not require any parameters additional to the original membership function of the concept. We also show similarities between our model and the two detailed above. Label semantics approach to linguistic hedges We present three formulations of linguistic hedges with increasing levels of generality. The first assumes that prototypes are equal. Secondly, we show that an analogue holds where prototypes are not equal, and thirdly that these hold in the case where the second threshold is a function of the first. We go on to show similarities between our model and those of BIBREF23 , BIBREF31 , BIBREF25 . Furthermore, we show that hedges are compositional, and look at their behaviour in the limit of composition. As described in section "Approaches to linguistic hedges" , $LA$ denotes a finite set of labels $\lbrace L_i\rbrace $ that agents use to describe basic categories. $\Omega $ is the underlying domain of discourse, with prototypes $P_i \in \Omega $ and thresholds $\varepsilon _i$ , drawn from a distribution $\delta _{\varepsilon _i}$ . As before, the appropriateness $\mu _{L_i}(x) = \Delta _i(d(x, P_i)) = \int _{d(x, P_i)}^\infty \delta _{\varepsilon _i}(\varepsilon _i)\mathrm {d}\varepsilon _i$ .
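As a concrete instance of this appropriateness measure, the following short sketch in plain Python evaluates $\mu _L$ for a label on the real line with a uniform threshold distribution (the prototype $P = 5$ and the threshold range $[0,3]$ are the illustrative values of the running example used below):

def appropriateness(x, prototype=5.0, eps_max=3.0):
    """mu_L(x) = P(d(x, P) <= eps) with eps ~ Uniform[0, eps_max] and Euclidean d."""
    d = abs(x - prototype)
    return max(0.0, 1.0 - d / eps_max)

print([round(appropriateness(x), 3) for x in (2.0, 3.5, 5.0, 6.5, 8.0)])
# [0.0, 0.5, 1.0, 0.5, 0.0]  -- a triangular appropriateness centred on the prototype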
We use the notation $L_i = <P_i, d, \delta _{\varepsilon _i}>$ . A concept $L_1$ can be narrowed or broadened to a second concept $L_2$ using the linguistic hedges `very' and `quite' respectively, i.e. $L_2$ is defined as `quite $L_1$ '. The directed acyclic graph illustrating this dependency is given in figure 3 . In this case, the threshold $\varepsilon _2$ associated with $L_2$ is dependent on $\varepsilon _1$ in that $\varepsilon _2 \ge \varepsilon _1$ . In the case of `very', we have that $\varepsilon _2 \le \varepsilon _1$ . Essentially, for `quite', we are saying that however wide a margin of certainty we apply the label `tall' with, the margin for `quite tall' will be wider, and conversely for `very'. Hedges with unmodified prototypes Definition 1 (Dilation and Concentration) A label $L_2 = <P_2, d, \delta _{\varepsilon _2}>$ is a dilation of a label $L_1 = <P_1, d, \delta _{\varepsilon _1}>$ when $\varepsilon _2$ is dependent on $\varepsilon _1$ such that $\varepsilon _2 \ge \varepsilon _1$ . $L_2$ is a concentration of $L_1$ when $\varepsilon _2$ is dependent on $\varepsilon _1$ such that $\varepsilon _2 \le \varepsilon _1$ . Theorem 2 ( $L_2 = $ quite $L_1$ ) Suppose $L_2 = <P_2, d, \delta _{\varepsilon _2}>$ is a dilation of $L_1 = <P_1, d, \delta _{\varepsilon _1}>$ , so that $\varepsilon _2 \ge \varepsilon _1$ . Suppose also that $P_1 = P_2 = P$ , and that the marginal (unconditional) distribution of $\varepsilon _2$ , before conditioning on the knowledge that $\varepsilon _2 \ge \varepsilon _1$ , is identical to $\delta _{\varepsilon _1}$ , since $L_2$ is a dilation of $L_1$ . Then $\mu _{L_2}(x) = \mu _{L_1}(x) - \mu _{L_1}(x)\ln (\mu _{L_1}(x))$ , $\forall x \in \Omega $ . Conditioning on $\varepsilon _2 \ge \varepsilon _1$ gives $\delta _{\varepsilon _2|\varepsilon _1}(\varepsilon _2|\varepsilon _1) = \frac{\delta _{\varepsilon _1}(\varepsilon _2)}{\Delta _1(\varepsilon _1)}$ for $\varepsilon _2 \ge \varepsilon _1$ and 0 otherwise, and hence, $ \delta (\varepsilon _1, \varepsilon _2) &= \delta _{\varepsilon _1}(\varepsilon _1)\delta _{\varepsilon _2|\varepsilon _1}(\varepsilon _2|\varepsilon _1) = {\left\lbrace \begin{array}{ll} \frac{\delta _{\varepsilon _1}(\varepsilon _1)\delta _{\varepsilon _1}(\varepsilon _2)}{\Delta _1(\varepsilon _1)} & \text{if } \varepsilon _2 \ge \varepsilon _1\\ 0 & \text{otherwise} \end{array}\right.} $ Then since $\varepsilon _2 \ge \varepsilon _1$ we have that $ \mu _{L_2}(x) &= \int _0^\infty \int _{max(\varepsilon _1, d(x,P))}^\infty \delta (\varepsilon _1, \varepsilon _2) d\varepsilon _2 d\varepsilon _1 = \int _0^\infty \int _{max(\varepsilon _1, d(x,P))}^\infty \frac{\delta _{\varepsilon _1}(\varepsilon _1)\delta _{\varepsilon _1}(\varepsilon _2)}{\Delta _1(\varepsilon _1)} d\varepsilon _2 d\varepsilon _1\\ &= \int _0^{d(x,P)}\frac{\delta _{\varepsilon _1}(\varepsilon _1)}{\Delta _1(\varepsilon _1)}\int _{d(x,P)}^\infty \delta _{\varepsilon _1}(\varepsilon _2) d\varepsilon _2 d\varepsilon _1 + \int _{d(x,P)}^\infty \frac{\delta _{\varepsilon _1}(\varepsilon _1)}{\Delta _1(\varepsilon _1)}\int _{\varepsilon _1}^\infty \delta _{\varepsilon _1}(\varepsilon _2) d\varepsilon _2 d\varepsilon _1\\ &= \mu _{L_1}(x)\int _0^{d(x,P)}\frac{\delta _{\varepsilon _1}(\varepsilon _1)}{\Delta _1(\varepsilon _1)} d\varepsilon _1 + \int _{d(x,P)}^\infty \delta _{\varepsilon _1}(\varepsilon _1) d\varepsilon _1 = \mu _{L_1}(x) - \mu _{L_1}(x)\ln (\mu _{L_1}(x)) $ The following example gives an illustration of the effect of applying this hedge, in comparison with the standard dilation hedge $(\mu _L(x))^{1/2}$ . Example 3 Suppose our conceptual space $\Omega = \mathbb {R}$ with Euclidean distance and that a label $L$ has prototype $P = 5$ , and threshold $\varepsilon \sim $ Uniform $[0,3]$ .
Then $ \mu _L(x) = {\left\lbrace \begin{array}{ll} 1 - \frac{|x - 5|}{3} & \text{ if } x \in [2, 8]\\ 0 & otherwise \end{array}\right.} $ We can then form a new label $qL$ with prototype $P_q = P = 5$ and threshold $\varepsilon _q \ge \varepsilon $ . Then, according to Theorem 2 , $\mu _{qL}(x) = \mu _L(x) - \mu _L(x)\ln \mu _L(x)$ . The effect of applying a dilation hedge to $L$ can be seen in figure 4 . The dilation hedge given above is contrasted with Zadeh's dilation hedge $(\mu _L(x))^{1/2}$ . Theorem 4 ( $L_2 = $ very $L_1$ ) Suppose $L_2 = <P_2, d, \delta _{\varepsilon _2}>$ is a concentration of $L_1 = <P_1, d, \delta _{\varepsilon _1}>$ , so that $\varepsilon _2 \le \varepsilon _1$ . Suppose also that $P_1 = P_2 = P$ , and that the marginal (unconditional) distribution of $\varepsilon _2$ , before conditioning on the knowledge that $\varepsilon _2 \le \varepsilon _1$ , is identical to $\delta _{\varepsilon _1}$ , since $L_2$ is a concentration of $L_1$ . Then $\mu _{L_2}(x) = \mu _{L_1}(x) + (1-\mu _{L_1}(x))\ln (1 - \mu _{L_1}(x))$ , $\forall x \in \Omega $ . Conditioning on $\varepsilon _2 \le \varepsilon _1$ gives $\delta _{\varepsilon _2|\varepsilon _1}(\varepsilon _2|\varepsilon _1) = \frac{\delta _{\varepsilon _1}(\varepsilon _2)}{1-\Delta _1(\varepsilon _1)}$ for $\varepsilon _2 \le \varepsilon _1$ and 0 otherwise, and hence, $ \delta (\varepsilon _1, \varepsilon _2) &= \delta _{\varepsilon _1}(\varepsilon _1)\delta _{\varepsilon _2|\varepsilon _1}(\varepsilon _2|\varepsilon _1) = {\left\lbrace \begin{array}{ll} \frac{\delta _{\varepsilon _1}(\varepsilon _1)\delta _{\varepsilon _1}(\varepsilon _2)}{1-\Delta _1(\varepsilon _1)} & \text{if } \varepsilon _2 \le \varepsilon _1\\ 0 & \text{otherwise} \end{array}\right.} $ So since $\varepsilon _2 \le \varepsilon _1$ we have that: $ \mu _{L_2}(x) &= \int _0^\infty \int _{min(\varepsilon _1, d(x,P))}^{\varepsilon _1} \delta (\varepsilon _1, \varepsilon _2) d\varepsilon _2 d\varepsilon _1 = \int _0^\infty \int _{min(\varepsilon _1, d(x,P))}^{\varepsilon _1} \frac{\delta _{\varepsilon _1}(\varepsilon _1)\delta _{\varepsilon _1}(\varepsilon _2)}{1-\Delta _1(\varepsilon _1)} d\varepsilon _2 d\varepsilon _1\\ &= \int _{d(x,P)}^\infty \frac{\delta _{\varepsilon _1}(\varepsilon _1)}{1 - \Delta _1(\varepsilon _1)} \int _{d(x,P)}^{\varepsilon _1} \delta _{\varepsilon _1}(\varepsilon _2) d\varepsilon _2 d\varepsilon _1\\ &= \int _{d(x,P)}^\infty \frac{\delta _{\varepsilon _1}(\varepsilon _1)}{1 - \Delta _1(\varepsilon _1)} \left( \int _0^{\varepsilon _1} \delta _{\varepsilon _1}(\varepsilon _2) d\varepsilon _2 - \int _0^{d(x,P)} \delta _{\varepsilon _1}(\varepsilon _2) d\varepsilon _2 \right) d\varepsilon _1\\ &= \mu _{L_1}(x) - (1-\mu _{L_1}(x))\int _{d(x,P)}^\infty \frac{\delta _{\varepsilon _1}(\varepsilon _1)}{1 - \Delta _1(\varepsilon _1)} d\varepsilon _1 = \mu _{L_1}(x) + (1-\mu _{L_1}(x))\ln (1 - \mu _{L_1}(x)) $ The effect of these hedges is illustrated in the following example. Example 5 Suppose the label $L$ is as described in Example 3 . We can form a new label $vL$ with prototype $P_v = P = 5$ and threshold $\varepsilon _v \le \varepsilon $ . Then, according to Theorem 4 , $\mu _{vL}(x) = \mu _L(x) + (1 - \mu _L(x))\ln (1- \mu _L(x))$ . The effect of applying this contraction hedge is seen in figure 5 , and again, this concentration hedge is contrasted with Zadeh's concentration hedge $(\mu _L(x))^2$ . These hedges can also be applied across multiple dimensions, demonstrated in the example below. Example 6 Suppose we have two labels `tall' and `thin'. `Tall' has prototype $P_{\text{tall}} = 6.5$ ft and `thin' has prototype $P_{\text{thin}} = 24$ in.
The appropriateness of each label is defined by: $ \mu _{\text{tall}}(x_1) = {\left\lbrace \begin{array}{ll} 1 & if x_1 > 6.5\\ 1 - (P_{\text{tall}} - x_1) & if x_1 \in [5.5, 6.5]\\ 0 & \text{otherwise} \end{array}\right.} $ where the variable $x_1$ measures height $ \mu _{\text{thin}}(x_2) = {\left\lbrace \begin{array}{ll} 1 & if x_2 < 24\\ 1 - \frac{(x_2 - P_{\text{thin}})}{4} & if x_2 \in [24, 28]\\ 0 & \text{otherwise} \end{array}\right.} $ where $x_2$ measures waist size. Suppose further that being tall and being thin are independent of each other. The appropriateness of the label `tall and thin' could then be defined by: $ \mu _{\text{tall and thin}}(x_1, x_2) = \mu _{\text{tall}}(x_1)\mu _{\text{thin}}(x_2) $ If `tall' and `thin' are independent, we can treat their hedges separately, so the appropriateness of a label `very tall and quite thin' is: $ \mu _{\text{very tall and quite thin}}(x_1, x_2) &= \mu _{\text{very tall}}(x_1)\mu _{\text{quite thin}}(x_2)\\ & = (\mu _{\text{tall}}(x_1)+ (1 - \mu _{\text{tall}}(x_1)\ln (1 - \mu _{\text{tall}}(x_1))) (\mu _{\text{thin}}(x_2) - \mu _{\text{thin}}(x_2)\ln (\mu _{\text{thin}}(x_2))) $ This is illustrated in figure 6 Hedges with differing prototypes As they stand, the hedges proposed leave the core and support of the fuzzy sets unchanged, which is often argued to be undesirable BIBREF23 , BIBREF9 , BIBREF25 , BIBREF30 . A slight modification yields models of hedges in which the core, or prototype, of the concept has been changed. Theorem 7 (Dilation) Suppose that $L_2 = $ quite $L_1$ , as in theorem "Theorem 2 (L 2 =L_2 = quite L 1 L_1)" , but that $P_2 \ne P_1$ . Then $\mu _{L_2}(x) = \Delta _1(d(x,P_2)) - \Delta _1(d(x, P_2))\ln (\Delta _1(d(x, P_2)))$ . Substitute $\Delta _1(d(x, P_2))$ for $\mu _{L_1}(x)$ throughout proof of theorem "Theorem 2 (L 2 =L_2 = quite L 1 L_1)" Theorem 8 (Concentration) Suppose that $L_2 = $ very $L_1$ , as in theorem "Theorem 4 (L 2 =L_2 = very L 1 L_1)" , but that $P_2 \ne P_1$ . Then $\mu _{L_2}(x) = \Delta _1(d(x,P_2)) + (1 - \Delta _1(d(x, P_2)))\ln (1 - \Delta _1(d(x, P_2)))$ . As above. Corollary 9 If $\varepsilon _2 \ge \varepsilon _1$ and $P_2 \supseteq P_1$ then $\mu _{L_2}(x) \ge \mu _{L_1}(x) - \mu _{L_1}(x)\ln (\mu _{L_1}(x))$ , and if $\varepsilon _2 \le \varepsilon _1$ and $P_2 \subseteq P_1$ , then $\mu _{L_2}(x) \le \mu _{L_1}(x) + (1-\mu _{L_1}(x))\ln (1-\mu _{L_1}(x))$ . $\mu _{L_2}(x) = \Delta _1(d(x,P_2)) - \Delta _1(d(x, P_2))\ln (\Delta _1(d(x, P_2)))$ , but since $P_2 \supseteq P_1$ , $d(x, P_2) \le d(x, P_1)$ $\forall x \in \Omega $ , and so $\Delta _1(d(x, P_2)) \ge \Delta _1(d(x, P_1)) = \mu _{L_1}(x)$ $\forall x \in \Omega $ . Hence, $\mu _{L_2}(x) \ge \mu _{L_1}(x) - \mu _{L_1}(x)\ln (\mu _{L_1}(x))$ . A similar argument shows that $\mu _{L_2}(x) \le \mu _{L_1}(x) + (1-\mu _{L_1}(x))\ln (1-\mu _{L_1}(x))$ . Example 10 Suppose our conceptual space $\Omega = \mathbb {R}$ with Euclidean distance and that a label $L$ has prototype $P = [4.5,5.5]$ , and threshold $\varepsilon \sim $ Uniform $[0,3]$ . Then $ \mu _L(x) = {\left\lbrace \begin{array}{ll} 1 & \text{ if } x \in [4.5,5.5]\\ 1 - \frac{|x - 5|}{3} & \text{ if } x \in [1.5, 4.5] \text{ or } x \in [5.5, 8.5] \\ 0 & otherwise \end{array}\right.}• $ We form the concept $qL$ by setting the prototype to be $P_q = [4,6]$ , and $\varepsilon _q \ge \varepsilon $ . The effect of applying our dilation hedge is illustrated in figure 7 . 
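Before turning to the contracted counterpart of this example, the two transforms of Theorems 2 and 4 can be sketched in plain Python on the running example with $P = 5$ and $\varepsilon \sim $ Uniform $[0,3]$, alongside Zadeh's powering hedges for comparison (the evaluation points are arbitrary):

import math

def mu_L(x, prototype=5.0, eps_max=3.0):
    return max(0.0, 1.0 - abs(x - prototype) / eps_max)

def quite(mu):   # Theorem 2: dilation, epsilon_2 >= epsilon_1
    return mu - mu * math.log(mu) if 0.0 < mu else 0.0

def very(mu):    # Theorem 4: concentration, epsilon_2 <= epsilon_1
    return mu + (1.0 - mu) * math.log(1.0 - mu) if mu < 1.0 else 1.0

for x in (3.5, 5.0, 6.5):
    m = mu_L(x)
    print(f"x={x}: mu={m:.3f}  quite={quite(m):.3f}  very={very(m):.3f}  "
          f"mu^0.5={m ** 0.5:.3f}  mu^2={m ** 2:.3f}")
# x=3.5: mu=0.500  quite=0.847  very=0.153  mu^0.5=0.707  mu^2=0.250
# x=5.0: mu=1.000  quite=1.000  very=1.000  mu^0.5=1.000  mu^2=1.000
# x=6.5: mu=0.500  quite=0.847  very=0.153  mu^0.5=0.707  mu^2=0.250

With equal prototypes these transforms, like the powering hedges, leave the core and support unchanged; the modified-prototype versions of Theorems 7 and 8 follow by replacing $\mu _{L_1}(x)$ with $\Delta _1(d(x, P_2))$ in the same expressions.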
Conversely, suppose that a label $L$ has prototype $P = [4, 6]$ and threshold $\varepsilon \sim $ Uniform $[0,3]$ . Then $ \mu _L(x) = {\left\lbrace \begin{array}{ll} 1 & \text{ if } x \in [4,6]\\ 1 - \frac{|x - 5|}{3} & \text{ if } x \in [1, 4] \text{ or } x \in [6, 9]\\ 0 & otherwise \end{array}\right.}• $ We now form the concept $vL$ by contracting the prototype to $P_v = [4.5,5.5]$ and setting $\varepsilon _v \le \varepsilon $ . The effect of applying the contraction hedge is illustrated in figure 8 . Functions of thresholds It may be the case that the threshold of a given concept is greater than or less than a function of the original threshold. This could hold when a hedged concept is a very expanded or restricted version of the original concept, such as when the hedge `loosely' or `extremely' is used. Our formulae can also take account of this. Theorem 11 Suppose $L_2 = <P_2, d, \delta _{\varepsilon _2}>$ is a dilation of $L_1 = <P_1, d, \delta _{\varepsilon _1}>$ with $P_2 \ne P_1$ and $\varepsilon _2 \ge f(\varepsilon _1)$ , where $f: \mathbb {R} \rightarrow \mathbb {R}$ is strictly increasing or decreasing. Then $ \mu _{L_2}(x) = \Delta _1(f^{-1}(d(x,P_2))) - \Delta _1(f^{-1}(d(x,P_2)))\ln (\Delta _1(f^{-1}(d(x,P_2)))) $ Rewrite $\varepsilon _2 \ge f(\varepsilon _1)$ as $\varepsilon _2 \ge \varepsilon = f(\varepsilon _1)$ , where $\varepsilon \sim \delta $ and is associated with a label $L$ with prototype $P$ . Then: $ \mu _{L_2}(x) = \Delta (d(x,P_2)) - \Delta (d(x, P_2))\ln (\Delta (d(x, P_2))) $ as above Since $f: \mathbb {R} \rightarrow \mathbb {R}$ is strictly monotone, $f^{-1}$ exists, and $\Delta (d(x, P)) = P(d(x,P) \le \varepsilon ) = P(f^{-1}(d(x,P)) \le \varepsilon _1) = \Delta _1(f^{-1}(d(x,P)))$ . So $ \mu _{L_2}(x) = \Delta _1(f^{-1}(d(x,P_2))) - \Delta _1(f^{-1}(d(x,P_2)))\ln (\Delta _1(f^{-1}(d(x,P_2)))) $ as required. Theorem 12 Suppose $L_2 = <P_2, d, \delta _{\varepsilon _2}>$ is a concentration of $L_1 = <P_1, d, \delta _{\varepsilon _1}>$ with $P_2 \ne P_1$ and $\varepsilon _2 \le f(\varepsilon _1)$ , where $f: \mathbb {R} \rightarrow \mathbb {R}$ is strictly increasing or decreasing. Then $ \mu _{L_2}(x) = \Delta _1(f^{-1}(d(x,P_2))) +(1- \Delta _1(f^{-1}(d(x,P_2))))\ln (1 -\Delta _1(f^{-1}(d(x,P_2)))) $ The proof is entirely similar to that of theorem "Theorem 11" Links to other models of hedges It is possible to specify the dependence of the threshold of the hedged concept on the threshold of the unhedged concept purely deterministically, i.e. by $\varepsilon _2 = f(\varepsilon _1)$ , rather than $\varepsilon _2 \le f(\varepsilon _1)$ . In this case, we can show links to other models of hedges from the literature. A simple example of a deterministic dependency is given below. Example 13 Suppose $\Omega = \mathbb {R}$ , $d$ is Euclidean distance and that $L_1$ has prototype $P = 5$ and $\varepsilon \sim $ Uniform $[0,3]$ . Then as before, $ \mu _L(x) = {\left\lbrace \begin{array}{ll} 1 - \frac{|x - 5|}{3} & \text{ if } x \in [2, 8]\\ 0 & otherwise \end{array}\right.}• $ To implement a dilation hedge, we would form a new label $qL$ with $P_q = P = 5$ and $\varepsilon _q = k_q\varepsilon $ with $k_q > 1$ . For a contraction hedge, we would form the label $vL$ by setting $P_v = P = 5$ and $\varepsilon _v = k_v\varepsilon $ with $k_v <1$ . Then, $ \mu _{hL}(x) = {\left\lbrace \begin{array}{ll} 1 - \frac{|x - 5|}{3k} & \text{ if } x \in [5 - 3k, 5+ 3k]\\ 0 & otherwise \end{array}\right.}• $ where $h = q$ or $v$ , $k = k_q$ or $k_v$ respectively. 
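A corresponding sketch of these deterministic hedges in plain Python, using the same label and the illustrative scaling factors $k_q = 2$ and $k_v = 0.5$:

def mu_scaled(x, k, prototype=5.0, eps_max=3.0):
    """Deterministic hedge eps_h = k * eps applied to L with eps ~ Uniform[0, eps_max]."""
    return max(0.0, 1.0 - abs(x - prototype) / (k * eps_max))

for x in (3.5, 5.0, 6.5, 8.0):
    print(f"x={x}: L={mu_scaled(x, 1.0):.3f}  "
          f"quite (k=2)={mu_scaled(x, 2.0):.3f}  very (k=0.5)={mu_scaled(x, 0.5):.3f}")
# x=3.5: L=0.500  quite (k=2)=0.750  very (k=0.5)=0.000
# x=5.0: L=1.000  quite (k=2)=1.000  very (k=0.5)=1.000
# x=6.5: L=0.500  quite (k=2)=0.750  very (k=0.5)=0.000
# x=8.0: L=0.000  quite (k=2)=0.500  very (k=0.5)=0.000

Unlike the hedges of Theorems 2 and 4, scaling the threshold also widens or narrows the support of the label.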
The effect of implementing these hedges is illustrated in figure 9 . Using this approach, we can also create an effect similar to that of changing the prototype. Suppose that a label $L$ in a conceptual space $\Omega $ has a single point $P$ as a prototype, but that the minimum value of the threshold $\varepsilon $ is greater than 0, for example, $\varepsilon \sim $ Uniform $[c,a]$ . Then $ \mu _{L}(x) = {\left\lbrace \begin{array}{ll} 1 & \text{ if } d(x,P) < c\\ \frac{a}{a - c} - \frac{|x - P|}{a - c} & \text{ if } d(x, P) \in [a,c]\\ 0 & otherwise \end{array}\right.}• $ Suppose that a hedged concept $hL$ is formed from $L$ by the dependency $\varepsilon _h = k\varepsilon $ where $k$ is a constant. Then $ \mu _{L}(x) = \Delta (\frac{|x-P|}{k}) = {\left\lbrace \begin{array}{ll} 1 & \text{ if } \frac{|x-P|}{k} < c\\ \frac{a}{(a - c)} - \frac{|x - P|}{k(a - c)} & \text{ if } \frac{|x-P|}{k} \in [a,c]\\ 0 & otherwise \end{array}\right.}• = {\left\lbrace \begin{array}{ll} 1 & \text{ if } |x - P| < kc\\ \frac{ka - |x - P|}{k(a - c)} & \text{ if } |x-P| \in [ka,kc]\\ 0 & otherwise \end{array}\right.}• $ This effect is illustrated in the example below. Example 14 Suppose that the conceptual space $\Omega = \mathbb {R}$ and that a label $L$ has prototype $P = 5$ and threshold $\varepsilon \sim $ Uniform $[1, 2]$ . Then $ \mu _{L}(x) = {\left\lbrace \begin{array}{ll} 1 & \text{ if } |x - 5| < 1\\ 2 - |x - 5| & \text{ if } |x - 5| \in [1,2]\\ 0 & otherwise \end{array}\right.}• $ Forming a new label $qL$ by applying the hedge $\varepsilon _q = 2\varepsilon $ gives appropriateness measure $ \mu _{qL}(x) = {\left\lbrace \begin{array}{ll} 1 & \text{ if } |x - 5| < 2\\ \frac{4 - |x - 5|}{2} & \text{ if } |x - 5| \in [2,4]\\ 0 & otherwise \end{array}\right.}• $ Forming a new label $vL$ by applying the hedge $\varepsilon _v = 0.5\varepsilon $ gives appropriateness measure $ \mu _{vL}(x) = {\left\lbrace \begin{array}{ll} 1 & \text{ if } |x - 5| < 0.5\\ \frac{1 - |x - 5|}{0.5} & \text{ if } |x - 5| \in [0.5,1]\\ 0 & otherwise \end{array}\right.}• $ These are illustrated in figure 10 . Notice that if we set $\Omega = \mathbb {R}^+$ and label $L$ specified by $P = 0$ , $\varepsilon \sim $ Uniform $[c, a]$ , this is identical to the linear membership model given in BIBREF25 . Specifically, we have $ \mu _{L}(x) = {\left\lbrace \begin{array}{ll} 1 & \text{ if } x < c\\ \frac{a - x}{a-c} & \text{ if } x \in [c,a]\\ 0 & \text{otherwise} \end{array}\right.} $ Forming a hedged concept $hL$ by setting $P_{hL}=P=0$ and $\varepsilon _{hL} = k\varepsilon $ gives $ \mu _{hL}(x) = {\left\lbrace \begin{array}{ll} 1 & \text{ if } x < kc\\ \frac{ka - x}{ka-kc} & \text{ if } x \in [kc,ka]\\ 0 & \text{otherwise} \end{array}\right.} $ Comparing this with the model given in section "Approaches to linguistic hedges" , we see that this is precisely the model proposed by BIBREF25 in the linear case. Similarity between the hedging effects illustrated in figure 9 and the effects implemented in the model proposed in BIBREF23 , illustrated in figure 2 , can clearly be seen. To derive the model given in BIBREF23 , we describe the fuzzy sets associated with labels $L$ and $hL$ in trapezoidal notation. Notice that $L= (P-a, P - c, P+c, P+a)$ and $hL = (P-ka, P- kc, P+kc, P+ka)$ . We can render this transformation in the terms employed by BIBREF23 . Consider labels $L$ and $hL$ as fuzzy sets characterised by the appropriateness measure $\mu _{L}(x)$ and $\mu _{hL}(x)$ . 
Then, in the case of dilation, we have: $ qL = E^Z(L)(s) = \text{sup}_{r \in \Omega }T(\mu _{L}(s), E^Z(s,r)) $ and for contraction, $ vL = E_Z(L)(s) = \text{inf}_{r \in \Omega }I(\mu _{L}(s), E^Z(s,r)) $ When $T$ is the $T$ -norm $min$ , I is the Gödel implication and $Z = (-z-\alpha , -z, z, z+\alpha )$ (with the restriction that $c < z$ to ensure a well-defined set), the approach in BIBREF23 gives $qL = (P-a - z - \alpha , P - c - z, P+c + z, P+a + z + \alpha )$ . If we set $z = (k - 1)c$ and $\alpha = (k - 1)(a - c)$ , this is equal to $qL = (P-ka, P- kc, P+kc, P+ka)$ . However, we also require that $vL = (P-ka, P- kc, P+kc, P+ka)$ . The approach in BIBREF23 gives $vL = (P-a + z + \alpha , P - c + z, P+c - z, P+a - z - \alpha )$ , and we therefore need to set $z = (1-k)c$ and $\alpha = (1-k)(a-c)$ This formulation is not as general as given in BIBREF23 , however, note that it only uses one additional parameter and no additional operators, rather than the two parameters and either a $T$ -norm or implication used by BIBREF23 . Two more key models from the literature are the powering and shifting modifiers proposed in BIBREF3 . Recall that powering modifiers are of the form $\mu _{hL}(x) = (\mu _L(x))^k$ and shifting modifiers are of the form $\mu _{hL}(x) = (\mu _L(x-a))$ . Shifting modifiers are easy to implement within our model, simply by shifting the prototype by the quantity $a$ . Powering modifiers can be expressed as a function of the threshold $\varepsilon $ given a particular distribution of the threshold $\delta $ . Suppose $\Omega = \mathbb {R}$ , $\varepsilon \sim U[0,c]$ , giving $ \mu _L(x) = {\left\lbrace \begin{array}{ll} 1 - \frac{d(x, P)}{b} & \text{ if } x \in [P-b, P+b]\\ 0 & otherwise \end{array}\right.}• $ and suppose a new label $hL$ is formed with prototype $P$ and threshold $\varepsilon _h = f(\varepsilon )$ such that $\mu _{hL}(x) = \mu _L(x)^k$ . Then $\mu _{hL}(x) = \Delta (f^{-1}(d(x,P))) = (\Delta (d(x,P)))^k$ , so $ f^{-1}(d(x,P)) = \Delta ^{-1}((\Delta (d(x,P)))^k) = b - b(\frac{(b - d(x, P))^k}{b^k}) = b - \frac{(b - d(x,P))^k}{b^{k-1}} $ and hence $ \varepsilon _{hL} = f(\varepsilon ) = b - (b^{k-1}(b - \varepsilon ))^{1/k} $ This expression seems surprisingly complicated, and there may be better ways of deriving the powering hedges that are not as a function of the threshold $\varepsilon $ . In this section we have shown that our general model can capture some of the many approaches found in the literature as special cases. We now go on to look at the property of compositionality that is exhibited by a number of models. Compositionality One of the features of hedges seen in BIBREF23 , BIBREF31 , BIBREF25 , BIBREF3 is that they can be applied multiple times. Within the label semantics framework, this consists in expanding or reducing the threshold of a concept a number of times. The directed acyclic graph corresponding to this is shown in figure 11 . We show below that expressions for `very' and `quite' as given in theorems "Theorem 2 (L 2 =L_2 = quite L 1 L_1)" and "Theorem 4 (L 2 =L_2 = very L 1 L_1)" are compositional, and that the appropriateness of a concept after $n$ applications of a hedge can be expressed purely in terms of the appropriateness after $n -1$ applications. We also derive expressions for the composition of deterministic hedges as described in section "Links to other models of hedges" . Theorem 15 Suppose that labels $L_1, L_2, ... , L_n$ are defined by prototypes $P_1 = P_2 = ... 
= P_n = P$ , thresholds $\varepsilon _1 \ge \varepsilon _2 \ge ... \ge \varepsilon _n$ and with a distance metric $d$ common to all labels. Then $\mu _{L_n}(x) = \mu _{L_{n-1}}(x) + (1-\mu _{L_{n-1}}(x))\ln (1 - \mu _{L_{n-1}}(x))$ We proceed by induction on $n$ . Theorem "Theorem 2 (L 2 =L_2 = quite L 1 L_1)" proves this for $n=2$ . Assuming true for $n=k$ , we have $ \mu _{L_{k+1}}(x) &= \int _0^\infty \int _0^\infty ... \int _{max(d(x,P), \varepsilon _k)}^\infty \delta (\varepsilon _1, \varepsilon _2,...,\varepsilon _{k+1}) \mathrm {d}\varepsilon _{k+1}...\mathrm {d}\varepsilon _1 \\ &= \int _0^\infty \frac{\delta _{\varepsilon _1}(\varepsilon _1)}{\Delta _1(\varepsilon _1)} \int _0^\infty \frac{\delta _{\varepsilon _1}(\varepsilon _2)}{\Delta _2(\varepsilon _2)} ... \int _0^\infty \frac{\delta _{\varepsilon _{k-1}}(\varepsilon _k)}{\Delta _k(\varepsilon _k)} \int _{max(d(x,P), \varepsilon _{k})}^\infty \delta _{\varepsilon _k}(\varepsilon _{k+1}) \mathrm {d}\varepsilon _{k+1}...\mathrm {d}\varepsilon _1 \\ &=\int _0^\infty \frac{\delta _{\varepsilon _1}(\varepsilon _1)}{\Delta _1(\varepsilon _1)} \int _0^\infty \frac{\delta _{\varepsilon _1}(\varepsilon _2)}{\Delta _2(\varepsilon _2)} ... \int _{max(d(x, P), \varepsilon _{k-1})}^\infty \frac{\delta _{\varepsilon _{k-1}}(\varepsilon _k)}{\Delta _k(\varepsilon _k)} \overbrace{\int _{\varepsilon _k}^\infty \delta _{\varepsilon _k}(\varepsilon _{k+1}) \mathrm {d}\varepsilon _{k+1}}^{= \Delta _k(\varepsilon _k)} \mathrm {d}\varepsilon _k...\mathrm {d}\varepsilon _1 \\ & \quad +\int _0^{d(x,P)} \frac{\delta _{\varepsilon _1}(\varepsilon _1)}{\Delta _1(\varepsilon _1)} \int _{\varepsilon _1}^{d(x,P)} \frac{\delta _{\varepsilon _1}(\varepsilon _2)}{\Delta _2(\varepsilon _2)} ... \int _{\varepsilon _{k-1}}^{d(x,P)} \frac{\delta _{\varepsilon _{k-1}}(\varepsilon _k)}{\Delta _k(\varepsilon _k)} \overbrace{\int _{d(x,P)}^\infty \delta _{\varepsilon _k}(\varepsilon _{k+1}) \mathrm {d}\varepsilon _{k+1}}^{= \mu _{L_k}(x)} \mathrm {d}\varepsilon _k...\mathrm {d}\varepsilon _1 \\ &= \overbrace{\int _0^\infty \frac{\delta _{\varepsilon _1}(\varepsilon _1)}{\Delta _1(\varepsilon _1)} \int _0^\infty \frac{\delta _{\varepsilon _1}(\varepsilon _2)}{\Delta _2(\varepsilon _2)} ... \int _{max(d(x, P), \varepsilon _k)}^\infty \delta _{\varepsilon _{k-1}}(\varepsilon _k)\mathrm {d}\varepsilon _k...\mathrm {d}\varepsilon _1 }^{= \mu _{L_k}(x) \text{ by ind. hyp.}}\\ & \quad + \mu _{L_k}(x) \int _0^{d(x,P)} \frac{\delta _{\varepsilon _1}(\varepsilon _1)}{\Delta _1(\varepsilon _1)} \int _{\varepsilon _1}^{d(x,P)} \frac{\delta _{\varepsilon _1}(\varepsilon _2)}{\Delta _2(\varepsilon _2)} ... \int _{\varepsilon _{k-1}}^{d(x,P)} \frac{\delta _{\varepsilon _{k-1}}(\varepsilon _k)}{\Delta _k(\varepsilon _k)} \mathrm {d}\varepsilon _k...\mathrm {d}\varepsilon _1\\ &= \mu _{L_k}(x) + \mu _{L_k}(x) \int _0^{d(x,P)} \frac{\delta _{\varepsilon _{k-1}}(\varepsilon _k)}{\Delta _k(\varepsilon _k)} ... \int _0^{\varepsilon _3} \frac{\delta _{\varepsilon _1}(\varepsilon _2)}{\Delta _2(\varepsilon _2)} \int _0^{\varepsilon _2} \frac{\delta _{\varepsilon _1}(\varepsilon _1)}{\Delta _1(\varepsilon _1)} \mathrm {d}\varepsilon _1...\mathrm {d}\varepsilon _k\\ &= \mu _{L_k}(x) + \mu _{L_k}(x) \overbrace{\int _0^{d(x,P)} \frac{\delta _{\varepsilon _{k-1}}(\varepsilon _k)}{\Delta _k(\varepsilon _k)} ... 
\int _0^{\varepsilon _3} \frac{-\delta _{\varepsilon _1}(\varepsilon _2)\ln (\Delta _1(\varepsilon _2))}{\Delta _2(\varepsilon _2)} \mathrm {d}\varepsilon _2...\mathrm {d}\varepsilon _k}^{=A} $ By the inductive hypothesis, $\forall i = 0...k$ $ \delta _{\varepsilon _i}(\varepsilon _i) &= -\frac{\mathrm {d}}{\mathrm {d\varepsilon _i}}\Delta _i(\varepsilon _i)\\ \nonumber &= -\frac{\mathrm {d}}{\mathrm {d\varepsilon _i}}(\Delta _{i-1}(\varepsilon _i) - \Delta _{i-1}(\varepsilon _i)\ln (\Delta _{i-1}(\varepsilon _i)))\\ &= -\delta _{\varepsilon _{i-1}}(\varepsilon _i)\ln (\Delta _{i-1}(\varepsilon _i)) $ Recursively substituting in $A$ , we obtain $ \mu _{L_{k+1}}(x) &= \mu _{L_k}(x) + \mu _{L_k}(x) \int _0^{d(x,P)} \frac{\delta _{\varepsilon _{k-1}}(\varepsilon _k)}{\Delta _k(\varepsilon _k)} ... \int _0^{\varepsilon _3} \frac{\delta _{\varepsilon _2}(\varepsilon _2)}{\Delta _2(\varepsilon _2)} \mathrm {d}\varepsilon _2...\mathrm {d}\varepsilon _k\\ &= \mu _{L_k}(x) +\mu _{L_k}(x) \int _0^{d(x, P)}\frac{\delta _{\varepsilon _k}(\varepsilon _k)}{\Delta _k(\varepsilon _k)} \mathrm {d}\varepsilon _k\\ &= \mu _{L_k}(x) -\mu _{L_k}(x)\ln (\mu _{L_k}(x)) $ Theorem 16 Suppose labels $L_1, L_2, ..., L_n$ are defined by prototypes $P_1 = P_2 = ... = P_n = P$ , thresholds $\varepsilon _1 \le \varepsilon _2 \le ... \le \varepsilon _n$ , and that distance metric $d$ is common to all. Then $\mu _{L_n}(x) = \mu _{L_{n-1}}(x) - \mu _{L_{n-1}}(x)\ln (\mu _{L_{n-1}}(x))$ Similar to proof of theorem "Theorem 15" . We can also derive expressions for the composition of deterministic hedges. Theorem 17 Suppose labels $L_1, L_2, ..., L_n$ are defined by prototypes $P_1 = P_2 = ... = P_n = P$ , thresholds $\varepsilon _n = f(\varepsilon _{n-1}), \varepsilon _{n-1} = f(\varepsilon _{n-2}), ..., \varepsilon _2 = f(\varepsilon _1)$ , where $f$ is monotone increasing or decreasing, and that distance metric $d$ is common to all. Then $\mu _{L_n}(x) = \Delta _1(f^{-(n-1)}(d(x,P))$ , where $f^{-k}$ signifies $f^{-1}$ composed $k$ times. $\mu _{L_2}(x) = \Delta _1(f^{-1}(d(x,P))$ . Suppose that $\mu _{L_k}(x) = \Delta _k(d(x,P)) = \Delta _1(f^{-(k-1)}(d(x,P))$ . Since $\varepsilon _{k+1} = f(\varepsilon _k)$ , we have $\mu _{L_{k+1}}(x) = \Delta _k(f^{-1}(d(x,P)) = \Delta _1(f^{-k}(d(x,P)))$ . Therefore $\mu _{L_n}(x) = \Delta _1(f^{-(n-1)}(d(x,P))$ by induction. Since labels can be composed in this way, we can model different degrees of emphasis corresponding to the composition of multiple hedges. So, for example, we could model `extremely L' as `very, very L'. This is illustrated in example "Example 18" . Example 18 Suppose the label $L$ is as described in example "Example 3" , i.e. $L$ has prototype $P = 5$ , and threshold $\varepsilon \sim $ Uniform $[0,3]$ . We can form a new label $vL$ with prototype $P_v = P = 5$ and threshold $\varepsilon _v \le \varepsilon $ , which has appropriateness $\mu _{vL}(x) = \mu _L(x) + (1 - \mu _L(x))\ln (1- \mu _L(x))$ as shown in theorem "Theorem 4 (L 2 =L_2 = very L 1 L_1)" . We may then form another new label $vvL$ with prototype $L$0 and threshold $L$1 with appropriateness $L$2 as described in theorem "Theorem 15" . The effect of applying this contraction hedge is seen in figure 12 . We have contrasted the effect of the composed hedges with $L$3 . Since these hedges can be composed only an integral number of times, we cannot obtain the differences in grade that could be achieved with using various powers in a powering modifier, e.g. $\mu _L(x)^{1.73}$ . 
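To make the composition results above concrete, the following is a minimal numerical sketch (Python) of the label from Example 3/18, with prototype $P = 5$ and threshold $\varepsilon \sim $ Uniform $[0,3]$, together with the `very' and `quite' recurrences; the function names are ours and the snippet is purely illustrative, not part of the original formulation.

```python
import math

def mu_L(x, P=5.0, b=3.0):
    """Appropriateness of the base label L from Example 18:
    prototype P = 5, threshold epsilon ~ Uniform[0, 3],
    so mu_L(x) = max(0, 1 - |x - P| / b)."""
    return max(0.0, 1.0 - abs(x - P) / b)

def very(mu):
    """Contraction hedge: mu + (1 - mu) * ln(1 - mu)."""
    return mu if mu in (0.0, 1.0) else mu + (1.0 - mu) * math.log(1.0 - mu)

def quite(mu):
    """Dilation hedge: mu - mu * ln(mu)."""
    return mu if mu in (0.0, 1.0) else mu - mu * math.log(mu)

# Compositionality (Theorems 15/16): 'very very L' only needs the
# appropriateness of 'very L', so it is just the same map applied twice.
for x in (5.0, 5.5, 6.5, 7.5):
    m = mu_L(x)
    print(f"x={x}: L={m:.3f}  very L={very(m):.3f}  "
          f"very very L={very(very(m)):.3f}  quite L={quite(m):.3f}")
```

Because each application depends only on the previous appropriateness value, `extremely L' modelled as `very, very L' is computed by applying the same map twice, exactly as the compositionality theorems state.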
However, in section "Functions of thresholds" we discuss how to tune the intensity of hedges by using dependencies on functions of thresholds. We have further shown in section "Links to other models of hedges" how to derive powering and shifting modifiers within our framework. It would be interesting to explore how other examples of hedges can be expressed in this framework. We have shown that when multiple hedges of the forms seen in theorems "Theorem 2 (L 2 =L_2 = quite L 1 L_1)" and "Theorem 4 (L 2 =L_2 = very L 1 L_1)" are used, $\mu _{L_n}(x)$ can be expressed purely in terms of the appropriateness of the label directly preceding it. We have not been able to find a closed form solution for this recurrence, however, we can investigate the fixed points of the recurrence and examine what happens to the values of $\mu _{L_n}(x)$ as $n \rightarrow \infty $ . We have also shown that deterministic hedges can be composed, and we go on to look at their behaviour in the limit of composition. Limits of Compositions. The following results examine the behaviour of $\mu _{L_n}(x)$ as $n \rightarrow \infty $ Theorem 19 Suppose $L_1, ..., L_n$ are labels obtained by repeated application of the dilation operator. Then $\mu _{L_n}$ has a limit $M^+$ and $M^+ = 1$ $\forall x \in \Omega $ such that $\mu _{L_1}(x) \ne 0$ , and $M^+ = 0$ otherwise. $\mu _{L_{i+1}}(x) = \mu _{L_i}(x) - \mu _{L_i}(x)\ln (\mu _{L_i}(x))$ , $i = 1,.., n-1$ . If $\mu _{L_1}(x) = 1$ then $\mu _{L_i}(x) = 1$ $\forall i = 1,..., n$ . Also, if $\mu _{L_1}(x) = 0$ then $\mu _{L_i}(x) = 0$ $\forall i = 1,..., n$ . Suppose $\mu _{L_i}(x) \in (0,1)$ . Then $\mu _{L_{i+1}}(x) > \mu _{L_i}(x)$ , and so for $i = 1,.., n-1$0 , $i = 1,.., n-1$1 is a strictly increasing sequence. If a limit $M^+$ exists, then we will have $M^+ = M^+ - M^+\ln (M^+)$ , so either $M^+ = 0$ or $\ln (M^+) = 0$ . We can't have $M^+ = 0$ , since we assume that $\mu _{L_1}(x) \in (0,1)$ and the sequence is strictly increasing. Therefore, we must have $\ln (M^+) = 0$ and therefore $M^+ = 1$ . So $ \mu _{L_\infty }(x) = \left\lbrace \begin{array}{l l} 1 & \quad \mu _{L_1}(x) \in (0,1]\\ 0 & \quad \mu _{L_1}(x) = 0\\ \end{array} \right. $ Theorem 20 Suppose $L_1,..., L_n$ are labels obtained by repeated application of the contraction operator. Then $\mu _{L_n}$ has a limit $M^-$ and $M^- = 0$ $\forall x \in \Omega $ such that $\mu _{L_1}(x) \ne 1$ , and $M^- = 1$ otherwise. $\mu _{L_{i+1}}(x) = \mu _{L_i}(x) + (1- \mu _{L_i}(x))\ln (1 - \mu _{L_i}(x))$ , $i = 1,..., n-1$ . Again, if $\mu _{L_1}(x) = 1$ then $\mu _{L_i}(x) = 1$ $\forall i = 1,..., n$ . Also, if $\mu _{L_1}(x) = 0$ then $\mu _{L_i}(x) = 0$ $\forall i = 1,..., n$ , and for $\mu _{L_1}(x) \in (0,1)$ , $\mu _{L_1}(x) >...> \mu _{L_n}(x)$ is a strictly decreasing sequence. If a limit $M^-$ exists, then $ M^- &= M^- + (1 - M^-)\ln (1- M^-)\\ \ln (1 - M^-) &= M^-\ln (1- M^-) $ So either $M^- = 0$ or $\ln (1- M^-) = 0$ . If $\ln (1- M^-) = 0$ then $M^- = 1$ , which is impossible since $\mu _{L_1}(x) \in (0,1)$ and the sequence of $\mu _{L_i}(x)$ is strictly decreasing. Therefore $ \mu _{L_\infty }(x) = \left\lbrace \begin{array}{l l} 0 & \quad \mu _{L_1}(x) \in [0,1)\\ 1 & \quad \mu _{L_1}(x) = 1\\ \end{array} \right. $ We have shown here that in the limit, the result of applying dilation or contraction modifiers multiple times is to create a crisp set. 
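The convergence claimed in Theorems 19 and 20 can also be checked numerically; the short sketch below (Python, with illustrative step counts and a small clamp against floating-point drift) iterates the two recurrences and shows any starting value in $(0,1)$ being driven to 0 under repeated contraction and to 1 under repeated dilation.

```python
import math

def very(mu):   # contraction ('very') recurrence
    return mu if mu in (0.0, 1.0) else mu + (1.0 - mu) * math.log(1.0 - mu)

def quite(mu):  # dilation ('quite') recurrence
    return mu if mu in (0.0, 1.0) else mu - mu * math.log(mu)

def iterate(hedge, mu0, n):
    mu = mu0
    for _ in range(n):
        mu = min(max(hedge(mu), 0.0), 1.0)  # clamp against floating-point drift
    return mu

for mu0 in (0.1, 0.5, 0.9):
    print(mu0,
          "contraction:", [round(iterate(very, mu0, n), 4) for n in (1, 5, 20)],
          "dilation:", [round(iterate(quite, mu0, n), 4) for n in (1, 5, 20)])
```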
In the case of dilation, the crisp set includes the whole support of the fuzzy set associated with the original label, whereas in the case of contraction, the concept reduces to include only its prototype. When deterministic hedges are used, i.e. $\varepsilon _2 = f(\varepsilon _1)$ , the behaviour of the limit depends on the behaviour of the function $f$ and its properties in the limit as $n \rightarrow \infty $ of $f^{-n}$ . Example 21 Suppose $f(\varepsilon ) = 0.5\varepsilon $ . Applying this hedge multiple times will result in $\mu _{L_n}(x) = \Delta _1(2^n d(x,P))$ . As $n\rightarrow \infty $ , $2^n d(x,P) \rightarrow \infty $ , except where $d(x,P)=0$ . Therefore, $ \mu _{L_\infty }(x) = \left\lbrace \begin{array}{l l} 0 & \quad d(x,P) > 0\\ 1 & \quad d(x,P) = 0\\ \end{array} \right. $ On the other hand, if $f(\varepsilon ) = 2\varepsilon $ , $\mu _{L_n}(x) = \Delta _1(2^{-n}d(x,P))$ . As $n\rightarrow \infty $ , $2^{-n} d(x,P) \rightarrow 0$ , and hence $\mu _{L_\infty }(x) = 1$ $\forall x \in \Omega $ . The behaviour of the hedges given in example "Example 21" is therefore different from those in theorems "Theorem 19" and "Theorem 20" , since the concept either shrinks to a single point, in the case of contraction, or, in the case of dilation, expands to fill the entire space $\Omega $ . Discussion We have presented formulae for linguistic hedges which are both functional and semantically grounded. The modifiers presented arise naturally from the label semantics framework, in which concepts are represented by a prototype and threshold. Our hedges have an intuitive meaning: if I think that the threshold for a concept `small' is of a certain width, then the threshold for the concept `very small' will be narrower. On the other hand, the threshold for the concept `quite small' will be broader. The hedges proposed are examples of `type 1' hedges, i.e. they operate equally across all dimensions of the fuzzy set associated with a concept. The first result presented is somewhat similar to a powering modifier since the core and support of the set remain the same. In BIBREF9 , BIBREF7 , it is argued that this property is undesirable for hedges used in fuzzy expert systems, since if a query is returning too large a set of answers, this type of contraction hedge does not reduce this overabundance. However, although the hedges we propose do not at their simplest address the overabundance issue, we argue that they address another problem associated with powering hedges, in that they have a clear semantic grounding that the powering modifiers lack. BIBREF23 , BIBREF31 also propose modifiers that are semantically grounded, using the idea of resemblance to nearby objects. Their formulations have the properties that the core and support of the fuzzy set are both changed, thereby addressing the issue of overabundant answers BIBREF9 , BIBREF7 . In our most specific case, since the prototype is not altered, the core and support of the fuzzy set representing the concept remain the same. However, our initial proposal can be generalised, as in section "Hedges with differing prototypes" , to apply to the case where $P_1 \ne P_2$ . Specifying a semantically meaningful way of altering the boundaries of the prototype would answer the objection that the core and support of a set should change under a linguistic hedge. The most general result (section "Functions of thresholds" ) shows that the formula still applies when $\varepsilon _2 \le f(\varepsilon _1)$ , or $\varepsilon _2 \ge f(\varepsilon _1)$ . 
Combined with a distribution $\delta $ such that the lower bound of the distribution is not zero, the core and support of the fuzzy set are modified. With the condition $\varepsilon _2 = f(\varepsilon _1)$ , we are able to recreate the result given in BIBREF25 for linear membership functions, and show how the model proposed by BIBREF23 has strong similarities to our own. In this case we have introduced additional parameters, so the simplicity of the original result is lost. However, the further parameters introduced are no more than those introduced by BIBREF25 , and arguably fewer than those introduced by BIBREF23 , who require that a resemblance relation be specified, using two additional parameters, and also, that a $T$ -norm or fuzzy implication need to be specified. There are various choices of operator that could be used for either of these, and it is not obvious that any one is better than the others. We have also shown that the basic case operators `very' and `quite' can be composed, which is not immediately obvious from the formulae (section "Compositionality" ). Further, we show that in the limit of composition the membership of any object in the fuzzy part of $L$ , i.e. $\lbrace x: 0 < \mu _L(x) < 1\rbrace $ , increases to 1 in the case of `quite' or decreases to 0 in the case of `very' (section "Limits of Compositions." ). This is similar to the limit of applying the powering modifiers, but differs from what would happen with the modifiers proposed by BIBREF23 . In that case, the limit of `very' would shrink to a single point and the limit of `quite' would expand to encompass the whole universe of discourse. This can be modelled using the deterministic hedges described in section "Links to other models of hedges" . Although behaviour differs slightly, in fact human discourse does not apply modifiers infinitely, so the difference in behaviour is arguably not important. Our formulation has the benefit that it can be applied in more situations than simply linguistic hedges. For example, the concept `apple green' has a prototype different to that of just green, and the threshold for `apple green' is likely to be smaller than the threshold for simply `green'. Our model can take account of this. Conclusions and further work We have presented formulae for two simple linguistic hedges, `very' and `quite'. These formulae are functional, hence easy to compute, but also semantically grounded, in that they arise naturally from the conceptual framework of label semantics combined with prototype theory and conceptual spaces theory, and in the most specific case require no additional parameters. We have also shown that two other formulations BIBREF23 , BIBREF31 , BIBREF25 , can be derived from this framework with equal or fewer parameters. We have shown that the hedges can be composed and have described their behaviour in the limit of composition. Further work could look at testing the utility of these hedges in particular classifiers to compare their performance with the classical hedges and with the hedges used by e.g. BIBREF9 , BIBREF7 , and also to examine a trade-off between accuracy and the number of parameters used. Alternatively, investigating semantically grounded ways of expanding or reducing prototypes could have a similar impact. The model could also be extended to the more complicated type 2 hedges such as `essentially', or `technically', by treating dimensions of the conceptual space heterogeneously. 
This requires using some type of weighting or necessity measure on the dimensions, work which is currently ongoing. Acknowledgements Martha Lewis gratefully acknowledges support from EPSRC Grant No. EP/E501214/1
No
21cbcd24863211b02b436f21deaf02125f34da4c
21cbcd24863211b02b436f21deaf02125f34da4c_0
Q: On which dataset is model trained? Text: Introduction Recurrent neural network language models (RNNLM) can theoretically model the word history over an arbitrarily long length of time and thus have been shown to perform better than traditional n-gram models BIBREF0. Recent prior work has continuously improved the performance of RNNLMs through hyper-parameter tuning, training optimization methods, and development of new network architectures BIBREF1, BIBREF2, BIBREF3, BIBREF4. On the other hand, many work have proposed the use of domain knowledge and additional information such as topics or parts-of-speech to improve language models. While syntactic tendencies can be inferred from a few preceding words, semantic coherence may require longer context and high level understanding of natural language, both of which are difficult to learn through purely statistical methods. This problem can be overcome by exploiting external information to capture long-range semantic dependencies. One common way of achieving this is by incorporating part-of-speech (POS) tags into the RNNLM as an additional feature to predict the next word BIBREF5, BIBREF6. Other useful linguistic features include conversation-type, which was shown to improve language modeling when combined with POS tags BIBREF7. Further improvements were achieved through the addition of socio-situational setting information and other linguistic features such as lemmas and topic BIBREF8. The use of topic information to provide semantic context to language models has also been studied extensively BIBREF9, BIBREF10, BIBREF11, BIBREF12. Topic models are useful for extracting high level semantic structure via latent topics which can aid in better modeling of longer documents. Recently, however, empirical studies involving investigation of different network architectures, hyper-parameter tuning, and optimization techniques have yielded better performance than the addition of contextual information BIBREF13, BIBREF14. In contrast to the majority of work that focus on improving the neural network aspects of RNNLM, we introduce psycholinguistic signals along with linguistic units to improve the fundamental language model. In this work, we utilize behavioral information embedded in the language to aid the language model. We hypothesize that different psychological behavior states incite differences in the use of language BIBREF15, BIBREF16, and thus modeling these tendencies can provide useful information in statistical language modeling. And although not directly related, behavioral information may also correlate with conversation-type and topic. Thus, we propose the use of psycholinguistic behavior signals as a gating mechanism to augment typical language models. Effectively inferring behaviors from sources like spoken text, written articles can lead to personification of the language models in the speaker-writer arena. Methodology In this section, we first describe a typical RNN based language model which serves as a baseline for this study. Second, we introduce the proposed behavior prediction model for extracting behavioral information. Finally, the proposed architecture of the language model which incorporates the behavioral information through a gating mechanism is presented. Methodology ::: Language Model The basic RNNLM consists of a vanilla unidirectional LSTM which predicts the next word given the current and its word history at each time step. 
In other words, given a sequence of words $ \mathbf {x} \hspace{2.77771pt}{=}\hspace{2.77771pt}x_1, x_2, \ldots x_n$ as input, the network predicts a probability distribution of the next word $ y $ as $ P(y \mid \mathbf {x}) $. Figure FIGREF2 illustrates the basic architecture of the RNNLM. Since our contribution is towards introducing behavior as a psycholinguistic feature for aiding the language modeling process, we stick with a reliable and simple LSTM-based RNN model and follow the recommendations from BIBREF1 for our baseline model. Methodology ::: Behavior Model The analysis and processing of human behavior informatics is crucial in many psychotherapy settings such as observational studies and patient therapy BIBREF17. Prior work has proposed the application of neural networks in modeling human behavior in a variety of clinical settings BIBREF18, BIBREF19, BIBREF20. In this work we adopt a behavior model that predicts the likelihood of occurrence of various behaviors based on input text. Our model is based on the RNN architecture in Figure FIGREF2, but instead of the next word we predict the joint probability of behavior occurrences $ P(\mathbf {B} \mid \mathbf {x}) $ where $ \mathbf {B} \hspace{2.77771pt}{=}\hspace{2.77771pt}\lbrace b_{i}\rbrace $ and $ b_{i} $ is the occurrence of behavior $i$. In this work we apply the behaviors: Acceptance, Blame, Negativity, Positivity, and Sadness. This is elaborated more on in Section SECREF3. Methodology ::: Behavior Gated Language Model ::: Motivation Behavior understanding encapsulates a long-term trajectory of a person's psychological state. Through the course of communication, these states may manifest as short-term instances of emotion or sentiment. Previous work has studied the links between these psychological states and their effect on vocabulary and choice of words BIBREF15 as well as use of language BIBREF16. Motivated from these studies, we hypothesize that due to the duality of behavior and language we can improve language models by capturing variability in language use caused by different psychological states through the inclusion of behavioral information. Methodology ::: Behavior Gated Language Model ::: Proposed Model We propose to augment RNN language models with a behavior model that provides information relating to a speaker's psychological state. This behavioral information is combined with hidden layers of the RNNLM through a gating mechanism prior to output prediction of the next word. In contrast to typical language models, we propose to model $ P(\mathbf {y} \mid \mathbf {x}, \mathbf {z}) $ where $ \mathbf {z} \equiv f( P(\mathbf {B}\mid \mathbf {x}))$ for an RNN function $f(\cdot )$. The behavior model is implemented with a multi-layered RNN over the input sequence of words. The first recurrent layer of the behavior model is initialized with pre-trained weights from the model described in Section SECREF3 and fixed during language modeling training. An overview of the proposed behavior gated language model is shown in Figure FIGREF6. The RNN units shaded in green (lower section) denote the pre-trained weights from the behavior classification model which are fixed during the entirety of training. The abstract behavior outputs $ b_t $ of the pre-trained model are fed into a time-synced RNN, denoted in blue (upper section), which is subsequently used for gating the RNNLM predictions. The un-shaded RNN units correspond to typical RNNLM and operate in parallel to the former. 
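To make the gating mechanism more tangible, here is a minimal PyTorch-style sketch of one possible realisation: a frozen, pre-trained behavior LSTM produces per-step behavior likelihoods $b_t$, a second recurrent layer consumes them, and its sigmoid output gates the language-model hidden state before the softmax projection. The exact gating function, layer sizes, and module names are not specified in the text and are assumptions made here for illustration only.

```python
import torch
import torch.nn as nn

class BehaviorGatedLM(nn.Module):
    """Sketch of an RNN language model whose hidden states are gated by
    behavior features; sizes and names are illustrative assumptions."""

    def __init__(self, vocab_size, emb_dim=200, hid_dim=200, beh_dim=50, n_beh=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Pre-trained behavior recurrent layer: in practice its weights would
        # be loaded from the behavior classifier and frozen during LM training.
        self.beh_rnn = nn.LSTM(emb_dim, beh_dim, batch_first=True)
        for p in self.beh_rnn.parameters():
            p.requires_grad = False
        self.beh_out = nn.Linear(beh_dim, n_beh)         # abstract behavior outputs b_t
        self.beh_gate_rnn = nn.LSTM(n_beh, hid_dim, batch_first=True)
        self.lm_rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.proj = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens):                           # tokens: (batch, time)
        emb = self.embed(tokens)
        beh_hidden, _ = self.beh_rnn(emb)
        b_t = torch.sigmoid(self.beh_out(beh_hidden))    # behavior likelihoods per step
        gate_hidden, _ = self.beh_gate_rnn(b_t)
        gate = torch.sigmoid(gate_hidden)                # gating vector z
        lm_hidden, _ = self.lm_rnn(emb)
        return self.proj(gate * lm_hidden)               # logits for next-word prediction

# Toy usage with random token ids
model = BehaviorGatedLM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 20)))         # shape (2, 20, 10000)
```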
Experimental Setup ::: Data ::: Behavior Related Corpora For evaluating the proposed model on behavior related data, we employ the Couples Therapy Corpus (CoupTher) BIBREF21 and Cancer Couples Interaction Dataset (Cancer) BIBREF22. These are the targeted conditions under which a behavior-gated language model can offer improved performance. Couples Therapy Corpus: This corpus comprises of dyadic conversations between real couples seeking marital counseling. The dataset consists of audio, video recordings along with their transcriptions. Each speaker is rated by multiple annotators over 33 behaviors. The dataset comprises of approximately 0.83 million words with 10,000 unique entries of which 0.5 million is used for training (0.24m for dev and 88k for test). Cancer Couples Interaction Dataset: This dataset was gathered as part of a observational study of couples coping with advanced cancer. Advanced cancer patients and their spouse caregivers were recruited from clinics and asked to interact with each other in two structured discussions: neutral discussion and cancer related. Interactions were audio-recorded using small digital recorders worn by each participant. Manually transcribed audio has approximately 230,000 word tokens with a vocabulary size of 8173. Experimental Setup ::: Data ::: Penn Tree Bank Corpus In order to evaluate our proposed model on more generic language modeling tasks, we employ Penn Tree bank (PTB) BIBREF23, as preprocessed by BIBREF24. Since Penn Tree bank mainly comprises of articles from Wall Street Journal it is not expected to contain substantial expressions of behavior. Experimental Setup ::: Behavior Model The behavior model was implemented using an RNN with LSTM units and trained with the Couples Therapy Corpus. Out of the 33 behavioral codes included in the corpus we applied the behaviors Acceptance, Blame, Negativity, Positivity, and Sadness to train our models. This is motivated from previous works which showed good separability in these behaviors as well as being easy to interpret. The behavior model is pre-trained to identify the presence of each behavior from a sequence of words using a multi-label classification scheme. The pre-trained portion of the behavior model was implemented using a single layer RNN with LSTM units with dimension size 50. Experimental Setup ::: Hyperparameters We augmented previous RNN language model architectures by BIBREF1 and BIBREF2 with our proposed behavior gates. We used the same architecture as in each work to maintain similar number of parameters and performed a grid search of hyperparameters such as learning rate, dropout, and batch size. The number of layers and size of the final layers of the behavior model was also optimized. We report the results of models based on the best validation result. Results We split the results into two parts. We first validate the proposed technique on behavior related language modeling tasks and then apply it on more generic domain Penn Tree bank dataset. Results ::: Behavior Related Corpora ::: Couple's Therapy Corpus We utilize the Couple's Therapy Corpus as an in-domain experimental corpus since our behavior classification model is also trained on the same. The RNNLM architecture is similar to BIBREF1, but with hyperparameters optimized for the couple's corpus. The results are tabulated in Table TABREF16 in terms of perplexity. We find that the behavior gated language models yield lower perplexity compared to vanilla LSTM language model. 
A relative improvement of 2.43% is obtained with behavior gating on the couple's data. Results ::: Behavior Related Corpora ::: Cancer Couples Interaction Dataset To evaluate the validity of the proposed method on an out-of-domain but behavior related task, we utilize the Cancer Couples Interaction Dataset. Here both the language and the behavior models are trained on the Couple's Therapy Corpus. The Cancer dataset is used only for development (hyper-parameter tuning) and testing. We observe that the behavior gating helps achieve lower perplexity values with a relative improvement of 6.81%. The performance improvements on an out-of-domain task emphasizes the effectiveness of behavior gated language models. Results ::: Penn Tree Bank Corpus Although the proposed model is motivated and targeted towards behavior related datasets, the hypothesis should theoretically extend towards any human generated corpora. To assess this, we also train models on a non-behavior-rich database, the Penn Tree Bank Corpus. We experiment with both the medium and large architectures proposed by BIBREF1. The perplexity results on PTB are presented in Table TABREF17. All language models showed an improvement in perplexity through the addition of behavior gates. It can also be observed that LSTM-Medium with behavior gating gives similar performance to baseline LSTM-Large even though the latter has more than three times the number of parameters. Results ::: Penn Tree Bank Corpus ::: Previous state-of-the-art architectures Finally we apply behavior gating on a previous state-of-the-art architecture, one that is most often used as a benchmark over various recent works. Specifically, we employ the AWD-LSTM proposed by BIBREF2 with QRNN BIBREF25 instead of LSTM. We observe positive results with AWD-LSTM augmented with behavior-gating providing a relative improvement of (1.42% on valid) 0.66% in perplexity (Table TABREF17). Conclusion & Future Work In this study, we introduce the state of the speaker/author into language modeling in the form of behavior signals. We track 5 behaviors namely acceptance, blame, negativity, positivity and sadness using a 5 class multi-label behavior classification model. The behavior states are used as gating mechanism for a typical RNN based language model. We show through our experiments that the proposed technique improves language modeling perplexity specifically in the case of behavior-rich scenarios. Finally, we show improvements on the previous state-of-the-art benchmark model with Penn Tree Bank Corpus to underline the affect of behavior states in language modeling. In future, we plan to incorporate the behavior-gated language model into the task of automatic speech recognition (ASR). In such scenario, we could derive both the past and the future behavior states from the ASR which could then be used to gate the language model using two pass re-scoring strategies. We expect the behavior states to be less prone to errors made by ASR over a sufficiently long context and hence believe the future behavior states to provide further improvements.
Couples Therapy Corpus (CoupTher) BIBREF21
37bc8763eb604c14871af71cba904b7b77b6e089
37bc8763eb604c14871af71cba904b7b77b6e089_0
Q: How is module that analyzes behavioral state trained? Text: Introduction Recurrent neural network language models (RNNLM) can theoretically model the word history over an arbitrarily long length of time and thus have been shown to perform better than traditional n-gram models BIBREF0. Recent prior work has continuously improved the performance of RNNLMs through hyper-parameter tuning, training optimization methods, and development of new network architectures BIBREF1, BIBREF2, BIBREF3, BIBREF4. On the other hand, many work have proposed the use of domain knowledge and additional information such as topics or parts-of-speech to improve language models. While syntactic tendencies can be inferred from a few preceding words, semantic coherence may require longer context and high level understanding of natural language, both of which are difficult to learn through purely statistical methods. This problem can be overcome by exploiting external information to capture long-range semantic dependencies. One common way of achieving this is by incorporating part-of-speech (POS) tags into the RNNLM as an additional feature to predict the next word BIBREF5, BIBREF6. Other useful linguistic features include conversation-type, which was shown to improve language modeling when combined with POS tags BIBREF7. Further improvements were achieved through the addition of socio-situational setting information and other linguistic features such as lemmas and topic BIBREF8. The use of topic information to provide semantic context to language models has also been studied extensively BIBREF9, BIBREF10, BIBREF11, BIBREF12. Topic models are useful for extracting high level semantic structure via latent topics which can aid in better modeling of longer documents. Recently, however, empirical studies involving investigation of different network architectures, hyper-parameter tuning, and optimization techniques have yielded better performance than the addition of contextual information BIBREF13, BIBREF14. In contrast to the majority of work that focus on improving the neural network aspects of RNNLM, we introduce psycholinguistic signals along with linguistic units to improve the fundamental language model. In this work, we utilize behavioral information embedded in the language to aid the language model. We hypothesize that different psychological behavior states incite differences in the use of language BIBREF15, BIBREF16, and thus modeling these tendencies can provide useful information in statistical language modeling. And although not directly related, behavioral information may also correlate with conversation-type and topic. Thus, we propose the use of psycholinguistic behavior signals as a gating mechanism to augment typical language models. Effectively inferring behaviors from sources like spoken text, written articles can lead to personification of the language models in the speaker-writer arena. Methodology In this section, we first describe a typical RNN based language model which serves as a baseline for this study. Second, we introduce the proposed behavior prediction model for extracting behavioral information. Finally, the proposed architecture of the language model which incorporates the behavioral information through a gating mechanism is presented. Methodology ::: Language Model The basic RNNLM consists of a vanilla unidirectional LSTM which predicts the next word given the current and its word history at each time step. 
In other words, given a sequence of words $ \mathbf {x} \hspace{2.77771pt}{=}\hspace{2.77771pt}x_1, x_2, \ldots x_n$ as input, the network predicts a probability distribution of the next word $ y $ as $ P(y \mid \mathbf {x}) $. Figure FIGREF2 illustrates the basic architecture of the RNNLM. Since our contribution is towards introducing behavior as a psycholinguistic feature for aiding the language modeling process, we stick with a reliable and simple LSTM-based RNN model and follow the recommendations from BIBREF1 for our baseline model. Methodology ::: Behavior Model The analysis and processing of human behavior informatics is crucial in many psychotherapy settings such as observational studies and patient therapy BIBREF17. Prior work has proposed the application of neural networks in modeling human behavior in a variety of clinical settings BIBREF18, BIBREF19, BIBREF20. In this work we adopt a behavior model that predicts the likelihood of occurrence of various behaviors based on input text. Our model is based on the RNN architecture in Figure FIGREF2, but instead of the next word we predict the joint probability of behavior occurrences $ P(\mathbf {B} \mid \mathbf {x}) $ where $ \mathbf {B} \hspace{2.77771pt}{=}\hspace{2.77771pt}\lbrace b_{i}\rbrace $ and $ b_{i} $ is the occurrence of behavior $i$. In this work we apply the behaviors: Acceptance, Blame, Negativity, Positivity, and Sadness. This is elaborated more on in Section SECREF3. Methodology ::: Behavior Gated Language Model ::: Motivation Behavior understanding encapsulates a long-term trajectory of a person's psychological state. Through the course of communication, these states may manifest as short-term instances of emotion or sentiment. Previous work has studied the links between these psychological states and their effect on vocabulary and choice of words BIBREF15 as well as use of language BIBREF16. Motivated from these studies, we hypothesize that due to the duality of behavior and language we can improve language models by capturing variability in language use caused by different psychological states through the inclusion of behavioral information. Methodology ::: Behavior Gated Language Model ::: Proposed Model We propose to augment RNN language models with a behavior model that provides information relating to a speaker's psychological state. This behavioral information is combined with hidden layers of the RNNLM through a gating mechanism prior to output prediction of the next word. In contrast to typical language models, we propose to model $ P(\mathbf {y} \mid \mathbf {x}, \mathbf {z}) $ where $ \mathbf {z} \equiv f( P(\mathbf {B}\mid \mathbf {x}))$ for an RNN function $f(\cdot )$. The behavior model is implemented with a multi-layered RNN over the input sequence of words. The first recurrent layer of the behavior model is initialized with pre-trained weights from the model described in Section SECREF3 and fixed during language modeling training. An overview of the proposed behavior gated language model is shown in Figure FIGREF6. The RNN units shaded in green (lower section) denote the pre-trained weights from the behavior classification model which are fixed during the entirety of training. The abstract behavior outputs $ b_t $ of the pre-trained model are fed into a time-synced RNN, denoted in blue (upper section), which is subsequently used for gating the RNNLM predictions. The un-shaded RNN units correspond to typical RNNLM and operate in parallel to the former. 
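Since the behavior model is pre-trained separately before being frozen inside the language model, a minimal PyTorch-style sketch of that pre-training step is given below: a single-layer LSTM encoder with a multi-label (sigmoid / binary cross-entropy) head over the five behaviors. Layer sizes, optimiser settings, and names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

BEHAVIORS = ["Acceptance", "Blame", "Negativity", "Positivity", "Sadness"]

class BehaviorClassifier(nn.Module):
    """Multi-label behavior model: predicts P(B | x) for a word sequence."""
    def __init__(self, vocab_size, emb_dim=100, hid_dim=50, n_beh=len(BEHAVIORS)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)  # single recurrent layer
        self.head = nn.Linear(hid_dim, n_beh)

    def forward(self, tokens):
        emb = self.embed(tokens)
        _, (h_n, _) = self.rnn(emb)           # final hidden state summarises the sequence
        return self.head(h_n[-1])             # one logit per behavior

model = BehaviorClassifier(vocab_size=10000)
criterion = nn.BCEWithLogitsLoss()            # multi-label: behaviors treated independently
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

tokens = torch.randint(0, 10000, (8, 50))     # batch of word-id sequences
labels = torch.randint(0, 2, (8, 5)).float()  # binary presence of each behavior
loss = criterion(model(tokens), labels)
loss.backward()
optimizer.step()
```

After this step, the recurrent layer's weights would be copied into the behavior branch of the language model and held fixed, as described above.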
Experimental Setup ::: Data ::: Behavior Related Corpora For evaluating the proposed model on behavior related data, we employ the Couples Therapy Corpus (CoupTher) BIBREF21 and Cancer Couples Interaction Dataset (Cancer) BIBREF22. These are the targeted conditions under which a behavior-gated language model can offer improved performance. Couples Therapy Corpus: This corpus comprises of dyadic conversations between real couples seeking marital counseling. The dataset consists of audio, video recordings along with their transcriptions. Each speaker is rated by multiple annotators over 33 behaviors. The dataset comprises of approximately 0.83 million words with 10,000 unique entries of which 0.5 million is used for training (0.24m for dev and 88k for test). Cancer Couples Interaction Dataset: This dataset was gathered as part of a observational study of couples coping with advanced cancer. Advanced cancer patients and their spouse caregivers were recruited from clinics and asked to interact with each other in two structured discussions: neutral discussion and cancer related. Interactions were audio-recorded using small digital recorders worn by each participant. Manually transcribed audio has approximately 230,000 word tokens with a vocabulary size of 8173. Experimental Setup ::: Data ::: Penn Tree Bank Corpus In order to evaluate our proposed model on more generic language modeling tasks, we employ Penn Tree bank (PTB) BIBREF23, as preprocessed by BIBREF24. Since Penn Tree bank mainly comprises of articles from Wall Street Journal it is not expected to contain substantial expressions of behavior. Experimental Setup ::: Behavior Model The behavior model was implemented using an RNN with LSTM units and trained with the Couples Therapy Corpus. Out of the 33 behavioral codes included in the corpus we applied the behaviors Acceptance, Blame, Negativity, Positivity, and Sadness to train our models. This is motivated from previous works which showed good separability in these behaviors as well as being easy to interpret. The behavior model is pre-trained to identify the presence of each behavior from a sequence of words using a multi-label classification scheme. The pre-trained portion of the behavior model was implemented using a single layer RNN with LSTM units with dimension size 50. Experimental Setup ::: Hyperparameters We augmented previous RNN language model architectures by BIBREF1 and BIBREF2 with our proposed behavior gates. We used the same architecture as in each work to maintain similar number of parameters and performed a grid search of hyperparameters such as learning rate, dropout, and batch size. The number of layers and size of the final layers of the behavior model was also optimized. We report the results of models based on the best validation result. Results We split the results into two parts. We first validate the proposed technique on behavior related language modeling tasks and then apply it on more generic domain Penn Tree bank dataset. Results ::: Behavior Related Corpora ::: Couple's Therapy Corpus We utilize the Couple's Therapy Corpus as an in-domain experimental corpus since our behavior classification model is also trained on the same. The RNNLM architecture is similar to BIBREF1, but with hyperparameters optimized for the couple's corpus. The results are tabulated in Table TABREF16 in terms of perplexity. We find that the behavior gated language models yield lower perplexity compared to vanilla LSTM language model. 
A relative improvement of 2.43% is obtained with behavior gating on the couple's data. Results ::: Behavior Related Corpora ::: Cancer Couples Interaction Dataset To evaluate the validity of the proposed method on an out-of-domain but behavior related task, we utilize the Cancer Couples Interaction Dataset. Here both the language and the behavior models are trained on the Couple's Therapy Corpus. The Cancer dataset is used only for development (hyper-parameter tuning) and testing. We observe that the behavior gating helps achieve lower perplexity values with a relative improvement of 6.81%. The performance improvements on an out-of-domain task emphasizes the effectiveness of behavior gated language models. Results ::: Penn Tree Bank Corpus Although the proposed model is motivated and targeted towards behavior related datasets, the hypothesis should theoretically extend towards any human generated corpora. To assess this, we also train models on a non-behavior-rich database, the Penn Tree Bank Corpus. We experiment with both the medium and large architectures proposed by BIBREF1. The perplexity results on PTB are presented in Table TABREF17. All language models showed an improvement in perplexity through the addition of behavior gates. It can also be observed that LSTM-Medium with behavior gating gives similar performance to baseline LSTM-Large even though the latter has more than three times the number of parameters. Results ::: Penn Tree Bank Corpus ::: Previous state-of-the-art architectures Finally we apply behavior gating on a previous state-of-the-art architecture, one that is most often used as a benchmark over various recent works. Specifically, we employ the AWD-LSTM proposed by BIBREF2 with QRNN BIBREF25 instead of LSTM. We observe positive results with AWD-LSTM augmented with behavior-gating providing a relative improvement of (1.42% on valid) 0.66% in perplexity (Table TABREF17). Conclusion & Future Work In this study, we introduce the state of the speaker/author into language modeling in the form of behavior signals. We track 5 behaviors namely acceptance, blame, negativity, positivity and sadness using a 5 class multi-label behavior classification model. The behavior states are used as gating mechanism for a typical RNN based language model. We show through our experiments that the proposed technique improves language modeling perplexity specifically in the case of behavior-rich scenarios. Finally, we show improvements on the previous state-of-the-art benchmark model with Penn Tree Bank Corpus to underline the affect of behavior states in language modeling. In future, we plan to incorporate the behavior-gated language model into the task of automatic speech recognition (ASR). In such scenario, we could derive both the past and the future behavior states from the ASR which could then be used to gate the language model using two pass re-scoring strategies. We expect the behavior states to be less prone to errors made by ASR over a sufficiently long context and hence believe the future behavior states to provide further improvements.
pre-trained to identify the presence of behavior from a sequence of words using the Couples Therapy Corpus
a81941f933907e4eb848f8aa896c78c1157bff20
a81941f933907e4eb848f8aa896c78c1157bff20_0
Q: Can the model add new relations to the knowledge graph, or just new entities? Text: Introduction Knowledge Graphs (KGs) are a special type of information network that represents knowledge using RDF-style triples $\langle h$ , $r$ , $t\rangle $ , where $h$ represents some head entity and $r$ represents some relationship that connects $h$ to some tail entity $t$ . In this formalism a statement like “Springfield is the capital of Illinois” can be represented as $\langle $ Springfield, capitalOf, Illinois $\rangle $ . Recently, a variety of KGs, such as DBPedia BIBREF0 , and ConceptNet BIBREF1 , have been curated in the service of fact checking BIBREF2 , question answering BIBREF3 , entity linking BIBREF4 , and for many other tasks BIBREF5 . Despite their usefulness and popularity, KGs are often noisy and incomplete. For example, DBPedia, which is generated from Wikipedia's infoboxes, contains $4.6$ million entities, but half of these entities contain less than 5 relationships. Based on this observation, researchers aim to improve the accuracy and reliability of KGs by predicting the existence (or probability) of relationships. This task is often called Knowledge Graph Completion (KGC). Continuing the example from above, suppose the relationship capitalOf is missing between Indianapolis and Indiana; the KGC task might predict this missing relationship based on the topological similarity between this part of the KG and the part containing Springfield and Illinois. Progress in vector embeddings originating with word2vec has produced major advancements in the KGC task. Typical embedding-based KGC algorithms like TransE BIBREF6 and others learn low-dimensional representations (i.e., embeddings) for entities and relationships using topological features. These models are able to predict the existence of missing relationships thereby “completing” the KG. Existing KGC models implicitly operate under the Closed-World Assumption BIBREF7 in which all entities and relationships in the KG cannot be changed – only discovered. We formally define the Closed-word KGC task as follows: Definition 1 Given an incomplete Knowledge Graph $\mathcal {G}=(\mathbf {E},\mathbf {R},\mathbf {T})$ , where $\mathbf {E}$ , $\mathbf {R}$ , and $\mathbf {T}$ are the entity set, relationship set, and triple set respectively, Closed-World Knowledge Graph Completion completes $\mathcal {G}$ by finding a set of missing triples $\mathbf {T^\prime }=\lbrace \langle h,r,t\rangle | h\in \mathbf {E}, r \in \mathbf {R}, t \in \mathbf {E}, \langle h,r,t\rangle \notin \mathbf {T}\rbrace $ in the incomplete Knowledge Graph $\mathcal {G}$ . Closed-world KGC models heavily rely on the connectivity of the existing KG and are best able to predict relationships between existing, well-connected entities. Unfortunately, because of their strict reliance on the connectivity of the existing KG, closed-world KGC models are unable to predict the relationships of poorly connected or new entities. Therefore, we assess that closed-world KGC is most suitable for fixed or slowly evolving KGs. However, most real-world KGs evolve quickly with new entities and relationships being added by the minute. For example, in the 6 months between DBPedia's October 2015 release and its April 2016 release $36,340$ new English entities were added – a rate of 200 new entities per day. Recall that DBPedia merely tracks changes to Wikipedia infoboxes, so these updates do not include newly added articles without valid infobox data. 
Because of the accelerated growth of online information, repeatedly re-training closed-world models every day (or hour) has become impractical. In the present work we borrow the idea of open-world assumption from probabilistic database literature BIBREF8 and relax the closed-world assumption to develop an Open-World Knowledge Graph Completion model capable of predicting relationships involving unseen entities or those entities that have only a few connections. Formally we define the open-world KGC task as follows: Definition 2 Given an incomplete Knowledge Graph $\mathcal {G}=(\mathbf {E},\mathbf {R},\mathbf {T})$ , where $\mathbf {E}$ , $\mathbf {R}$ , and $\mathbf {T}$ are the entity set, relationship set, and triple set respectively, Open-World Knowledge Graph Completion completes $\mathcal {G}$ by finding a set of missing triples $\mathbf {T^\prime }=\lbrace \langle h,r,t\rangle | \langle h,r,t\rangle \notin \mathbf {T}, h \in \mathbf {E}^i, t\in \mathbf {E}^i, r \in \mathbf {R} \rbrace $ in the incomplete Knowledge Graph $\mathcal {G}$ where $\mathbf {E}^i$ is an entity superset. In Defn. "Closed-World Knowledge Graph Completion" we relax the constraint on the triple set $\mathbf {T^\prime }$ so that triples in $\mathbf {T^\prime }$ can contain entities that are absent from the original entity set $\mathbf {E}$ . Closed-world KGC models learn entity and relationship embedding vectors by updating an initially random vector based on the KG's topology. Therefore, any triple $\langle h,r,t\rangle \in \mathbf {T^\prime }$ such that $h\notin \mathbf {E}$ or $t\notin \mathbf {E}$ will only ever be represented by its initial random vector because its absence does not permit updates from any inference function. In order to predict the missing connections for unseen entities, it is necessary to develop alternative features to replace the topological features used by closed-world models. Text content is a natural substitute for the missing topological features of disconnected or newly added entities. Indeed, most KGs such as FreeBase BIBREF9 , DBPedia BIBREF0 , and SemMedDB BIBREF10 were either directly extracted from BIBREF11 , BIBREF12 , or are built in parallel to some underlying textual descriptions. However, open-world KGC differs from the standard information extraction task because 1) Rather than extracting triples from a large text corpus, the goal of open-world KGC is to discover missing relationships; and 2) Rather than a pipeline of independent subtasks like Entity Linking BIBREF13 and Slotfilling BIBREF14 , etc., open-world KGC is a holistic task that operates as a single model. Although it may seem intuitive to simply include an entity's description into an existing KGC model, we find that learning useful vector embeddings from unstructured text is much more challenging than learning topology-embeddings as in the closed-world task. First, in closed-world KGC models, each entity will have a unique embedding, which is learned from its directly connected neighbors; whereas open-world KGC models must fuse entity embeddings with the word embeddings of the entity's description. These word embeddings must be updated by entities sharing the same words regardless of their connectivity status. Second, because of the inclusion of unstructured content, open-world models are likely to include noisy or redundant information. 
With respect to these challenges, the present work makes the following contributions: Before introduce the ConMask model, we first present preliminary material by describing relevant KGC models. Then we describe the methodology, data sets, and a robust case study of closed-world and open-world KGC tasks. Finally, we draw conclusions and offer suggestions for future work. Closed-World Knowledge Graph Completion A variety of models have been developed to solve the closed-world KGC task. The most fundamental and widely used model is a translation-based Representation Learning (RL) model called TransE BIBREF6 . TransE assumes there exists a simple function that can translate the embedding of the head entity to the embedding of some tail entity via some relationship: $$\mathbf {h} + \mathbf {r} = \mathbf {t},$$ (Eq. 5) where $\mathbf {h}$ , $\mathbf {r}$ and $\mathbf {t}$ are embeddings of head entity, relationship, and tail entity respectively. Based on this function, many other KGC models improve the expressive power of Eq. 5 by introducing more relationship-dependent parameters. TransR BIBREF15 , for example, augments Eq. 5 to $\mathbf {h}\mathbf {M}_{r} + \mathbf {r} = \mathbf {t}\mathbf {M}_{r}$ where $\mathbf {M}_{r}$ is a relationship-dependent entity embedding transformation. In order to train the KGC models, TransE defines an energy-based loss function as $$\mathcal {L}(\mathbf {T}) = \Sigma _{\langle h,r,t\rangle \in \mathbf {T}}[\gamma + E(\langle h,r,t\rangle ) - E(\langle h^\prime , r^\prime , t^\prime \rangle )]_{+},$$ (Eq. 6) where the energy function $E(\langle h,r,t\rangle ) = \parallel \mathbf {h} + \mathbf {r} - \mathbf {t}\parallel _{L_{n}}$ measures the closeness of the given triple, $\langle h,r,t\rangle $ is some triple that exists in the triple set $\mathbf {T}$ of an incomplete KG $\mathcal {G}$ , and $\langle h^\prime , r^\prime , t^\prime \rangle $ is a “corrupted” triple derived by randomly replacing one part of $\langle h,r,t\rangle $ so that it does not exist in $\mathbf {T}$ . In other recent work, ProjE BIBREF16 considered closed-world KGC to be a type of ranking task and applied a list-wise ranking loss instead of Eq. 6 . Other closed-world models such as PTransE BIBREF17 and dORC BIBREF18 maintain a simple translation function and use complex topological features like extended-length paths and “one-relation-circle” structures to improve predictive performance. Unlike topology-based models, which have been studied extensively, there has been little work that utilizes text information for KGC. Neural Tensor Networks (NTN) BIBREF19 uses the averaged word embedding of an entity to initialize the entity representations. DKRL BIBREF20 uses the combined distance between topology-embeddings and text-embeddings as its energy function. Jointly BIBREF21 combines the topology-embeddings and text-embeddings first using a weighted sum and then calculates the $L_{n}$ distance between the translated head entity and tail entity. However, gains in predictive performance from these joint-learning models are rather small compared to advances in topology-based models. Furthermore, the aforementioned models are all closed-world KGC models, which can only learn meaningful representations for entities that are present during training and are well connected within the KG. These models have no mechanism by which new entities can be connected with the existing KG as required in open-world KGC. 
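For reference, Eqs. 5 and 6 can be written down in a few lines of NumPy; the embeddings below are random placeholders and the corruption step is reduced to a single random tail replacement, so this is only a sketch of the TransE score and margin loss, not a trainable system.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim, gamma = 1000, 50, 64, 1.0

E = rng.normal(size=(n_entities, dim))   # entity embeddings
R = rng.normal(size=(n_relations, dim))  # relationship embeddings

def energy(h, r, t, norm=1):
    """TransE energy: || h + r - t ||_{L_n}."""
    return np.linalg.norm(E[h] + R[r] - E[t], ord=norm)

def margin_loss(h, r, t):
    """Hinge loss on a true triple versus a corrupted one (tail replaced)."""
    t_neg = rng.integers(n_entities)     # naive corruption sampling
    return max(0.0, gamma + energy(h, r, t) - energy(h, r, t_neg))

print(margin_loss(3, 7, 42))
```

The sketch also makes the closed-world limitation visible: an entity never seen during training has no learned row in the embedding table, so no meaningful score can be computed for it.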
In the present work, we present an open-world KGC model called ConMask that uses primarily text features to learn entity and relationship embeddings. Compared to topology-based and joint-learning models, ConMask can generate representations for unseen entities if they share the same vocabulary with entities seen during training. To properly handle one-to-many and many-to-one relationships, we also apply a relationship-dependent content masking layer to generate entity embeddings. ConMask: A Content Masking Model for Open-World KGC In this section we describe the architecture and the modelling decisions of the ConMask model. To illustrate how this model works, we begin by presenting an actual example as well as the top-ranked target entity inferred by the ConMask model: Example Task: Complete triple $\langle $ Ameen Sayani, residence, ? $\rangle $ , where Ameen Sayani is absent from the KG. Snippet of Entity Description: “... Ameen Sayani was introduced to All India Radio, Bombay, by his brother Hamid Sayani. Ameen participated in English programmes there for ten years ...”. Predicted Target Entity: Mumbai. In this example, if a human reader were asked to find the residence of Ameen Sayani, a popular radio personality in India, from the entity description, then the human reader is unlikely to read the entire text from beginning to end. Instead, the reader might skim the description looking for contextual clues such as family or work-related information. Here, Ameen's workplace All India Radio is located in Bombay, so the human reader may infer that Ameen is a resident of Bombay. A human reader may further reason that because Bombay has recently changed its name to Mumbai, then Mumbai would be the (correct) target entity. Here and throughout the present work, we denote the missing entity as the target entity, which can be either the head or the tail of a triple. We decompose the reasoning process described above into three steps: 1) Locating information relevant to the task, 2) Implicit reasoning based on the context and the relevant text, and 3) Resolving the relevant text to the proper target entity. The ConMask model is designed to mimic this process. Thus, ConMask consists of three components: ConMask selects words that are related to the given relationship to mitigate the inclusion of irrelevant and noisy words. From the relevant text, ConMask then uses fully convolutional network (FCN) to extract word-based embeddings. Finally, it compares the extracted embeddings to existing entities in the KG to resolve a ranked list of target entities. The overall structure of ConMask is illustrated in Fig. 1 . Later subsections describe the model in detail. Relationship-Dependent Content Masking In open-world KGC, we cannot rely solely on the topology of the KG to guide our model. Instead, it is natural to consider extracting useful information from text in order to infer new relationships in the KG. The task of extracting relationships among entities from text is often called relation extraction BIBREF22 . Recent work in this area tends to employ neural networks such as CNN BIBREF21 or abstract meaning representations (AMRs) BIBREF23 to learn a unified kernel to remove noise and extract the relationship-agnostic entity representations. For open-world KGC, it may be possible to create a model with relationship-dependent CNN kernels. But this type of model would significantly increase the number of parameters and may overfit on rare relationships. 
In the proposed ConMask model we developed an alternative approach called relationship-dependent content masking. The goal is to pre-process the input text in order to select small relevant snippets based on the given relationship – thereby masking irrelevant text. The idea of content masking is inspired by the attention mechanism used by recurrent neural network (RNN) models BIBREF24 , which is widely applied to NLP tasks. In a typical attention-based RNN model, each output stage of a recurrent cell is assigned an attention score. In ConMask, we use a similar idea to select the most related words given some relationship and mask irrelevant words by assigning a relationship-dependent similarity score to words in the given entity description. We formally define relationship-dependent content masking as: $$\tau (\phi (e), \psi (r)) = \mathbf {W}_{\phi (e)} \circ f_{w}(\mathbf {W}_{\phi (e)}, \mathbf {W}_{\psi (r)}),$$ (Eq. 13) where $e$ is an entity, $r$ is some relationship, $\phi $ and $\psi $ are the description and name mapping functions respectively that return a word vector representing the description or the name of an entity or relationship. $\mathbf {W}_{\phi (e)} \in \mathbb {R}^{|\phi (e)|\times k}$ is the description matrix of $e$ in which each row represents a $k$ dimensional embedding for a word in $\phi (e)$ in order, $\mathbf {W}_{\psi (r)} \in \mathbb {R}^{|\psi (r)|\times k}$ is the name matrix of $r$ in which each row represents a $r$0 dimensional embedding for a word in the title of relationship $r$1 , $r$2 is row-wise product, and $r$3 calculates the masking weight for each row, i.e., the embedding of each word, in $r$4 . The simplest way to generate these weights is by calculating a similarity score between each word in entity description $\phi (e)$ and the words in relationship name $\psi (r)$ . We call this simple function Maximal Word-Relationship Weights (MWRW) and define it as: $$\begin{adjustbox}{max width=0.92} f_{w}^{\textrm {MWRW}}\left(\mathbf {W}_{\phi (e)}, \mathbf {W}_{\psi (r)}\right)_{[i]} = \mathsf {max}_j\left(\frac{\sum \limits _m^k \mathbf {W}_{\phi (e)[i,m]} \mathbf {W}_{\psi (r)[j,m]}}{\sqrt{\sum \limits _m^k \mathbf {W}_{\phi (e)[i,m]}^2}\sqrt{\sum \limits _m^k \mathbf {W}_{\psi (r)[j,m]}^2}}\right), \end{adjustbox}$$ (Eq. 14) where the weight of the $i^{\textrm {th}}$ word in $\phi (e)$ is the largest cosine similarity score between the $i^{\textrm {th}}$ word embedding in $\mathbf {W}_{\phi (e)}$ and the word embedding matrix of $\psi (r)$ in $\mathbf {W}_{\psi (r)}$ . This function assigns a lower weight to words that are not relevant to the given relationship and assigns higher scores to the words that appear in the relationship or are semantically similar to the relationship. For example, when inferring the target of the partial triple $\langle $ Michelle Obama, AlmaMater, ? $\rangle $ , MWRW will assign high weights to words like Princeton, Harvard, and University, which include the words that describe the target of the relationship. However the words that have the highest scores do not always represent the actual target but, instead, often represent words that are similar to the relationship name itself. A counter-example is shown in Fig. 2 , where, given the relationship spouse, the word with the highest MWRW score is married. Although spouse is semantically similar to married, it does not answer the question posed by the partial triple. 
Instead, we call words with high MWRW weights indicator words because the correct target-words are usually located nearby. In the example case, we can see that the correct target Barack Obama appears after the indicator word married. In order to assign the correct weights to the target words, we improve the content masking by using Maximal Context-Relationship Weights (MCRW) to adjust the weights of each word based on its context: $$f_{w}\left(\mathbf {W}_{\phi (e)}, \mathbf {W}_{\psi (r)}\right)_{[i]} = \max \left(f_{w}^{\textrm {MWRW}}\left(\mathbf {W}_{\phi (e)}, \mathbf {W}_{\psi (r)}\right)_{[i-k_m:i]}\right),$$ (Eq. 15) in which the weight of the $i^{th}$ word in $\phi (e)$ equals the maximum MWRW score of the $i^{th}$ word itself and the previous $k_m$ words. From a neural network perspective, the re-weighting function $f_w$ can also be viewed as applying a row-wise max reduction followed by a 1-D max-pooling with a window size of $k_m$ on the matrix product of $\mathbf {W}_{\phi (e)}$ and $\mathbf {W}_{\psi (r)}^{T}$. To recap, the relationship-dependent content masking process described here assigns importance weights to words in an entity's description based on the similarity between each word's context and the given relationship. After non-relevant content is masked, the model needs to learn a single embedding vector from the masked content matrix to compare with the embeddings of candidate target entities. Target Fusion Here we describe how ConMask extracts word-based entity embeddings. We call this process the target fusion function $\xi $, which distills an embedding using the output of Eq. 13. Initially, we looked for solutions to this problem in recurrent neural networks (RNNs) of various forms. Despite their popularity in NLP-related tasks, recent research has found that RNNs are not good at performing “extractive” tasks BIBREF25. RNNs do not work well in our specific setting because the input of the target fusion is a masked content matrix, which means most of the stage inputs would be zero and hence hard to train on. In this work we decided to use a fully convolutional network (FCN) as the target fusion structure. A CNN-based structure is well known for its ability to capture peak values using convolution and pooling. Therefore an FCN is well suited to extracting useful information from the weighted content matrix. Our adaptation of FCNs yields the target fusion function $\xi $, which generates a $k$-dimensional embedding using the output of content masking $\tau (\phi (e), \psi (r))$, where $e$ is either a head or tail entity from a partial triple. Figure 3 shows the overall architecture of the target fusion process and its dependent content masking process. The target fusion process has three FCN layers. In each layer, we first use two 1-D convolution operators to perform an affine transformation, then we apply a sigmoid as the activation function to the convolved output, followed by batch normalization BIBREF26 and max-pooling. The last FCN layer uses mean-pooling instead of max-pooling to ensure that the output of the target fusion layer always returns a single $k$-dimensional embedding. Note that the FCN used here is different from the one typically used in computer vision tasks BIBREF27. Rather than reconstructing the input, as is typical in CV, the goal of target fusion is to extract an embedding w.r.t. the given relationship; therefore we do not include de-convolution operations.
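For concreteness, the following PyTorch sketch shows one possible reading of this three-layer target fusion structure; the kernel sizes, channel counts, and module names are illustrative assumptions rather than the published implementation, but the layer pattern (two 1-D convolutions, sigmoid, batch normalization, pooling, with mean-pooling in the last layer and no de-convolution) follows the description above.

import torch
import torch.nn as nn

class TargetFusion(nn.Module):
    # Illustrative sketch of the three-layer target fusion FCN described above;
    # this is not the released implementation.
    def __init__(self, k=200):
        super().__init__()
        def fcn_layer(pool):
            return nn.Sequential(
                nn.Conv1d(k, k, kernel_size=3, padding=1),  # first 1-D convolution
                nn.Conv1d(k, k, kernel_size=3, padding=1),  # second 1-D convolution
                nn.Sigmoid(),                               # activation
                nn.BatchNorm1d(k),                          # batch normalization
                pool)
        self.layers = nn.Sequential(
            fcn_layer(nn.MaxPool1d(2)),                     # halves the sequence length
            fcn_layer(nn.MaxPool1d(2)),
            fcn_layer(nn.AdaptiveAvgPool1d(1)))             # last layer: mean-pooling

    def forward(self, masked_content):
        # masked_content: (batch, k, sequence_length), i.e., the output of Eq. 13 transposed.
        return self.layers(masked_content).squeeze(-1)      # single k-dimensional embedding

As noted, there is no de-convolution stage; the network only condenses the masked content matrix.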
Another difference is that we reduce the number of embeddings by half after each FCN layer but do not increase the number of channels, i.e., the embedding size. This is because the input weighted matrix is a sparse matrix with a large portion of zero values, so we are essentially fusing peak values from the input matrix into a single embedding representing the target entity. Semantic Averaging Although it is possible to use target fusion to generate all entity embeddings used in ConMask, such a process would result in a large number of parameters. Furthermore, because the target fusion function is an extraction function, it would be odd to apply it to entity names where no extraction is necessary. So, we also employ a simple semantic averaging function $\eta (\mathbf {W}) = \frac{1}{k_{l}}\Sigma _{i}^{k_{l}}\mathbf {W}_{[i,:]}$ that combines word embeddings to represent entity names and to generate background representations of other textual features, where $\mathbf {W} \in \mathbb {R}^{k_l\times k}$ is the input embedding matrix from the entity description $\phi (\cdot )$ or the entity or relationship name $\psi (\cdot )$. To recap: at this point in the model we have generated entity embeddings through the content masking and target fusion operations. The next step is to define a loss function that finds one or more entities in the KG that most closely match the generated embedding. Loss Function To speed up the training and take advantage of the performance boost associated with a list-wise ranking loss function BIBREF16, we designed a partial list-wise ranking loss function that has both positive and negative target sampling: $$\mathcal {L}(h,r,t)={\left\lbrace \begin{array}{ll} \sum \limits _{h_+ \in E^+} -\frac{\log (S(h_+, r, t, E^+\cup E^-))}{|E^+|}, & p_c > 0.5\\ \sum \limits _{t_+ \in E^+} -\frac{\log (S(h, r, t_+, E^+\cup E^-))}{|E^+|}, & p_c \le 0.5\\ \end{array}\right.},$$ (Eq. 21) where $p_c$ is the corruption probability drawn from a uniform distribution $U[0,1]$ such that when $p_c > 0.5$ we keep the input tail entity $t$ but do positive and negative sampling on the head entity, and when $p_c \le 0.5$ we keep the input head entity $h$ intact and do sampling on the tail entity. $E^+$ and $E^-$ are the sampled positive and negative entity sets drawn from the positive and negative target distributions $P_+$ and $P_-$ respectively. Although a type-constraint or frequency-based distribution may yield better results, here we follow convention and simply apply a uniform distribution for both $P_+$ and $P_-$. When $p_c > 0.5$, $P_+$ is a uniform distribution over entities in $\lbrace h_+ \mid \langle h_+,r,t\rangle \in \mathbf {T}\rbrace $ and $P_-$ is a uniform distribution over entities in $\lbrace h_- \mid \langle h_-,r,t\rangle \notin \mathbf {T}\rbrace $. On the other hand, when $p_c \le 0.5$, $P_+$ is a uniform distribution over entities in $\lbrace t_+ \mid \langle h,r,t_+\rangle \in \mathbf {T}\rbrace $ and $P_-$ is a uniform distribution over entities in $\lbrace t_- \mid \langle h,r,t_-\rangle \notin \mathbf {T}\rbrace $. The function $S$ in Eq. 21 is the softmax normalized output of ConMask: $$S(h,r,t,E^\pm ) = {\left\lbrace \begin{array}{ll} \frac{\exp (\textrm {ConMask}(h,r,t))}{\sum \limits _{e\in E^\pm }\exp (\textrm {ConMask}(e, r, t))}, & p_c > 0.5 \\ \frac{\exp (\textrm {ConMask}(h,r,t))}{\sum \limits _{e\in E^\pm }\exp (\textrm {ConMask}(h, r, e))}, & p_c \le 0.5 \\ \end{array}\right.}.$$ (Eq. 22) Note that Eq. 21 is actually a generalized form of the sampling process used by most existing KGC models. When $|E_+|=1$ and $|E_-|=1$, the sampling method described in Eq.
21 is the same as the triple corruption used by TransE BIBREF6 , TransR BIBREF15 , TransH BIBREF28 , and many other closed-world KGC models. When $|E_+| = |\lbrace t|\langle h,r,t\rangle \in \mathbf {T}\rbrace |$ , which is the number of all true triples given a partial triple $\langle h$ , $r$ , ? $\rangle $ , Eq. 21 is the same as ProjE_listwise BIBREF16 . Experiments The previous section described the design decisions and modelling assumptions of ConMask. In this section we present the results of experiments performed on old and new data sets in both open-world and closed-world KGC tasks. Settings Training parameters were set empirically but without fine-tuning. We set the word embedding size $k=200$ , maximum entity content and name length $k_c=k_n=512$ . The word embeddings are from the publicly available pre-trained 200-dimensional GloVe embeddings BIBREF29 . The content masking window size $k_m=6$ , number of FCN layers $k_{fcn}=3$ where each layer has 2 convolutional layers and a BN layer with a moving average decay of $0.9$ followed by a dropout with a keep probability $p=0.5$ . Max-pooling in each FCN layer has a pool size and stride size of 2. The mini-batch size used by ConMask is $k_b=200$ . We use Adam as the optimizer with a learning rate of $10^{-2}$ . The target sampling set sizes for $|E_+|$ and $|E_-|$ are 1 and 4 respectively. All open-world KGC models were run for at most 200 epochs. All compared models used their default parameters. ConMask is implemented in TensorFlow. The source code is available at https://github.com/bxshi/ConMask. Data Sets The Freebase 15K (FB15k) data set is widely used in KGC. But FB15k is fraught with reversed- or synonym-triples BIBREF30 and does not provide sufficient textual information for content-based KGC methods to use. Due to the limited text content and the redundancy found in the FB15K data set, we introduce two new data sets DBPedia50k and DBPedia500k for both open-world and closed-world KGC tasks. Statistics of all data sets are shown in Tab. 2 . The methodology used to evaluate the open-world and closed-world KGC tasks is similar to the related work. Specifically, we randomly selected $90\%$ of the entities in the KG and induced a KG subgraph using the selected entities, and from this reduced KG, we further removed $10\%$ of the relationships, i.e., graph-edges, to create KG $_\textrm {train}$ . All other triples not included in KG $_\textrm {train}$ are held out for the test set. Open-World Entity Prediction For the open-world KGC task, we generated a test set from the $10\%$ of entities that were held out of KG $_\textrm {train}$ . This held out set has relationships that connect the test entities to the entities in KG $_\textrm {train}$ . So, given a held out entity-relationship partial triple (that was not seen during training), our goal is to predict the correct target entity within KG $_\textrm {train}$ . To mitigate the excessive cost involved in computing scores for all entities in the KG, we applied a target filtering method to all KGC models. Namely, for a given partial triple $\langle h$ , $r$ , ? $\rangle $ or $\langle $ ?, $r$ , $t \rangle $ , if a target entity candidate has not been connected via relationship $r$ before in the training set, then it is skipped, otherwise we use the KGC model to calculate the actual ranking score. Simply put, this removes relationship-entity combinations that have never before been seen and are likely to represent nonsensical statements. The experiment results are shown in Tab. 1 . 
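Before turning to the baselines, a minimal sketch of the target filtering step just described is given below; the function and variable names are hypothetical, but the logic of skipping candidates that never co-occur with the query relationship in the training triples follows the description above.

def target_filter(candidates, r, train_triples):
    # Keep only candidate entities that have been connected via relationship r in training.
    seen_with_r = {e for (h, rel, t) in train_triples if rel == r for e in (h, t)}
    return [c for c in candidates if c in seen_with_r]

# Only candidates passing the filter are scored by the KGC model; all others are skipped.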
As a naive baseline we include the target filtering baseline method in Tab. 1 , which assigns random scores to all the entities that pass the target filtering. Semantic Averaging is a simplified model which uses contextual features only. DKRL is a two-layer CNN model that generates entity embeddings with entity description BIBREF20 . We implemented DKRL ourselves and removed the structural-related features so it can work under open-world KGC settings. We find that the extraction features in ConMask do boost mean rank performance by at least $60\%$ on both data sets compared to the extraction-free Semantic Averaging. Interestingly, the performance boost on the larger DBPedia500k data set is more significant than the smaller DBPedia50k, which indicates that the extraction features are able to find useful textual information from the entity descriptions. Closed-World Entity Prediction Because the open-world assumption is less restrictive than the closed-world assumption, it is possible for ConMask to perform closed-world tasks, even though it was not designed to do so. So in Tab. 3 we also compare the ConMask model with other closed-world methods on the standard FB15k data set as well as the two new data sets. Results from TransR are missing from the DBPedia500k data set because the model did not complete training after 5 days. We find that ConMask sometimes outperforms closed-world methods on the closed-world task. ConMask especially shows improvement on the DBPedia50k data set; this is probably because the random sampling procedure used to create DBPedia50k generates a sparse graph. closed-world KGC models, which rely exclusively on structural features, have a more difficult time with sub-sampled KGs. Discussion In this section we elaborate on some actual prediction results and show examples that highlight the strengths and limitations of the ConMask model. Table 4 shows 4 KGC examples. In each case, ConMask was provided the head and the relationship and asked to predict the tail entity. In most cases ConMask successfully ranks the correct entities within the top-3 results. Gabrielle Stanton's notableWork is an exception. Although Stanton did work on Star Trek, DBPedia indicates that her most notable work is actually The Vampire Diaries, which ranked $4^{\textrm {th}}$ . The reason for this error is because the indicator word for The Vampire Diaries was “consulting producer”, which was not highly correlated to the relationship name “notable work” from the model's perspective. Another interesting result was the prediction given from the partial triple $\langle $ The Time Machine, writer, ? $\rangle $ . The ConMask model ranked the correct screenwriter David Duncan as the $2^{\textrm {nd}}$ candidate, but the name “David Duncan” does not actually appear in the film's description. Nevertheless, the ConMask model was able to capture the correct relationship because the words “The Time Machine” appeared in the description of David Duncan as one of his major works. Although ConMask outperforms other KGC models on metrics such as Mean Rank and MRR, it still has some limitations and room for improvement. First, due to the nature of the relationship-dependent content masking, some entities with names that are similar to the given relationships, such as the Language-entity in the results of the languageFamily-relationship and the Writer-entity in the results of the writer-relationship, are ranked with a very high score. 
In most cases the correct target entity will be ranked above relationship-related entities. Yet, these entities still hurt the overall performance. It would be straightforward to apply a filter to the list of predicted target entities so that entities that are the same as the relationship are demoted; we leave this as a matter for future work. Conclusion and Future Work In the present work we introduced a new open-world Knowledge Graph Completion model, ConMask, that uses relationship-dependent content masking, fully convolutional neural networks, and semantic averaging to extract relationship-dependent embeddings from the textual features of entities and relationships in KGs. Experiments on both open-world and closed-world KGC tasks show that the ConMask model performs well on both. Because of problems found in the standard KGC data sets, we also released two new DBPedia data sets for KGC research and development. The ConMask model is an extraction model, which currently can only predict relationships if the requisite information is expressed in the entity's description. The goal of future work is to extend ConMask with the ability to find new or implicit relationships.
The model does not add new relations to the knowledge graph.
252677c93feb2cb0379009b680f0b4562b064270
252677c93feb2cb0379009b680f0b4562b064270_0
Q: How large is the dataset? Text: Jennifer D'Souza, Anett Hoppe, Arthur Brack, Mohamad Yaser Jaradeh, Sören Auer, Ralph Ewerth TIB Leibniz Information Centre for Science and Technology, Hannover, Germany {jennifer.dsouza,anett.hoppe,arthur.brack,yaser.jaradeh,auer,ralph.ewerth}@tib.eu We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy-returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting such as STEM is reasonable. Entity Recognition, Entity Classification, Entity Resolution, Entity Linking, Word Sense Disambiguation, Evaluation Corpus, Language Resource Scientific Entity Annotations By starting with a STEM corpus of scholarly abstracts for annotating with scientific entities, we differ from existing work addressing this task since we go beyond the domain restriction that so far seems to encompass scientific IE. For entity annotations, we rely on existing scientific concept formalisms BIBREF0, BIBREF1, BIBREF2 that appear to propose generic scientific concept types that can bridge the domains we consider, thereby offering a uniform entity selection framework. In the following subsections, we describe our annotation task in detail, after which we conclude with benchmark results. Scientific Entity Annotations ::: Our Annotation Process The corpus for computing inter-annotator agreement was annotated by two postdoctoral researchers in Computer Science. To develop annotation guidelines, a small pilot annotation exercise was performed on 10 abstracts (one per domain) with a set of surmised generically applicable scientific concepts such as Task, Process, Material, Object, Method, Data, Model, Results, etc., taken from existing work. Over the course of three annotation trials, these concepts were iteratively pruned: concepts that did not cover all domains were dropped, resulting in four finalized concepts, viz. Process, Method, Material, and Data, as our resultant set of generic scientific concepts (see Table TABREF3 for their definitions). The subsequent annotation task entailed linguistic considerations for the precise selection of entities as one of the four scientific concepts based on their part-of-speech tag or phrase type.
Process entities were verbs (e.g., “prune” in Agr), verb phrases (e.g., “integrating results” in Mat), or noun phrases (e.g., “this transport process” in Bio); Method entities comprised noun phrases containing phrase endings such as simulation, method, algorithm, scheme, technique, system, etc.; Material entities were nouns or noun phrases (e.g., “forest trees” in Agr, “electrons” in Ast or Che, “tephra” in ES); and the majority of the Data entities were numbers, otherwise noun phrases (e.g., “(2.5$\pm $1.5)kms$^{-1}$” representing a velocity value in Ast, “plant available P status” in Agr). Summarily, the resulting annotation guidelines hinged upon the following five considerations: (1) To ensure consistent scientific entity spans, entities were annotated as definite noun phrases where possible. In later stages, the extraneous determiners and articles could be dropped as deemed appropriate. (2) Coreferring lexical units for scientific entities in the context of a single abstract were annotated with the same concept type. (3) Quantifiable lexical units such as numbers (e.g., years 1999, measurements 4km) or even phrases (e.g., vascular risk) were annotated as Data. (4) Where possible, the most precise text reference (i.e., phrases with qualifiers) regarding materials used in the experiment was annotated. For instance, “carbon atoms in graphene” was annotated as a single Material entity and not separately as “carbon atoms,” “graphene.” (5) Any confusion in classifying scientific entities as one of the four types was resolved using the following concept precedence: Method $>$ Process $>$ Data $>$ Material, where the concept appearing earlier in the list was preferred. After finalizing the concepts and updating the guidelines, the final annotation task proceeded in two phases. In phase I, five abstracts per domain (i.e., 50 abstracts) were annotated by both annotators and the inter-annotator agreement was computed using Cohen's $\kappa $ BIBREF4. Results showed a moderate inter-annotator agreement of 0.52 $\kappa $. Next, in phase II, one of the annotators interviewed subject specialists in each of the ten domains about the choice of concepts and her annotation decisions on their respective domain corpus. The feedback from the interviews was systematically categorized into error types and these errors were discussed by both annotators. Following these discussions, the 50 abstracts from phase I were independently reannotated. The annotators could obtain a substantial overall agreement of 0.76 $\kappa $ after phase II. In Table TABREF16, we report the IAA scores obtained per domain and overall. The scores show that the annotators had substantial agreement in seven domains, while only moderate agreement was reached in three domains, viz. Agr, Mat, and Ast. Scientific Entity Annotations ::: Our Annotation Process ::: Annotation Error Analysis We discuss some of the changes the interviewer annotator made in phase II after consultation with the subject experts. In total, 21% of the phase I annotations were changed: Process accounted for a major proportion (nearly 54%) of the changes. Considerable inconsistency was found in annotating verbs like “increasing”, “decreasing”, “enhancing”, etc., as Process or not. Interviews with subject experts confirmed that they were a relevant detail to the research investigation and hence should be annotated. So 61% of the Process changes came from additionally annotating these verbs. Material was the second most changed concept in phase II, accounting for 23% of the overall changes.
Nearly 32% of the changes under Material came from consistently reannotating phrases about models, tools, and systems; accounting for another 22% of its changes, where spatial locations were an essential part of the investigation such as in the Ast and ES domains, they were decided to be included in the phase II set as Material. Finally, there were some changes that emerged from lack of domain expertise. This was mainly in the medical domain (4.3% of the overall changes) in resolving confusion in annotating Process and Method concept types. Most of the remaining changes were based on the treatment of conjunctive spans or lists. Subsequently, the remaining 60 abstracts (six per domain) were annotated by one annotator. This last phase also involved reconciliation of the earlier annotated 50 abstracts to obtain a gold standard corpus. Scientific Entity Annotations ::: Our Annotation Process ::: Annotated Corpus Characteristics Table TABREF17 shows our annotated corpus characteristics. Our corpus comprises a total of 6,127 scientific entities, including 2,112 Process, 258 Method, 2,099 Material, and 1,658 Data entities. The number of entities per abstract directly correlates with the length of the abstracts (Pearson's R 0.97). Among the concepts, Process and Material directly correlate with abstract length (R 0.8 and 0.83, respectively), while Data has only a slight correlation (R 0.35) and Method has no correlation (R 0.02). In Figure FIGREF18, we show an example instance of a manually created text graph from the scientific entities in one abstract. The graph highlights that linguistic relations such as synonymy, hypernymy, meronymy, as well as OpenIE relations are poignant even between scientific entities. Scientific Entity Annotations ::: Performance Benchmark In the second stage of the study, we perform word sense disambiguation and link our entities to authoritative sources. Scientific Entity Resolution Aside from the four scientific concepts facilitating a common understanding of scientific entities in a multidisciplinary setting, the fact that they are just four made the human annotation task feasible. Utilizing additional concepts would have resulted in a prohibitively expensive human annotation task. Nevertheless, there are existing datasets (particularly in the biomedical domain, e.g., GENIA BIBREF6) that have adopted the conceptual framework in rich domain-specific semantic ontologies. Our work, while related, is different since we target the annotation of multidisciplinary scientific entities that facilitates a low annotation entrance barrier to producing such data. This is beneficial since it enables the task to be performed in a domain-independent manner by researchers, but perhaps not crowdworkers, unless screening tests for a certain level of scientific expertise are created. Nonetheless, we recognize that the four categories might be too limiting for real-world usage. Further, the scientific entities from stage 1 remain susceptible to subjective interpretation without additional information. Therefore, in a similar vein to adopting domain-specific ontologies, we now perform entity linking (EL) to the Wikipedia and word sense disambiguation (WSD) to Wiktionary. Scientific Entity Resolution ::: Our Annotation Process The same pair of annotators as before were involved in this stage of the study to determine the annotation agreement. 
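Throughout the study, inter-annotator agreement is quantified with Cohen's $\kappa $. Purely as an illustration, the following sketch shows how such a score can be computed from two annotators' label sequences using scikit-learn; the library choice and the toy labels are our own assumptions rather than the tooling used in the study.

from sklearn.metrics import cohen_kappa_score

# Hypothetical per-entity concept labels assigned by the two annotators.
annotator_a = ["Process", "Material", "Data", "Method", "Material"]
annotator_b = ["Process", "Material", "Data", "Process", "Material"]

print(cohen_kappa_score(annotator_a, annotator_b))  # chance-corrected agreement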
Scientific Entity Resolution ::: Our Annotation Process ::: Annotation Task Tools During the annotation procedure, each annotator was shown the entities, grouped by domain and file name, in Google Excel Sheet columns alongside a view of the current abstract of entities being annotated in the BRAT interface stenetorp2012brat for context information about the entities. For entity resolution, i.e. linking and disambiguation, the annotators had local installations of specific time-stamped Wikipedia and Wiktionary dumps to enable future persistent references to the links since the Wiki sources are actively revised. They queried the local dumps using the DKPro JWPL tool BIBREF8 for Wikipedia and the DKPro JWKTL tool BIBREF9 for Wiktionary, where both tools enable optimized search through the large Wiki data volume. Scientific Entity Resolution ::: Our Annotation Process ::: Annotation Procedure for Entity Resolution Through iterative pilot annotation trials on the same pilot dataset as before, the annotators delineated an ordered annotation procedure depicted in the flowchart in Figure FIGREF28. There are two main annotation phases, viz. a preprocessing phase (determining linkability, determining whether an entity is decomposable into shorter collocations), and the entity resolution phase. The actual annotation task then proceeded, in which to compute agreement scores, the annotators worked on the same set of 50 scholarly abstracts that they had used earlier to compute the scores for the scientific entity annotations. Scientific Entity Resolution ::: Our Annotation Process ::: Annotation Procedure for Entity Resolution ::: Linkability. In this first step, entities that conveyed a sense of scientific jargon were deemed linkable. A natural question that arises, in the context of the Linkability criteria, is: Which stage 1 annotated scientific entities were now deemed unlinkable? They were 1) Data entities that are numbers; 2) entities that are coreference mentions which, as isolated units, lost their precise sense (e.g., “development”); and 3) Process verbs (e.g., “decreasing”, “reconstruct”, etc.). Still, having identified these cases, a caveat remained: except for entities of type Data, the remaining decisions made in this step involved a certain degree of subjectivity because, for instance, not all Process verbs were unlinkable (e.g., “flooding”). Nonetheless, at the end of this step, the annotators obtained a high IAA score at 0.89 $\kappa $. From the agreement scores, we found that the Linkability decisions could be made reliably and consistently on the data. Scientific Entity Resolution ::: Our Annotation Process ::: Annotation Procedure for Entity Resolution ::: Splitting phrases into shorter collocations. While preference was given to annotating non-compositional noun phrases as scientific entities in stage 1, consecutive occurrences of entities of the same concept type separated only by prepositions or conjunctions were merged into longer spans. As examples, consider the phrases “geysers on south polar region,” and “plume of water ice molecules and dust” in Figure FIGREF18. These phrases, respectively, can be meaningfully split as “geysers” and “south polar region” for the first example, and “plume”, “water ice molecules”, and “dust” for the second. As demonstrated in these examples, the stage 1 entities we split in this step are syntactically-flexible multi-word expressions which did not have a strict constraint on composition BIBREF10. 
For such expressions, we query Wikipedia or Google to identify their splits judging from the number of results returned and whether, in the results, the phrases appeared in authoritative sources (e.g., as overview topics in publishing platforms such as ScienceDirect). Since search engines operate on a vast amount of data, they are a reliable source for determining phrases with a strong statistical regularity, i.e. determining collocations. With a focus on obtaining agreement scores for entity resolution, the annotators bypass this stage for computing independent agreement and attempted it mutually as follows. One annotator determined all splits, wherever required, first. The second annotator acted as judge by going through all the splits and proposed new splits in case of disagreement. The disagreements were discussed by both annotators and the previous steps were repeated iteratively until the dataset was uniformly split. After this stage, both annotators have the same set of entities for resolution. Scientific Entity Resolution ::: Our Annotation Process ::: Annotation Procedure for Entity Resolution ::: Entity Resolution (ER) Annotation. In this stage, the annotators resolved each entity from the previous step to encyclopedic and lexicographic knowledge bases. While, in principle, multiple knowledge sources can be leveraged, this study only examines scientific entities in the context of their Wiki-linkability. Wikipedia, as the largest online encyclopedia (with nearly 5.9 million English articles) offers a wide coverage of real-world entities, and based on its vast community of editors with editing patterns at the rate of 1.8 edits per second, is considered a reliable source of information. It is pervasively adopted in automatic EL tasks BIBREF11, BIBREF12, BIBREF13 to disambiguate the names of people, places, organizations, etc., to their real-world identities. We shift from this focus on proper names as the traditional Wikification EL purpose has been, to its, thus far, seemingly less tapped-in conceptual encyclopedic knowledge of nominal scientific entities. Wiktionary is the largest freely available dictionary resource. Owing to its vast community of curators, it rivals the traditional expert-curated lexicographic resource WordNet BIBREF14 in terms of coverage and updates, where the latter evolves more slowly. For English, Wiktionary has nine times as many entries and at least five times as many senses compared to WordNet. As a more pertinent neologism in the context of our STEM data, consider the sense of term “dropout” as a method for regularizing the neural network algorithms which is already present in Wiktionary. While WSD has been traditionally used WordNet for its high-quality semantic network and longer prevalence in the linguistics community (c.f Navigli navigli2009word for a comprehensive survey), we adopt Wiktionary thus maintaining our focus on collaboratively curated resources. In WSD, entities from all parts-of-speech are enriched w.r.t. language and wordsmithing. But it excludes in-depth factual and encyclopedic information, which otherwise is contained in Wikipedia. Thus, Wikipedia and Wiktionary are viewed as largely complementary. Scientific Entity Resolution ::: Our Annotation Process ::: Annotation Procedure for Entity Resolution ::: ER Annotation Task formalism. Given a scholarly abstract $A$ comprising a set of entities $E = \lbrace e_{1}, ... 
,e_{N}\rbrace $, the annotation goal is to produce a mapping from $E$ to a set of Wikipedia pages ($p_1,...,p_N$) and Wiktionary senses ($s_1,...,s_N$) as $R = \lbrace (p_1,s_1), ... , (p_N,s_N)\rbrace $. For entities without a mapping, the corresponding $p$ or $s$ refers to Nil. The annotators followed comprehensive guidelines for ER including exceptions. E.g., the conjunctive phrase “acid/alkaline phosphatase activity” was semantically treated as the following two phrases “acid phosphatase activity” or “alkaline phosphatase activity” for EL, however, in the text it was retained as “acid” and “alkaline phosphatase activity.” Since WSD is performed over exact word-forms without assuming any semantic extension, it was not performed for “acid.” Annotations were also made for complex forms of reference such as meronymy (e.g., space instrument “CAPS” to spacecraft “wiki:Cassini Huygens” of which it is a part), or hypernymy (e.g., “parents” in “genepool parents” to “wiki:Ancestor”). As a result of the annotation task, the annotators obtained 82.87% rate of agreement in the EL task and a $\kappa $ score of 0.86 in the WSD task. Contrary to WSD expectations as a challenging linguistics task BIBREF15, we show high agreement; this we attribute to the entities' direct scientific sense and availability in Wiktionary (e.g., “dropout”). Subsequently, the ER annotation for the remaining 60 abstracts (six per domain) were performed by one annotator. This last phase also involved reconciliation of the earlier annotated 50 abstracts to obtain a gold standard corpus. Scientific Entity Resolution ::: Our Annotation Process ::: Annotated Corpus Characteristics In this stage 2 corpus, linkability of the scientific entities was determined at 74.6%. Of these, 61.7% were split into shorter collocations, at 1.74 splits per split entity. Detailed statistics are presented in Table TABREF36. In the table, the domains are ranked by the total number of their linkable entities (fourth column). Ast has the highest proportion of linked entities at 87.3% which comprises 10.4% of all the linked entities and disambiguated entities at 71.4% forming 8.5% of the overall disambiguated entities. From an EL perspective, we surmize that articles on space topics are well represented in Wikipedia. For WSD, Bio, ES, and Med predictably have the least proportion of disambiguated entities at 52.3%, 54.6%, and 55.5%, respectively, since of all our domains these especially rely on high degree scientific jargon, while WSD generally tends to be linguistically oriented in a generic sense. As a summary, linked and disambiguated entities had a high correlation with the total linkable entities ($R$ 0.98 and 0.89, respectively). In Table TABREF37, the ER annotation results are shown as POS tag distributions. The POS tags were obtained from Wiktionary, where entities that couldn't be disambiguated are tagged as SW (Single Word) or MWE (Multi-Word Expression). These tags have a coarser granularity compared to the traditionally followed Penn Treebank tags with some unconventional tagging patterns (e.g., “North Sea” as NNP, “in vivo” as ADJ). From the distributions, except for nouns being the most EL and WSD instances, the rest of the table differs significantly between the two tasks in a sense reflecting the nature of the tasks. While MWE are the second highest EL instances, its corresponding PHRASE type is least represented in WSD. In contrast, while adverbs are the second highest in WSD, they are least in EL. 
Scientific Entity Resolution ::: Evaluation We do not observe a significant impact of the long-tailed list phenomenon of unresolved entities in our data (c.f Table TABREF36 only 17% did not have EL annotations). Results on more recent publications should perhaps serve more conclusive in this respect for new concepts introduced–the abstracts in our dataset were published between 2012 and 2014. Conclusion The STEM-ECR v1.0 corpus of scientific abstracts offers multidisciplinary Process, Method, Material, and Data entities that are disambiguated using Wiki-based encyclopedic and lexicographic sources thus facilitating links between scientific publications and real-world knowledge (see the concepts enrichment we obtain from Wikipedia for our entities in Figure ). We have found that these Wikipedia categories do enable a semantic enrichment of our entities over our generic four concept formalism as Process, Material, Method, and Data (as an illustration, the top 30 Wiki categories for each of our four generic concept types are shown in the Appendix). Further, considering the various domains in our multidisciplinary STEM corpus, notably, the inclusion of understudied domains like Mathematics, Astronomy, Earth Science, and Material Science makes our corpus particularly unique w.r.t. the investigation of their scientific entities. This is a step toward exploring domain independence in scientific IE. Our corpus can be leveraged for machine learning experiments in several settings: as a vital active-learning test-bed for curating more varied entity representations BIBREF16; to explore domain-independence versus domain-dependence aspects in scientific IE; for EL and WSD extensions to other ontologies or lexicographic sources; and as a knowledge resource to train a reading machine (such as PIKES BIBREF17 or FRED BIBREF18) that generate more knowledge from massive streams of interdisciplinary scientific articles. We plan to extend this corpus with relations to enable building knowledge representation models such as knowledge graphs in a domain-independent manner. Acknowledgements We thank the anonymous reviewers for their comments and suggestions. We also thank the subject specialists at TIB for their helpful feedback in the first part of this study. This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) and by the TIB Leibniz Information Centre for Science and Technology. Appendix: Supplemental Material A.1. Proportion of the Generic Scientific Entities To offer better insights to our STEM corpus for its scientific entity annotations made in part 1, in Figure FIGREF40 below, we visually depict the proportion of Process, Method, Material, and Data entities per domain. The Figure serves a complementary view to our corpus compared with the dataset statistics shown in Table TABREF17. It shows that the Ast domain has the highest proportion of scientific entities overall. On the other hand, per generic type, Bio has the most Process entities, CS has the most Method entities, Ast has the most Material closely followed by Agr, and Eng has the most Data. A.2. Cohen's $\kappa $ Computation Setup in Section 4.1.2 Linkability. Given the stage 1 scientific entities, the annotators could make one of two decisions: a) an entity is linkable; or b) an entity is unlinkable. These decisions were assigned numeric indexes, i.e. 
1 for decision (a) and -1 for decision (b) and can take on one of four possible combinations based on the two annotators decisions: (1,1), (1,-1), (-1,1), and (-1,-1). The $\kappa $ scores were then computed on this data representation. WSD Agreement. In order to compute the WSD agreement, the Wiktionary structure for organizing the words needed to be taken into account. It's structure is as follows. Each word in the Wiktionary lexicographic resource is categorized based on etymology, and within each etymological category, by the various part-of-speech tags the word can take. Finally, within each POS type, is a gloss list where each gloss corresponds to a unique word sense. Given the above-mentioned Wiktionary structure, the initial setup for the blind WSD annotation task entailed that the annotators were given the same reference POS tags within an etymology for the split single-word entities in the corpus. Next, as data to compute $\kappa $ scores, each annotator-assigned gloss sense was given a numeric index and agreement was computed based on matches or non-matches between indexes. A.3. Per-domain Inter-annotator Agreement for Entity Resolution To supplement the overall Inter-Annotator Agreement (IAA) scores reported in Section 4.1.2 `Entity Resolution (ER) Annotation' for the EL and WSD tasks, in Table TABREF43 below, we additionally report the IAA scores for our ER tasks (i.e., EL and WSD) per domain in the STEM-ECR corpus. First, considering the domains where the highest ER agreement scores were obtained. For EL, the IAA score was highest in the MS domain. While for WSD, the IAA score was highest in the Bio domain. Next, considering the domains where the agreement was least for the two tasks. We found the the EL agreement was least for CS and the WSD agreement was least for Mat. In the case of low EL agreement, it can be attributed to two main cases: only one of the annotators found a link; or the annotators linked to related pages on the same theme as the entity (e.g., wiki:Rule-based_modeling versus wiki:Rule-based_machine_learning for “rule-based system”). And in the case of the low WSD agreement obtained on Mat, we see that owing to broad terms like “set,” “matrix,” “groups,” etc., in the domain which could be disambiguated to more than one Wiktionary sense correctly, the IAA agreement was low. A.4. Babelfy's Precision ($P$) and Recall ($R$) Computation for Entity Resolution in Figure For the $P$ and $R$ scores reported in Figure , the true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP) were computed as follows: TP = human-annotated entities that have a EL/WSD match with Babelfy results (for Nil, a match is considered as no result from the automatic system); FN = human-annotated entities that have no EL/WSD match with Babelfy results; TN = spurious Babelfy-created strings as entities that do not have a EL/WSD result; and FP = spurious Babelfy-created entities that have a EL/WSD result. A.5. Top 30 Wikipedia Categories for Process, Method, Material, and Data In part 1 of the study, we categorized the scientific entities by our four generic concept formalism, comprising Process, Method, Material, and Data. Linking the entities to Wikipedia further enables their broadened categorization. While in Figure is depicted the rich set of Wikipedia categories obtained overall, here, in Tables TABREF44 and TABREF45, we show the top 30 Wikipedia categories for the scientific entities by their four concept types. 
We observe that most of the Wikipedia categories pertinently broaden the semantic expressivity of each of our four concepts. Further, within each type the categories are diverse, reflecting the underlying data domains in our corpus. As examples, consider the Wikipedia categories for the Data scientific entities: the “SIBaseQuantities” category over the entity “Kelvin” in Che; “FluidDynamics” in the Eng and MS domains; and “SolarCalendars” in the Ast domain.
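To make the evaluation computation described in A.4 concrete, the following sketch (with hypothetical counts) shows how Babelfy's precision and recall follow from the TP, FP, and FN definitions given above.

def precision_recall(tp, fp, fn):
    # P = TP / (TP + FP); R = TP / (TP + FN), using the A.4 definitions.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical counts for Babelfy entity linking on one domain:
print(precision_recall(tp=120, fp=30, fn=40))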
6,127 scientific entities, including 2,112 Process, 258 Method, 2,099 Material, and 1,658 Data entities
fe6bb55b28f14ed8ac82c122681905397e31279d
fe6bb55b28f14ed8ac82c122681905397e31279d_0
Q: Why is a Gaussian process an especially appropriate method for this classification problem? Text: Introduction There is an increasing need to interpret and act upon rumours spreading quickly through social media during breaking news, where new reports are released piecemeal and often have an unverified status at the time of posting. Previous research has posited the damage that the diffusion of false rumours can cause in society, and that corrections issued by news organisations or state agencies such as the police may not necessarily achieve the desired effect sufficiently quickly BIBREF0 , BIBREF1 . Being able to determine the accuracy of reports is therefore crucial in these scenarios. However, the veracity of rumours in circulation is usually hard to establish BIBREF2 , since as many views and testimonies as possible need to be assembled and examined in order to reach a final judgement. Examples of rumours that were later disproven, after being widely circulated, include a 2010 earthquake in Chile, where rumours of a volcano eruption and a tsunami warning in Valparaiso spawned on Twitter BIBREF3 . Another example is the England riots in 2011, where false rumours claimed that rioters were going to attack Birmingham's Children's Hospital and that animals had escaped from London Zoo BIBREF4 . Previous work by ourselves and others has argued that looking at how users in social media orient to rumours is a crucial first step towards making an informed judgement on the veracity of a rumourous report BIBREF5 , BIBREF6 , BIBREF3 . For example, in the case of the riots in England in August 2011, Procter et al. manually analysed the stance expressed by users in social media towards rumours BIBREF4 . Each tweet discussing a rumour was manually categorised as supporting, denying or questioning it. It is obvious that manual methods have their disadvantages in that they do not scale well; the ability to perform stance categorisation of tweets in an automated way would be of great use in tracking rumours, flagging those that are largely denied or questioned as being more likely to be false. Determining the stance of social media posts automatically has been attracting increasing interest in the scientific community in recent years, as this is a useful first step towards more in-depth rumour analysis: Work on automatic rumour stance classification, however, is still in its infancy, with some methods ignoring temporal ordering and rumour identities (e.g. BIBREF10 ), while others being rule-based and thus with unclear generalisability to new rumours BIBREF7 . Our work advances the state-of-the-art in tweet-level stance classification through multi-task learning and Gaussian Processes. This article substantially extends our earlier short paper BIBREF11 , fistly by using a second dataset, which enables us to test the generalisability of our results. Secondly, a comparison against additional baseline classifiers and recent state-of-the-art approaches has been added to the experimental section. Lastly, we carried out a more thorough analysis of the results, including now per-class performance scores, which furthers our understanding of rumour stance classification. In comparison to the state-of-the-art, our approach is novel in several crucial aspects: Based on the assumption of a common underlying linguistic signal in rumours on different topics, we build a transfer learning system based on Gaussian Processes, that can classify stance in newly emerging rumours. 
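As a purely illustrative sketch of this kind of classifier (not the multi-task Gaussian Process system evaluated in this paper), the snippet below shows a Gaussian Process stance classifier over simple bag-of-words features using scikit-learn; a multi-task formulation would instead use a kernel that shares information across rumour-specific tasks.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Hypothetical labelled tweets from previously annotated rumours (training side).
train_tweets = ["police confirm the zoo animals escaped",
                "this is fake, nothing escaped from the zoo",
                "is it true that animals escaped from the zoo?"]
train_stance = ["supporting", "denying", "questioning"]

vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_tweets).toarray()

gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gp.fit(X_train, train_stance)

# Classify a tweet from a new, unseen rumour (the LOO setting defined later).
X_test = vectorizer.transform(["reports say the army is being mobilised"]).toarray()
print(gp.predict(X_test), gp.predict_proba(X_test))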
The paper reports results on two different rumour datasets and explores two different experimental settings – without any training data and with very limited training data. We refer to these as: Our results demonstrate that Gaussian Process-based, multi-task learning leads to significantly improved performance over state-of-the-art methods and competitive baselines, as demonstrated on two very different datasets. The classifier relying on Gaussian Processes performs particularly well over the rest of the baseline classifiers in the Leave Part Out setting, proving that it does particularly well in determining the distribution of supporting, denying and questioning tweets associated with a rumour. Estimating the distribution of stances is the key aspect for which our classifier performs especially well compared to the baseline classifiers. Related Work This section provides a more in-depth motivation of the rumour stance detection task and an overview of the state-of-the-art methods and their limitations. First, however, let us start by introducing the formal definition of a rumour. Rumour Definition There have been multiple attempts at defining rumours in the literature. Most of them are complementary to one another, with slight variations depending on the context of their analyses. The core concept that most researchers agree on matches the definition that major dictionaries provide, such as the Oxford English Dictionary defining a rumour as “a currently circulating story or report of uncertain or doubtful truth”. For instance, DiFonzo and Bordia BIBREF12 defined rumours as “unverified and instrumentally relevant information statements in circulation.” Researchers have long looked at the properties of rumours to understand their diffusion patterns and to distinguish them from other kinds of information that people habitually share BIBREF13 . Allport and Postman BIBREF2 claimed that rumours spread due to two factors: people want to find meaning in things and, when faced with ambiguity, people try to find meaning by telling stories. The latter factor also explains why rumours tend to change in time by becoming shorter, sharper and more coherent. This is the case, it is argued, because in this way rumours explain things more clearly. On the other hand, Rosnow BIBREF14 claimed that there are four important factors for rumour transmission. Rumours must be outcome-relevant to the listener, must increase personal anxiety, be somewhat credible and be uncertain. Furthermore, Shibutani BIBREF15 defined rumours to be “a recurrent form of communication through which men [sic] caught together in an ambiguous situation attempt to construct a meaningful interpretation of it by pooling their intellectual resources. It might be regarded as a form of collective problem-solving”. In contrast with these three theories, Guerin and Miyazaki BIBREF16 state that a rumour is a form of relationship-enhancing talk. Building on their previous work, they recall that many ways of talking serve the purpose of forming and maintaining social relationships. Rumours, they say, can be explained by such means. In our work, we adhere to the widely accepted fact that rumours are unverified pieces of information. More specifically, following BIBREF5 , we regard a rumour in the context of breaking news, as a “circulating story of questionable veracity, which is apparently credible but hard to verify, and produces sufficient skepticism and/or anxiety so as to motivate finding out the actual truth”. 
Descriptive Analysis of Rumours in Social Media One particularly influential piece of work in the field of rumour analysis in social media is that by Mendoza et al. BIBREF3 . By manually analysing the data from the earthquake in Chile in 2010, the authors selected 7 confirmed truths and 7 false rumours, each consisting of close to 1000 tweets or more. The veracity value of the selected stories was corroborated by using reliable sources. Each tweet from each of the news items was manually classified into one of the following classes: affirmation, denial, questioning, unknown or unrelated. In this way, each tweet was classified according to the position it showed towards the topic it was about. The study showed that a much higher percentage of tweets about false rumours are shown to deny the respective rumours (approximately 50%). This is in contrast to rumours later proven to be true, where only 0.3% of tweets were denials. Based on this, authors claimed that rumours can be detected using aggregate analysis of the stance expressed in tweets. Recent research put together in a special issue on rumours and social media BIBREF17 also shows the increasing interest of the scientific community in the topic. BIBREF18 proposed an agenda for research that establishes an interdisciplinary methodology to explore in full the propagation and regulation of unverified content on social media. BIBREF19 described an approach for geoparsing social media posts in real-time, which can be of help to determine the veracity of rumours by tracking down the poster's location. The contribution of BIBREF20 to rumour resolution is to build an automated system that rates the level of trust of users in social media, hence enabling to get rid of users with low reputation. Complementary to these approaches, our objective is to determine the stance of tweets towards a rumour, which can then be aggregated to establish an overall veracity score for the rumour. Another study that shows insightful conclusions with respect to stance towards rumours is that by Procter et al. BIBREF4 . The authors conducted an analysis of a large dataset of tweets related to the riots in the UK, which took place in August 2011. The dataset collected in the riots study is one of the two used in our experiments, and we describe it in more detail in section "Datasets" . After grouping the tweets into topics, where each represents a rumour, they were manually categorised into different classes, namely: media reports, which are tweets sent by mainstream media accounts or journalists connected to media, pictures, being tweets uploading a link to images, rumours, being tweets claiming or counter claiming something without giving any source, reactions, consisting of tweets being responses of users to the riots phenomenon or specific event related to the riots. Besides categorisation of tweets by type, Procter et al. also manually categorised the accounts posting tweets into different types, such as mainstream media, only on-line media, activists, celebrities, bots, among others. What is interesting for the purposes of our work is that the authors observed the following four-step pattern recurrently occurring across the collected rumours: a rumour is initiated by someone claiming it may be true, a rumour spreads together with its reformulations, counter claims appear, a consensus emerges about the credibility of the rumour. 
This leads the authors to the conclusion that the process of 'inter-subjective sense making' by Twitter users plays a key role in exposing false rumours. This finding, together with subsequent work by Tolmie et al. into the conversational characteristics of microblogging BIBREF6 has motivated our research into automating stance classification as a methodology for accelerating this process. Qazvinian et al. BIBREF10 conducted early work on rumour stance classification. They introduced a system that analyzes a set of tweets associated with a given topic predefined by the user. Their system would then classify each of the tweets as supporting, denying or questioning a tweet. We have adopted this scheme in terms of the different types of stance in the work we report here. However, their work ended up merging denying and questioning tweets for each rumour into a single class, converting it into a 2-way classification problem of supporting vs denying-or-questioning. Instead, we keep those classes separate and, following Procter et al., we conduct a 3-way classification BIBREF21 . Another important characteristic that differentiates Qazvinian et al.'s work from ours is that they looked at support and denial on longstanding rumours, such as the fact that many people conjecture whether Barack Obama is a Muslim or not. By contrast, we look at rumours that emerge in the context of fast-paced, breaking news situations, where new information is released piecemeal, often with statements that employ hedging words such as “reportedly” or “according to sources” to make it clear that the information is not fully verified at the time of posting. This is a very different scenario from that in Qazvinian et al.'s work as the emergence of rumourous reports can lead to sudden changes in vocabulary, leading to situations that might not have been observed in the training data. Another aspect that we deal with differently in our work, aiming to make it more realistically applicable to a real world scenario, is that we apply the method to each rumour separately. Ultimately, our goal is to classify new, emerging rumours, which can differ from what the classifier has observed in the training set. Previous work ignored this separation of rumours, by pooling together tweets from all the rumours in their collections, both in training and test data. By contrast, we consider the rumour stance classification problem as a form of transfer learning and seek to classify unseen rumours by training the classifier from previously labelled rumours. We argue that this makes a more realistic classification scenario towards implementing a real-world rumour-tracking system. Following a short gap, there has been a burst of renewed interest in this task since 2015. For example, Liu et al. BIBREF9 introduce rule-based methods for stance classification, which were shown to outperform the approach by BIBREF10 . Similarly, BIBREF7 use regular expressions instead of an automated method for rumour stance classification. Hamidian and Diab BIBREF22 use Tweet Latent Vectors to assess the ability of performing 2-way classification of the stance of tweets as either supporting or denying a rumour. They study the extent to which a model trained on historical tweets can be used for classifying new tweets on the same rumour. This, however, limits the method's applicability to long-running rumours only. The work closest to ours in terms of aims is Zeng et al. 
BIBREF23 , who explored the use of three different classifiers for automated rumour stance classification on unseen rumours. In their case, classifiers were set up on a 2-way classification problem dealing with tweets that support or deny rumours. In the present work, we extend this research by performing 3-way classification that also deals with tweets that question the rumours. Moreover, we adopt the three classifiers used in their work, namely Random Forest, Naive Bayes and Logistic Regression, as baselines in our work. Lastly, researchers BIBREF7 , BIBREF24 have focused on the related task of detecting rumours in social media. While a rumour detection system could well be the step that is applied prior to our stance classification system, here we assume that rumours have already been identified to focus on the subsequent step of determining stances. Individual tweets may discuss the same rumour in different ways, where each user expresses their own stance towards the rumour. Within this scenario, we define the tweet level rumour stance classification task as that in which a classifier has to determine the stance of each tweet towards the rumour. More specifically, given the tweet $t_i$ as input, the classifier has to determine which of the set $Y = \lbrace supporting, denying, questioning\rbrace $ applies to the tweet, $y(t_i) \in Y$ . Here we define the task as a supervised classification problem, where the classifier is trained from a labelled set of tweets and is applied to tweets on a new, unseen set of rumours. Let $R$ be a set of rumours, each of which consists of tweets discussing it, $\forall _{r \in R}$ $T_r$ $= \lbrace t^r_1, \cdots , t^r_{r_n}\rbrace $ . $T = \cup _{r \in R} T_r$ is the complete set of tweets from all rumours. Each tweet is classified as supporting, denying or questioning with respect to its rumour: $y(t_i) \in \lbrace s, d, q\rbrace $ . We formulate the problem in two different settings. First, we consider the Leave One Out (LOO) setting, which means that for each rumour $r \in R$ , we construct the test set equal to $T_r$ and the training set equal to $T \setminus T_r$ . This is the most challenging scenario, where the test set contains an entirely unseen rumour. The second setting is Leave Part Out (LPO). In this formulation, a very small number of initial tweets from the target rumour is added to the training set $\lbrace t^r_1, \cdots , t^r_{{{r_k}}}\rbrace $ . This scenario becomes applicable typically soon after a rumour breaks out and journalists have started monitoring and analysing the related tweet stream. The experimental section investigates how the number of initial training tweets influences classification performance on a fixed test set, namely: $\lbrace t^r_{{{r_l}}{}}, \cdots , t^r_{r_n}\rbrace $ , $l>k$ . The tweet-level stance classification problem here assumes that tweets from the training set are already labelled with the rumour discussed and the attitude expressed towards that. This information can be acquired either via manual annotation as part of expert analysis, as is the case with our dataset, or automatically, e.g. using pattern-based rumour detection BIBREF7 . Our method is then used to classify the stance expressed in each new tweet from the test set. We evaluate our work on two different datasets, which we describe below. We use two recent datasets from previous work for our study, both of which adapt to our needs. 
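The two evaluation settings just defined can be made concrete with a small data-handling sketch before we describe the datasets in detail. This is only an illustration under assumed conventions — tweets stored as (rumour_id, features, label) tuples ordered by posting time, and a helper name of our own choosing — not the authors' actual code.

```python
from collections import defaultdict

def make_splits(tweets, target_rumour, k=0):
    """Build train/test sets for the LOO (k=0) and LPO (k>0) settings.

    `tweets` is assumed to be a list of (rumour_id, features, label)
    tuples, ordered by posting time within each rumour.
    """
    by_rumour = defaultdict(list)
    for tweet in tweets:
        by_rumour[tweet[0]].append(tweet)

    # Reference rumours: every rumour except the target one.
    train = [t for r, ts in by_rumour.items() if r != target_rumour for t in ts]

    target = by_rumour[target_rumour]
    # LPO: the first k tweets of the target rumour join the training set;
    # LOO is the special case k = 0, where the target rumour is entirely unseen.
    train += target[:k]
    test = target[k:]
    return train, test
```

Setting k = 0 recovers the LOO split, while k = 10, ..., 50 gives the LPO splits evaluated later.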
We do not use the dataset by BIBREF10 given that it uses a different annotation scheme limited to two categories of stances. The reason why we use the two datasets separately instead of combining them is that they have very different characteristics. Our experiments, instead, enable us to assess the ability of our classifier to deal with these different characteristics. The first dataset consists of several rumours circulating on Twitter during the England riots in 2011 (see Table 2 ). The dataset was collected by tracking a long set of keywords associated with the event. The dataset was analysed and annotated manually as supporting, questioning, or denying a rumour, by a team of social scientists studying the role of social media during the riots BIBREF4 . As can be seen from the dataset overview in Table 2 , different rumours exhibit varying proportions of supporting, denying and questioning tweets, which was also observed in other studies of rumours BIBREF3 , BIBREF10 . These variations in the number of instances for each class across rumours posits the challenge of properly modelling a rumour stance classifier. The classifier needs to be able to deal with a test set where the distribution of classes can be very different to that observed in the training set. Thus, we perform 7-fold cross-validation in the experiments, each fold having six rumours in the training set, and the remaining rumour in the test set. The seven rumours were as follows BIBREF4 : Rioters had attacked London Zoo and released the animals. Rioters were gathering to attack Birmingham's Children's Hospital. Rioters had set the London Eye on fire. Police had beaten a sixteen year old girl. The Army was being mobilised in London to deal with the rioters. Rioters had broken into a McDonalds and set about cooking their own food. A store belonging to the Miss Selfridge retail group had been set on fire in Manchester. Additionally, we use another rumour dataset associated with five different events, which was collected as part of the PHEME FP7 research project and described in detail in BIBREF5 , BIBREF25 . Note that the authors released datasets for nine events, but here we remove non-English datasets, as well as small English datasets each of which includes only 1 rumour, as opposed to the 40+ rumours in each of the datasets that we are using. We summarise the details of the five events we use from this dataset in Table 3 . In contrast to the England riots dataset, the PHEME datasets were collected by tracking conversations initiated by rumourous tweets. This was done in two steps. First, we collected tweets that contained a set of keywords associated with a story unfolding in the news. We will be referring to the latter as an event. Next, we sampled the most retweeted tweets, on the basis that rumours by definition should be “a circulation story which produces sufficient skepticism or anxiety”. This allows us to filter potentially rumourous tweets and collect conversations initiated by those. Conversations were tracked by collecting replies to tweets and, therefore, unlike the England riots, this dataset also comprises replying tweets by definition. This is an important characteristic of the dataset, as one would expect that replies are generally shorter and potentially less descriptive than the source tweets that initiated the conversation. We take this difference into consideration when performing the analysis of our results. 
This dataset includes tweets associated with the following five events: Ferguson unrest: Citizens of Ferguson in Michigan, USA, protested after the fatal shooting of an 18-year-old African American, Michael Brown, by a white police officer on August 9, 2014. Ottawa shooting: Shootings occurred on Ottawa's Parliament Hill in Canada, resulting in the death of a Canadian soldier on October 22, 2014. Sydney siege: A gunman held as hostages ten customers and eight employees of a Lindt chocolate café located at Martin Place in Sydney, Australia, on December 15, 2014. Charlie Hebdo shooting: Two brothers forced their way into the offices of the French satirical weekly newspaper Charlie Hebdo in Paris, killing 11 people and wounding 11 more, on January 7, 2015. Germanwings plane crash: A passenger plane from Barcelona to Düsseldorf crashed in the French Alps on March 24, 2015, killing all passengers and crew on board. The plane was ultimately found to have been deliberately crashed by the co-pilot of the plane. In this case, we perform 5-fold cross-validation, having four events in the training set and the remaining event in the test set for each fold. This section details the features and evaluation measures used in our experiments on tweet level stance classification. We begin by describing the classifiers we use for our experimentation, including Gaussian Processes, as well as a set of competitive baseline classifiers that we use for comparison. Gaussian Processes are a Bayesian non-parametric machine learning framework that has been shown to work well for a range of NLP problems, often beating other state-of-the-art methods BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 . A Gaussian Process defines a prior over functions, which combined with the likelihood of data points gives rise to a posterior over functions explaining the data. The key concept is a kernel function, which specifies how outputs correlate as a function of the input. Thus, from a practitioner's point of view, a key step is to choose an appropriate kernel function capturing the similarities between inputs. We use Gaussian Processes as this probabilistic kernelised framework avoids the need for expensive cross-validation for hyperparameter selection. Instead, the marginal likelihood of the data can be used for hyperparameter selection. The central concept of Gaussian Process Classification (GPC; BIBREF30 ) is a latent function $f$ over inputs $\mathbf {x}$ : $f(\mathbf {x}) \sim \ \mathcal {GP}(m(\mathbf {x}), k(\mathbf {x}, \mathbf {x}^{\prime }))$ , where $m$ is the mean function, assumed to be 0 and $k$ is the kernel function, specifying the degree to which the outputs covary as a function of the inputs. We use a linear kernel, $k(\mathbf {x}, \mathbf {x}^{\prime }) = \sigma ^2 \mathbf {x}^{\top }\mathbf {x}^{\prime }$ . The latent function is then mapped by the probit function $\Phi (f)$ into the range $[0, 1]$ , such that the resulting value can be interpreted as $p(y=1 | \mathbf {x})$ . The GPC posterior is calculated as $ p(f^* | X, \mathbf {y}, \mathbf {x_*}) = \int p(f^* | X, \mathbf {x_*}, \mathbf {f}) \frac{p(\mathbf {y} | \mathbf {f})p(\mathbf {f})}{p(\mathbf {y}|X)} d\mathbf {f} \, \!, $ where $p(\mathbf {y}|\mathbf {f}) = \displaystyle \prod _{j=1}^{n} \Phi (f_j)^{y_j} (1 - \Phi (f_j))^{1-y_j}$ is the Bernoulli likelihood of class $y$ . After calculating the above posterior from the training data, this is used in prediction, i.e., $ p(y_* \!=\! 1|X, \mathbf {y}, \mathbf {x_*}) \!=\!\! 
\int \Phi \left(f_*\right)p\left(f_*|X, \mathbf {y}, \mathbf {x_*}\right)df_* \, . $ The above integrals are intractable and approximation techniques are required to solve them. There exist various methods to deal with calculating the posterior; here we use Expectation Propagation (EP; BIBREF31 ). In EP, the posterior is approximated by a fully factorised distribution, where each component is assumed to be an unnormalised Gaussian. In order to conduct multi-class classification, we perform a one-vs-all classification for each label and then assign the one with the highest likelihood, amongst the three (supporting, denying, questioning). We choose this method due to interpretability of results, similar to recent work on occupational class classification BIBREF29 . In the Leave-Part-Out (LPO) setting initial labelled tweets from the target rumour are observed as well, as opposed to the Leave-One-Out (LOO) setting. In the case of LPO, we propose to weigh the importance of tweets from the reference rumours depending on how similar their characteristics are to the tweets from the target rumour available for training. To handle this with GPC, we use a multiple output model based on the Intrinsic Coregionalisation Model (ICM; BIBREF32 ). This model has already been applied successfully to NLP regression problems BIBREF28 and it can also be applied to classification ones. ICM parametrizes the kernel by a matrix which represents the extent of covariance between pairs of tasks. The complete kernel takes form of $ k((\mathbf {x}, d), (\mathbf {x}^{\prime }, d^{\prime })) = k_{data}(\mathbf {x}, \mathbf {x}^{\prime }) B_{d, d^{\prime }} \, , $ where B is a square coregionalisation matrix, $d$ and $d^{\prime }$ denote the tasks of the two inputs and $k_{data}$ is a kernel for comparing inputs $\mathbf {x}$ and $\mathbf {x}^{\prime }$ (here, linear). We parametrize the coregionalisation matrix $B=\kappa I+vv^T$ , where $v$ specifies the correlation between tasks and the vector $\mathbf {\kappa }$ controls the extent of task independence. Note that in case of LOO setting this model does not provide useful information, since no target rumour data is available to estimate similarity to other rumours. We tune hyperparameters $\mathbf {v}$ , $\kappa $ and $\sigma ^2$ by maximizing evidence of the model $p(\mathbf {y}|X)$ , thus having no need for a validation set. We consider GPs in three different settings, varying in what data the model is trained on and what kernel it uses. The first setting (denoted GP) considers only target rumour data for training. The second (GPPooled) additionally considers tweets from reference rumours (i.e. other than the target rumour). The third setting is GPICM, where an ICM kernel is used to weight influence from tweets from reference rumours. To assess and compare the efficiency of Gaussian Processes for rumour stance classification, we also experimented with five more baseline classifiers, all of which were implemented using the scikit Python package BIBREF33 : (1) majority classifier, which is a naive classifier that labels all the instances in the test set with the most common class in the training set, (2) logistic regression (MaxEnt), (3) support vector machines (SVM), (4) naive bayes (NB) and (5) random forest (RF). The selection of these baselines is in line with the classifiers used in recent research on stance classification BIBREF23 , who found that random forests, followed by logistic regression, performed best. 
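As a rough illustration of how the GP-ICM setting described above could be assembled with the GPy toolkit (which the authors report using), consider the sketch below. The augmented input column carrying the rumour index, the one-vs-all loop and all variable names are assumptions made for this example rather than the authors' implementation; as in the paper, hyperparameters are tuned by maximising the (approximate) marginal likelihood instead of by cross-validation.

```python
import numpy as np
import GPy

def train_gp_icm(X, task_ids, y, n_tasks, n_classes=3):
    """One-vs-all GP classification with an ICM (coregionalised) kernel.

    X        : (n, d) feature matrix (e.g. Brown-cluster counts)
    task_ids : (n,)   integer id of the rumour each tweet belongs to
    y        : (n,)   labels in {0: supporting, 1: denying, 2: questioning}
    """
    d = X.shape[1]
    # The rumour id is appended as an extra input column: the linear kernel
    # acts on the data columns, the coregionalisation kernel on the id column.
    X_aug = np.hstack([X, task_ids.reshape(-1, 1)])
    models = []
    for c in range(n_classes):
        y_bin = (y == c).astype(float).reshape(-1, 1)
        k_data = GPy.kern.Linear(input_dim=d, active_dims=list(range(d)))
        k_task = GPy.kern.Coregionalize(input_dim=1, output_dim=n_tasks,
                                        rank=1, active_dims=[d])
        model = GPy.models.GPClassification(X_aug, y_bin, kernel=k_data * k_task)
        model.optimize()  # maximises the approximate marginal likelihood
        models.append(model)
    return models

def predict_stance(models, X, task_ids):
    X_aug = np.hstack([X, task_ids.reshape(-1, 1)])
    # Probability of the positive class from each one-vs-all model,
    # then pick the most likely of the three stances.
    probs = np.hstack([m.predict(X_aug)[0] for m in models])
    return probs.argmax(axis=1)
```

The baseline classifiers (majority class, MaxEnt, SVM, NB and RF) can be trained on the same feature matrix with scikit-learn's standard estimators.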
We conducted a series of preprocessing steps in order to address data sparsity. All words were converted to lowercase; stopwords have been removed; all emoticons were replaced by words; and stemming was performed. In addition, multiple occurrences of a character were replaced with a double occurrence BIBREF34 , to correct for misspellings and lengthenings, e.g., looool. All punctuation was also removed, except for ., ! and ?, which we hypothesize to be important for expressing emotion. Lastly, usernames were removed as they tend to be rumour-specific, i.e., very few users comment on more than one rumour. After preprocessing the text data, we use either the resulting bag of words (BOW) feature representation and replace all words with their Brown cluster ids (Brown). Brown clustering is a hard hierarchical clustering method BIBREF35 . It clusters words based on maximizing the probability of the words under the bigram language model, where words are generated based on their clusters. In previous work it has been shown that Brown clusters yield better performance than directly using the BOW features BIBREF11 . In our experiments, the clusters used were obtained using 1000 clusters acquired from a large scale Twitter corpus BIBREF36 , from which we can learn Brown clusters aimed at representing a generalisable Twitter vocabulary. Retweets are removed from the training set to prevent bias BIBREF37 . More details on the Brown clusters that we used as well as the words that are part of each cluster are available online. During the experimentation process, we also tested additional features, including the use of the bag of words instead of the Brown clusters, as well as using word embeddings trained from the training sets BIBREF38 . However, results turned out to be substantially poorer than those we obtained with the Brown clusters. We conjecture that this was due to the little data available to train the word embeddings; further exploring use of word embeddings trained from larger training datasets is left future work. In order to focus on our main objective of proving the effectiveness of a multi-task learning approach, as well as for clarity purposes, since the number of approaches to show in the figures increases if we also consider the BOW features, we only show results for the classifiers relying on Brown clusters as features. Accuracy is often deemed a suitable evaluation measure to assess the performance of a classifier on a multi-class classification task. However, the classes are clearly imbalanced in our case, with varying tendencies towards one of the classes in each of the rumours. We argue that in these scenarios the sole evaluation based on accuracy is insufficient, and further measurement is needed to account for category imbalance. This is especially necessary in our case, as a classifier that always predicts the majority class in an imbalanced dataset will achieve high accuracy, even if the classifier is useless in practice. To tackle this, we use both micro-averaged and macro-averaged F1 scores. Note that the micro-averaged F1 score is equivalent to the well-known accuracy measure, while the macro-averaged F1 score complements it by measuring performance assigning the same weight to each category. Both of the measures rely on precision (Equation 50 ) and recall (Equation 51 ) to compute the final F1 score. $$\text{Precision}_k = \frac{tp_k}{tp_k+fp_k}$$ (Eq. 50) $$\text{Recall}_k = \frac{tp_k}{tp_k+fn_k}$$ (Eq. 
51) where $tp_k$ (true positives) refers to the number of instances correctly classified in class $k$ , $fp_k$ is the number of instances incorrectly classified in class $k$ , and $fn_k$ is the number of instances that actually belong to class $k$ but were not classified as such. The above equations can be used to compute precision and recall for a specific class. Precision and recall for all the classes in a problem with $c$ classes are computed differently if they are microaveraged (see Equations 52 and 53) or macroaveraged (see Equations 54 and 55). $$\text{Precision}_{\text{micro}} = \frac{\sum _{k = 1}^{c} tp_k}{\sum _{k = 1}^{c} tp_k + \sum _{k = 1}^{c} fp_k}$$ (Eq. 52) $$\text{Recall}_{\text{micro}} = \frac{\sum _{k = 1}^{c} tp_k}{\sum _{k = 1}^{c} tp_k + \sum _{k = 1}^{c} fn_k}$$ (Eq. 53) $$\text{Precision}_{\text{macro}} = \frac{1}{c} \sum _{k = 1}^{c} \text{Precision}_k$$ (Eq. 54) $$\text{Recall}_{\text{macro}} = \frac{1}{c} \sum _{k = 1}^{c} \text{Recall}_k$$ (Eq. 55) After computing microaveraged and macroaveraged precision and recall, the final F1 score is computed in the same way, i.e., calculating the harmonic mean of the precision and recall in question (see Equation 56). $$\text{F1} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$ (Eq. 56) After computing the F1 score for each fold, we compute the micro-averaged score across folds. First, we look at the results on each dataset separately. Then we complement the analysis by aggregating the results from both datasets, which leads to a further understanding of the performance of our classifiers on rumour stance classification. We show the results for the LOO and LPO settings in the same figure, distinguished by the training size displayed on the X axis. In all cases, labelled tweets from the remainder of the rumours (rumours other than the test/target rumour) are used for training, and hence the training size shown on the X axis is in addition to those. Note that the training size refers to the number of labelled instances that the classifier is making use of from the target rumour. Thus, a training size of 0 indicates the LOO setting, while training sizes from 10 to 50 pertain to the LPO setting. Figure 1 and Table 4 show how micro-averaged and macro-averaged F1 scores for the England riots dataset change as the number of tweets from the target rumour used for training increases. We observe that, as initially expected, the performance of most of the methods improves as the number of labelled training instances from the target rumour increases. This increase is especially remarkable with the GP-ICM method, whose performance improves steadily after having as few as 10 training instances and keeps improving as the number of training instances approaches 50. Two aspects stand out from analysing GP-ICM's performance. First, it performs poorly in terms of micro-averaged F1 when no labelled instances from the target rumour are used; however, it makes very effective use of the labelled training instances, overtaking the rest of the approaches and achieving the best results. This proves the ability of GP-ICM to make the most of the labelled instances from the target rumour, which the rest of the approaches struggle with. Second, irrespective of the number of labelled instances, GP-ICM is robust when evaluated in terms of macro-averaged F1. This means that GP-ICM is managing to determine the distribution of classes effectively, assigning labels to instances in the test set in a way that is better distributed than with the rest of the classifiers.
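For reference, the micro- and macro-averaged F1 scores used throughout this comparison (Equations 50–56) can be computed per fold with a few lines of scikit-learn; this is a generic sketch rather than the authors' evaluation script.

```python
from sklearn.metrics import f1_score

def stance_scores(y_true, y_pred):
    # Micro-averaged F1 equals accuracy in this single-label, 3-class setting;
    # macro-averaged F1 weights supporting, denying and questioning equally.
    return {
        "micro_f1": f1_score(y_true, y_pred, average="micro"),
        "macro_f1": f1_score(y_true, y_pred, average="macro"),
    }
```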
Despite the saliency of GP-ICM, we notice that two other baseline approaches, namely MaxEnt and RF, achieve competitive results that are above the rest of the baselines, but still perform worse than GP-ICM. The results from the PHEME dataset are shown in Figure 2 and Table 5 . Overall, we can observe that results are lower in this case than they were for the riots dataset. The reason for this can be attributed to the following two observations: on the one hand, each fold pertaining to a different event in the PHEME dataset means that the classifier encounters a new event in the classification, where it will likely find new vocabulary, which may be more difficult to classify; on the other hand, the PHEME dataset is more prominently composed of tweets that are replying to others, which are likely shorter and less descriptive on their own and hence more difficult to get meaningful features from. Despite the additional difficulty in this dataset, we are interested in exploring if the same trend holds across classifiers, from which we can generalise the analysis to different types of classifiers. One striking difference with respect to the results from the riots dataset is that, in this case, the classifiers, including GP-ICM, are not gaining as much from the inclusion of labelled instances from the target rumour. This is likely due to the heterogeneity of each of the events in the PHEME dataset. Here a diverse set of rumourous newsworthy pieces of information are discussed pertaining to the selected events as they unfold. By contrast, each rumour in the riots dataset is more homogeneous, as each rumour focuses on a specific story. Interestingly, when we compare the performance of different classifiers, we observe that GP-ICM again outperforms the rest of the approaches, both in terms of micro-averaged and macro-averaged F1 scores. While the micro-averaged F1 score does not increase as the number of training instances increases, we can see a slight improvement in terms of macro-averaged F1. This improvement suggests that GP-ICM does still take advantage of the labelled training instances to boost performance, in this case by better distributing the predicted labels. Again, as we observed in the case of the riots dataset, two baselines stand out, MaxEnt and RF. They are very close to the performance of GP-ICM for the PHEME dataset, event outperforming it in a few occasions. In the following subsection we take a closer look at the differences among the three classifiers. We delve into the results of the best-performing classifiers, namely GP-ICM, MaxEnt and RF, looking at their per-class performance. This will help us understand when they perform well and where it is that GP-ICM stands out achieving the best results. Tables 6 and 7 show per-class F1 measures for the aforementioned three best-performing classifiers for the England riots dataset and the PHEME dataset, respectively. They also show statistics of the mis-classifications that the classifiers made, in the form of percentage of deviations towards the other classes. Looking at the per-class performance analysis, we observe that the performance of GP-ICM varies when we look into Precision and Recall. Still, in all the dataset-class pairs, GP-ICM performs best in terms of either Precision or Recall, even though never in both. Moreover, it is generally the best in terms of F1, achieving the best Precision and Recall. The only exception is with MaxEnt classifying questioning tweets more accurately in terms of F1 for the England riots. 
When we look at the deviations, we see that all the classifiers suffer from the datasets being imbalanced towards supporting tweets. This results in all classifiers classifying numerous instances as supporting, while they are actually denying or questioning. This is a known problem in rumour diffusion, as previous studies have found that people barely deny or question rumours but generally tend to support them irrespective of their actual veracity value BIBREF5 . While we have found that GP-ICM can tackle the imbalance issue quite effectively and better than other classifiers, this caveat posits the need for further research in dealing with the striking majority of supporting tweets in the context of rumours in social media. Experimentation with two different approaches based on Gaussian Processes (GP and GP-ICM) and comparison with respect to a set of competitive baselines over two rumour datasets enables us to gain generalisable insight on rumour stance classification on Twitter. This is reinforced by the fact that the two datasets are very different from each other. The first dataset, collected during the England riots in 2011, is a single event that we have split into folds, each fold belonging to a separate rumour within the event; hence, all the rumours are part of the same event. The second dataset, collected within the PHEME project, includes tweets for a set of five newsworthy events, where each event has been assigned a separate fold; therefore, the classifier needs to learn from four events and test on a new, unknown event, which has proven more challenging. Results are generally consistent across datasets, which enables us to generalise conclusions well. We observe that while GP itself does not suffice to achieve competitive results, GP-ICM does instead help boost the performance of the classifier substantially to even outperform the rest of the baselines in the majority of the cases. GP-ICM has proven to consistently perform well in both datasets, despite their very different characteristics, being competitive not only in terms of micro-averaged F1, but also in terms of macro-averaged F1. GP-ICM manages to balance the varying class distributions effectively, showing that its performance is above the rest of the baselines in accurately determining the distribution of classes. This is very important in this task of rumour stance classification, owing to the fact that even if a classifier that is 100% accurate is unlikely, a classifier that accurately guesses the overall distribution of classes can be of great help. If a classifier makes a good estimation of the number of denials in an aggregated set of tweets, it can be useful to flag those potentially false rumours with high level of confidence. Another factor that stands out from GP-ICM is its capacity to perform well when a few labelled instances of the target rumour are leveraged in the training phase. GP-ICM effectively exploits the knowledge garnered from the few instances from the target rumour, outperforming the rest of the baselines even when its performance was modest when no labelled instances were used from the target rumour. In light of these results, we deem GP-ICM the most competitive approach to use when one can afford to get a few instances labelled from the target rumour. The labels from the target rumour can be obtained in practice in different ways: (1) having someone in-house (e.g. 
journalists monitoring breaking news stories) label a few instances prior to running the classifier, (2) making use of resources for human computation such as crowdsourcing platforms to outsource the labelling work, or (3) developing techniques that will attempt to classify the first few instances, incorporating in the training set those for which a classification with high level of confidence has been produced. The latter presents an ambitious avenue for future work that could help alleviate the labelling task. On the other hand, in the absence of labelled data from the target rumour, which is the case of the LOO setting, the effectiveness of the GP-ICM classifier is not as prominent. For this scenario, other classifiers such as MaxEnt and Random Forests have proven more competitive and one could see them as better options. However, we do believe that the remarkable difference that the reliance on the LPO setting produces is worth exploiting where possible. Social media is becoming an increasingly important tool for maintaining social resilience: individuals use it to express opinions and follow events as they unfold; news media organisations use it as a source to inform their coverage of these events; and government agencies, such as the emergency services, use it to gather intelligence to help in decision-making and in advising the public about how they should respond BIBREF1 . While previous research has suggested that mechanisms for exposing false rumours are implicit in the ways in which people use social media BIBREF4 , it is nevertheless critically important to explore if there are ways in which computational tools can help to accelerate these mechanisms so that misinformation and disinformation can be targeted more rapidly, and the benefits of social media to society maintained BIBREF8 . As a first step to achieving this aim, this paper has investigated the problem of classifying the different types of stance expressed by individuals in tweets about rumours. First, we considered a setting where no training data from the target rumours is available (LOO). Without access to annotated examples of the target rumour the learning problem becomes very difficult. We showed that in the supervised domain adaptation setting (LPO), even annotating a small number of tweets helps to achieve better results. Moreover, we demonstrated the benefits of a multi-task learning approach, as well as that Brown cluster features are more useful for the task than simple bag of words. Findings from previous work, such as BIBREF39 , BIBREF4 , have suggested that the aggregate stance of individual users is correlated with actual rumour veracity. Hence, the next step in our own work will be to make use of the classifier for the stance expressed in the reactions of individual Twitter users in order to predict the actual veracity of the rumour in question. Another interesting direction for future work would be the addition of non-textual features to the classifier. For example, the rumour diffusion patterns BIBREF40 may be a useful cue for stance classification. This work is partially supported by the European Union under grant agreement No. 611233 Pheme. The work was implemented using the GPy toolkit BIBREF41 . This research utilised Queen Mary's MidPlus computational facilities, supported by QMUL Research-IT and funded by EPSRC grant EP/K000128/1.
avoids the need for expensive cross-validation for hyperparameter selection
b3ac67232c8c7d5a759ae025aee85e9c838584eb
b3ac67232c8c7d5a759ae025aee85e9c838584eb_0
Q: Do the authors do manual evaluation? Text: The success of deep learning techniques has renewed interest in the development of dialogue systems. However, current systems struggle to have consistent long-term conversations with users and fail to build rapport. Topic spotting, the task of automatically inferring the topic of a conversation, has been shown to be helpful in making a dialog system more engaging and efficient. We propose a hierarchical model with self-attention for topic spotting. Experiments on the Switchboard corpus show the superior performance of our model over previously proposed techniques for topic spotting and deep models for text classification. Additionally, in contrast to offline processing of dialog, we also analyze the performance of our model in a more realistic setting, i.e., an online setting where the topic is identified in real time as the dialog progresses. Results show that our model is able to generalize even with limited information in the online setting. Introduction Recently, a number of commercial conversation systems have been introduced, e.g., Alexa, Google Assistant, Siri, and Cortana. Most of the available systems perform well on goal-oriented conversations which span only a few utterances in a dialogue. However, with longer conversations (in open domains), existing systems struggle to remain consistent and tend to deviate from the current topic during the conversation. This hinders the establishment of long-term social relationships with users BIBREF0 . In order to have coherent and engaging conversations with humans, besides other relevant natural language understanding (NLU) techniques BIBREF1 , a system, while responding, should take into account the topic of the current conversation, i.e., perform topic spotting. Topic spotting has been shown to be important in commercial dialog systems BIBREF2 , BIBREF3 that deal directly with customers. Topical information is useful for speech recognition systems BIBREF4 as well as for audio document retrieval systems BIBREF5 , BIBREF6 . The importance of topic spotting can be gauged from the work of the Alexa team BIBREF7 , who proposed topic-based metrics for evaluating the quality of conversational bots. The authors empirically show that topic-based metrics correlate with human judgments. Given the importance of topical information in a dialog system, this paper proposes a self-attention-based hierarchical model for predicting topics in a dialog. We evaluate our model on the Switchboard (SWBD) corpus BIBREF8 and show that our model outperforms previously applied techniques for topic spotting. We address the evaluative limitations of the current SWBD corpus by creating a new version of the corpus, referred to as SWBD2. We hope that the SWBD2 corpus will provide a new standard for evaluating topic spotting models. We also experiment with an online setting where we examine the performance of our topic classifier as the length of the dialog is varied, and show that our model can be used in a real-time dialog system as well. Related Work Topic spotting is the task of detecting the topic of a dialog BIBREF5 . It has been an active area of research over the past few decades, both in the NLP community and in the speech community. In this section we briefly outline some of the main works in this area; for a detailed survey of prior research, the reader is referred to BIBREF6 ( BIBREF6 ).
Most of the methods proposed for topic spotting use features extracted from transcribed text as input to a classifier (typically Naïve Bayes or SVM ). Extracted features include: Bag of Words (BoW), TF-IDF BIBREF9 , BIBREF10 , n-grams, and word co-occurrences BIBREF6 , BIBREF11 . Some approaches (in addition to word co-occurrences features) incorporate background world knowledge using Wikipedia BIBREF12 . In our work, we do not explicitly extract the features but learn these during training. Moreover, unlike previous approaches, we explicitly model the dependencies between utterances via self attention mechanism and hierarchical structure. Topic spotting has been explored in depth in the speech processing community (see for example, BIBREF13 ( BIBREF13 ); BIBREF14 ( BIBREF14 ); BIBREF15 ( BIBREF15 ); BIBREF16 ( BIBREF16 )). Researchers in this community have attempted to predict the topic directly from the audio signals using phoneme based features. However, the performance of word based models supersedes those of audio models BIBREF5 . Recently, there has been lot of work in deep learning community for text classification BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . These deep learning models use either RNN-LSTM based neural networks BIBREF22 or CNN based neural networks BIBREF23 for learning representation of words/sentences. We follow similar approach for topic spotting. Our model is related to the Hierarchical Attention Network (HN-ATT) model proposed by BIBREF24 ( BIBREF24 ) for document classification. HN-ATT models the document hierarchically by composing words (with weights determined by first level of attention mechanism) to get sentence representations and then combines the sentence representations with help of second level attention to get document representation which is then used for classification. The aim of this paper is not to improve text classification but to improve topic spotting. Topic spotting and text classification differ in various aspects. We are among the first to show the use of hierarchical self attention (HN-SA) model for topic spotting. It is natural to consider applying text classification techniques for topic spotting. However, as we empirically show in this paper, text classification techniques do not perform well in this setting. Moreover, for the dialog corpus simple BoW approaches perform better than more recently proposed HN-ATT model BIBREF24 . Hierarchical Model with Self Attention We propose a hierarchical model with self attention (HN-SA) for topic spotting. We are given a topic label for each dialog and we want to learn a model mapping from space of dialogues to the space of topic labels. We learn a prediction model by minimizing Negative Log Likelihood ( $\mathcal {NLL}$ ) of the data. Model Architecture We propose a hierarchical architecture as shown in Figure 1 . An utterance encoder takes each utterance in the dialog and outputs the corresponding utterance representation. A dialog encoder processes the utterance representations to give a compact vector representation for the dialog which is used to predict the topic of the dialog. Utterance Encoder: Each utterance in the dialog is processed sequentially using single layer Bi-directional Long Short Term Memory (BiLSTM) BIBREF25 network and self-attention mechanism BIBREF26 to get the utterance representation. 
In particular, given an utterance with one-hot encoding for the tokens, $u_{k} = \lbrace \mathbf {w_{k,1}, w_{k,2},....,w_{k,L}}\rbrace $ , each token is mapped to a vector $\mathbf {v_{k,i}} = \mathbf {E} \mathbf {w_{k,i}} \ \ ;i=1,2,...L$ using pre-trained embeddings (matrix $\mathbf {E}$ ). Utterance representation ( $\mathbf {s_{k}} = \mathbf {a}^{T} \mathbf {H^{(1)}}$ ) is the weighted sum of the forward and backward direction concatenated hidden states at each step of the BiLSTM ( $\mathbf {H^{(1)}} = [\mathbf {h_{1}^{(1)}},....,\mathbf {h_{L}^{(1)}}]^{T}$ where $\mathbf {h_{i}^{(1)}} = [\overrightarrow{\mathbf {{h_{i}}}}^{(1)}:\overleftarrow{\mathbf {h_{i}}}^{(1)}] = \mathbf {BiLSTM}(\mathbf {v_{k,i}})$ ). The weights of the combination ( $\mathbf {a} = \textrm {softmax}(\mathbf {h^{(2)}_{a}})$ ) are determined using self-attention mechanism proposed by BIBREF26 ( BIBREF26 ) by measuring the similarity between the concatenated hidden states ( $\mathbf {h^{(2)}_{a}} = \mathbf {W_{a}^{(2)}} \mathbf {h^{(1)}_{a}} + \mathbf {b_{a}^{(2)}}$ and $\mathbf {h^{(1)}_{a}} = \textrm {tanh} ( \mathbf {W_{a}^{(1)}} \mathbf {H^{(1)}} + \mathbf {b_{a}^{(1)}})$ ) at each step in the utterance sequence. Self-attention computes the similarity of a token in the context of an utterance and thus, boosts the contribution of some keywords to the classifier. It also mitigates the need for a second layer of attention at a dialog level reducing the number of parameters, reducing the confusion of the classifier by not trying to reweigh individual utterances and reducing the dependence on having all utterances (full future context) for an accurate prediction. A simple LSTM based model (HN) and HN-ATT perform worse than the model using self attention (§ "Experiments and Results" ), indicating the crucial role played by self-attention mechanism. Dialog Encoder: Utterance embeddings (representations) are sequentially encoded by a second single layer BiLSTM to get the dialog representation ( $\mathbf {h_{k}^{(2)}} = [\overrightarrow{\mathbf {{h_{k}}}}^{(2)}:\overleftarrow{\mathbf {h_{k}}}^{(2)}] = \mathbf {BiLSTM}(\mathbf {s_{k}}) \ \ ;k=1,2,...N$ ). Bidirectional concatenated hidden state corresponding to the last utterance (i.e. last step of BiLSTM) is used for making a prediction via a linear layer followed by softmax activation ( $p(\mathsf {T} | \mathsf {D}) = \textrm {softmax}(\mathbf {h_{D}})$ where $\mathbf {h_{D}} = \mathbf {W_{f}} \mathbf {h_{N}^{(2)}}$ ). Experimental Setup As in previous work (§ "Related Work" ), we use Switchboard (SWBD) BIBREF8 corpus for training our model. SWBD is a corpus of human-human conversations, created by recording (and later transcribing) telephonic conversations between two participants who were primed with a topic. Table 1 gives the corpus statistics. Topics in SWBD range over a variety of domains, for example, politics, health, sports, entertainment, hobbies, etc., making the task of topic spotting challenging. Dialogues in the test set of the original SWBD cover a limited number of topics (12 vs 66). The test set is not ideal for evaluating topic spotting system. We address this shortcoming by creating a new split and we refer to this version of the corpus as SWBD2. The new split provides opportunity for more rigorous evaluation of a topic spotting system. 
SWBD2 was created by removing infrequent topics (< 10 dialogues) from the corpus and then randomly moving dialogues between the train/development set and the test set, in order to have instances of each topic in the test set. The majority class baseline in SWBD2 is around 5%. In the transcribed SWBD corpus, some punctuation symbols such as # and ? have special meanings, and non-verbal sounds have been mapped to special symbols, e.g., <Laughter>. To preserve the meanings of these special symbols we performed minimal preprocessing. Dialog corpora are different from text classification corpora (e.g., product reviews). If we roughly equate a dialog to a document and an utterance to a sentence, dialogs are very long documents with short sentences. Moreover, the vocabulary distribution in a dialog corpus is fundamentally different, e.g., the presence of back-channel words like 'uhm' and 'ah'. Model Hyper-parameters: We use GloVe embeddings BIBREF27 with a dimensionality of 300. The embeddings are updated during training. Each LSTM cell in the utterance and dialog encoders uses a hidden state of dimension 256. The weight matrices in the attention network have a dimension of 128. The hyper-parameters were found by experimenting with the development set. We trained the model by minimizing the cross-entropy loss using the Adam optimizer BIBREF28 with an initial learning rate of 0.001. The learning rate was reduced by half when the development set accuracy did not change over successive epochs. The model took around 30 epochs to train. Experiments and Results We compare the performance of our model (Table 2) with traditional Bag of Words (BoW), TF-IDF, and n-gram feature based classifiers. We also compare against averaged Skip-Gram BIBREF29 , Doc2Vec BIBREF30 , CNN BIBREF23 , Hierarchical Attention (HN-ATT) BIBREF24 and hierarchical network (HN) models. HN is similar to our model HN-SA but without any self-attention. Analysis: As is evident from the experiments on both versions of SWBD, our model (HN-SA) outperforms traditional feature based topic spotting models and deep learning based document classification models. It is interesting to see that simple BoW and n-gram baselines are quite competitive and outperform some of the deep learning based document classification models. A similar observation has also been reported by BIBREF31 ( BIBREF31 ) for the task of sentiment analysis. The task of topic spotting is arguably more challenging than document classification. In the topic spotting task, the number of output classes (66/42 classes) is much larger than in document classification (5/6 classes), which is done mainly on texts from customer reviews. Dialogues in SWBD have on average 200 utterances and are much longer than customer reviews. Additionally, the number of dialogues available for training the model is significantly smaller than the number of available customer reviews. We further investigated the performance on SWBD2 by examining the confusion matrix of the model. Figure 2 shows the heatmap of the normalized confusion matrix of the model on SWBD2. For most of the classes the classifier is able to predict accurately. However, the model gets confused between classes which are semantically close (w.r.t. the terms used) to each other, for example, pragmatically similar topics such as 'HOBBIES' vs 'GARDENING', 'MOVIES' vs 'TV PROGRAMS', and 'RIGHT TO PRIVACY' vs 'DRUG TESTING'.
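Before turning to the online setting, a minimal PyTorch sketch of the hierarchical self-attention architecture described in the model section may help make the hyper-parameters above concrete (300-dimensional embeddings, 256-dimensional hidden states per direction, 128-dimensional attention). It is an illustrative re-implementation under our own naming, batching and padding assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    """BiLSTM + self-attention: maps a batch of token-id sequences to
    fixed-size utterance representations (padding handling omitted)."""

    def __init__(self, vocab_size, emb_dim=300, hidden=256, attn_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # initialised from GloVe in practice
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Sequential(                      # tanh(W_a^(1) H + b), then W_a^(2)(.) + b
            nn.Linear(2 * hidden, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )

    def forward(self, tokens):                          # tokens: (B, L)
        h, _ = self.bilstm(self.embed(tokens))          # (B, L, 2*hidden)
        a = torch.softmax(self.attn(h).squeeze(-1), -1) # (B, L) self-attention weights
        return torch.bmm(a.unsqueeze(1), h).squeeze(1)  # (B, 2*hidden) utterance vector

class DialogTopicClassifier(nn.Module):
    """Dialog-level BiLSTM over utterance vectors; the hidden state at the
    last utterance feeds a linear layer that predicts the topic."""

    def __init__(self, vocab_size, n_topics, hidden=256):
        super().__init__()
        self.utt_enc = UtteranceEncoder(vocab_size, hidden=hidden)
        self.dialog_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_topics)

    def forward(self, dialog):                          # dialog: (B, N, L) token ids
        B, N, L = dialog.shape
        utt = self.utt_enc(dialog.reshape(B * N, L)).reshape(B, N, -1)
        h, _ = self.dialog_lstm(utt)                    # (B, N, 2*hidden)
        return self.out(h[:, -1])                       # topic logits
```

In practice the model would be trained with cross-entropy loss and the Adam schedule described above.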
Online Setting: In an online conversational system, a topic spotting model is required to predict the topic accurately and as soon as possible during the dialog. We investigated the relationship between dialog length (in terms of number of utterances) and accuracy. This would give us an idea about how many utterances are required to reach a desirable level of accuracy. For this experiment, we varied the length of the dialogues from the test set that was available to the model for making prediction. We created sub-dialogues of length starting with $1/32$ of the dialog length and increasing it in multiples of 2, up to the full dialog. Figure 2 shows both the absolute accuracy and the accuracy relative to that on the full dialog. With just a few (3.125%) initial utterances available, the model is already 72% confident about the topic. This may be partly due to the fact that in a discussion, the first few utterances explicitly talk about the topic. However, as we have seen, since SWBD covers many different topics which are semantically close to each other but are assigned distinct classes, it is equally challenging to predict the topic with the same model. By the time the system has processed half the dialog in SWBD2 it is already within 99% accuracy of the full system. The experiment shows the possibility of using the model in an online setting where the model predicts the topic with high confidence as the conversation progresses. Conclusion and Future Work In this paper we presented a hierarchical model with self attention for topic spotting. The model outperforms the conventional topic spotting techniques as well as deep learning techniques for text classification. We empirically show that the proposed model can also be used in an online setting. We also introduced a new version of SWBD corpus: SWBD2. We hope that it will serve as the new standard for evaluating topic spotting models. Moving forward, we would like to explore a more realistic multi-modal topic spotting system. Such a system should fuse two modalities: audio and transcribed text to make topic predictions. Acknowledgments We would like to thank anonymous reviewers for their insightful comments. Mubbasir Kapadia has been funded in part by NSF IIS-1703883, NSF S&AS-1723869, and DARPA SocialSim-W911NF-17-C-0098.
No
43878a6a8fc36aaae29d95815355aaa7d25c3b53
43878a6a8fc36aaae29d95815355aaa7d25c3b53_0
Q: What datasets did they use? Text: Introduction There has been growing research interest in training dialog systems with end-to-end models BIBREF0 , BIBREF1 , BIBREF2 in recent years. These models are directly trained on past dialogs, without assumptions on the domain or dialog state structure BIBREF3 . One of their limitations is that they select responses only according to the content of the conversation and are thus incapable of adapting to users with different personalities. Specifically, common issues with such content-based models include: (i) the inability to adjust language style flexibly BIBREF4 ; (ii) the lack of a dynamic conversation policy based on the interlocutor's profile BIBREF5 ; and (iii) the incapability of handling ambiguities in user requests. Figure FIGREF1 illustrates these problems with an example. The conversation happens in a restaurant reservation scenario. First, the responses from the content-based model are plain and boring, and not able to adjust appellations and language styles like the personalized model. Second, in the recommendation phase, the content-based model can only provide candidates in a random order, while a personalized model can change recommendation policy dynamically, and in this case, match the user dietary. Third, the word “contact” can be interpreted into “phone” or “social media” contact information in the knowledge base. Instead of choosing one randomly, the personalized model handles this ambiguity based on the learned fact that young people prefer social media account while the elders prefer phone number. Psychologists have proven that during a dialog humans tend to adapt to their interlocutor to facilitate understanding, which enhances conversational efficiency BIBREF6 , BIBREF7 , BIBREF8 . To improve agent intelligence, we may polish our model to learn such human behaviors in conversations. A big challenge in building personalized dialog systems is how to utilize the user profile and generate personalized responses correspondingly. To overcome it, existing works BIBREF9 , BIBREF4 often conduct extra procedures to incorporate personalization in training, such as intermediate supervision and pre-training of user profiles, which are complex and time-consuming. In contrast, our work is totally end-to-end. In this paper, we propose a Profile Model and a Preference Model to leverage user profiles and preferences. The Profile Model learns user personalities with distributed profile representation, and uses a global memory to store conversation context from other users with similar profiles. In this way, it can choose a proper language style and change recommendation policy based on the user profile. To address the problem of ambiguity, the Preference Model learns user preferences among ambiguous candidates by building a connection between the user profile and the knowledge base. Since these two models are both under the MemN2N framework and make contributions to personalization in different aspects, we combine them into the Personalized MemN2N. Our experiments on a goal-oriented dialog corpus, the personalized bAbI dialog dataset, show that leveraging personal information can significantly improve the performance of dialog systems. The Personalized MemN2N outperforms current state-of-the-art methods with over 7% improvement in terms of per-response accuracy. A test with real human users also illustrates that the proposed model leads to better outcomes, including higher task completion rate and user satisfaction. 
Related Work End-to-end neural approaches to building dialog systems have attracted increasing research interest. It is well accepted that conversation agents include goal-oriented dialog systems and non goal-oriented (chit-chat) bots. Generative recurrent models like Seq2Seq have showed promising performance in non goal-oriented chit-chat BIBREF10 , BIBREF11 , BIBREF12 . More recently, retrieval-based models using a memory network framework have shown their potential in goal-oriented systems BIBREF2 , BIBREF3 . Although steady progress has been made, there are still issues to be addressed: most existing models are content-based, which are not aware of the interlocutor profile, and thus are not capable of adapting to different kinds of users. Considerable research efforts have been devoted so far to make conversational agents smarter by incorporating user profile. Personalized Chit-Chat The first attempt to model persona is BIBREF13 , which proposes an approach to assign specific personality and conversation style to agents based on learned persona embeddings. BIBREF14 describe an interesting approach that uses multi-task learning with personalized text data. There are some researchers attempting to introduce personalized information to dialogs by transfer learning BIBREF15 , BIBREF16 . Since there is usually no explicit personalized information in conversation context, existing models BIBREF9 , BIBREF4 often require extra procedures to incorporate personalization in training. BIBREF9 add intermediate supervision to learn when to employ the user profile. BIBREF4 pre-train the user profile with external service. This work, in contrast, is totally end-to-end. A common approach to leveraging personality in these works is using a conditional language model as the response decoder BIBREF17 , BIBREF13 . This can help assign personality or language style to chit-chat bots, but it is useless in goal-oriented dialog systems. Instead of assigning personality to agents BIBREF13 , BIBREF14 , BIBREF9 , our model pays more attention to the user persona and aims to make agents more adaptive to different kinds of interlocutors. Personalized Goal-Oriented Dialog As most previous works BIBREF13 , BIBREF18 , BIBREF9 focus on chit-chat, the combination of personalization and goal-oriented dialog remains unexplored. Recently a new dataset has been released that enriches research resources for personalization in chit-chat BIBREF19 . However, no open dataset allows researchers to train goal-oriented dialog with personalized information, until the personalized bAbI dialog corpus released by BIBREF5 . Our work is in the vein of the memory network models for goal-oriented dialog from BIBREF2 and BIBREF3 . We enrich these models by incorporating the profile vector and using conversation context from users with similar attributes as global memory. End-to-End Memory Network Since we construct our model based on the MemN2N by BIBREF3 , we first briefly recall its structure to facilitate the delivery of our models. The MemN2N consists of two components: context memory and next response prediction. As the model conducts a conversation with the user, utterance (from the user) and response (from the model) are in turn appended to the memory. At any given time step INLINEFORM0 there are INLINEFORM1 user utterances and INLINEFORM2 model responses. The aim at time INLINEFORM3 is to retrieve the next response INLINEFORM4 . 
Memory Representation Following BIBREF20 , we represent each utterance as a bag-of-words using the embedding matrix INLINEFORM0 , and the context memory INLINEFORM1 is represented as a vector of utterances as: DISPLAYFORM0 where INLINEFORM0 maps the utterance to a bag of dimension INLINEFORM1 (the vocabulary size), and INLINEFORM2 is a INLINEFORM3 matrix in which INLINEFORM4 is the embedding dimension. So far, information of which speaker spoke an utterance, and at what time during the conversation, are not included in the contents of memory. We therefore encode those pieces of information in the mapping INLINEFORM0 by extending the vocabulary to contain INLINEFORM1 extra “time features” which encode the index INLINEFORM2 of an utterance into the bag-of-words, and two more features (# INLINEFORM3 , # INLINEFORM4 ) encoding whether the speaker is the user or the bot. The last user utterance INLINEFORM0 is encoded into INLINEFORM1 , which also denotes the initial query at time INLINEFORM2 , using the same matrix INLINEFORM3 . Memory Operation The model first reads the memory to find relevant parts of the previous conversation for responses selection. The match between INLINEFORM0 and the memory slots is computed by taking the inner product followed by a softmax: INLINEFORM1 , which yields a vector of attention weights. Subsequently, the output vector is constructed by INLINEFORM2 where INLINEFORM3 is a INLINEFORM4 square matrix. In a multi-layer MemN2N framework, the query is then updated with INLINEFORM5 . Therefore, the memory can be iteratively reread to look for additional pertinent information using the updated query INLINEFORM6 instead of INLINEFORM7 , and in general using INLINEFORM8 on iteration INLINEFORM9 , with a fixed number of iterations INLINEFORM10 (termed INLINEFORM11 hops). Let INLINEFORM0 , where INLINEFORM1 is another word embedding matrix, and INLINEFORM2 is a (large) set of candidate responses which includes all possible bot utterances and API calls. The final predicted response distribution is then defined as: DISPLAYFORM0 where there are INLINEFORM0 candidate responses in INLINEFORM1 . Personalized Dialog System We first propose two personalized models. The Profile Model introduces the personality of the interlocutor explicitly (using profile embedding) and implicitly (using global memory). The Preference Model models user preferences over knowledge base entities. The two models are independent to each other and we also explore their combination as the Personalized MemN2N. Figure FIGREF8 shows the structure of combined model. The different components are labeled with dashed boxes separately. Notation The user profile representation is defined as follows. Each interlocutor has a user profile represented by INLINEFORM0 attributes INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 denote the key and value of the INLINEFORM4 -th attribute, respectively. Take the user in the first dialog in Figure FIGREF1 as an example, the representation should be INLINEFORM5 . The INLINEFORM6 -th profile attribute is represented as a one-hot vector INLINEFORM7 , where there are INLINEFORM8 possible values for key INLINEFORM9 . We define the user profile INLINEFORM10 as the concatenation of one-hot representations of attributes: INLINEFORM11 , where INLINEFORM12 . The notations of the memory network are the same as introduced in Section SECREF3 . Profile Model Our first model is the Profile Model, which aims to integrate personalized information into the query and ranking part of the MemN2N. 
The model consists of two different components: profile embedding and global memory. Profile Embedding In the MemN2N, the query INLINEFORM0 plays a key role in both reading memory and choosing the response, while it contains no information about the user. We expect to add a personalized information term to INLINEFORM1 at each iteration of the query. Then, the model can be aware of the user profile in the steps of searching relevant utterances in the memory and selecting the final response from the candidates. We thus obtain a distributed profile representation INLINEFORM2 by applying a linear transformation with the one-hot user profile: INLINEFORM3 , where INLINEFORM4 . Note that this distributed profile representation shares the same embedding dimension INLINEFORM5 with the bag-of-words. The query update equation can be changed as: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are the query and output at the INLINEFORM2 -th hop, respectively. Also, the likelihood of a candidate being selected should be affected directly by the user profile, no matter what the query is. Therefore, we obtain tendency weights by computing the inner product between INLINEFORM0 and candidates followed by a sigmoid, and revise the candidates accordingly: DISPLAYFORM0 where INLINEFORM0 is a sigmoid. The prediction INLINEFORM1 is then computed by Equation ( EQREF5 ) using INLINEFORM2 instead of INLINEFORM3 . Global Memory Users with similar profiles may expect the same or a similar response for a certain request. Therefore, instead of using the profile directly, we also implicitly integrate personalized information of an interlocutor by utilizing the conversation history from similar users as a global memory. The definition of similarity varies with task domains. In this paper, we regard those with the same profile as similar users. As shown in Figure FIGREF8 , the global memory component has an identical structure as the original MemN2N. The difference is that the contents in the memory are history utterances from other similar users, instead of the current conversation. Similarly, we construct the attention weights, output vector, and iteration equation by DISPLAYFORM0 where INLINEFORM0 denotes the global memory, INLINEFORM1 is the attention weight over the global memory, INLINEFORM2 is a INLINEFORM3 square matrix, INLINEFORM4 is the intermediate output vector and INLINEFORM5 is the result at the INLINEFORM6 -th iteration. Lastly, we use INLINEFORM7 instead of INLINEFORM8 to make the following computation. Preference Model The Profile Model has not yet solved the challenge of handling the ambiguity among KB entities, such as the choice between “phone” and “social media” in Figure FIGREF1 . The ambiguity refers to the user preference when more than one valid entities are available for a specific request. We propose inferring such preference by taking the relation between user profile and knowledge base into account. Assuming we have a knowledge base that describes the details of several items, where each row denotes an item and each column denotes one of their corresponding properties. The entity INLINEFORM0 at row INLINEFORM1 and column INLINEFORM2 is the value of the INLINEFORM3 -th property of item INLINEFORM4 . The Preference Model operates as follows. Given a user profile and a knowledge base with INLINEFORM0 columns, we predict the user's preference on different columns. We first model the user preference INLINEFORM1 as: DISPLAYFORM0 where INLINEFORM0 . 
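Before turning to how this preference vector is converted into a bias over candidate responses, the two Profile Model ingredients just described — the distributed profile added at every hop of the query update, and the sigmoid tendency weights that rescale the candidates — can be sketched as follows. This is a hedged illustration with made-up dimensions and random stand-ins for learned parameters and memory outputs; the global memory component is omitted here because it reuses the same read mechanism, only over history utterances from similar users.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

V, d, K, n_cand = 1000, 32, 3, 20           # vocab size, embedding dim, hops, candidates

rng = np.random.default_rng(1)
# One-hot profile, e.g. gender in {female, male} and age in {young, middle-aged, elderly}.
profile_onehot = np.concatenate([np.eye(2)[1], np.eye(3)[0]])   # "male, young" (illustrative)
P = rng.normal(size=(d, profile_onehot.size))                   # profile embedding matrix
p = P @ profile_onehot                                          # distributed profile, shape (d,)

W = rng.normal(size=(d, V))
cand_bow = rng.integers(0, 2, size=(n_cand, V))
cand_emb = cand_bow @ W.T                                       # candidate embeddings, (n_cand, d)

# (a) The profile enters every hop of the query update: new_query = query + output + profile.
q = rng.normal(size=d)                                          # stand-in encoded query
for _ in range(K):
    o = rng.normal(size=d)                                      # stand-in memory output of this hop
    q = q + o + p

# (b) Tendency weights: the profile directly rescales each candidate representation.
tendency = sigmoid(cand_emb @ p)                                # one weight per candidate
revised_cand = cand_emb * tendency[:, None]

a_hat = softmax(revised_cand @ q)                               # final response distribution
print(a_hat.argmax())
```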
Note that we assume the bot cannot provide more than one option in a single response, so a candidate can only contains one entity at most. The probability of choosing a candidate response should be affected by this preference if the response mentions one of the KB entities. We add a bias term INLINEFORM0 to revise the logits in Equation ( EQREF5 ). The bias for INLINEFORM1 -th candidate INLINEFORM2 is constructed as the following steps. If the INLINEFORM3 -th candidate contains no entity, then INLINEFORM4 ; if the candidate contains an entity INLINEFORM5 , which belongs to item INLINEFORM6 , then INLINEFORM7 , where given the current conversation context INLINEFORM8 , DISPLAYFORM0 For example, the candidate “Here is the information: The_Place_Phone” contains a KB entity “The_Place_Phone” which belongs to restaurant “The_Place” and column “Phone”. If “The_Place” has been mentioned in the conversation, the bias term for this response should be INLINEFORM0 . We update the Equation ( EQREF5 ) to DISPLAYFORM0 Combined Model As discussed previously, the Profile Model and the Preference Model make contributions to personalization in different aspects. The Profile Model enables the MemN2N to change the response policy based on the user profile, but fails to establish a clear connection between the user and the knowledge base. On the other hand, the Preference Model bridges this gap by learning the user preferences over the KB entities. To take advantages of both models, we construct a general Personalized MemN2N model by combining them together, as shown in Algorithm SECREF16 . All these models are trained to minimize a standard cross-entropy loss between INLINEFORM0 and the true label INLINEFORM1 . Response Prediction by Personalized MemN2N Input: User utterance INLINEFORM0 , Context memory INLINEFORM1 , global memory INLINEFORM2 , candidates INLINEFORM3 and user profile INLINEFORM4 Output: The index INLINEFORM0 of the next response [1] Predict INLINEFORM1 INLINEFORM2 Profile embedding INLINEFORM3 INLINEFORM0 hops INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM0 INLINEFORM1 Bias term INLINEFORM2 Final query INLINEFORM3 Revised candidates INLINEFORM4 INLINEFORM5 Dataset The personalized bAbI dialog dataset BIBREF5 is a multi-turn dialog corpus extended from the bAbI dialog dataset BIBREF3 . It introduces an additional user profile associated with each dialog and updates the utterances and KB entities to integrate personalized style. Five separate tasks in a restaurant reservation scenario are introduced along with the dataset. Here we briefly introduce them for better understanding of our experiments. More details on the dataset can be found in the work by BIBREF5 . Task 1: Issuing API Calls Users make queries that contain several blanks to fill in. The bot must ask proper questions to fill the missing fields and make the correct API calls. Task 2: Updating API Calls Users may update their request and the bot must change the API call accordingly. Task 3: Displaying Options Given a user request, the KB is queried and the returning facts are added to the dialog history. The bot is supposed to sort the options based on how much users like the restaurant. The bot must be conscious of the user profile and change the sorting strategy accordingly to accomplish this task. Task 4: Providing Information Users ask for some information about a restaurant, and more than one answer may meet the requirement (i.e., contact with-respect-to social media account and phone number). 
The bot must infer which answer the user prefers based on the user profile. Task 5: Full Dialog This task conducts full dialog combining all the aspects of Tasks 1 to 4. The difficulties of personalization in these tasks are not incremental. In Tasks 1 and 2, the bot is only required to select responses with appropriate meaning and language style. In Tasks 3 and 4, the knowledge base is supposed to be searched, which makes personalization harder. In these two tasks, apart from capturing shallow personalized features in the utterances such as language style, the bot also has to learn different searching or sorting strategies for different user profiles. In Task 5 we expect an average performance (utterance-wise) since it combines the other four tasks. There are two variations of dataset provided for each task: a full set with around 6000 dialogs and a small set with only 1000 dialogs to create realistic learning conditions. We get the dataset released on ParlAI. Baselines We consider the following baselines: Supervised Embedding Model: a strong baseline for both chit-chat and goal-oriented dialog BIBREF20 , BIBREF3 . Memory Network: the MemN2N by BIBREF3 , which has been described in detail in Section SECREF3 . We add the profile information as an utterance said by the user at the beginning of each dialog. In this way the standard MemN2N may capture the user persona to some extent. Split Memory Network: the model proposed by BIBREF5 that splits the memory into two parts: profile attributes and conversation history. The various attributes are stored as separate entries in the profile memory before the dialog starts, and the conversation memory operates the same as the MemN2N. Experiment Settings The parameters are updated by Nesterov accelerated gradient algorithm BIBREF21 and initialized by Xavier initializer. We try different combinations of hyperparameters and find the best settings as follows. The learning rate is INLINEFORM0 , and the parameter of momentum INLINEFORM1 is INLINEFORM2 . Gradients are clipped to avoid gradient explosion with a threshold of 10. We employ early-stopping as a regularization strategy. Models are trained in mini-batches with a batch size of 64. The dimensionality of word/profile embeddings is 128. We set the maximum context memory and global memory size (i.e. number of utterances) as 250 and 1000, separately. We pad zeros if the number of utterances in a memory is less than 250 or 1000, otherwise we keep the last 250 utterances for the context memory, or randomly choose 1000 valid utterances for the global memory. Results Following BIBREF5 , we report per-response accuracy across all models and tasks on the personalized bAbI dataset in Table TABREF18 . The per-response accuracy counts the percentage of correctly chosen candidates. Rows 4 to 6 of Table TABREF18 show the evaluation results of the Profile Model. As reported in BIBREF5 , their personalized dialogs model might be too complex for some simple tasks (such as Tasks 1 and 2, which do not rely on KB facts) and tends to overfit the training data. It is reflected in the failure of the split memory model on Tasks 1 and 2. Although it outperforms the standard MemN2N in some complicated tasks, the latter one is good enough to capture the profile information given in a simple raw text format, and defeats the split memory model in simpler tasks. To overcome such a challenge, we avoid using excessively complex structures to model the personality. 
Instead, we only represent the profile as an embedding vector or implicitly. As expected, both profile embedding and global memory approach accomplish Tasks 1 and 2 with a very high accuracy and also notably outperform the baselines in Task 3, which requires utilizing KB facts along with the profile information. Also, the performance of combining the two components together, as shown in row 6, is slightly better than using them independently. The result suggests that we can take advantages of using profile information in an explicit and implicit way in the meantime. Since the Profile Model does not build a clear connection between the user and the knowledge base, as discussed in Section SECREF4 , it may not solve ambiguities among the KB columns. The experiment results are consistent with this inference: the performance of the Profile Model on Task 4, which requires user request disambiguation, is particularly close to the baselines. Row 7 shows the evaluation results of the Preference Model, which is proposed to handle the above mentioned challenge. The model achieves significant improvements on Task 4 by introducing the bias term derived from the learned user preference. Besides, the restaurant sorting challenge in Task 3 depends on the properties of a restaurant to some extent. Intuitively, different properties of the restaurants are weighted differently, and the user preference over the KB columns can be considered as scoring weights which is useful for task-solving. As a result, the model also improves the performance in Task 3 compared to the standard MemN2N. We test the performance of the combined Personalized MemN2N as well. As we have analyzed in Section SECREF4 , the Profile Model and the Preference Model make contributions to personalization in different aspects and their combination has the potential to take advantages of both models. Experiment results confirm our hypothesis that the combined model achieves the best performance with over 7% (and 9% on small sets) improvement over the best baseline for the full dialog task (Task 5). Analysis As the proposed Personalized MemN2N achieves better performance than previous approaches, we conduct an analysis to gain further insight on how the integration of profile and preference helps the response retrieval. Analysis of Profile Embeddings Since we use the learned profile embeddings to obtain tendency weights for candidates selection, as is illustrated in Equation ( EQREF10 ), we expect to observe larger weights on candidates that correctly match the profile. For instance, given a profile “Gender: Male, Age: Young”, we can generate a weight for each response candidate. Due to the fact that candidates are collected from dialogs with different users, they can be divided based on the user profile. Those candidates in the group of young male should have larger weights than others. We group the candidates by their corresponding user profile. For each profile, we generate tendency weights and collect the average value for each group. Figure FIGREF27 visualizes the results by a confusion matrix. The weights on the diagonal are significantly larger than others, which demonstrates the contribution of profile embeddings in candidate selection. Analysis of Global Memory To better illustrate how much the global memory impacts the performance of the proposed model, we conduct a control experiment. 
Specifically, we build a model with the same global memory component as described in Section SECREF7 , but the utterances in the memory are from randomly chosen users rather than similar users. We report the results of the control experiment on Task 5 in Table TABREF29 . The numbers indicate that the global memory does help improve the performance. Analysis of Preference Remember that we use a preference vector INLINEFORM0 to represent the user's preference over the columns in the knowledge base. Therefore, we investigate the learned arguments grouped by profile attributes. As seen in Figure FIGREF31 , the model successfully learns the fact that young people prefer social media as their contact information, while middle-aged and elderly people prefer phone number. The result shows great potential and advantage of end-to-end models. They are capable of learning meaningful intermediate arguments while being much simpler than existing reinforcement learning methods and pipeline models for the task of personalization in dialogs. Human Evaluation To demonstrate the effectiveness of the personalization approach over standard models more convincingly, we build an interactive system based on the proposed model and baselines, and conduct a human evaluation. Since it is impractical to find testers with all profiles we need, we randomly build 20 profiles with different genders, ages and preferences, and ask three judges to act as the given roles. They talk to the system and score the conversations in terms of task completion rate and satisfaction. Task completion rate stands for how much the system accomplish the users' goal. Satisfaction refers to whether the responses are appropriate to the user profile. The scores are averaged and range from 0 to 1 (0 is the worst and 1 is perfect). We find that Personalized MemN2N wins the MemN2N baseline with INLINEFORM0 and INLINEFORM1 higher in terms of task completion rate and satisfaction, respectively, with INLINEFORM2 . Conclusion and Future Work We introduce a novel end-to-end model for personalization in goal-oriented dialog. Experiment results on open datasets and further analysis show that the model is capable of overcoming some existing issues in dialog systems. The model improves the effectiveness of the bot responses with personalized information, and thus greatly outperforms state-of-the-art methods. In future work, more representations of personalities apart from the profile attribute can be introduced into goal-oriented dialogs models. Besides, we may explore on learning profile representations for non-domain-specific tasks and consider KB with more complex format such as ontologies. Acknowledgements We thank all reviewers for providing the constructive suggestions. Also thanks to Danni Liu, Haoyan Liu and Yuanhao Xiong for the helpful discussion and proofreading. Xu Sun is the corresponding author of this paper.
the personalized bAbI dialog dataset
Q: Do twitter users tend to tweet about the DOS attack when it occurs? How much data supports this assumption? Text: Introduction Denial of Service attacks are explicit attempts to stop legitimate users from accessing specific network systems BIBREF0. Attackers try to exhaust network resources like bandwidth, or server resources like CPU and memory. As a result, the targeted system slows down or becomes unusable BIBREF1. On-line service providers like Bank Of America, Facebook and Reddit are often the target of such attacks and the frequency and scale of those attacks has increased rapidly in recent years BIBREF2. To address this problem, there is ample previous work on methods to detect and handle Denial of Service attacks, especially Distributed Denial of Service attacks. D-WARD BIBREF3 is a scheme that tries to locate a DDoS attacks at the source by monitoring inbound and outbound traffic of a network and comparing it with predefined "normal" values. Some IP Traceback mechanisms BIBREF4 were developed to trace back to the attack source from the victim's end. Still other methods try to deploy a defensive scheme in an entire network to detect and respond to an attack at intermediate sub-networks. Watchers BIBREF5 is an example of this approach. Despite all the new models and techniques to prevent or handle cyber attacks, DDoS attacks keep evolving. Services are still being attacked frequently and brought down from time to time. After a service is disrupted, it is crucial for the provider to assess the scale of the outage impact. In this paper, we present a novel approach to solve this problem. No matter how complex the network becomes or what methods the attackers use, a denial of service attack always results in legitimate users being unable to access the network system or slowing down their access and they are usually willing to reveal this information on social media plaforms. Thus legitimate user feedback can be a reliable indicator about the severity level of the service outage. Thus we split this problem into two parts namely by first isolating the tweet stream that is likely related to a DoS attack and then measuring the impact of attack by analyzing the extracted tweets. A central challenge to measure the impact is how to figure out the scale of the effect on users as soon as possible so that appropriate action can be taken. Another difficulty is given the huge number of users of a service, how to effectively get and process the user feedback. With the development of Social Networks, especially micro blogs like Twitter, users post many life events in real time which can help with generating a fast response. Another advantage of social networks is that they are widely used. Twitter claims that they had 313 million monthly active users in the second quarter of 2016 BIBREF6. This characteristic will enlarge the scope of detection and is extremely helpful when dealing with cross domain attacks because tweets from multiple places can be leveraged. The large number of users of social networks will also guarantee the sensitivity of the model. However, because of the large number of users, a huge quantity of tweets will be generated in a short time, making it difficult to manually annotate the tweets, which makes unsupervised or weakly-supervised models much more desirable. In the Twitter data that we collected there are three kinds of tweets. Firstly are tweets that are actually about a cyberattack. 
For example, someone tweeted "Can't sign into my account for bank of America after hackers infiltrated some accounts." on September 19, 2012 when a attack on the website happened. Secondly are tweets about some random complaints about an entity like "Death to Bank of America!!!! RIP my Hello Kitty card... " which also appeared on that day. Lastly are tweets about other things related to the bank. For example, another tweet on the same day is "Should iget an account with bank of america or welsfargo?". To find out the scale of impact from an attack, we must first pick out the tweets that are about the attack. Then using the ratio and number of attack tweets, an estimation of severity can be generated. To solve the problem of detecting Denial of Service attacks from tweets, we constructed a weakly-supervised Natural Language Processing (NLP) based model to process the feeds. More generally, this is a new event detection model. We hypothesize that new topics are attack topics. The hypothesis would not always hold and this issue will be handled by a later module. The first step of the model is to detect topics in one time window of the tweets using Latent Dirichlet Allocation BIBREF7. Then, in order to get a score for each of the topics, the topics in the current time window are compared with the topics in the previous time window using Symmetric Kullback-Leibler Divergence (KL Divergence) BIBREF8. After that, a score for each tweet in the time window is computed using the distribution of topics for the tweet and the score of the topics. We're looking for tweets on new topics through time. While the experiments show promising results, precision can be further increased by adding a layer of a supervised classifier trained with attack data at the expense of recall. Following are the contributions in this paper: A dataset of annotated tweets extracted from Twitter during DoS attacks on a variety organizations from differing domains such as banking (like Bank Of America) and technology. A weakly-supervised approach to identifying detect likely DoS service related events on twitter in real-time. A score to measure impact of the DoS attack based on the frequency of user complaints about the event. The rest of this paper is organized as follows: In section 2, previous work regarding DDoS attack detection and new event detection will be discussed. In section 3, we describe the how the data was collected. We also present the model we created to estimate the impact of DDoS attacks from Twitter feeds. In section 4, the experiments are described and the results are provided. In section 5 we discuss some additional questions. Finally, section 6 concludes our paper and describes future work. Related Work Denial of Service (DoS) attacks are a major threat to Internet security, and detecting them has been a core task of the security community for more than a decade. There exists significant amount of prior work in this domain. BIBREF9, BIBREF10, BIBREF11 all introduced different methods to tackle this problem. The major difference between this work and previous ones are that instead of working on the data of the network itself, we use the reactions of users on social networks to identify an intrusion. Due to the widespread use of social networks, they have become an important platform for real-world event detection in recent years BIBREF12. BIBREF13 defined the task of new event detection as "identifying the first story on topics of interest through constantly monitoring news streams". Atefeh et al. 
BIBREF14 provided a comprehensive overview of event detection methods that have been applied to twitter data. We will discuss some of the approaches that are closely related to our work. Weng et al. BIBREF15 used a wavelet-signal clustering method to build a signal for individual words in the tweets that was dependent high frequency words that repeated themselves. The signals were clustered to detect events. Sankaranarayanan et al. BIBREF16 presented an unsupervised news detection method based on naive Bayes classifiers and on-line clustering. BIBREF17 described an unsupervised method to detect general new event detection using Hierarchical divisive clustering. Phuvipadawat et al. BIBREF18 discussed a pipeline to collect, cluster, rank tweets and ultimately track events. They computed the similarity between tweets using TF-IDF. The Stanford Named Entity Recognizer was used to identify nouns in the tweets providing additional features while computing the TF-IDF score. Petrović et al. BIBREF19 tried to detect events on a large web corpus by applying a modified locality sensitive hashing technique and clustering documents (tweets) together. Benson et al. BIBREF20 created a graphical model that learned a latent representation for twitter messages, ultimately generating a canonical value for each event. Tweet-scan BIBREF21 was a method to detect events in a specific geo-location. After extracting features such as name, time and location from the tweet, the method used DB-SCAN to cluster the tweets and Hierarchical Dirichlet Process to model the topics in the tweets. Badjatiya et. al. BIBREF22 applied deep neural networks to detect events. They showed different architectures such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (LSTM based) and FastText outperform standard n-gram and TF-IDF models. Burel et al. BIBREF23 created a Dual-CNN that had an additional channel to model the named entities in tweets apart from the pretrained word vectors from GloVe BIBREF24 or Word2Vec BIBREF25. Thus most event detection models can be grouped into three main categories of methods i.e. TF-IDF based methods, approaches that model topics in tweets and deep neural network based algorithms. One of the main challenges against applying a neural network model is the the requirement of a large annotated corpus of tweets. Our corpus of tweets is comparatively small. Hence we build our pipeline by modeling the topics learned from tweets. The previous work that is most similar to ours was BIBREF26. We both used Latent Dirichlet Allocation (LDA) to get the topics of the document, the difference was they only run LDA on the hash-tag of the tweets while we try to get the topics in the tweets by running it on the whole document. Latent Dirichlet Allocation BIBREF7 was a method to get topics from a corpus. In our work, we used the technique to acquire the values of some of the variables in our equation. A variation of it, Hierarchically Supervised Latent Dirichlet Allocation BIBREF27 was used in the evaluation. Approach Figure FIGREF4 outlines the entire pipeline of the model from preprocessing tweets to modeling them and finally detecting / ranking future tweets that are related to a DoS issue and measuring its severity. Approach ::: Data Collection To collect the tweets, we first gathered a list of big DDoS attacks happened from 2012 to 2014. Then for each attack on the list, we collected all the tweets from one week before the attack to the attack day that contains the name of the entity attacked. 
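The collection step just described amounts to filtering a tweet stream by entity name and by a window running from one week before the attack up to the attack day. A minimal sketch of that filter is shown below; the record layout and field names are assumptions made for illustration, not the format of any particular Twitter API.

```python
from datetime import date, timedelta

def collect_window(tweets, entity, attack_day, days_before=7):
    """Keep tweets that mention the entity and fall in [attack_day - days_before, attack_day]."""
    start = attack_day - timedelta(days=days_before)
    return [
        t for t in tweets
        if start <= t["date"] <= attack_day and entity.lower() in t["text"].lower()
    ]

# Toy usage with hand-made records (a real pipeline would read archived tweets instead).
tweets = [
    {"date": date(2012, 9, 19), "text": "Can't sign into my account for Bank of America"},
    {"date": date(2012, 9, 10), "text": "Bank of America branch was crowded today"},
    {"date": date(2012, 9, 19), "text": "Nice weather in Boston"},
]
print(collect_window(tweets, "Bank of America", date(2012, 9, 19)))
```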
Approach ::: Preprocessing The following preprocessing procedure were applied to the corpus of tweets: Remove all the meta-data like time stamp, author, and so on. These meta-data could provide useful information, but only the content of the tweet was used for now. Lowercase all the text Use an English stop word list to filter out stop words. The last two steps are commonly used technique when preprocessing text. Approach ::: Create LDA Models Now we try to find out a quantitative representation of the corpus. To do that, the preprocessed tweets about one attack will be divided into two groups. One is on the attack day and the other is the tweets one week before it. The first set will be called $D_a$ and the other one $D_b$. This step will create two separate LDA models for $D_a$ and $D_b$ using the Genism library BIBREF28. The first Model will be called $M_a$ and the other one $M_b$. Latent Dirichlet allocation (LDA) is a generative probabilistic topic modeling model. Figure FIGREF11 is its plate notation. The meaning of different parameters $M$, $N$, $\alpha $, $\beta $, $\theta $, $z$ and $w$ is also described there. We used the LDA algorithm implemented by the Gensim library. One of the most important parameters of the LDA algorithm is the number of topics $N_t$ in the corpus. To determine that we introduced the following formula: where $N_d$ is the number of tweets in the corpus. $\alpha $ is a constant and we used $\alpha $=10 in our experiments. The logic behind the equation is discussed in section 5. Approach ::: The attack topics Then we want to find out how the new topics are different from the history topics or, in other words, how topics in $M_a$ differ from topics in $M_b$. We define the Symmetric Kullback-Leibler divergence for topic $T_j$ in Model $M_a$ as: Where n is the number of topics in Model $M_b$, $T_m^{^{\prime }}$ is the $m^{th}$ topic in Model $M_b$ and $D_kl (X,Y)$ is the original Kullback-Leibler Divergence for discrete probability distributions which defined as : Where $X(i)$ and $Y(i)$ are the probability of token $i$ in topics $X$ and $Y$ respectively. This is similar to the Jensen-Shannon divergence. So for each topic $T_j$ in Model $M_a$ its difference to topics in $M_b$ is determined by its most similar topic in $M_b$. The topics from the attack day model $M_a$ are ranked by their Symmetric Kullback-Leibler divergence to topics from the non-attack day model $M_b$. An example of selected attack topics is provided in section 4.3. Approach ::: The attack tweets This subsection is about how to find specific tweets that are about a network attack. The tweets are selected based on the relative score $S$. The score for tweet $t_i$ is defined as: Where $n$ is the number of topics on the attack day, $P_{i,j}$ is the probability that topic $j$ appears in tweet $t_i$ in the attack day LDA model, and $SKL_j$ is the Symmetric Kullback-Leibler divergence for topic $j$. The higher the score the more likely it is related to an attack event. Approach ::: Optional Classifier Layer Because annotated data is not needed, the model we described before can be regarded as a weakly-supervised model to detect new events on twitter in a given time period. To label tweets as attack tweets, one assumption must be true, which is that the new event in that time period is a cyber attack. Unfortunately, that is usually not true. Thus, an optional classifier layer can be used to prevent false positives. 
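Before turning to that classifier layer, the topic-divergence ranking and the tweet scoring just defined can be illustrated with a short Gensim sketch. The corpus below is a toy stand-in and the smoothing constant is an assumption; following the "most similar topic" description above, each attack-day topic is scored by its symmetric KL divergence to the closest topic of the earlier model, though the exact normalization in the original formula may differ.

```python
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def sym_kl(p, q, eps=1e-12):
    p, q = p + eps, q + eps                       # smooth to avoid log(0); eps is an assumption
    return np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))

before = [["bank", "card", "branch", "account"], ["bank", "loan", "rate", "account"]]
attack_day = [["bank", "website", "outage", "hackers"], ["bank", "site", "down", "ddos"]]

vocab = Dictionary(before + attack_day)           # shared vocabulary so topics are comparable
bow_b = [vocab.doc2bow(t) for t in before]
bow_a = [vocab.doc2bow(t) for t in attack_day]

m_b = LdaModel(bow_b, num_topics=2, id2word=vocab, random_state=0)
m_a = LdaModel(bow_a, num_topics=2, id2word=vocab, random_state=0)

# SKL score per attack-day topic: divergence to its closest topic from the "before" model.
topics_a, topics_b = m_a.get_topics(), m_b.get_topics()
skl = np.array([min(sym_kl(ta, tb) for tb in topics_b) for ta in topics_a])

# Relative score S for each attack-day tweet: its topic probabilities weighted by topic SKL.
for i, bow in enumerate(bow_a):
    dist = m_a.get_document_topics(bow, minimum_probability=0.0)
    s = sum(prob * skl[topic_id] for topic_id, prob in dist)
    print(f"tweet {i}: S = {s:.3f}")
```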
By using a decision tree model we want to find out whether the weakly-supervised part of the model can simplify the problem enough that a simple classification algorithm like a decision tree can have a good result. Additionally, it is easy to find out the reasoning underline a decision tree model so that we will know what the most important features are. The decision tree classifier is trained on the bag of words of collected tweets and the labels are manually annotated. We limit the minimum samples in each leaf to be no less than 4 so that the tree won't overfit. Other than that, a standard Classification and Regression Tree (CART) BIBREF29 implemented by scikit-learn BIBREF30 was used. The classifier was only trained on the training set (tweets about Bank of America on 09/19/2012), so that the test results do not overestimate accuracy. Approach ::: Measure the Severity The definition of severity varies from different network services and should be studied case by case. For the sake of completeness, we propose this general formula: In the equation above, $\beta $ is a parameter from 0 to 1 which determines the weight of the two parts. $N_{attack}$ is the number of attack tweets found. $N_{all}$ means the number of all tweets collected in the time period. And $N_{user}$ is the number of twitter followers of the network service. An interesting future work is to find out the quantitative relation between SeverityLevel score and the size of the actual DDoS attack. Experiments In this section we experimentally study the proposed attack tweet detection models and report the evaluation results. Experiments ::: Term Definition We used precision and recall for evaluation: Precision: Out of all of the tweets that are marked as attack tweets, the percentage of tweets that are actually attack tweets. Or true positive over true positive plus false positive. Recall: Out of all of the actual attack tweets, the percentage of tweets that are labeled as attack tweets. Or true positive over true positive plus false negative. Experiments ::: Experiment Dataset We collected tweets related to five different DDoS attacks on three different American banks. For each attack, all the tweets containing the bank's name posted from one week before the attack until the attack day were collected. There are in total 35214 tweets in the dataset. Then the collected tweets were preprocessed as mentioned in the preprocessing section. The following attacks were used in the dataset: Bank of America attack on 09/19/2012. Wells Fargo Bank attack on 09/19/2012. Wells Fargo Bank attack on 09/25/2012. PNC Bank attack on 09/19/2012. PNC Bank attack on 09/26/2012. Experiments ::: The Attack Topics Only the tweets from the Bank of America attack on 09/19/2012 were used in this experiment. The tweets before the attack day and on the attack day were used to train the two LDA models mentioned in the approach section. The top, bottom 4 attack topics and their top 10 words are shown in table 1 and 2. As shown in table 1, there are roughly 4 kinds of words in the attack topics. First is the name of the entity we are watching. In this case, it is Bank of America. Those words are in every tweet, so they get very high weight in the topics, while not providing useful information. Those words can be safely discarded or added to the stop word list. The second type of words are general cybersecurity words like website, outage, hackers, slowdown and so on. Those words have the potential to become an indicator. 
When topics with those words appears, it is likely that there exists an attack. The third kind are words related to the specific attack but not attacks in general. Those words can provide details about the attack, but it is hard to identify them without reading the full tweets. In our example, the words movie and sacrilegious are in this group. That is because the DDoS attack on Bank of America was in response to the release of a controversial sacrilegious film. The remaining words are non-related words. The higher the weights of them in a topic, the less likely the topic is actually about a DDoS attack. The results showed that except the 3rd topic, the top 4 topics have high weight on related words and the number of the forth type of words are smaller than the first three types of words. There are no high weight words related to security in the bottom 4 topics. We can say that the high SKL topics are about cyber attacks. Experiments ::: The Attack Tweets In this subsection we discuss the experiment on the attack tweets found in the whole dataset. As stated in section 3.3, the whole dataset was divided into two parts. $D_a$ contained all of the tweets collected on the attack day of the five attacks mentioned in section 4.2. And $D_b$ contained all of the tweets collected before the five attacks. There are 1180 tweets in $D_a$ and 7979 tweets in $D_b$. The tweets on the attack days ($D_a$) are manually annotated and only 50 percent of those tweets are actually about a DDoS attack. The 5 tweets that have the highest relative score in the dataset are: jiwa mines and miner u.s. bancorp, pnc latest bank websites to face access issues: (reuters) - some u.s. bancorp... http://bit.ly/p5xpmz u.s. bancorp, pnc latest bank websites to face access issues: (reuters) - some u.s. bancorp and pnc financial... @pncvwallet nothing pnc sucks fat d ur lucky there's 3 pnc's around me or your bitchassness wouldnt have my money business us bancorp, pnc latest bank websites to face access issues - reuters news forex business u.s. bancorp, pnc latest bank websites to face access issues http://dlvr.it/2d9ths The precision when labeling the first x ranked tweets as attack tweet is shown in the figure FIGREF39. The x-axis is the number of ranked tweets treated as attack tweets. And the y-axis is the corresponding precision. The straight line in figures FIGREF39, FIGREF43 and FIGREF51 is the result of a supervised LDA algorithm which is used as a baseline. Supervised LDA achieved 96.44 percent precision with 10 fold cross validation. The result shows that if the model is set to be more cautious about labeling a tweet as an attack tweet, a small x value, higher precision, even comparable to supervised model can be achieved. However as the x value increases the precision drops eventually. Figure FIGREF40 shows the recall of the same setting. We can find out that the recall increases as the model becomes more bold, at the expense of precision. Figure FIGREF41 is the detection error trade-off graph to show the relation between precision and recall more clearly (missed detection rate is the precision). Experiments ::: Generalization In this subsection we evaluate how good the model generalizes. To achieve that, the dataset is divided into two groups, one is about the attacks on Bank of America and the other group is about PNC and Wells Fargo. The only difference between this experiment and the experiment in section 4.4 is the dataset. 
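The precision and recall curves reported in these figures, for both this experiment and the generalization experiment described next, come from the same bookkeeping: label the top-x ranked tweets as attack tweets and compare against the manual annotation. A small helper like the one below (with made-up labels, purely for illustration) reproduces that computation.

```python
def precision_recall_at_k(ranked_labels, k):
    """ranked_labels: 1/0 ground-truth labels ordered by decreasing relative score S."""
    predicted_positive = ranked_labels[:k]
    tp = sum(predicted_positive)
    total_positive = sum(ranked_labels)
    precision = tp / k
    recall = tp / total_positive if total_positive else 0.0
    return precision, recall

# Toy ranking: the first few tweets are mostly true attack tweets, later ones mostly not.
labels = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
for k in (3, 5, 10):
    p, r = precision_recall_at_k(labels, k)
    print(f"k={k}: precision={p:.2f} recall={r:.2f}")
```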
In this experiment setting $D_a$ contains only the tweets collected on the days of attack on PNC and Wells Fargo. $D_b$ only contains the tweets collected before the Bank of America attack. There are 590 tweets in $D_a$ and 5229 tweets in $D_b$. In this experiment, we want to find out whether a model trained on Bank of America data can make good classification on PNC and Wells Fargo data. Figures FIGREF43 and FIGREF44 will show the precision and recall of the model in this experiment setting. A detection error trade-off graph (Figure FIGREF45) is also provided. The result is similar to the whole dataset setting from the previous section. The smaller the x value is, the higher the precision and lower the recall, vice versa. The precision is also comparable to the supervised model when a small x is chosen. This shows that the model generalized well. Experiments ::: Impact Estimation Using the result from last section, we choose to label the first 40 tweets as attack tweets. The number 40 can be decided by either the number of tweets labeled as attack tweets by the decision tree classifier or the number of tweets that have a relative score S higher than a threshold. The PNC and Wells Fargo bank have 308.3k followers combined as of July 2018. According to eqution (5) from section 3.6, the severity Level can be computed. The score would have a range from 6.78 * $10^{-2}$ to 1.30 * $10^{-3}$, depending on the value of $\beta $. This means that it could be a fairly important event because more than six percent of tweets mentioning the banks are talking about the DDoS attack. However it could also be a minor attack because only a tiny portion of the people following those banks are complaining about the outage. The value of $\beta $ should depend on the provider's own definition of severity. Experiments ::: Parameter Tuning This model has two parameters that need to be provided. One is $\alpha $ which is needed to determine the number of topics parameter $N_t$, and the other is whether to use the optional decision tree filter. Figures FIGREF49 and FIGREF50 provide experimental results on the model with different combinations of parameters. We selected four combinations that have the best and worst performance. All of the results can be found in appendix. The model was trained on Bank of America tweets and tested on PNC and Wells Fargo tweets like in section 4.5. In the figure, different lines have different values of $\alpha $ which ranges from 5 to 14 and the x axis is the number of ranked tweets labeled as attack tweets which have a range of 1 to 100 and the y-axis is the precision or recall of the algorithm and should be a number from 0 to 1. The results shows the decision tree layer increases precision at the cost of recall. The model's performance differs greatly with different $\alpha $ values while there lacks a good way to find the optimal one. Discussion In this section, we will discuss two questions. Firstly, we want to briefly discuss how good humans do on this task. What we find out is though humans perform well on most of the tweets, some tweets have proven to be challenging without additional information. In this experiment, we asked 18 members of our lab to classify 34 tweets picked from human annotated ones. There are only two tweets which all the 18 answers agree with each other. And there are two tweets that got exactly the same number of votes on both sides. The two tweets are "if these shoes get sold out before i can purchase them, i'ma be so mad that i might just switch banks! 
@bankofamerica fix yourself!" and "nothing's for sure, but if i were a pnc accountholder, i'd get my online banking business done today: http://lat.ms/uv3qlo". The second question we want to talk about is how to find out the optimal number of topics in each of the two LDA models. As shown in the parameter tuning section, the number of topics parameter greatly affects the performance of the model. We've tried several ways to figure out the number of topics. First a set number of topics for different corpora. We tried 30 different topic numbers on the Bank of America dataset and chose the best one, and then tested it on the PNC data. The result shows that this method does not perform well on different datasets. We think it is because the number of topics should be a function of the number of documents or number of words in the corpus. Then we tried to let the model itself determines the parameter. There are some LDA variations that can do automatic number of topic inference. The one we chose is the Hierarchical Dirichlet Process (HDP) mixture model, which is a nonparametric Bayesian approach to clustering grouped data and a natural nonparametric generalization of Latent Dirichlet Allocation BIBREF31. However it does not perform very well. Its precision is shown in figure FIGREF51 and recall is shown in figure FIGREF52. We think the reason for this kind of performance might be that tweets, with the restriction of 140 characters, have very different properties than usual documents like news or articles. The last method is what was proposed in this paper. An $\alpha $ equals 10 is what we chose and did a good job on the experiments. But it is only an empirical result. Conclusion In this paper, we proposed a novel weakly-supervised model with optional supervised classifier layer to determine the impact of a Denial-of-Service attack in real time using twitter. The approach computes an anomaly score based on the distribution of new topics and their KL divergence to the historical topics. Then we tested the model on same and different entities to check the model's performance and how well it generalize. Our experiment result showed that the model achieved decent result on finding out tweets related to a DDoS attack even comparable to a supervised model baseline. And it could generalize to different entities within the same domain. Using the attack tweets, we could get an estimation of the impact of the attack with a proposed formula. There remain some interesting open questions for future research. For example, it is important to figure out a way to find out the optimal number of topics in the dataset. We would also be interested to see how well this model will perform on other kind of event detection task if the optional classifier layer changes accordingly. Additional Result for Parameter Tuning Figures FIGREF53 and FIGREF54 provide all of the experimental results on the model with different combinations of parameters.
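As a footnote to the discussion of choosing the number of topics, the two strategies compared above — the fixed heuristic that sets the topic count to the number of tweets divided by alpha = 10, and letting a Hierarchical Dirichlet Process model infer the count itself — can both be sketched with Gensim. The toy corpus and the exact call pattern below are illustrative assumptions rather than the experimental setup.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, HdpModel

# Toy corpus standing in for the preprocessed tweets of one time window.
docs = [["bank", "website", "down"], ["bank", "outage", "hackers"],
        ["bank", "card", "account"], ["bank", "loan", "rate"]] * 10

vocab = Dictionary(docs)
corpus = [vocab.doc2bow(d) for d in docs]

# Heuristic used in the paper: the number of topics grows with the number of tweets.
alpha = 10
n_topics = max(1, len(docs) // alpha)
lda = LdaModel(corpus, num_topics=n_topics, id2word=vocab, random_state=0)

# Nonparametric alternative discussed above: HDP infers the number of topics itself.
hdp = HdpModel(corpus, id2word=vocab, random_state=0)

print("heuristic number of topics:", n_topics)
print("topics reported by HDP (up to its default cap):", len(hdp.print_topics()))
```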
The dataset contains about 590 tweets that are actually about DDoS attacks (roughly half of the 1,180 tweets collected on the attack days).
In this experiment setting, $D_a$ contains only the tweets collected on the days of the attacks on PNC and Wells Fargo. $D_b$ only contains the tweets collected before the Bank of America attack. There are 590 tweets in $D_a$ and 5229 tweets in $D_b$. In this experiment, we want to find out whether a model trained on Bank of America data can make good classifications on PNC and Wells Fargo data. Figures FIGREF43 and FIGREF44 show the precision and recall of the model in this experiment setting. A detection error trade-off graph (Figure FIGREF45) is also provided. The result is similar to the whole dataset setting from the previous section. The smaller the x value, the higher the precision and the lower the recall, and vice versa. The precision is also comparable to the supervised model when a small x is chosen. This shows that the model generalizes well. Experiments ::: Impact Estimation Using the result from the last section, we choose to label the first 40 tweets as attack tweets. The number 40 can be decided either by the number of tweets labeled as attack tweets by the decision tree classifier or by the number of tweets that have a relative score S higher than a threshold. The PNC and Wells Fargo banks have 308.3k followers combined as of July 2018. According to equation (5) from section 3.6, the severity level can be computed. The score would have a range from 6.78 * $10^{-2}$ to 1.30 * $10^{-3}$, depending on the value of $\beta $. This means that it could be a fairly important event because more than six percent of tweets mentioning the banks are talking about the DDoS attack. However, it could also be a minor attack because only a tiny portion of the people following those banks are complaining about the outage. The value of $\beta $ should depend on the provider's own definition of severity. Experiments ::: Parameter Tuning This model has two parameters that need to be provided. One is $\alpha $, which is needed to determine the number of topics parameter $N_t$, and the other is whether to use the optional decision tree filter. Figures FIGREF49 and FIGREF50 provide experimental results on the model with different combinations of parameters. We selected four combinations that have the best and worst performance. All of the results can be found in the appendix. The model was trained on Bank of America tweets and tested on PNC and Wells Fargo tweets as in section 4.5. In the figures, different lines correspond to different values of $\alpha $, which ranges from 5 to 14; the x-axis is the number of ranked tweets labeled as attack tweets, which ranges from 1 to 100; and the y-axis is the precision or recall of the algorithm, a number from 0 to 1. The results show that the decision tree layer increases precision at the cost of recall. The model's performance differs greatly with different $\alpha $ values, and there is no reliable way to find the optimal one. Discussion In this section, we discuss two questions. Firstly, we want to briefly discuss how well humans do on this task. We found that although humans perform well on most of the tweets, some tweets proved to be challenging without additional information. In this experiment, we asked 18 members of our lab to classify 34 tweets picked from the human-annotated ones. There were only two tweets on which all 18 answers agreed with each other. And there were two tweets that got exactly the same number of votes on both sides. The two tweets are "if these shoes get sold out before i can purchase them, i'ma be so mad that i might just switch banks!
@bankofamerica fix yourself!" and "nothing's for sure, but if i were a pnc accountholder, i'd get my online banking business done today: http://lat.ms/uv3qlo". The second question we want to discuss is how to find the optimal number of topics for each of the two LDA models. As shown in the parameter tuning section, the number of topics parameter greatly affects the performance of the model. We tried several ways to determine the number of topics. The first was a fixed number of topics across different corpora: we tried 30 different topic numbers on the Bank of America dataset, chose the best one, and then tested it on the PNC data. The result shows that this method does not perform well on different datasets. We think this is because the number of topics should be a function of the number of documents or the number of words in the corpus. Then we tried to let the model itself determine the parameter. There are some LDA variations that can infer the number of topics automatically. The one we chose is the Hierarchical Dirichlet Process (HDP) mixture model, which is a nonparametric Bayesian approach to clustering grouped data and a natural nonparametric generalization of Latent Dirichlet Allocation BIBREF31. However, it does not perform very well. Its precision is shown in figure FIGREF51 and its recall in figure FIGREF52. We think the reason for this performance might be that tweets, with the restriction of 140 characters, have very different properties from usual documents like news stories or articles. The last method is what was proposed in this paper. We chose $\alpha $ equal to 10, which did a good job in the experiments, but this is only an empirical result. Conclusion In this paper, we proposed a novel weakly-supervised model with an optional supervised classifier layer to determine the impact of a Denial-of-Service attack in real time using Twitter. The approach computes an anomaly score based on the distribution of new topics and their KL divergence to the historical topics. We then tested the model on the same and on different entities to check the model's performance and how well it generalizes. Our experimental results showed that the model achieved decent results in finding tweets related to a DDoS attack, even comparable to a supervised model baseline, and that it could generalize to different entities within the same domain. Using the attack tweets, we could estimate the impact of the attack with a proposed formula. There remain some interesting open questions for future research. For example, it is important to figure out a way to find the optimal number of topics in the dataset. We would also be interested to see how well this model performs on other kinds of event detection tasks if the optional classifier layer is changed accordingly. Additional Result for Parameter Tuning Figures FIGREF53 and FIGREF54 provide all of the experimental results on the model with different combinations of parameters.
Tweets related to a Bank of America DDoS attack were used as training data. The test datasets contain tweets related to attacks on Bank of America, PNC and Wells Fargo.
86be8241737dd8f7b656a3af2cd17c8d54bf1553
86be8241737dd8f7b656a3af2cd17c8d54bf1553_0
Q: Was performance of the weakly-supervised model compared to the performance of a supervised model? Text: Introduction Denial of Service attacks are explicit attempts to stop legitimate users from accessing specific network systems BIBREF0. Attackers try to exhaust network resources like bandwidth, or server resources like CPU and memory. As a result, the targeted system slows down or becomes unusable BIBREF1. On-line service providers like Bank Of America, Facebook and Reddit are often the target of such attacks and the frequency and scale of those attacks has increased rapidly in recent years BIBREF2. To address this problem, there is ample previous work on methods to detect and handle Denial of Service attacks, especially Distributed Denial of Service attacks. D-WARD BIBREF3 is a scheme that tries to locate a DDoS attacks at the source by monitoring inbound and outbound traffic of a network and comparing it with predefined "normal" values. Some IP Traceback mechanisms BIBREF4 were developed to trace back to the attack source from the victim's end. Still other methods try to deploy a defensive scheme in an entire network to detect and respond to an attack at intermediate sub-networks. Watchers BIBREF5 is an example of this approach. Despite all the new models and techniques to prevent or handle cyber attacks, DDoS attacks keep evolving. Services are still being attacked frequently and brought down from time to time. After a service is disrupted, it is crucial for the provider to assess the scale of the outage impact. In this paper, we present a novel approach to solve this problem. No matter how complex the network becomes or what methods the attackers use, a denial of service attack always results in legitimate users being unable to access the network system or slowing down their access and they are usually willing to reveal this information on social media plaforms. Thus legitimate user feedback can be a reliable indicator about the severity level of the service outage. Thus we split this problem into two parts namely by first isolating the tweet stream that is likely related to a DoS attack and then measuring the impact of attack by analyzing the extracted tweets. A central challenge to measure the impact is how to figure out the scale of the effect on users as soon as possible so that appropriate action can be taken. Another difficulty is given the huge number of users of a service, how to effectively get and process the user feedback. With the development of Social Networks, especially micro blogs like Twitter, users post many life events in real time which can help with generating a fast response. Another advantage of social networks is that they are widely used. Twitter claims that they had 313 million monthly active users in the second quarter of 2016 BIBREF6. This characteristic will enlarge the scope of detection and is extremely helpful when dealing with cross domain attacks because tweets from multiple places can be leveraged. The large number of users of social networks will also guarantee the sensitivity of the model. However, because of the large number of users, a huge quantity of tweets will be generated in a short time, making it difficult to manually annotate the tweets, which makes unsupervised or weakly-supervised models much more desirable. In the Twitter data that we collected there are three kinds of tweets. Firstly are tweets that are actually about a cyberattack. 
For example, someone tweeted "Can't sign into my account for bank of America after hackers infiltrated some accounts." on September 19, 2012 when a attack on the website happened. Secondly are tweets about some random complaints about an entity like "Death to Bank of America!!!! RIP my Hello Kitty card... " which also appeared on that day. Lastly are tweets about other things related to the bank. For example, another tweet on the same day is "Should iget an account with bank of america or welsfargo?". To find out the scale of impact from an attack, we must first pick out the tweets that are about the attack. Then using the ratio and number of attack tweets, an estimation of severity can be generated. To solve the problem of detecting Denial of Service attacks from tweets, we constructed a weakly-supervised Natural Language Processing (NLP) based model to process the feeds. More generally, this is a new event detection model. We hypothesize that new topics are attack topics. The hypothesis would not always hold and this issue will be handled by a later module. The first step of the model is to detect topics in one time window of the tweets using Latent Dirichlet Allocation BIBREF7. Then, in order to get a score for each of the topics, the topics in the current time window are compared with the topics in the previous time window using Symmetric Kullback-Leibler Divergence (KL Divergence) BIBREF8. After that, a score for each tweet in the time window is computed using the distribution of topics for the tweet and the score of the topics. We're looking for tweets on new topics through time. While the experiments show promising results, precision can be further increased by adding a layer of a supervised classifier trained with attack data at the expense of recall. Following are the contributions in this paper: A dataset of annotated tweets extracted from Twitter during DoS attacks on a variety organizations from differing domains such as banking (like Bank Of America) and technology. A weakly-supervised approach to identifying detect likely DoS service related events on twitter in real-time. A score to measure impact of the DoS attack based on the frequency of user complaints about the event. The rest of this paper is organized as follows: In section 2, previous work regarding DDoS attack detection and new event detection will be discussed. In section 3, we describe the how the data was collected. We also present the model we created to estimate the impact of DDoS attacks from Twitter feeds. In section 4, the experiments are described and the results are provided. In section 5 we discuss some additional questions. Finally, section 6 concludes our paper and describes future work. Related Work Denial of Service (DoS) attacks are a major threat to Internet security, and detecting them has been a core task of the security community for more than a decade. There exists significant amount of prior work in this domain. BIBREF9, BIBREF10, BIBREF11 all introduced different methods to tackle this problem. The major difference between this work and previous ones are that instead of working on the data of the network itself, we use the reactions of users on social networks to identify an intrusion. Due to the widespread use of social networks, they have become an important platform for real-world event detection in recent years BIBREF12. BIBREF13 defined the task of new event detection as "identifying the first story on topics of interest through constantly monitoring news streams". Atefeh et al. 
BIBREF14 provided a comprehensive overview of event detection methods that have been applied to twitter data. We will discuss some of the approaches that are closely related to our work. Weng et al. BIBREF15 used a wavelet-signal clustering method to build a signal for individual words in the tweets that was dependent high frequency words that repeated themselves. The signals were clustered to detect events. Sankaranarayanan et al. BIBREF16 presented an unsupervised news detection method based on naive Bayes classifiers and on-line clustering. BIBREF17 described an unsupervised method to detect general new event detection using Hierarchical divisive clustering. Phuvipadawat et al. BIBREF18 discussed a pipeline to collect, cluster, rank tweets and ultimately track events. They computed the similarity between tweets using TF-IDF. The Stanford Named Entity Recognizer was used to identify nouns in the tweets providing additional features while computing the TF-IDF score. Petrović et al. BIBREF19 tried to detect events on a large web corpus by applying a modified locality sensitive hashing technique and clustering documents (tweets) together. Benson et al. BIBREF20 created a graphical model that learned a latent representation for twitter messages, ultimately generating a canonical value for each event. Tweet-scan BIBREF21 was a method to detect events in a specific geo-location. After extracting features such as name, time and location from the tweet, the method used DB-SCAN to cluster the tweets and Hierarchical Dirichlet Process to model the topics in the tweets. Badjatiya et. al. BIBREF22 applied deep neural networks to detect events. They showed different architectures such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (LSTM based) and FastText outperform standard n-gram and TF-IDF models. Burel et al. BIBREF23 created a Dual-CNN that had an additional channel to model the named entities in tweets apart from the pretrained word vectors from GloVe BIBREF24 or Word2Vec BIBREF25. Thus most event detection models can be grouped into three main categories of methods i.e. TF-IDF based methods, approaches that model topics in tweets and deep neural network based algorithms. One of the main challenges against applying a neural network model is the the requirement of a large annotated corpus of tweets. Our corpus of tweets is comparatively small. Hence we build our pipeline by modeling the topics learned from tweets. The previous work that is most similar to ours was BIBREF26. We both used Latent Dirichlet Allocation (LDA) to get the topics of the document, the difference was they only run LDA on the hash-tag of the tweets while we try to get the topics in the tweets by running it on the whole document. Latent Dirichlet Allocation BIBREF7 was a method to get topics from a corpus. In our work, we used the technique to acquire the values of some of the variables in our equation. A variation of it, Hierarchically Supervised Latent Dirichlet Allocation BIBREF27 was used in the evaluation. Approach Figure FIGREF4 outlines the entire pipeline of the model from preprocessing tweets to modeling them and finally detecting / ranking future tweets that are related to a DoS issue and measuring its severity. Approach ::: Data Collection To collect the tweets, we first gathered a list of big DDoS attacks happened from 2012 to 2014. Then for each attack on the list, we collected all the tweets from one week before the attack to the attack day that contains the name of the entity attacked. 
Approach ::: Preprocessing The following preprocessing procedure were applied to the corpus of tweets: Remove all the meta-data like time stamp, author, and so on. These meta-data could provide useful information, but only the content of the tweet was used for now. Lowercase all the text Use an English stop word list to filter out stop words. The last two steps are commonly used technique when preprocessing text. Approach ::: Create LDA Models Now we try to find out a quantitative representation of the corpus. To do that, the preprocessed tweets about one attack will be divided into two groups. One is on the attack day and the other is the tweets one week before it. The first set will be called $D_a$ and the other one $D_b$. This step will create two separate LDA models for $D_a$ and $D_b$ using the Genism library BIBREF28. The first Model will be called $M_a$ and the other one $M_b$. Latent Dirichlet allocation (LDA) is a generative probabilistic topic modeling model. Figure FIGREF11 is its plate notation. The meaning of different parameters $M$, $N$, $\alpha $, $\beta $, $\theta $, $z$ and $w$ is also described there. We used the LDA algorithm implemented by the Gensim library. One of the most important parameters of the LDA algorithm is the number of topics $N_t$ in the corpus. To determine that we introduced the following formula: where $N_d$ is the number of tweets in the corpus. $\alpha $ is a constant and we used $\alpha $=10 in our experiments. The logic behind the equation is discussed in section 5. Approach ::: The attack topics Then we want to find out how the new topics are different from the history topics or, in other words, how topics in $M_a$ differ from topics in $M_b$. We define the Symmetric Kullback-Leibler divergence for topic $T_j$ in Model $M_a$ as: Where n is the number of topics in Model $M_b$, $T_m^{^{\prime }}$ is the $m^{th}$ topic in Model $M_b$ and $D_kl (X,Y)$ is the original Kullback-Leibler Divergence for discrete probability distributions which defined as : Where $X(i)$ and $Y(i)$ are the probability of token $i$ in topics $X$ and $Y$ respectively. This is similar to the Jensen-Shannon divergence. So for each topic $T_j$ in Model $M_a$ its difference to topics in $M_b$ is determined by its most similar topic in $M_b$. The topics from the attack day model $M_a$ are ranked by their Symmetric Kullback-Leibler divergence to topics from the non-attack day model $M_b$. An example of selected attack topics is provided in section 4.3. Approach ::: The attack tweets This subsection is about how to find specific tweets that are about a network attack. The tweets are selected based on the relative score $S$. The score for tweet $t_i$ is defined as: Where $n$ is the number of topics on the attack day, $P_{i,j}$ is the probability that topic $j$ appears in tweet $t_i$ in the attack day LDA model, and $SKL_j$ is the Symmetric Kullback-Leibler divergence for topic $j$. The higher the score the more likely it is related to an attack event. Approach ::: Optional Classifier Layer Because annotated data is not needed, the model we described before can be regarded as a weakly-supervised model to detect new events on twitter in a given time period. To label tweets as attack tweets, one assumption must be true, which is that the new event in that time period is a cyber attack. Unfortunately, that is usually not true. Thus, an optional classifier layer can be used to prevent false positives. 
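As one illustration of such a filter, the sketch below uses scikit-learn's decision tree (an optimized CART) over bag-of-words features with a minimum of four samples per leaf, matching the setup described in the next paragraph; the function and variable names are ours:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

def train_attack_filter(train_texts, train_labels):
    """Bag-of-words + CART filter over candidate attack tweets.
    min_samples_leaf=4 mirrors the 'at least 4 samples per leaf' constraint."""
    vectorizer = CountVectorizer(lowercase=True, stop_words="english")
    X = vectorizer.fit_transform(train_texts)
    clf = DecisionTreeClassifier(min_samples_leaf=4, random_state=0)
    clf.fit(X, train_labels)          # labels: 1 = attack tweet, 0 = not
    return vectorizer, clf

def keep_attack_tweets(vectorizer, clf, candidate_texts):
    """Apply the trained filter to the top-ranked candidate tweets."""
    X = vectorizer.transform(candidate_texts)
    predictions = clf.predict(X)
    return [t for t, p in zip(candidate_texts, predictions) if p == 1]
```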
By using a decision tree model we want to find out whether the weakly-supervised part of the model can simplify the problem enough that a simple classification algorithm like a decision tree can have a good result. Additionally, it is easy to find out the reasoning underline a decision tree model so that we will know what the most important features are. The decision tree classifier is trained on the bag of words of collected tweets and the labels are manually annotated. We limit the minimum samples in each leaf to be no less than 4 so that the tree won't overfit. Other than that, a standard Classification and Regression Tree (CART) BIBREF29 implemented by scikit-learn BIBREF30 was used. The classifier was only trained on the training set (tweets about Bank of America on 09/19/2012), so that the test results do not overestimate accuracy. Approach ::: Measure the Severity The definition of severity varies from different network services and should be studied case by case. For the sake of completeness, we propose this general formula: In the equation above, $\beta $ is a parameter from 0 to 1 which determines the weight of the two parts. $N_{attack}$ is the number of attack tweets found. $N_{all}$ means the number of all tweets collected in the time period. And $N_{user}$ is the number of twitter followers of the network service. An interesting future work is to find out the quantitative relation between SeverityLevel score and the size of the actual DDoS attack. Experiments In this section we experimentally study the proposed attack tweet detection models and report the evaluation results. Experiments ::: Term Definition We used precision and recall for evaluation: Precision: Out of all of the tweets that are marked as attack tweets, the percentage of tweets that are actually attack tweets. Or true positive over true positive plus false positive. Recall: Out of all of the actual attack tweets, the percentage of tweets that are labeled as attack tweets. Or true positive over true positive plus false negative. Experiments ::: Experiment Dataset We collected tweets related to five different DDoS attacks on three different American banks. For each attack, all the tweets containing the bank's name posted from one week before the attack until the attack day were collected. There are in total 35214 tweets in the dataset. Then the collected tweets were preprocessed as mentioned in the preprocessing section. The following attacks were used in the dataset: Bank of America attack on 09/19/2012. Wells Fargo Bank attack on 09/19/2012. Wells Fargo Bank attack on 09/25/2012. PNC Bank attack on 09/19/2012. PNC Bank attack on 09/26/2012. Experiments ::: The Attack Topics Only the tweets from the Bank of America attack on 09/19/2012 were used in this experiment. The tweets before the attack day and on the attack day were used to train the two LDA models mentioned in the approach section. The top, bottom 4 attack topics and their top 10 words are shown in table 1 and 2. As shown in table 1, there are roughly 4 kinds of words in the attack topics. First is the name of the entity we are watching. In this case, it is Bank of America. Those words are in every tweet, so they get very high weight in the topics, while not providing useful information. Those words can be safely discarded or added to the stop word list. The second type of words are general cybersecurity words like website, outage, hackers, slowdown and so on. Those words have the potential to become an indicator. 
When topics with those words appears, it is likely that there exists an attack. The third kind are words related to the specific attack but not attacks in general. Those words can provide details about the attack, but it is hard to identify them without reading the full tweets. In our example, the words movie and sacrilegious are in this group. That is because the DDoS attack on Bank of America was in response to the release of a controversial sacrilegious film. The remaining words are non-related words. The higher the weights of them in a topic, the less likely the topic is actually about a DDoS attack. The results showed that except the 3rd topic, the top 4 topics have high weight on related words and the number of the forth type of words are smaller than the first three types of words. There are no high weight words related to security in the bottom 4 topics. We can say that the high SKL topics are about cyber attacks. Experiments ::: The Attack Tweets In this subsection we discuss the experiment on the attack tweets found in the whole dataset. As stated in section 3.3, the whole dataset was divided into two parts. $D_a$ contained all of the tweets collected on the attack day of the five attacks mentioned in section 4.2. And $D_b$ contained all of the tweets collected before the five attacks. There are 1180 tweets in $D_a$ and 7979 tweets in $D_b$. The tweets on the attack days ($D_a$) are manually annotated and only 50 percent of those tweets are actually about a DDoS attack. The 5 tweets that have the highest relative score in the dataset are: jiwa mines and miner u.s. bancorp, pnc latest bank websites to face access issues: (reuters) - some u.s. bancorp... http://bit.ly/p5xpmz u.s. bancorp, pnc latest bank websites to face access issues: (reuters) - some u.s. bancorp and pnc financial... @pncvwallet nothing pnc sucks fat d ur lucky there's 3 pnc's around me or your bitchassness wouldnt have my money business us bancorp, pnc latest bank websites to face access issues - reuters news forex business u.s. bancorp, pnc latest bank websites to face access issues http://dlvr.it/2d9ths The precision when labeling the first x ranked tweets as attack tweet is shown in the figure FIGREF39. The x-axis is the number of ranked tweets treated as attack tweets. And the y-axis is the corresponding precision. The straight line in figures FIGREF39, FIGREF43 and FIGREF51 is the result of a supervised LDA algorithm which is used as a baseline. Supervised LDA achieved 96.44 percent precision with 10 fold cross validation. The result shows that if the model is set to be more cautious about labeling a tweet as an attack tweet, a small x value, higher precision, even comparable to supervised model can be achieved. However as the x value increases the precision drops eventually. Figure FIGREF40 shows the recall of the same setting. We can find out that the recall increases as the model becomes more bold, at the expense of precision. Figure FIGREF41 is the detection error trade-off graph to show the relation between precision and recall more clearly (missed detection rate is the precision). Experiments ::: Generalization In this subsection we evaluate how good the model generalizes. To achieve that, the dataset is divided into two groups, one is about the attacks on Bank of America and the other group is about PNC and Wells Fargo. The only difference between this experiment and the experiment in section 4.4 is the dataset. 
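The precision/recall-at-top-$x$ numbers discussed above can be computed directly from the ranked, manually annotated tweets; a small illustrative sketch follows (the helper name and the toy label list are ours):

```python
import numpy as np

def precision_recall_at_k(ranked_labels, k):
    """ranked_labels: 1/0 ground-truth annotations of the attack-day tweets,
    sorted by the relative score S (highest first). The top-k tweets are
    treated as predicted attack tweets."""
    labels = np.asarray(ranked_labels)
    true_positives = int(labels[:k].sum())
    precision = true_positives / k
    recall = true_positives / max(1, int(labels.sum()))
    return precision, recall

# Sweeping k reproduces curves analogous to the precision- and recall-vs-x plots.
curve = [precision_recall_at_k([1, 1, 0, 1, 0, 0, 1, 0], k) for k in range(1, 9)]
```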
In this experiment setting $D_a$ contains only the tweets collected on the days of attack on PNC and Wells Fargo. $D_b$ only contains the tweets collected before the Bank of America attack. There are 590 tweets in $D_a$ and 5229 tweets in $D_b$. In this experiment, we want to find out whether a model trained on Bank of America data can make good classification on PNC and Wells Fargo data. Figures FIGREF43 and FIGREF44 will show the precision and recall of the model in this experiment setting. A detection error trade-off graph (Figure FIGREF45) is also provided. The result is similar to the whole dataset setting from the previous section. The smaller the x value is, the higher the precision and lower the recall, vice versa. The precision is also comparable to the supervised model when a small x is chosen. This shows that the model generalized well. Experiments ::: Impact Estimation Using the result from last section, we choose to label the first 40 tweets as attack tweets. The number 40 can be decided by either the number of tweets labeled as attack tweets by the decision tree classifier or the number of tweets that have a relative score S higher than a threshold. The PNC and Wells Fargo bank have 308.3k followers combined as of July 2018. According to eqution (5) from section 3.6, the severity Level can be computed. The score would have a range from 6.78 * $10^{-2}$ to 1.30 * $10^{-3}$, depending on the value of $\beta $. This means that it could be a fairly important event because more than six percent of tweets mentioning the banks are talking about the DDoS attack. However it could also be a minor attack because only a tiny portion of the people following those banks are complaining about the outage. The value of $\beta $ should depend on the provider's own definition of severity. Experiments ::: Parameter Tuning This model has two parameters that need to be provided. One is $\alpha $ which is needed to determine the number of topics parameter $N_t$, and the other is whether to use the optional decision tree filter. Figures FIGREF49 and FIGREF50 provide experimental results on the model with different combinations of parameters. We selected four combinations that have the best and worst performance. All of the results can be found in appendix. The model was trained on Bank of America tweets and tested on PNC and Wells Fargo tweets like in section 4.5. In the figure, different lines have different values of $\alpha $ which ranges from 5 to 14 and the x axis is the number of ranked tweets labeled as attack tweets which have a range of 1 to 100 and the y-axis is the precision or recall of the algorithm and should be a number from 0 to 1. The results shows the decision tree layer increases precision at the cost of recall. The model's performance differs greatly with different $\alpha $ values while there lacks a good way to find the optimal one. Discussion In this section, we will discuss two questions. Firstly, we want to briefly discuss how good humans do on this task. What we find out is though humans perform well on most of the tweets, some tweets have proven to be challenging without additional information. In this experiment, we asked 18 members of our lab to classify 34 tweets picked from human annotated ones. There are only two tweets which all the 18 answers agree with each other. And there are two tweets that got exactly the same number of votes on both sides. The two tweets are "if these shoes get sold out before i can purchase them, i'ma be so mad that i might just switch banks! 
@bankofamerica fix yourself!" and "nothing's for sure, but if i were a pnc accountholder, i'd get my online banking business done today: http://lat.ms/uv3qlo". The second question we want to talk about is how to find out the optimal number of topics in each of the two LDA models. As shown in the parameter tuning section, the number of topics parameter greatly affects the performance of the model. We've tried several ways to figure out the number of topics. First a set number of topics for different corpora. We tried 30 different topic numbers on the Bank of America dataset and chose the best one, and then tested it on the PNC data. The result shows that this method does not perform well on different datasets. We think it is because the number of topics should be a function of the number of documents or number of words in the corpus. Then we tried to let the model itself determines the parameter. There are some LDA variations that can do automatic number of topic inference. The one we chose is the Hierarchical Dirichlet Process (HDP) mixture model, which is a nonparametric Bayesian approach to clustering grouped data and a natural nonparametric generalization of Latent Dirichlet Allocation BIBREF31. However it does not perform very well. Its precision is shown in figure FIGREF51 and recall is shown in figure FIGREF52. We think the reason for this kind of performance might be that tweets, with the restriction of 140 characters, have very different properties than usual documents like news or articles. The last method is what was proposed in this paper. An $\alpha $ equals 10 is what we chose and did a good job on the experiments. But it is only an empirical result. Conclusion In this paper, we proposed a novel weakly-supervised model with optional supervised classifier layer to determine the impact of a Denial-of-Service attack in real time using twitter. The approach computes an anomaly score based on the distribution of new topics and their KL divergence to the historical topics. Then we tested the model on same and different entities to check the model's performance and how well it generalize. Our experiment result showed that the model achieved decent result on finding out tweets related to a DDoS attack even comparable to a supervised model baseline. And it could generalize to different entities within the same domain. Using the attack tweets, we could get an estimation of the impact of the attack with a proposed formula. There remain some interesting open questions for future research. For example, it is important to figure out a way to find out the optimal number of topics in the dataset. We would also be interested to see how well this model will perform on other kind of event detection task if the optional classifier layer changes accordingly. Additional Result for Parameter Tuning Figures FIGREF53 and FIGREF54 provide all of the experimental results on the model with different combinations of parameters.
Yes
a4422019d19f9c3d95ce8dc1d529bf3da5edcfb1
a4422019d19f9c3d95ce8dc1d529bf3da5edcfb1_0
Q: Do the tweets come from a specific region? Text: Introduction Over the last couple of years, the MeToo movement has facilitated several discussions about sexual abuse. Social media, especially Twitter, was one of the leading platforms where people shared their experiences of sexual harassment, expressed their opinions, and also offered support to victims. A large portion of these tweets was tagged with a dedicated hashtag #MeToo, and it was one of the main trending topics in many countries. The movement was viral on social media and the hashtag used over 19 million times in a year. The MeToo movement has been described as an essential development against the culture of sexual misconduct by many feminists, activists, and politicians. It is one of the primary examples of successful digital activism facilitated by social media platforms. The movement generated many conversations on stigmatized issues like sexual abuse and violence, which were not often discussed before because of the associated fear of shame or retaliation. This creates an opportunity for researchers to study how people express their opinion on a sensitive topic in an informal setting like social media. However, this is only possible if there are annotated datasets that explore different linguistic facets of such social media narratives. Twitter served as a platform for many different types of narratives during the MeToo movement BIBREF0. It was used for sharing personal stories of abuse, offering support and resources to victims, and expressing support or opposition towards the movement BIBREF1. It was also used to allege individuals of sexual misconduct, refute such claims, and sometimes voice hateful or sarcastic comments about the campaign or individuals. In some cases, people also misused hashtag to share irrelevant or uninformative content. To capture all these complex narratives, we decided to curate a dataset of tweets related to the MeToo movement that is annotated for various linguistic aspects. In this paper, we present a new dataset (MeTooMA) that contains 9,973 tweets associated with the MeToo movement annotated for relevance, stance, hate speech, sarcasm, and dialogue acts. We introduce and annotate three new dialogue acts that are specific to the movement: Allegation, Refutation, and Justification. The dataset also contains geographical information about the tweets: from which country it was posted. We expect this dataset would be of great interest and use to both computational and socio-linguists. For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. For socio-linguists, it provides an opportunity to explore how a movement manifests in social media across multiple countries. Related Datasets Table TABREF3 presents a summary of datasets that contain social media posts about sexual abuse and annotated for various labels. BIBREF2 created a dataset of 2,500 tweets for identification of malicious intent surrounding the cases of sexual assault. The tweets were annotated for labels like accusational, validation, sensational. Khatua et al BIBREF3 collected 0.7 million tweets containing hashtags such as #MeToo, #AlyssaMilano, #harassed. 
The annotated a subset of 1024 tweets for the following assault-related labels: assault at the workplace by colleagues, assault at the educational institute by teachers or classmates, assault at public places by strangers, assault at home by a family member, multiple instances of assaults, or a generic tweet about sexual violence. BIBREF4 created the Reddit Domestic Abuse Dataset, which contained 18,336 posts annotated for 2 classes, abuse and non-abuse. BIBREF5 presented a dataset consisting of 5119 tweets distributed into recollection and non-recollection classes. The tweet was annotated as recollection if it explicitly mentioned a personal instance of sexual harassment. Sharifirad et al BIBREF6 created a dataset with 3240 tweets labeled into three categories of sexism: Indirect sexism, casual sexism, physical sexism. SVAC (Sexual Violence in Armed Conflict) is another related dataset which contains reports annotated for six different aspects of sexual violence: prevalence, perpetrators, victims, forms, location, and timing. Unlike all the datasets described above, which are annotated for a single group of labels, our dataset is annotated for five different linguistic aspects. It also has more annotated samples than most of its contemporaries. Dataset ::: Data Collection We focused our data collection over the period of October to December 2018 because October marked the one year anniversary of the MeToo movement. Our first step was to identify a list of countries where the movement was trending during the data collection period. To this end, we used Google's interactive tool named MeTooRisingWithGoogle, which visualizes search trends of the term "MeToo" across the globe. This helped us narrow down our query space to 16 countries. We then scraped 500 random posts from online sexual harassment support forums to help identify keywords or phrases related to the movement . The posts were first manually inspected by the annotators to determine if they were related to the MeToo movement. Namely, if they contained self-disclosures of sexual violence, relevant information about the events associated with the movement, references to news articles or advertisements calling for support for the movement. We then processed the relevant posts to extract a set of uni-grams and bi-grams with high tf-idf scores. The annotators further pruned this set by removing irrelevant terms resulting in a lexicon of 75 keywords. Some examples include: #Sexual Harassment, #TimesUp, #EveryDaySexism, assaulted, #WhenIwas, inappropriate, workplace harassment, groped, #NotOkay, believe survivors, #WhyIDidntReport. We then used Twitter's public streaming API to query for tweets from the selected countries, over the chosen three-month time frame, containing any of the keywords. This resulted in a preliminary corpus of 39,406 tweets. We further filtered this data down to include only English tweets based on tweet's language metadata field and also excluded short tweets (less than two tokens). Lastly, we de-duplicated the dataset based on the textual content. Namely, we removed all tweets that had more than 0.8 cosine similarity score on the unaltered text in tf-idf space with any another tweet. We employed this de-duplication to promote more lexical diversity in the dataset. After this filtering, we ended up with a corpus of 9,973 tweets. Table TABREF14 presents the distribution of the tweets by country before and after the filtering process. 
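As an illustration of the near-duplicate removal step, the sketch below implements one reasonable reading of the 0.8 cosine-similarity rule with scikit-learn, greedily keeping the first tweet of each near-duplicate group; it is not the authors' exact filtering code:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def deduplicate(texts, threshold=0.8):
    """Greedy near-duplicate removal: keep a tweet only if its tf-idf cosine
    similarity to every previously kept tweet is at most the threshold."""
    X = TfidfVectorizer().fit_transform(texts)   # unaltered text in tf-idf space
    kept = []
    for i in range(X.shape[0]):
        if not kept or cosine_similarity(X[i], X[kept]).max() <= threshold:
            kept.append(i)
    return [texts[i] for i in kept]
```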
A large portion of the samples is from India because the MeToo movement has peaked towards the end of 2018 in India. There are very few samples from Russia likely because of content moderation and regulations on social media usage in the country. Figure FIGREF15 gives a geographical distribution of the curated dataset. Due to the sensitive nature of this data, we have decided to remove any personal identifiers (such as names, locations, and hyperlinks) from the examples presented in this paper. We also want to caution the readers that some of the examples in the rest of the paper, though censored for profanity, contain offensive language and express a harsh sentiment. Dataset ::: Annotation Task We chose against crowd-sourcing the annotation process because of the sensitive nature of the data and also to ensure a high quality of annotations. We employed three domain experts who had advanced degrees in clinical psychology and gender studies. The annotators were first provided with the guidelines document, which included instructions about each task, definitions of class labels, and examples. They studied this document and worked on a few examples to familiarize themselves with the annotation task. They also provided feedback on the document, which helped us refine the instructions and class definitions. The annotation process was broken down into five sub-tasks: for a given tweet, the annotators were instructed to identify relevance, stance, hate speech, sarcasm, and dialogue act. An important consideration was that the sub-tasks were not mutually exclusive, implying that the presence of one label did not consequently mean an absence of any. Dataset ::: Annotation Task ::: Task 1: Relevance Here the annotators had to determine if the given tweet was relevant to the MeToo movement. Relevant tweets typically include personal opinions (either positive or negative), experiences of abuse, support for victims, or links to MeToo related news articles. Following are examples of a relevant tweet: Officer [name] could be kicked out of the force after admitting he groped a woman at [place] festival last year. His lawyer argued saying the constable shouldn't be punished because of the #MeToo movement. #notokay #sexualabuse. and an irrelevant tweet: Had a bit of break. Went to the beautiful Port [place] and nearby areas. Absolutely stunning as usual. #beautiful #MeToo #Australia #auspol [URL]. We expect this relevance annotation could serve as a useful filter for downstream modeling. Dataset ::: Annotation Task ::: Task 2: Stance Stance detection is the task of determining if the author of a text is in favour or opposition of a particular target of interest BIBREF7, BIBREF8. Stance helps understand public opinion about a topic and also has downstream applications in information extraction, text summarization, and textual entailment BIBREF9. We categorized stance into three classes: Support, Opposition, Neither. Support typically included tweets that expressed appreciation of the MeToo movement, shared resources for victims of sexual abuse, or offered empathy towards victims. Following is an example of a tweet with a Support stance: Opinion: #MeToo gives a voice to victims while bringing attention to a nationwide stigma surrounding sexual misconduct at a local level.[URL]. This should go on. On the other hand, Opposition included tweets expressing dissent over the movement or demonstrating indifference towards the victims of sexual abuse or sexual violence. 
An example of an Opposition tweet is shown below: The double standards and selective outrage make it clear that feminist concerns about power imbalances in the workplace aren't principles but are tools to use against powerful men they hate and wish to destroy. #fakefeminism. #men. Dataset ::: Annotation Task ::: Task 3: Hate Speech Detection of hate speech in social media has been gaining interest from NLP researchers lately BIBREF10, BIBREF11. Our annotation scheme for hate speech is based on the work of BIBREF12. For a given tweet, the annotators first had to determine if it contained any hate speech. If the tweet was hateful, they had to identify if the hate was Directed or Generalized. Directed hate is targeted at a particular individual or entity, whereas Generalized hate is targeted at larger groups that belonged to a particular ethnicity, gender, or sexual orientation. Following are examples of tweets with Directed hate: [username] were lit minus getting f*c*i*g mouthraped by some drunk chick #MeToo (no body cares because I'm a male) [URL] and Generalized hate: For the men who r asking "y not then, y now?", u guys will still doubt her & harrass her even more for y she shared her story immediately no matter what! When your sister will tell her childhood story to u one day, i challenge u guys to ask "y not then, y now?" #Metoo [username] [URL] #a**holes. Dataset ::: Annotation Task ::: Task 4: Sarcasm Sarcasm detection has also become a topic of interest for computational linguistics over the last few years BIBREF13, BIBREF14 with applications in areas like sentiment analysis and affective computing. Sarcasm was an integral part of the MeToo movement. For example, many women used the hashtag #NoWomanEver to sarcastically describe some of their experiences with harassment. We instructed the annotators to identify the presence of any sarcasm in a tweet either about the movement or about an individual or entity. Following is an example of a sarcastic tweet: # was pound before it was a hashtag. If you replace hashtag with the pound in the #metoo, you get pound me too. Does that apply to [name]. Dataset ::: Annotation Task ::: Task 5: Dialogue Acts A dialogue act is defined as the function of a speaker's utterance during a conversation BIBREF15, for example, question, answer, request, suggestion, etc. Dialogue Acts have been extensive studied in spoken BIBREF16 and written BIBREF17 conversations and have lately been gaining interest in social media BIBREF18. In this task, we introduced three new dialogue acts that are specific to the MeToo movement: Allegation, Refutation, and Justification. Allegation: This category includes tweets that allege an individual or a group of sexual misconduct. The tweet could either be personal opinion or text summarizing allegations made against someone BIBREF19. The annotators were instructed to identify if the tweet includes the hypothesis of allegation based on first-hand account or a verifiable source confirming the allegation. Following is an example of a tweet that qualifies as an Allegation: More women accuse [name] of grave sexual misconduct...twitter seethes with anger. #MeToo #pervert. Refutation: This category contains tweets where an individual or an organization is denying allegations with or without evidence. Following is an example of a Refutation tweet: She is trying to use the #MeToo movement to settle old scores, says [name1] after [name2] levels sexual assault allegations against him. 
Justification: The class includes tweets where the author is justifying their actions. These could be alleged actions in the real world (e.g. allegation of sexual misconduct) or some action performed on twitter (e.g. supporting someone who was alleged of misconduct). Following is an example of a tweet that would be tagged as Justification: I actually did try to report it, but he and of his friends got together and lied to the police about it. #WhyIDidNotReport. Dataset Analysis This section includes descriptive and quantitative analysis performed on the dataset. Dataset Analysis ::: Inter-annotator agreement We evaluated inter-annotator agreements using Krippendorff's alpha (K-alpha) BIBREF20. K-alpha, unlike simple agreement measures, accounts for chance correction and class distributions and can be generalized to multiple annotators. Table TABREF27 summarizes the K-alpha measures for all the annotation tasks. We observe very strong agreements for most of the tasks with a maximum of 0.92 for the relevance task. The least agreement observed was for the hate speech task at 0.78. Per recommendations in BIBREF21, we conclude that these annotations are of good quality. We chose a straightforward approach of majority decision for label adjudication: if two or more annotators agreed on assigning a particular class label. In cases of discrepancy, the labels were adjudicated manually by the authors. Table TABREF28 shows a distribution of class labels after adjudication. Dataset Analysis ::: Geographical Distribution Figure FIGREF24 presents a distribution of all the tweets by their country of origin. As expected, a large portion of the tweets across all classes are from India, which is consistent with Table TABREF14. Interestingly, the US contributes comparatively a smaller proportion of tweets to Justification category, and likewise, UK contributes a lower portion of tweets to the Generalized Hate category. Further analysis is necessary to establish if these observations are statistically significant. Dataset Analysis ::: Label Correlations We conducted a simple experiment to understand the linguistic similarities (or lack thereof) for different pairs of class labels both within and across tasks. To this end, for each pair of labels, we converted the data into its tf-idf representation and then estimated Pearson, Spearman, and Kendall Tau correlation coefficients and also the corresponding $p$ values. The results are summarized in Table TABREF32. Overall, the correlation values seem to be on a lower end with maximum Pearson's correlation value obtained for the label pair Justification - Support, maximum Kendall Tau's correlation for Allegation - Support, and maximum Spearman's correlation for Directed Hate - Generalized Hate. The correlations are statistically significant ($p$ $<$ 0.05) for three pairs of class labels: Directed Hate - Generalized Hate, Directed Hate - Opposition, Sarcasm - Opposition. Sarcasm and Allegation also have statistically significant $p$ values for Pearson and Spearman correlations. Dataset Analysis ::: Keywords We used SAGE BIBREF22, a topic modelling method, to identify keywords associated with the various class labels in our dataset. SAGE is an unsupervised generative model that can identify words that distinguish one part of the corpus from rest. For our keyword analysis, we removed all the hashtags and only considered tokens that appeared at least five times in the corpus, thus ensuring they were representative of the topic. 
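The label-correlation analysis above does not spell out which vectors are correlated for a pair of labels; the sketch below shows one plausible realization that correlates the mean tf-idf vector of each class using SciPy. This is an assumption on our part, not necessarily the authors' procedure:

```python
from scipy.stats import pearsonr, spearmanr, kendalltau
from sklearn.feature_extraction.text import TfidfVectorizer

def label_pair_correlations(texts_a, texts_b):
    """Correlate two class labels through their mean tf-idf profiles;
    each entry is a (coefficient, p-value) pair."""
    X = TfidfVectorizer().fit_transform(texts_a + texts_b).toarray()
    mean_a = X[:len(texts_a)].mean(axis=0)
    mean_b = X[len(texts_a):].mean(axis=0)
    return {
        "pearson": pearsonr(mean_a, mean_b),
        "spearman": spearmanr(mean_a, mean_b),
        "kendall": kendalltau(mean_a, mean_b),
    }
```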
Table TABREF25 presents the top five keywords associated with each class, along with their salience scores. Though Directed and Generalized hate are closely related topics, there is not much overlap between the top 5 salient keywords, suggesting that there are linguistic cues to distinguish between them. The word predators is strongly indicative of Generalized Hate, which is intuitive because it is a term often used to describe people who were accused of sexual misconduct. The word lol being associated with Sarcasm is also reasonably intuitive because of sarcasm's close relation with humour. Dataset Analysis ::: Sentiment Analysis Figure FIGREF29 presents a word cloud representation of the data where the colours are assigned based on the NRC emotion lexicon BIBREF23: green for positive and red for negative. We also analyzed all the classes in terms of Valence, Arousal, and Dominance using the NRC VAD lexicon BIBREF24. The results are summarized in Figure FIGREF33. Of all the classes, Directed-Hate has the largest valence spread, which is likely because of the extreme nature of the opinions expressed in such tweets. The spread for dominance is fairly narrow for all class labels, with the median score slightly above 0.5, suggesting a slightly dominant nature exhibited by the authors of the tweets. Discussion This paper introduces a new dataset containing tweets related to the #MeToo movement. It may involve opinions over socially stigmatized issues or self-reports of distressing incidents. Therefore, it is necessary to examine the social impact of this exercise, the ethical implications for the individuals concerned with the dataset, and its limitations. Mental health implications: This dataset open-sources posts curated by individuals who may have undergone instances of sexual exploitation in the past. While we respect and applaud their decision to raise their voices against their exploitation, we also understand that their revelations may have been met with public backlash and apathy in both the virtual and the real world. In such situations, where the social reputation of both accuser and accused may be under threat, mental health concerns become very important. As survivors recount their horrific episodes of sexual harassment, it becomes imperative to provide them with therapeutic care BIBREF25 as a safeguard against mental health hazards. Such measures, if combined with the integration of mental health assessment tools in social media platforms, can make victims of sexual abuse feel more empowered and self-contemplative about their revelations. Use of MeTooMA dataset for population studies: We would like to mention that there have been no attempts to conduct population-centric analysis on the proposed dataset. The analysis presented here should be seen as a proof of concept to examine instances of the #MeToo movement on Twitter. The authors acknowledge that learning from this dataset cannot be used as-is for any direct social interventions. Network sampling of real-world users for any experimental work beyond this dataset would require careful evaluation beyond the observational analysis presented herein. Moreover, the findings could be used to assist already existing human knowledge. Experiences of the affected communities should be recorded and analyzed carefully, as careless treatment could otherwise lead to social stigmatization, discrimination and societal bias.
Care has been taken to ensure that this work does not come across as targeting any specific individual for their personal stance on the issues pertaining to the social theme at hand. The authors do not aim to vilify individuals accused in the #MeToo cases in any manner. Our work tries to bring out general trends that may help researchers develop better techniques to understand mass unorganized virtual movements. Effect on marginalized communities: The authors recognize the impact of the #MeToo movement on socially stigmatized populations like LGBTQIA+. The #MeToo movement provided such individuals with the liberty to express their notions about instances of sexual violence and harassment. The movement acted as a catalyst towards implementing social policy changes to benefit the members of these communities. Hence, it is essential to keep in mind that any experimental work undertaken on this dataset should try to minimize the biases against minority groups, which might get amplified in cases of sudden outbursts of public reaction over sensitive media discussions. Limitations of individual consent: Considering the mental health aspects of the individuals concerned, social media practitioners should be wary of making automated interventions to aid the victims of sexual abuse, as some individuals might prefer not to disclose their sexual identities or views. Concerned social media users might also withdraw their social media information if they find out that their personal information may be used for computational analysis. Hence, it is imperative to seek individual consent before trying to profile authors involved in online discussions, in order to uphold personal privacy. Use Cases The authors would like to formally propose some ideas on possible extensions of the proposed dataset: The rise of online hate speech and its related behaviours like cyber-bullying has been a hot topic of research in gender studies BIBREF26. Our dataset could be utilized for extracting actionable insights and virtual dynamics to identify gender roles for analyzing sexual abuse revelations, similar to BIBREF27. The dataset could be utilized by psycholinguists for extracting contextualized lexicons to examine how influential people are portrayed on public platforms in events of mass social media movements BIBREF28. Interestingly, such analysis may help linguists determine the power dynamics of authoritative people in terms of perspective and sentiment through campaign modelling. Marginalized voices affected by mass social movements can be studied through polarization analysis on graph-based simulations of the social media networks. Based on the data gathered from these nodes, community interactions could be leveraged to identify indigenous issues pertaining to societal unrest across various sections of the society BIBREF29. Challenge Proposal: The authors of the paper would like to extend the present work as a challenge proposal for building computational semantic analysis systems aimed at online social movements. In contrast to already available datasets and existing challenges, we propose tasks on detecting hate speech, sarcasm, stance and relevance that will be more focused on social media activities surrounding revelations of sexual abuse and harassment. The tasks may utilize the message-level text, linked images, tweet-level metadata and user-level interactions to model systems that are Fair, Accountable, Interpretable and Responsible (FAIR).
Research ideas emerging from this work should not be limited to the above discussion. If needed, supplementary data required to enrich this dataset can be collected utilizing Twitter API and JSON records for exploratory tasks beyond the scope of the paper. Conclusion In this paper, we presented a new dataset annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. To our knowledge, there are no datasets out there that provide annotations across so many different dimensions. This allows researchers to perform various multi-label and multi-aspect classification experiments. Additionally, researchers could also address some interesting questions on how different linguistic components influence each other: e.g. does understanding one's stance help in better prediction of hate speech? In addition to these exciting computational challenges, we expect this data could be useful for socio and psycholinguists in understanding the language used by victims when disclosing their experiences of abuse. Likewise, they could analyze the language used by alleged individuals in justifying their actions. It also provides a chance to examine the language used to express hate in the context of sexual abuse. In the future, we would like to propose challenge tasks around this data where the participants will have to build computational models to capture all the different linguistic aspects that were annotated. We expect such a task would drive researchers to ask more interesting questions, find limitations of the dataset, propose improvements, and provide interesting insights.
No
bb169a0624aefe66d3b4b1116bbd152d54f9e31b
bb169a0624aefe66d3b4b1116bbd152d54f9e31b_0
Q: Did they experiment with the corpus? Text: Introduction Language resources are an essential component in entire R&D domains. From the humble but vast repositories of monolingual texts that are used by the newest language modeling approaches like BERT and GPT, to parallel corpora that allows our machine translation systems to inch closer to human performance, to the more specialized resources like WordNets that encode semantic relations between nodes, these resources are necessary for the general advancement of Natural Language Processing, which eventually evolves into real apps and services we are (already) taking for granted. We introduce RONEC - the ROmanian Named Entity Corpus, a free, open-source resource that contains annotated named entities in copy-right free text. A named entity corpus is generally used for Named Entity Recognition (NER): the identification of entities in text such as names of persons, locations, companies, dates, quantities, monetary values, etc. This information would be very useful for any number of applications: from a general information extraction system down to task-specific apps such as identifying monetary values in invoices or product and company references in customer reviews. We motivate the need for this corpus primarily because, for Romanian, there is no other such corpus. This basic necessity has sharply arisen as we, while working on a different project, have found out there are no usable resources to help us in an Information Extraction task: we were unable to extract people, locations or dates/values. This constituted a major road-block, with the only solution being to create such a corpus ourselves. As the corpus was out-of-scope for this project, the work was done privately, outside the umbrella of any authors' affiliations - this is why we are able to distribute this corpus completely free. The current landscape in Romania regarding language resources is relatively unchanged from the outline given by the META-NET project over six years ago. The in-depth analysis performed in this European-wide Horizon2020-funded project revealed that the Romanian language falls in the "fragmentary support" category, just above the last, "weak/none" category (see the language/support matrix in BIBREF3). This is why, in 2019/2020, we are able to present the first NER resource for Romanian. Introduction ::: Related corpora We note that, while fragmentary, there are a few related language resources available, but none that specifically target named entities: Introduction ::: Related corpora ::: ROCO corpus ROCO BIBREF4 is a Romanian journalistic corpus that contains approx. 7.1M tokens. It is rich in proper names, numerals and named entities. The corpus has been automatically annotated at word-level with morphosyntactic information (MSD annotations). Introduction ::: Related corpora ::: ROMBAC corpus Released in 2016, ROMBAC BIBREF5 is a Romanian text corpus containing 41M words divided in relatively equal domains like journalism, legalese, fiction, medicine, etc. Similarly to ROCO, it is automatically annotated at word level with MSD descriptors. Introduction ::: Related corpora ::: CoRoLa corpus The much larger and recently released CoRoLa corpus BIBREF6 contains over 1B words, similarly automatically annotated. In all these corpora the named entities are not a separate category - the texts are morphologically and syntactically annotated and all proper nouns are marked as such - NP - without any other annotation or assigned category. 
Thus, these corpora cannot be used in a true NER sense. Furthermore, annotations were done automatically with a tokenizer/tagger/parser, and thus are of slightly lower quality than one would expect of a gold-standard corpus. Corpus Description The corpus, at its current version 1.0 is composed of 5127 sentences, annotated with 16 classes, for a total of 26377 annotated entities. The 16 classes are: PERSON, NAT_REL_POL, ORG, GPE, LOC, FACILITY, PRODUCT, EVENT, LANGUAGE, WORK_OF_ART, DATETIME, PERIOD, MONEY, QUANTITY, NUMERIC_VALUE and ORDINAL. It is based on copyright-free text extracted from Southeast European Times (SETimes). The news portal has published “news and views from Southeast Europe” in ten languages, including Romanian. SETimes has been used in the past for several annotated corpora, including parallel corpora for machine translation. For RONEC we have used a hand-picked selection of sentences belonging to several categories (see table TABREF16 for stylistic examples). The corpus contains the standard diacritics in Romanian: letters ș and ț are written with a comma, not with a cedilla (like ş and ţ). In Romanian many older texts are written with cedillas instead of commas because full Unicode support in Windows came much later than the classic extended Ascii which only contained the cedilla letters. The 16 classes are inspired by the OntoNotes5 corpus BIBREF7 as well as the ACE (Automatic Content Extraction) English Annotation Guidelines for Entities Version 6.6 2008.06.13 BIBREF8. Each class will be presented in detail, with examples, in the section SECREF3 A summary of available classes with word counts for each is available in table TABREF18. The corpus is available in two formats: BRAT and CoNLL-U Plus. Corpus Description ::: BRAT format As the corpus was developed in the BRAT environment, it was natural to keep this format as-is. BRAT is an online environment for collaborative text annotation - a web-based tool where several people can mark words, sub-word pieces, multiple word expressions, can link them together by relations, etc. The back-end format is very simple: given a text file that contains raw sentences, in another text file every annotated entity is specified by the start/end character offset as well as the entity type, one per line. RONEC is exported in the BRAT format as ready-to-use in the BRAT annotator itself. The corpus is pre-split into sub-folders, and contains all the extra files such as the entity list, etc, needed to directly start an eventual edit/extension of the corpus. Example (raw/untokenized) sentences: Tot în cadrul etapei a 2-a, a avut loc întâlnirea Vardar Skopje - S.C. Pick Szeged, care s-a încheiat la egalitate, 24 - 24. I s-a decernat Premiul Nobel pentru literatură pe anul 1959. Example annotation format: T1 ORDINAL 21 26 a 2-a T2 ORGANIZATION 50 63 Vardar Skopje T3 ORGANIZATION 66 82 S.C. Pick Szeged T4 NUMERIC_VALUE 116 118 24 T5 NUMERIC_VALUE 121 123 24 T6 DATETIME 175 184 anul 1959 Corpus Description ::: CoNLL-U Plus format The CoNLL-U Plus format extends the standard CoNLL-U which is used to annotate sentences, and in which many corpora are found today. 
The CoNLL-U format annotates one word per line with 10 distinct "columns" (tab separated): nolistsep ID: word index; FORM: unmodified word from the sentence; LEMMA: the word's lemma; UPOS: Universal part-of-speech tag; XPOS: Language-specific part-of-speech tag; FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; HEAD: Head of the current word, which is either a value of ID or zero; DEPREL: Universal dependency relation to the HEAD or a defined language-specific subtype of one; DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs; MISC: Miscellaneous annotations such as space after word. The CoNLL-U Plus extends this format by allowing a variable number of columns, with the restriction that the columns are to be defined in the header. For RONEC, we define our CoNLL-U Plus format as the standard 10 columns plus another extra column named RONEC:CLASS. This column has the following format: nolistsep [noitemsep] each named entity has a distinct id in the sentence, starting from 1; as an entity can span several words, all words that belong to it have the same id (no relation to word indexes) the first word belonging to an entity also contains its class (e.g. word "John" in entity "John Smith" will be marked as "1:PERSON") a non-entity word is marked with an asterisk * Table TABREF37 shows the CoNLL-U Plus format where for example "a 2-a" is an ORDINAL entity spanning 3 words. The first word "a" is marked in this last column as "1:ORDINAL" while the following words just with the id "1". The CoNLL-U Plus format we provide was created as follows: (1) annotate the raw sentences using the NLP-Cube tool for Romanian (it provides everything from tokenization to parsing, filling in all attributes in columns #1-#10; (2) align each token with the human-made entity annotations from the BRAT environment (the alignment is done automatically and is error-free) and fill in column #11. Classes and Annotation Methodology For the English language, we found two "categories" of NER annotations to be more prominent: CoNLL- and ACE-style. Because CoNLL only annotates a few classes (depending on the corpora, starting from the basic three: PERSON, ORGANIZATION and LOCATION, up to seven), we chose to follow the ACE-style with 18 different classes. After analyzing the ACE guide we have settled on 16 final classes that seemed more appropriate for Romanian, seen in table TABREF18. In the following sub-sections we will describe each class in turn, with a few examples. Some examples have been left in Romanian while some have been translated in English for the reader's convenience. In the examples at the end of each class' description, translations in English are colored for easier reading. Classes and Annotation Methodology ::: PERSON Persons, including fictive characters. We also mark common nouns that refer to a person (or several), including pronouns (us, them, they), but not articles (e.g. in "an individual" we don't mark "an"). Positions are not marked unless they directly refer to the person: "The presidential counselor has advised ... that a new counselor position is open.", here we mark "presidential counselor" because it refers to a person and not the "counselor" at the end of the sentence as it refers only to a position. Locul doi i-a revenit româncei Otilia Aionesei, o elevă de 17 ani. green!55!blueThe second place was won by Otilia Aionesei, a 17 year old student. 
Ministrul bulgar pentru afaceri europene, Meglena Kuneva ... green!55!blueThe Bulgarian Minister for European Affairs, Meglena Kuneva ... Classes and Annotation Methodology ::: NAT_REL_POL These are nationalities or religious or political groups. We include words that indicate the nationality of a person, group or product/object. Generally words marked as NAT_REl_POL are adjectives. avionul american green!55!bluethe American airplane Grupul olandez green!55!bluethe Dutch group Grecii iși vor alege președintele. green!55!blueThe Greeks will elect their president. Classes and Annotation Methodology ::: ORGANIZATION Companies, agencies, institutions, sports teams, groups of people. These entities must have an organizational structure. We only mark full organizational entities, not fragments, divisions or sub-structures. Universitatea Politehnica București a decis ... green!55!blueThe Politehnic University of Bucharest has decided ... Adobe Inc. a lansat un nou produs. green!55!blueAdobe Inc. has launched a new product. Classes and Annotation Methodology ::: GPE Geo-political entities: countries, counties, cities, villages. GPE entities have all of the following components: (1) a population, (2) a well-defined governing/organizing structure and (3) a physical location. GPE entities are not sub-entities (like a neighbourhood from a city). Armin van Buuren s-a născut în Leiden. green!55!blueArmin van Buuren was born in Leiden. U.S.A. ramane indiferentă amenințărilor Coreei de Nord. green!55!blueU.S.A. remains indifferent to North Korea's threats. Classes and Annotation Methodology ::: LOC Non-geo-political locations: mountains, seas, lakes, streets, neighbourhoods, addresses, continents, regions that are not GPEs. We include regions such as Middle East, "continents" like Central America or East Europe. Such regions include multiple countries, each with its own government and thus cannot be GPEs. Pe DN7 Petroșani-Obârșia Lotrului carosabilul era umed, acoperit (cca 1 cm) cu zăpadă, iar de la Obârșia Lotrului la stațiunea Vidra, stratul de zăpadă era de 5-6 cm. green!55!blueOn DN7 Petroșani-Obârșia Lotrului the road was wet, covered (about 1cm) with snow, and from Obârșia Lotrului to Vidra resort the snow depth was around 5-6 cm. Produsele comercializate în Europa de Est au o calitate inferioară celor din vest. green!55!blueProducts sold in East Europe have a lower quality than those sold in the west. Classes and Annotation Methodology ::: FACILITY Buildings, airports, highways, bridges or other functional structures built by humans. Buildings or other structures which house people, such as homes, factories, stadiums, office buildings, prisons, museums, tunnels, train stations, etc., named or not. Everything that falls within the architectural and civil engineering domains should be labeled as a FACILITY. We do not mark structures composed of multiple (and distinct) sub-structures, like a named area that is composed of several buildings, or "micro"-structures such as an apartment (as it a unit of an apartment building). However, larger, named functional structures can still be marked (such as "terminal X" of an airport). Autostrada A2 a intrat în reparații pe o bandă, însă pe A1 nu au fost încă începute lucrările. green!55!blueRepairs on one lane have commenced on the A2 highway, while on A1 no works have started yet. Aeroportul Henri Coandă ar putea sa fie extins cu un nou terminal. green!55!blueHenri Coandă Airport could be extended with a new terminal. 
Classes and Annotation Methodology ::: PRODUCT Objects, cars, food, items, anything that is a product, including software (such as Photoshop, Word, etc.). We don't mark services or processes. With very few exceptions (such as software products), PRODUCT entities have to have physical form, be directly man-made. We don't mark entities such as credit cards, written proofs, etc. We don't include the producer's name unless it's embedded in the name of the product. Mașina cumpărată este o Mazda. green!55!blueThe bought car is a Mazda. S-au cumpărat 5 Ford Taurus și 2 autobuze Volvo. green!55!blue5 Ford Taurus and 2 Volvo buses have been acquired. Classes and Annotation Methodology ::: EVENT Named events: Storms (e.g.:"Sandy"), battles, wars, sports events, etc. We don't mark sports teams (they are ORGs), matches (e.g. "Steaua-Rapid" will be marked as two separate ORGs even if they refer to a football match between the two teams, but the match is not specific). Events have to be significant, with at least national impact, not local. Războiul cel Mare, Războiul Națiunilor, denumit, în timpul celui de Al Doilea Război Mondial, Primul Război Mondial, a fost un conflict militar de dimensiuni mondiale. green!55!blueThe Great War, War of the Nations, as it was called during the Second World War, the First World War was a global-scale military conflict. Classes and Annotation Methodology ::: LANGUAGE This class represents all languages. Românii din România vorbesc română. green!55!blueRomanians from Romania speak Romanian. În Moldova se vorbește rusa și româna. green!55!blueIn Moldavia they speak Russian and Romanian. Classes and Annotation Methodology ::: WORK_OF_ART Books, songs, TV shows, pictures; everything that is a work of art/culture created by humans. We mark just their name. We don't mark laws. Accesul la Mona Lisa a fost temporar interzis vizitatorilor. green!55!blueAccess to Mona Lisa was temporarily forbidden to visitors. În această seară la Vrei sa Fii Miliardar vom avea un invitat special. green!55!blueThis evening in Who Wants To Be A Millionaire we will have a special guest. Classes and Annotation Methodology ::: DATETIME Date and time values. We will mark full constructions, not parts, if they refer to the same moment (e.g. a comma separates two distinct DATETIME entities only if they refer to distinct moments). If we have a well specified period (e.g. "between 20-22 hours") we mark it as PERIOD, otherwise less well defined periods are marked as DATETIME (e.g.: "last summer", "September", "Wednesday", "three days"); Ages are marked as DATETIME as well. Prepositions are not included. Te rog să vii aici în cel mult o oră, nu mâine sau poimâine. green!55!bluePlease come here in one hour at most, not tomorrow or the next day. Actul s-a semnat la orele 16. green!55!blueThe paper was signed at 16 hours. August este o lună secetoasă. green!55!blueAugust is a dry month. Pe data de 20 martie între orele 20-22 va fi oprită alimentarea cu curent. green!55!blueOn the 20th of March, between 20-22 hours, electricity will be cut-off. Classes and Annotation Methodology ::: PERIOD Periods/time intervals. Periods have to be very well marked in text. If a period is not like "a-b" then it is a DATETIME. Spectacolul are loc între 1 și 3 Aprilie. green!55!blueThe show takes place between 1 and 3 April. În prima jumătate a lunii iunie va avea loc evenimentul de două zile. green!55!blueIn the first half of June the two-day event will take place. 
Classes and Annotation Methodology ::: MONEY Money, monetary values, including units (e.g. USD, $, RON, lei, francs, pounds, Euro, etc.) written with number or letters. Entities that contain any monetary reference, including measuring units, will be marked as MONEY (e.g. 10$/sqm, 50 lei per hour). Words that are not clear values will not be marked, such as "an amount of money", "he received a coin". Primarul a semnat un contract în valoare de 10 milioane lei noi, echivalentul a aproape 2.6m EUR. green!55!blueThe mayor signed a contract worth 10 million new lei, equivalent of almost 2.6m EUR. Classes and Annotation Methodology ::: QUANTITY Measurements, such as weight, distance, etc. Any type of quantity belongs in this class. Conducătorul auto avea peste 1g/ml alcool în sânge, fiind oprit deoarece a fost prins cu peste 120 km/h în localitate. green!55!blueThe car driver had over 1g/ml blood alcohol, and was stopped because he was caught speeding with over 120km/h in the city. Classes and Annotation Methodology ::: NUMERIC_VALUE Any numeric value (including phone numbers), written with letters or numbers or as percents, which is not MONEY, QUANTITY or ORDINAL. Raportul XII-2 arată 4 552 de investitori, iar structura de portofoliu este: cont curent 0,05%, certificate de trezorerie 66,96%, depozite bancare 13,53%, obligațiuni municipale 19,46%. green!55!blueThe XII-2 report shows 4 552 investors, and the portfolio structure is: current account 0,05%, treasury bonds 66,96%, bank deposits 13,53%, municipal bonds 19,46%. Classes and Annotation Methodology ::: ORDINAL The first, the second, last, 30th, etc.; An ordinal must imply an order relation between elements. For example, "second grade" does not involve a direct order relation; it indicates just a succession in grades in a school system. Primul loc a fost ocupat de echipa Germaniei. green!55!blueThe first place was won by Germany's team. The corpus creation process involved a small number of people that have voluntarily joined the initiative, with the authors of this paper directing the work. Initially, we searched for NER resources in Romanian, and found none. Then we looked at English resources and read the in-depth ACE guide, out of which a 16-class draft evolved. We then identified a copy-right free text from which we hand-picked sentences to maximize the amount of entities while maintaining style balance. The annotation process was a trial-and-error, with cycles composed of annotation, discussing confusing entities, updating the annotation guide schematic and going through the corpus section again to correct entities following guide changes. The annotation process was done online, in BRAT. The actual annotation involved 4 people, has taken about 6 months (as work was volunteer-based, we could not have reached for 100% time commitment from the people involved), and followed the steps: nolistsep Each person would annotate the full corpus (this included the cycles of shaping up the annotation guide, and re-annotation). Inter-annotator agreement (ITA) at this point was relatively low, at 60-70%, especially for a number of classes. We then automatically merged all annotations, with the following criterion: if 3 of the 4 annotators agreed on an entity (class&start-stop), then it would go unchanged; otherwise mark the entity (longest span) as CONFLICTED. Two teams were created, each with two persons. Each team annotated the full corpus again, starting from the previous step. At this point, class-average ITA has risen to over 85%. 
Next, the same automatic merging happened, this time entities remained unchanged if both annotations agreed. Finally, one of the authors went through the full corpus one more time, correcting disagreements. We would like to make a few notes regarding classes and inter-annotator agreements: nolistsep [noitemsep] Classes like ORGANIZATION, NAT_REL_POL, LANGUAGE or GPEs have the highest ITA, over 98%. They are pretty clear and distinct from other classes. The DATETIME class also has a high ITA, with some overlap with PERIOD: annotators could fall-back if they were not sure that an expression was a PERIOD and simply mark it as DATETIME. WORK_OF_ART and EVENTs have caused some problems because the scope could not be properly defined from just one sentence. For example, a fair in a city could be a local event, but could also be a national periodic event. MONEY, QUANTITY and ORDINAL all are more specific classes than NUMERIC_VALUE. So, in cases where a numeric value has a unit of measure by it, it should become a QUANTITY, not a NUMERIC_VALUE. However, this "specificity" has created some confusion between these classes, just like with DATETIME and PERIOD. The ORDINAL class is a bit ambiguous, because, even though it ranks "higher" than NUMERIC_VALUE, it is the least diverse, most of the entities following the same patterns. PRODUCT and FACILITY classes have the lowest ITA by far (less than 40% in the first annotation cycle, less than 70% in the second). We actually considered removing these classes from the annotation process, but to try to mimic the OntoNotes classes as much as possible we decided to keep them in. There were many cases where the annotators disagreed about the scope of words being facilities or products. Even in the ACE guidelines these two classes are not very well "documented" with examples of what is and what is not a PRODUCT or FACILITY. Considering that these classes are, in our opinion, of the lowest importance among all the classes, a lower ITA was accepted. Finally, we would like to address the "semantic scope" of the entities - for example, for class PERSON, we do not annotate only proper nouns (NPs) but basically any reference to a person (e.g. through pronouns "she", job position titles, common nouns such as "father", etc.). We do this because we would like a high-coverage corpus, where entities are marked as more semantically-oriented rather than syntactically - in the same way ACE entities are more encompassing than CoNLL entities. We note that, for example, if one would like strict proper noun entities, it is very easy to extract from a PERSON multi-word entity only those words which are syntactically marked (by any tagger) as NPs. Conclusions We have presented RONEC - the first Named Entity Corpus for the Romanian language. At its current version, in its 5127 sentences we have 26377 annotated entities in 16 different classes. The corpus is based on copy-right free text, and is released as open-source, free to use and extend. We hope that in time this corpus will grow in size and mature towards a strong resource for Romanian. For this to happen we have released the corpus in two formats: CoNLL-U PLus, which is a text-based tab-separated pre-tokenized and annotated format that is simple to use, and BRAT, which is practically plug-and-play into the BRAT web annotation tool where anybody can add and annotate new sentences. Also, in the GitHub repo there are automatic alignment and conversion script to and from the two formats so they could easily be exported between. 
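To make the release formats more tangible, the following is a much-simplified reader for the BRAT standoff export shown in the Corpus Description section; it is a toy sketch, not the conversion script from the repository, and the file names are hypothetical. It assumes the standard BRAT layout in which each .ann line carries an identifier, a type with character offsets, and the surface string, separated by tabs.

# Minimal sketch of reading a BRAT standoff export such as the example shown
# earlier (e.g. "T1  ORDINAL 21 26  a 2-a"). File names are hypothetical.
def read_brat(txt_path, ann_path):
    with open(txt_path, encoding="utf-8") as f:
        text = f.read()
    entities = []
    with open(ann_path, encoding="utf-8") as f:
        for line in f:
            if not line.startswith("T"):
                continue  # keep only text-bound annotations
            ent_id, type_and_offsets, surface = line.rstrip("\n").split("\t")
            ent_type, start, end = type_and_offsets.split()
            start, end = int(start), int(end)
            # the surface string stored in the .ann file should match the
            # character span it points to in the raw text
            assert text[start:end] == surface
            entities.append((ent_id, ent_type, start, end, surface))
    return text, entities

text, entities = read_brat("ronec.txt", "ronec.ann")
for ent_id, ent_type, start, end, surface in entities:
    print(f"{ent_type}\t{start}-{end}\t{surface}")

The sketch deliberately ignores anything that is not a simple text-bound annotation (relations, notes, discontinuous spans); the real conversion scripts in the repository handle the full export.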
Finally, we have also provided an annotation guide that we will improve, and in time evolve into a full annotation document like the ACE Annotation Guidelines for Entities V6.6 BIBREF8.
Yes
0d7de323fd191a793858386d7eb8692cc924b432
0d7de323fd191a793858386d7eb8692cc924b432_0
Q: What writing styles are present in the corpus? Text: Introduction Language resources are an essential component in entire R&D domains. From the humble but vast repositories of monolingual texts that are used by the newest language modeling approaches like BERT and GPT, to parallel corpora that allows our machine translation systems to inch closer to human performance, to the more specialized resources like WordNets that encode semantic relations between nodes, these resources are necessary for the general advancement of Natural Language Processing, which eventually evolves into real apps and services we are (already) taking for granted. We introduce RONEC - the ROmanian Named Entity Corpus, a free, open-source resource that contains annotated named entities in copy-right free text. A named entity corpus is generally used for Named Entity Recognition (NER): the identification of entities in text such as names of persons, locations, companies, dates, quantities, monetary values, etc. This information would be very useful for any number of applications: from a general information extraction system down to task-specific apps such as identifying monetary values in invoices or product and company references in customer reviews. We motivate the need for this corpus primarily because, for Romanian, there is no other such corpus. This basic necessity has sharply arisen as we, while working on a different project, have found out there are no usable resources to help us in an Information Extraction task: we were unable to extract people, locations or dates/values. This constituted a major road-block, with the only solution being to create such a corpus ourselves. As the corpus was out-of-scope for this project, the work was done privately, outside the umbrella of any authors' affiliations - this is why we are able to distribute this corpus completely free. The current landscape in Romania regarding language resources is relatively unchanged from the outline given by the META-NET project over six years ago. The in-depth analysis performed in this European-wide Horizon2020-funded project revealed that the Romanian language falls in the "fragmentary support" category, just above the last, "weak/none" category (see the language/support matrix in BIBREF3). This is why, in 2019/2020, we are able to present the first NER resource for Romanian. Introduction ::: Related corpora We note that, while fragmentary, there are a few related language resources available, but none that specifically target named entities: Introduction ::: Related corpora ::: ROCO corpus ROCO BIBREF4 is a Romanian journalistic corpus that contains approx. 7.1M tokens. It is rich in proper names, numerals and named entities. The corpus has been automatically annotated at word-level with morphosyntactic information (MSD annotations). Introduction ::: Related corpora ::: ROMBAC corpus Released in 2016, ROMBAC BIBREF5 is a Romanian text corpus containing 41M words divided in relatively equal domains like journalism, legalese, fiction, medicine, etc. Similarly to ROCO, it is automatically annotated at word level with MSD descriptors. Introduction ::: Related corpora ::: CoRoLa corpus The much larger and recently released CoRoLa corpus BIBREF6 contains over 1B words, similarly automatically annotated. In all these corpora the named entities are not a separate category - the texts are morphologically and syntactically annotated and all proper nouns are marked as such - NP - without any other annotation or assigned category. 
Thus, these corpora cannot be used in a true NER sense. Furthermore, annotations were done automatically with a tokenizer/tagger/parser, and thus are of slightly lower quality than one would expect of a gold-standard corpus. Corpus Description The corpus, at its current version 1.0 is composed of 5127 sentences, annotated with 16 classes, for a total of 26377 annotated entities. The 16 classes are: PERSON, NAT_REL_POL, ORG, GPE, LOC, FACILITY, PRODUCT, EVENT, LANGUAGE, WORK_OF_ART, DATETIME, PERIOD, MONEY, QUANTITY, NUMERIC_VALUE and ORDINAL. It is based on copyright-free text extracted from Southeast European Times (SETimes). The news portal has published “news and views from Southeast Europe” in ten languages, including Romanian. SETimes has been used in the past for several annotated corpora, including parallel corpora for machine translation. For RONEC we have used a hand-picked selection of sentences belonging to several categories (see table TABREF16 for stylistic examples). The corpus contains the standard diacritics in Romanian: letters ș and ț are written with a comma, not with a cedilla (like ş and ţ). In Romanian many older texts are written with cedillas instead of commas because full Unicode support in Windows came much later than the classic extended Ascii which only contained the cedilla letters. The 16 classes are inspired by the OntoNotes5 corpus BIBREF7 as well as the ACE (Automatic Content Extraction) English Annotation Guidelines for Entities Version 6.6 2008.06.13 BIBREF8. Each class will be presented in detail, with examples, in the section SECREF3 A summary of available classes with word counts for each is available in table TABREF18. The corpus is available in two formats: BRAT and CoNLL-U Plus. Corpus Description ::: BRAT format As the corpus was developed in the BRAT environment, it was natural to keep this format as-is. BRAT is an online environment for collaborative text annotation - a web-based tool where several people can mark words, sub-word pieces, multiple word expressions, can link them together by relations, etc. The back-end format is very simple: given a text file that contains raw sentences, in another text file every annotated entity is specified by the start/end character offset as well as the entity type, one per line. RONEC is exported in the BRAT format as ready-to-use in the BRAT annotator itself. The corpus is pre-split into sub-folders, and contains all the extra files such as the entity list, etc, needed to directly start an eventual edit/extension of the corpus. Example (raw/untokenized) sentences: Tot în cadrul etapei a 2-a, a avut loc întâlnirea Vardar Skopje - S.C. Pick Szeged, care s-a încheiat la egalitate, 24 - 24. I s-a decernat Premiul Nobel pentru literatură pe anul 1959. Example annotation format: T1 ORDINAL 21 26 a 2-a T2 ORGANIZATION 50 63 Vardar Skopje T3 ORGANIZATION 66 82 S.C. Pick Szeged T4 NUMERIC_VALUE 116 118 24 T5 NUMERIC_VALUE 121 123 24 T6 DATETIME 175 184 anul 1959 Corpus Description ::: CoNLL-U Plus format The CoNLL-U Plus format extends the standard CoNLL-U which is used to annotate sentences, and in which many corpora are found today. 
The CoNLL-U format annotates one word per line with 10 distinct "columns" (tab separated): nolistsep ID: word index; FORM: unmodified word from the sentence; LEMMA: the word's lemma; UPOS: Universal part-of-speech tag; XPOS: Language-specific part-of-speech tag; FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; HEAD: Head of the current word, which is either a value of ID or zero; DEPREL: Universal dependency relation to the HEAD or a defined language-specific subtype of one; DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs; MISC: Miscellaneous annotations such as space after word. The CoNLL-U Plus extends this format by allowing a variable number of columns, with the restriction that the columns are to be defined in the header. For RONEC, we define our CoNLL-U Plus format as the standard 10 columns plus another extra column named RONEC:CLASS. This column has the following format: nolistsep [noitemsep] each named entity has a distinct id in the sentence, starting from 1; as an entity can span several words, all words that belong to it have the same id (no relation to word indexes) the first word belonging to an entity also contains its class (e.g. word "John" in entity "John Smith" will be marked as "1:PERSON") a non-entity word is marked with an asterisk * Table TABREF37 shows the CoNLL-U Plus format where for example "a 2-a" is an ORDINAL entity spanning 3 words. The first word "a" is marked in this last column as "1:ORDINAL" while the following words just with the id "1". The CoNLL-U Plus format we provide was created as follows: (1) annotate the raw sentences using the NLP-Cube tool for Romanian (it provides everything from tokenization to parsing, filling in all attributes in columns #1-#10; (2) align each token with the human-made entity annotations from the BRAT environment (the alignment is done automatically and is error-free) and fill in column #11. Classes and Annotation Methodology For the English language, we found two "categories" of NER annotations to be more prominent: CoNLL- and ACE-style. Because CoNLL only annotates a few classes (depending on the corpora, starting from the basic three: PERSON, ORGANIZATION and LOCATION, up to seven), we chose to follow the ACE-style with 18 different classes. After analyzing the ACE guide we have settled on 16 final classes that seemed more appropriate for Romanian, seen in table TABREF18. In the following sub-sections we will describe each class in turn, with a few examples. Some examples have been left in Romanian while some have been translated in English for the reader's convenience. In the examples at the end of each class' description, translations in English are colored for easier reading. Classes and Annotation Methodology ::: PERSON Persons, including fictive characters. We also mark common nouns that refer to a person (or several), including pronouns (us, them, they), but not articles (e.g. in "an individual" we don't mark "an"). Positions are not marked unless they directly refer to the person: "The presidential counselor has advised ... that a new counselor position is open.", here we mark "presidential counselor" because it refers to a person and not the "counselor" at the end of the sentence as it refers only to a position. Locul doi i-a revenit româncei Otilia Aionesei, o elevă de 17 ani. green!55!blueThe second place was won by Otilia Aionesei, a 17 year old student. 
Ministrul bulgar pentru afaceri europene, Meglena Kuneva ... green!55!blueThe Bulgarian Minister for European Affairs, Meglena Kuneva ... Classes and Annotation Methodology ::: NAT_REL_POL These are nationalities or religious or political groups. We include words that indicate the nationality of a person, group or product/object. Generally words marked as NAT_REl_POL are adjectives. avionul american green!55!bluethe American airplane Grupul olandez green!55!bluethe Dutch group Grecii iși vor alege președintele. green!55!blueThe Greeks will elect their president. Classes and Annotation Methodology ::: ORGANIZATION Companies, agencies, institutions, sports teams, groups of people. These entities must have an organizational structure. We only mark full organizational entities, not fragments, divisions or sub-structures. Universitatea Politehnica București a decis ... green!55!blueThe Politehnic University of Bucharest has decided ... Adobe Inc. a lansat un nou produs. green!55!blueAdobe Inc. has launched a new product. Classes and Annotation Methodology ::: GPE Geo-political entities: countries, counties, cities, villages. GPE entities have all of the following components: (1) a population, (2) a well-defined governing/organizing structure and (3) a physical location. GPE entities are not sub-entities (like a neighbourhood from a city). Armin van Buuren s-a născut în Leiden. green!55!blueArmin van Buuren was born in Leiden. U.S.A. ramane indiferentă amenințărilor Coreei de Nord. green!55!blueU.S.A. remains indifferent to North Korea's threats. Classes and Annotation Methodology ::: LOC Non-geo-political locations: mountains, seas, lakes, streets, neighbourhoods, addresses, continents, regions that are not GPEs. We include regions such as Middle East, "continents" like Central America or East Europe. Such regions include multiple countries, each with its own government and thus cannot be GPEs. Pe DN7 Petroșani-Obârșia Lotrului carosabilul era umed, acoperit (cca 1 cm) cu zăpadă, iar de la Obârșia Lotrului la stațiunea Vidra, stratul de zăpadă era de 5-6 cm. green!55!blueOn DN7 Petroșani-Obârșia Lotrului the road was wet, covered (about 1cm) with snow, and from Obârșia Lotrului to Vidra resort the snow depth was around 5-6 cm. Produsele comercializate în Europa de Est au o calitate inferioară celor din vest. green!55!blueProducts sold in East Europe have a lower quality than those sold in the west. Classes and Annotation Methodology ::: FACILITY Buildings, airports, highways, bridges or other functional structures built by humans. Buildings or other structures which house people, such as homes, factories, stadiums, office buildings, prisons, museums, tunnels, train stations, etc., named or not. Everything that falls within the architectural and civil engineering domains should be labeled as a FACILITY. We do not mark structures composed of multiple (and distinct) sub-structures, like a named area that is composed of several buildings, or "micro"-structures such as an apartment (as it a unit of an apartment building). However, larger, named functional structures can still be marked (such as "terminal X" of an airport). Autostrada A2 a intrat în reparații pe o bandă, însă pe A1 nu au fost încă începute lucrările. green!55!blueRepairs on one lane have commenced on the A2 highway, while on A1 no works have started yet. Aeroportul Henri Coandă ar putea sa fie extins cu un nou terminal. green!55!blueHenri Coandă Airport could be extended with a new terminal. 
Classes and Annotation Methodology ::: PRODUCT Objects, cars, food, items, anything that is a product, including software (such as Photoshop, Word, etc.). We don't mark services or processes. With very few exceptions (such as software products), PRODUCT entities have to have physical form, be directly man-made. We don't mark entities such as credit cards, written proofs, etc. We don't include the producer's name unless it's embedded in the name of the product. Mașina cumpărată este o Mazda. green!55!blueThe bought car is a Mazda. S-au cumpărat 5 Ford Taurus și 2 autobuze Volvo. green!55!blue5 Ford Taurus and 2 Volvo buses have been acquired. Classes and Annotation Methodology ::: EVENT Named events: Storms (e.g.:"Sandy"), battles, wars, sports events, etc. We don't mark sports teams (they are ORGs), matches (e.g. "Steaua-Rapid" will be marked as two separate ORGs even if they refer to a football match between the two teams, but the match is not specific). Events have to be significant, with at least national impact, not local. Războiul cel Mare, Războiul Națiunilor, denumit, în timpul celui de Al Doilea Război Mondial, Primul Război Mondial, a fost un conflict militar de dimensiuni mondiale. green!55!blueThe Great War, War of the Nations, as it was called during the Second World War, the First World War was a global-scale military conflict. Classes and Annotation Methodology ::: LANGUAGE This class represents all languages. Românii din România vorbesc română. green!55!blueRomanians from Romania speak Romanian. În Moldova se vorbește rusa și româna. green!55!blueIn Moldavia they speak Russian and Romanian. Classes and Annotation Methodology ::: WORK_OF_ART Books, songs, TV shows, pictures; everything that is a work of art/culture created by humans. We mark just their name. We don't mark laws. Accesul la Mona Lisa a fost temporar interzis vizitatorilor. green!55!blueAccess to Mona Lisa was temporarily forbidden to visitors. În această seară la Vrei sa Fii Miliardar vom avea un invitat special. green!55!blueThis evening in Who Wants To Be A Millionaire we will have a special guest. Classes and Annotation Methodology ::: DATETIME Date and time values. We will mark full constructions, not parts, if they refer to the same moment (e.g. a comma separates two distinct DATETIME entities only if they refer to distinct moments). If we have a well specified period (e.g. "between 20-22 hours") we mark it as PERIOD, otherwise less well defined periods are marked as DATETIME (e.g.: "last summer", "September", "Wednesday", "three days"); Ages are marked as DATETIME as well. Prepositions are not included. Te rog să vii aici în cel mult o oră, nu mâine sau poimâine. green!55!bluePlease come here in one hour at most, not tomorrow or the next day. Actul s-a semnat la orele 16. green!55!blueThe paper was signed at 16 hours. August este o lună secetoasă. green!55!blueAugust is a dry month. Pe data de 20 martie între orele 20-22 va fi oprită alimentarea cu curent. green!55!blueOn the 20th of March, between 20-22 hours, electricity will be cut-off. Classes and Annotation Methodology ::: PERIOD Periods/time intervals. Periods have to be very well marked in text. If a period is not like "a-b" then it is a DATETIME. Spectacolul are loc între 1 și 3 Aprilie. green!55!blueThe show takes place between 1 and 3 April. În prima jumătate a lunii iunie va avea loc evenimentul de două zile. green!55!blueIn the first half of June the two-day event will take place. 
Classes and Annotation Methodology ::: MONEY Money, monetary values, including units (e.g. USD, $, RON, lei, francs, pounds, Euro, etc.) written with number or letters. Entities that contain any monetary reference, including measuring units, will be marked as MONEY (e.g. 10$/sqm, 50 lei per hour). Words that are not clear values will not be marked, such as "an amount of money", "he received a coin". Primarul a semnat un contract în valoare de 10 milioane lei noi, echivalentul a aproape 2.6m EUR. green!55!blueThe mayor signed a contract worth 10 million new lei, equivalent of almost 2.6m EUR. Classes and Annotation Methodology ::: QUANTITY Measurements, such as weight, distance, etc. Any type of quantity belongs in this class. Conducătorul auto avea peste 1g/ml alcool în sânge, fiind oprit deoarece a fost prins cu peste 120 km/h în localitate. green!55!blueThe car driver had over 1g/ml blood alcohol, and was stopped because he was caught speeding with over 120km/h in the city. Classes and Annotation Methodology ::: NUMERIC_VALUE Any numeric value (including phone numbers), written with letters or numbers or as percents, which is not MONEY, QUANTITY or ORDINAL. Raportul XII-2 arată 4 552 de investitori, iar structura de portofoliu este: cont curent 0,05%, certificate de trezorerie 66,96%, depozite bancare 13,53%, obligațiuni municipale 19,46%. green!55!blueThe XII-2 report shows 4 552 investors, and the portfolio structure is: current account 0,05%, treasury bonds 66,96%, bank deposits 13,53%, municipal bonds 19,46%. Classes and Annotation Methodology ::: ORDINAL The first, the second, last, 30th, etc.; An ordinal must imply an order relation between elements. For example, "second grade" does not involve a direct order relation; it indicates just a succession in grades in a school system. Primul loc a fost ocupat de echipa Germaniei. green!55!blueThe first place was won by Germany's team. The corpus creation process involved a small number of people that have voluntarily joined the initiative, with the authors of this paper directing the work. Initially, we searched for NER resources in Romanian, and found none. Then we looked at English resources and read the in-depth ACE guide, out of which a 16-class draft evolved. We then identified a copy-right free text from which we hand-picked sentences to maximize the amount of entities while maintaining style balance. The annotation process was a trial-and-error, with cycles composed of annotation, discussing confusing entities, updating the annotation guide schematic and going through the corpus section again to correct entities following guide changes. The annotation process was done online, in BRAT. The actual annotation involved 4 people, has taken about 6 months (as work was volunteer-based, we could not have reached for 100% time commitment from the people involved), and followed the steps: nolistsep Each person would annotate the full corpus (this included the cycles of shaping up the annotation guide, and re-annotation). Inter-annotator agreement (ITA) at this point was relatively low, at 60-70%, especially for a number of classes. We then automatically merged all annotations, with the following criterion: if 3 of the 4 annotators agreed on an entity (class&start-stop), then it would go unchanged; otherwise mark the entity (longest span) as CONFLICTED. Two teams were created, each with two persons. Each team annotated the full corpus again, starting from the previous step. At this point, class-average ITA has risen to over 85%. 
Next, the same automatic merging happened, this time entities remained unchanged if both annotations agreed. Finally, one of the authors went through the full corpus one more time, correcting disagreements. We would like to make a few notes regarding classes and inter-annotator agreements: nolistsep [noitemsep] Classes like ORGANIZATION, NAT_REL_POL, LANGUAGE or GPEs have the highest ITA, over 98%. They are pretty clear and distinct from other classes. The DATETIME class also has a high ITA, with some overlap with PERIOD: annotators could fall-back if they were not sure that an expression was a PERIOD and simply mark it as DATETIME. WORK_OF_ART and EVENTs have caused some problems because the scope could not be properly defined from just one sentence. For example, a fair in a city could be a local event, but could also be a national periodic event. MONEY, QUANTITY and ORDINAL all are more specific classes than NUMERIC_VALUE. So, in cases where a numeric value has a unit of measure by it, it should become a QUANTITY, not a NUMERIC_VALUE. However, this "specificity" has created some confusion between these classes, just like with DATETIME and PERIOD. The ORDINAL class is a bit ambiguous, because, even though it ranks "higher" than NUMERIC_VALUE, it is the least diverse, most of the entities following the same patterns. PRODUCT and FACILITY classes have the lowest ITA by far (less than 40% in the first annotation cycle, less than 70% in the second). We actually considered removing these classes from the annotation process, but to try to mimic the OntoNotes classes as much as possible we decided to keep them in. There were many cases where the annotators disagreed about the scope of words being facilities or products. Even in the ACE guidelines these two classes are not very well "documented" with examples of what is and what is not a PRODUCT or FACILITY. Considering that these classes are, in our opinion, of the lowest importance among all the classes, a lower ITA was accepted. Finally, we would like to address the "semantic scope" of the entities - for example, for class PERSON, we do not annotate only proper nouns (NPs) but basically any reference to a person (e.g. through pronouns "she", job position titles, common nouns such as "father", etc.). We do this because we would like a high-coverage corpus, where entities are marked as more semantically-oriented rather than syntactically - in the same way ACE entities are more encompassing than CoNLL entities. We note that, for example, if one would like strict proper noun entities, it is very easy to extract from a PERSON multi-word entity only those words which are syntactically marked (by any tagger) as NPs. Conclusions We have presented RONEC - the first Named Entity Corpus for the Romanian language. At its current version, in its 5127 sentences we have 26377 annotated entities in 16 different classes. The corpus is based on copy-right free text, and is released as open-source, free to use and extend. We hope that in time this corpus will grow in size and mature towards a strong resource for Romanian. For this to happen we have released the corpus in two formats: CoNLL-U PLus, which is a text-based tab-separated pre-tokenized and annotated format that is simple to use, and BRAT, which is practically plug-and-play into the BRAT web annotation tool where anybody can add and annotate new sentences. Also, in the GitHub repo there are automatic alignment and conversion script to and from the two formats so they could easily be exported between. 
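Since most sequence-labelling toolkits expect token-level tags rather than the id-based RONEC:CLASS column described in the CoNLL-U Plus section, a toy conversion into BIO tags is sketched below. The sample rows are hypothetical and show only the FORM and RONEC:CLASS fields; the real files carry all eleven columns, and the tokenization here is simplified.

# Toy conversion of the RONEC:CLASS column into per-token BIO tags.
# Per the format description: "*" marks a non-entity token, "id:CLASS" marks
# the first token of an entity, and a bare id marks its continuation.
rows = [
    ("Tot", "*"),
    ("în", "*"),
    ("cadrul", "*"),
    ("etapei", "*"),
    ("a", "1:ORDINAL"),
    ("2-a", "1"),
    (",", "*"),
]

bio_tags = []
current_class = None
for form, ronec in rows:
    if ronec == "*":
        tag = "O"
    elif ":" in ronec:                  # entity start: "id:CLASS"
        _, current_class = ronec.split(":", 1)
        tag = f"B-{current_class}"
    else:                               # bare id: continuation of the entity
        tag = f"I-{current_class}"
    bio_tags.append((form, tag))

for form, tag in bio_tags:
    print(f"{form}\t{tag}")

The conversion relies on the guarantee stated above that a bare id only ever follows the first token of its entity; the resulting BIO tags can feed any standard CoNLL-style NER training or evaluation setup.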
Finally, we have also provided an annotation guide that we will improve, and in time evolve into a full annotation document like the ACE Annotation Guidelines for Entities V6.6 BIBREF8.
current news, historical news, free time, sports, juridical news pieces, personal adverts, editorials.
ca8e023d142d89557714d67739e1df54d7e5ce4b
ca8e023d142d89557714d67739e1df54d7e5ce4b_0
Q: How did they determine the distinct classes? Text: Introduction Language resources are an essential component in entire R&D domains. From the humble but vast repositories of monolingual texts that are used by the newest language modeling approaches like BERT and GPT, to parallel corpora that allows our machine translation systems to inch closer to human performance, to the more specialized resources like WordNets that encode semantic relations between nodes, these resources are necessary for the general advancement of Natural Language Processing, which eventually evolves into real apps and services we are (already) taking for granted. We introduce RONEC - the ROmanian Named Entity Corpus, a free, open-source resource that contains annotated named entities in copy-right free text. A named entity corpus is generally used for Named Entity Recognition (NER): the identification of entities in text such as names of persons, locations, companies, dates, quantities, monetary values, etc. This information would be very useful for any number of applications: from a general information extraction system down to task-specific apps such as identifying monetary values in invoices or product and company references in customer reviews. We motivate the need for this corpus primarily because, for Romanian, there is no other such corpus. This basic necessity has sharply arisen as we, while working on a different project, have found out there are no usable resources to help us in an Information Extraction task: we were unable to extract people, locations or dates/values. This constituted a major road-block, with the only solution being to create such a corpus ourselves. As the corpus was out-of-scope for this project, the work was done privately, outside the umbrella of any authors' affiliations - this is why we are able to distribute this corpus completely free. The current landscape in Romania regarding language resources is relatively unchanged from the outline given by the META-NET project over six years ago. The in-depth analysis performed in this European-wide Horizon2020-funded project revealed that the Romanian language falls in the "fragmentary support" category, just above the last, "weak/none" category (see the language/support matrix in BIBREF3). This is why, in 2019/2020, we are able to present the first NER resource for Romanian. Introduction ::: Related corpora We note that, while fragmentary, there are a few related language resources available, but none that specifically target named entities: Introduction ::: Related corpora ::: ROCO corpus ROCO BIBREF4 is a Romanian journalistic corpus that contains approx. 7.1M tokens. It is rich in proper names, numerals and named entities. The corpus has been automatically annotated at word-level with morphosyntactic information (MSD annotations). Introduction ::: Related corpora ::: ROMBAC corpus Released in 2016, ROMBAC BIBREF5 is a Romanian text corpus containing 41M words divided in relatively equal domains like journalism, legalese, fiction, medicine, etc. Similarly to ROCO, it is automatically annotated at word level with MSD descriptors. Introduction ::: Related corpora ::: CoRoLa corpus The much larger and recently released CoRoLa corpus BIBREF6 contains over 1B words, similarly automatically annotated. In all these corpora the named entities are not a separate category - the texts are morphologically and syntactically annotated and all proper nouns are marked as such - NP - without any other annotation or assigned category. 
Thus, these corpora cannot be used in a true NER sense. Furthermore, annotations were done automatically with a tokenizer/tagger/parser, and thus are of slightly lower quality than one would expect of a gold-standard corpus. Corpus Description The corpus, at its current version 1.0 is composed of 5127 sentences, annotated with 16 classes, for a total of 26377 annotated entities. The 16 classes are: PERSON, NAT_REL_POL, ORG, GPE, LOC, FACILITY, PRODUCT, EVENT, LANGUAGE, WORK_OF_ART, DATETIME, PERIOD, MONEY, QUANTITY, NUMERIC_VALUE and ORDINAL. It is based on copyright-free text extracted from Southeast European Times (SETimes). The news portal has published “news and views from Southeast Europe” in ten languages, including Romanian. SETimes has been used in the past for several annotated corpora, including parallel corpora for machine translation. For RONEC we have used a hand-picked selection of sentences belonging to several categories (see table TABREF16 for stylistic examples). The corpus contains the standard diacritics in Romanian: letters ș and ț are written with a comma, not with a cedilla (like ş and ţ). In Romanian many older texts are written with cedillas instead of commas because full Unicode support in Windows came much later than the classic extended Ascii which only contained the cedilla letters. The 16 classes are inspired by the OntoNotes5 corpus BIBREF7 as well as the ACE (Automatic Content Extraction) English Annotation Guidelines for Entities Version 6.6 2008.06.13 BIBREF8. Each class will be presented in detail, with examples, in the section SECREF3 A summary of available classes with word counts for each is available in table TABREF18. The corpus is available in two formats: BRAT and CoNLL-U Plus. Corpus Description ::: BRAT format As the corpus was developed in the BRAT environment, it was natural to keep this format as-is. BRAT is an online environment for collaborative text annotation - a web-based tool where several people can mark words, sub-word pieces, multiple word expressions, can link them together by relations, etc. The back-end format is very simple: given a text file that contains raw sentences, in another text file every annotated entity is specified by the start/end character offset as well as the entity type, one per line. RONEC is exported in the BRAT format as ready-to-use in the BRAT annotator itself. The corpus is pre-split into sub-folders, and contains all the extra files such as the entity list, etc, needed to directly start an eventual edit/extension of the corpus. Example (raw/untokenized) sentences: Tot în cadrul etapei a 2-a, a avut loc întâlnirea Vardar Skopje - S.C. Pick Szeged, care s-a încheiat la egalitate, 24 - 24. I s-a decernat Premiul Nobel pentru literatură pe anul 1959. Example annotation format: T1 ORDINAL 21 26 a 2-a T2 ORGANIZATION 50 63 Vardar Skopje T3 ORGANIZATION 66 82 S.C. Pick Szeged T4 NUMERIC_VALUE 116 118 24 T5 NUMERIC_VALUE 121 123 24 T6 DATETIME 175 184 anul 1959 Corpus Description ::: CoNLL-U Plus format The CoNLL-U Plus format extends the standard CoNLL-U which is used to annotate sentences, and in which many corpora are found today. 
The CoNLL-U format annotates one word per line with 10 distinct "columns" (tab separated): nolistsep ID: word index; FORM: unmodified word from the sentence; LEMMA: the word's lemma; UPOS: Universal part-of-speech tag; XPOS: Language-specific part-of-speech tag; FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; HEAD: Head of the current word, which is either a value of ID or zero; DEPREL: Universal dependency relation to the HEAD or a defined language-specific subtype of one; DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs; MISC: Miscellaneous annotations such as space after word. The CoNLL-U Plus extends this format by allowing a variable number of columns, with the restriction that the columns are to be defined in the header. For RONEC, we define our CoNLL-U Plus format as the standard 10 columns plus another extra column named RONEC:CLASS. This column has the following format: nolistsep [noitemsep] each named entity has a distinct id in the sentence, starting from 1; as an entity can span several words, all words that belong to it have the same id (no relation to word indexes) the first word belonging to an entity also contains its class (e.g. word "John" in entity "John Smith" will be marked as "1:PERSON") a non-entity word is marked with an asterisk * Table TABREF37 shows the CoNLL-U Plus format where for example "a 2-a" is an ORDINAL entity spanning 3 words. The first word "a" is marked in this last column as "1:ORDINAL" while the following words just with the id "1". The CoNLL-U Plus format we provide was created as follows: (1) annotate the raw sentences using the NLP-Cube tool for Romanian (it provides everything from tokenization to parsing, filling in all attributes in columns #1-#10; (2) align each token with the human-made entity annotations from the BRAT environment (the alignment is done automatically and is error-free) and fill in column #11. Classes and Annotation Methodology For the English language, we found two "categories" of NER annotations to be more prominent: CoNLL- and ACE-style. Because CoNLL only annotates a few classes (depending on the corpora, starting from the basic three: PERSON, ORGANIZATION and LOCATION, up to seven), we chose to follow the ACE-style with 18 different classes. After analyzing the ACE guide we have settled on 16 final classes that seemed more appropriate for Romanian, seen in table TABREF18. In the following sub-sections we will describe each class in turn, with a few examples. Some examples have been left in Romanian while some have been translated in English for the reader's convenience. In the examples at the end of each class' description, translations in English are colored for easier reading. Classes and Annotation Methodology ::: PERSON Persons, including fictive characters. We also mark common nouns that refer to a person (or several), including pronouns (us, them, they), but not articles (e.g. in "an individual" we don't mark "an"). Positions are not marked unless they directly refer to the person: "The presidential counselor has advised ... that a new counselor position is open.", here we mark "presidential counselor" because it refers to a person and not the "counselor" at the end of the sentence as it refers only to a position. Locul doi i-a revenit româncei Otilia Aionesei, o elevă de 17 ani. green!55!blueThe second place was won by Otilia Aionesei, a 17 year old student. 
Ministrul bulgar pentru afaceri europene, Meglena Kuneva ... green!55!blueThe Bulgarian Minister for European Affairs, Meglena Kuneva ... Classes and Annotation Methodology ::: NAT_REL_POL These are nationalities or religious or political groups. We include words that indicate the nationality of a person, group or product/object. Generally words marked as NAT_REl_POL are adjectives. avionul american green!55!bluethe American airplane Grupul olandez green!55!bluethe Dutch group Grecii iși vor alege președintele. green!55!blueThe Greeks will elect their president. Classes and Annotation Methodology ::: ORGANIZATION Companies, agencies, institutions, sports teams, groups of people. These entities must have an organizational structure. We only mark full organizational entities, not fragments, divisions or sub-structures. Universitatea Politehnica București a decis ... green!55!blueThe Politehnic University of Bucharest has decided ... Adobe Inc. a lansat un nou produs. green!55!blueAdobe Inc. has launched a new product. Classes and Annotation Methodology ::: GPE Geo-political entities: countries, counties, cities, villages. GPE entities have all of the following components: (1) a population, (2) a well-defined governing/organizing structure and (3) a physical location. GPE entities are not sub-entities (like a neighbourhood from a city). Armin van Buuren s-a născut în Leiden. green!55!blueArmin van Buuren was born in Leiden. U.S.A. ramane indiferentă amenințărilor Coreei de Nord. green!55!blueU.S.A. remains indifferent to North Korea's threats. Classes and Annotation Methodology ::: LOC Non-geo-political locations: mountains, seas, lakes, streets, neighbourhoods, addresses, continents, regions that are not GPEs. We include regions such as Middle East, "continents" like Central America or East Europe. Such regions include multiple countries, each with its own government and thus cannot be GPEs. Pe DN7 Petroșani-Obârșia Lotrului carosabilul era umed, acoperit (cca 1 cm) cu zăpadă, iar de la Obârșia Lotrului la stațiunea Vidra, stratul de zăpadă era de 5-6 cm. green!55!blueOn DN7 Petroșani-Obârșia Lotrului the road was wet, covered (about 1cm) with snow, and from Obârșia Lotrului to Vidra resort the snow depth was around 5-6 cm. Produsele comercializate în Europa de Est au o calitate inferioară celor din vest. green!55!blueProducts sold in East Europe have a lower quality than those sold in the west. Classes and Annotation Methodology ::: FACILITY Buildings, airports, highways, bridges or other functional structures built by humans. Buildings or other structures which house people, such as homes, factories, stadiums, office buildings, prisons, museums, tunnels, train stations, etc., named or not. Everything that falls within the architectural and civil engineering domains should be labeled as a FACILITY. We do not mark structures composed of multiple (and distinct) sub-structures, like a named area that is composed of several buildings, or "micro"-structures such as an apartment (as it a unit of an apartment building). However, larger, named functional structures can still be marked (such as "terminal X" of an airport). Autostrada A2 a intrat în reparații pe o bandă, însă pe A1 nu au fost încă începute lucrările. green!55!blueRepairs on one lane have commenced on the A2 highway, while on A1 no works have started yet. Aeroportul Henri Coandă ar putea sa fie extins cu un nou terminal. green!55!blueHenri Coandă Airport could be extended with a new terminal. 
Classes and Annotation Methodology ::: PRODUCT Objects, cars, food, items, anything that is a product, including software (such as Photoshop, Word, etc.). We don't mark services or processes. With very few exceptions (such as software products), PRODUCT entities have to have physical form, be directly man-made. We don't mark entities such as credit cards, written proofs, etc. We don't include the producer's name unless it's embedded in the name of the product. Mașina cumpărată este o Mazda. green!55!blueThe bought car is a Mazda. S-au cumpărat 5 Ford Taurus și 2 autobuze Volvo. green!55!blue5 Ford Taurus and 2 Volvo buses have been acquired. Classes and Annotation Methodology ::: EVENT Named events: Storms (e.g.:"Sandy"), battles, wars, sports events, etc. We don't mark sports teams (they are ORGs), matches (e.g. "Steaua-Rapid" will be marked as two separate ORGs even if they refer to a football match between the two teams, but the match is not specific). Events have to be significant, with at least national impact, not local. Războiul cel Mare, Războiul Națiunilor, denumit, în timpul celui de Al Doilea Război Mondial, Primul Război Mondial, a fost un conflict militar de dimensiuni mondiale. green!55!blueThe Great War, War of the Nations, as it was called during the Second World War, the First World War was a global-scale military conflict. Classes and Annotation Methodology ::: LANGUAGE This class represents all languages. Românii din România vorbesc română. green!55!blueRomanians from Romania speak Romanian. În Moldova se vorbește rusa și româna. green!55!blueIn Moldavia they speak Russian and Romanian. Classes and Annotation Methodology ::: WORK_OF_ART Books, songs, TV shows, pictures; everything that is a work of art/culture created by humans. We mark just their name. We don't mark laws. Accesul la Mona Lisa a fost temporar interzis vizitatorilor. green!55!blueAccess to Mona Lisa was temporarily forbidden to visitors. În această seară la Vrei sa Fii Miliardar vom avea un invitat special. green!55!blueThis evening in Who Wants To Be A Millionaire we will have a special guest. Classes and Annotation Methodology ::: DATETIME Date and time values. We will mark full constructions, not parts, if they refer to the same moment (e.g. a comma separates two distinct DATETIME entities only if they refer to distinct moments). If we have a well specified period (e.g. "between 20-22 hours") we mark it as PERIOD, otherwise less well defined periods are marked as DATETIME (e.g.: "last summer", "September", "Wednesday", "three days"); Ages are marked as DATETIME as well. Prepositions are not included. Te rog să vii aici în cel mult o oră, nu mâine sau poimâine. green!55!bluePlease come here in one hour at most, not tomorrow or the next day. Actul s-a semnat la orele 16. green!55!blueThe paper was signed at 16 hours. August este o lună secetoasă. green!55!blueAugust is a dry month. Pe data de 20 martie între orele 20-22 va fi oprită alimentarea cu curent. green!55!blueOn the 20th of March, between 20-22 hours, electricity will be cut-off. Classes and Annotation Methodology ::: PERIOD Periods/time intervals. Periods have to be very well marked in text. If a period is not like "a-b" then it is a DATETIME. Spectacolul are loc între 1 și 3 Aprilie. green!55!blueThe show takes place between 1 and 3 April. În prima jumătate a lunii iunie va avea loc evenimentul de două zile. green!55!blueIn the first half of June the two-day event will take place. 
Classes and Annotation Methodology ::: MONEY Money, monetary values, including units (e.g. USD, $, RON, lei, francs, pounds, Euro, etc.) written with number or letters. Entities that contain any monetary reference, including measuring units, will be marked as MONEY (e.g. 10$/sqm, 50 lei per hour). Words that are not clear values will not be marked, such as "an amount of money", "he received a coin". Primarul a semnat un contract în valoare de 10 milioane lei noi, echivalentul a aproape 2.6m EUR. green!55!blueThe mayor signed a contract worth 10 million new lei, equivalent of almost 2.6m EUR. Classes and Annotation Methodology ::: QUANTITY Measurements, such as weight, distance, etc. Any type of quantity belongs in this class. Conducătorul auto avea peste 1g/ml alcool în sânge, fiind oprit deoarece a fost prins cu peste 120 km/h în localitate. green!55!blueThe car driver had over 1g/ml blood alcohol, and was stopped because he was caught speeding with over 120km/h in the city. Classes and Annotation Methodology ::: NUMERIC_VALUE Any numeric value (including phone numbers), written with letters or numbers or as percents, which is not MONEY, QUANTITY or ORDINAL. Raportul XII-2 arată 4 552 de investitori, iar structura de portofoliu este: cont curent 0,05%, certificate de trezorerie 66,96%, depozite bancare 13,53%, obligațiuni municipale 19,46%. green!55!blueThe XII-2 report shows 4 552 investors, and the portfolio structure is: current account 0,05%, treasury bonds 66,96%, bank deposits 13,53%, municipal bonds 19,46%. Classes and Annotation Methodology ::: ORDINAL The first, the second, last, 30th, etc.; An ordinal must imply an order relation between elements. For example, "second grade" does not involve a direct order relation; it indicates just a succession in grades in a school system. Primul loc a fost ocupat de echipa Germaniei. green!55!blueThe first place was won by Germany's team. The corpus creation process involved a small number of people that have voluntarily joined the initiative, with the authors of this paper directing the work. Initially, we searched for NER resources in Romanian, and found none. Then we looked at English resources and read the in-depth ACE guide, out of which a 16-class draft evolved. We then identified a copy-right free text from which we hand-picked sentences to maximize the amount of entities while maintaining style balance. The annotation process was a trial-and-error, with cycles composed of annotation, discussing confusing entities, updating the annotation guide schematic and going through the corpus section again to correct entities following guide changes. The annotation process was done online, in BRAT. The actual annotation involved 4 people, has taken about 6 months (as work was volunteer-based, we could not have reached for 100% time commitment from the people involved), and followed the steps: nolistsep Each person would annotate the full corpus (this included the cycles of shaping up the annotation guide, and re-annotation). Inter-annotator agreement (ITA) at this point was relatively low, at 60-70%, especially for a number of classes. We then automatically merged all annotations, with the following criterion: if 3 of the 4 annotators agreed on an entity (class&start-stop), then it would go unchanged; otherwise mark the entity (longest span) as CONFLICTED. Two teams were created, each with two persons. Each team annotated the full corpus again, starting from the previous step. At this point, class-average ITA has risen to over 85%. 
Next, the same automatic merging happened, this time entities remained unchanged if both annotations agreed. Finally, one of the authors went through the full corpus one more time, correcting disagreements. We would like to make a few notes regarding classes and inter-annotator agreements: nolistsep [noitemsep] Classes like ORGANIZATION, NAT_REL_POL, LANGUAGE or GPEs have the highest ITA, over 98%. They are pretty clear and distinct from other classes. The DATETIME class also has a high ITA, with some overlap with PERIOD: annotators could fall-back if they were not sure that an expression was a PERIOD and simply mark it as DATETIME. WORK_OF_ART and EVENTs have caused some problems because the scope could not be properly defined from just one sentence. For example, a fair in a city could be a local event, but could also be a national periodic event. MONEY, QUANTITY and ORDINAL all are more specific classes than NUMERIC_VALUE. So, in cases where a numeric value has a unit of measure by it, it should become a QUANTITY, not a NUMERIC_VALUE. However, this "specificity" has created some confusion between these classes, just like with DATETIME and PERIOD. The ORDINAL class is a bit ambiguous, because, even though it ranks "higher" than NUMERIC_VALUE, it is the least diverse, most of the entities following the same patterns. PRODUCT and FACILITY classes have the lowest ITA by far (less than 40% in the first annotation cycle, less than 70% in the second). We actually considered removing these classes from the annotation process, but to try to mimic the OntoNotes classes as much as possible we decided to keep them in. There were many cases where the annotators disagreed about the scope of words being facilities or products. Even in the ACE guidelines these two classes are not very well "documented" with examples of what is and what is not a PRODUCT or FACILITY. Considering that these classes are, in our opinion, of the lowest importance among all the classes, a lower ITA was accepted. Finally, we would like to address the "semantic scope" of the entities - for example, for class PERSON, we do not annotate only proper nouns (NPs) but basically any reference to a person (e.g. through pronouns "she", job position titles, common nouns such as "father", etc.). We do this because we would like a high-coverage corpus, where entities are marked as more semantically-oriented rather than syntactically - in the same way ACE entities are more encompassing than CoNLL entities. We note that, for example, if one would like strict proper noun entities, it is very easy to extract from a PERSON multi-word entity only those words which are syntactically marked (by any tagger) as NPs. Conclusions We have presented RONEC - the first Named Entity Corpus for the Romanian language. At its current version, in its 5127 sentences we have 26377 annotated entities in 16 different classes. The corpus is based on copy-right free text, and is released as open-source, free to use and extend. We hope that in time this corpus will grow in size and mature towards a strong resource for Romanian. For this to happen we have released the corpus in two formats: CoNLL-U PLus, which is a text-based tab-separated pre-tokenized and annotated format that is simple to use, and BRAT, which is practically plug-and-play into the BRAT web annotation tool where anybody can add and annotate new sentences. Also, in the GitHub repo there are automatic alignment and conversion script to and from the two formats so they could easily be exported between. 
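As a minimal illustration of what such a conversion starts from, the sketch below reads BRAT standoff entity lines of the kind shown in the corpus description above (e.g. "T1 ORDINAL 21 26 a 2-a"). It is written in Python, is not the released conversion script from the repository, and assumes simple single-span entities; splitting on any whitespace handles both the space-separated examples displayed above and the tab-separated layout of real .ann files.

# Minimal reader for BRAT standoff entity lines such as
#   T1 ORDINAL 21 26 a 2-a
# Illustrative sketch only (not the released conversion script);
# single-span entities only, anything else is skipped.

from collections import namedtuple

Entity = namedtuple("Entity", "eid label start end text")

def parse_ann_line(line):
    parts = line.strip().split()
    if (len(parts) < 5 or not parts[0].startswith("T")
            or not parts[2].isdigit() or not parts[3].isdigit()):
        return None                              # skip relations, notes, discontinuous spans
    eid, label = parts[0], parts[1]
    start, end = int(parts[2]), int(parts[3])
    text = " ".join(parts[4:])                   # surface form, possibly multi-word
    return Entity(eid, label, start, end, text)

def read_entities(ann_lines):
    return [e for e in (parse_ann_line(l) for l in ann_lines) if e is not None]

if __name__ == "__main__":
    sample = [
        "T1 ORDINAL 21 26 a 2-a",
        "T2 ORGANIZATION 50 63 Vardar Skopje",
        "T6 DATETIME 175 184 anul 1959",
    ]
    for ent in read_entities(sample):
        print(ent.eid, ent.label, ent.start, ent.end, ent.text)

Aligning the character offsets read this way with the tokens produced by NLP-Cube is what fills in the RONEC:CLASS column of the CoNLL-U Plus release described earlier.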
Finally, we have also provided an annotation guide that we will improve, and in time evolve into a full annotation document like the ACE Annotation Guidelines for Entities V6.6 BIBREF8.
inspired by the OntoNotes5 corpus BIBREF7 as well as the ACE (Automatic Content Extraction) English Annotation Guidelines for Entities Version 6.6 2008.06.13 BIBREF8
3fddd9f6707b9e40e35518dae7f6da7c4cb77d16
3fddd9f6707b9e40e35518dae7f6da7c4cb77d16_0
Q: Do they jointly tackle multiple tagging problems? Text: Introduction Recently, character composition models have shown great success in many NLP tasks, mainly because of their robustness in dealing with out-of-vocabulary (OOV) words by capturing sub-word informations. Among the character composition models, bidirectional long short-term memory (LSTM) models and convolutional neural networks (CNN) are widely applied in many tasks, e.g. part-of-speech (POS) tagging BIBREF0 , BIBREF1 , named entity recognition BIBREF2 , language modeling BIBREF3 , BIBREF4 , machine translation BIBREF5 and dependency parsing BIBREF6 , BIBREF7 . In this paper, we present a state-of-the-art general-purpose tagger that uses CNNs both to compose word representations from characters and to encode context information for tagging. We show that the CNN model is more capable than the LSTM model for both functions, and more stable for unseen or unnormalized words, which is the main benefit of character composition models. Yu:2017 compared the performance of CNN and LSTM as character composition model for dependency parsing, and concluded that CNN performs better than LSTM. In this paper, we show that this is also the case for POS tagging. Furthermore, we extend the scope to morphological tagging and supertagging, in which the tag set is much larger and long-distance dependencies between words are more important. In these three tagging tasks, we compare our tagger with the bilstm-aux tagger BIBREF1 and the CRF-based morphological tagger MarMot BIBREF8 . The CNN tagger shows robust performance accross the three tasks, and achieves the highest average accuracy in all tasks. It (significantly) outperforms LSTM in morphological tagging, and outperforms both baselines in supertagging by a large margin. To test the robustness of the taggers against the OOV problem, we also conduct experiments using artificially constructed unnormalized text by corrupting words in the normal dev set. Again, the CNN tagger outperforms the two baselines by a very large margin. Therefore we conclude that our CNN tagger is a robust state-of-the-art general-purpose tagger that can effectively compose word representation from characters and encode context information. Model Our proposed CNN tagger has two main components: the character composition model and the context encoding model. Both components are essentially CNN models, capturing different levels of information: the first CNN captures morphological information from character n-grams, the second one captures contextual information from word n-grams. Figure FIGREF2 shows a diagram of both models of the tagger. Character Composition Model The character composition model is similar to Yu:2017, where several convolution filters are used to capture character n-grams of different sizes. The outputs of each convolution filter are fed through a max pooling layer, and the pooling outputs are concatenated to represent the word. Context Encoding Model The context encoding model captures the context information of the target word by scanning through the word representations of its context window. The word representation could be only word embeddings ( INLINEFORM0 ), only composed vectors ( INLINEFORM1 ) or the concatenation of both ( INLINEFORM2 ) A context window consists of N words to both sides of the target word and the target word itself. 
To indicate the target word, we concatenate a binary feature to each of the word representations with 1 indicating the target and 0 otherwise, similar to Vu:2016. Additional to the binary feature, we also concatenate a position embedding to encode the relative position of each context word, similar to Gehring:2017. Hyper-parameters For the character composition model, we take a fixed input size of 32 characters for each word, with padding on both sides or cutting from the middle if needed. We apply four convolution filters with sizes of 3, 5, 7, and 9. Each filter has an output channel of 25 dimensions, thus the composed vector is 100-dimensional. We apply Gaussian noise with standard deviation of 0.1 is applied on the composed vector during training. For the context encoding model, we take a context window of 15 (7 words to both sides of the target word) as input and predict the tag of the target word. We also apply four convolution filters with sizes of 2, 3, 4 and 5, each filter is stacked by another filter with the same size, and the output has 128 dimensions, thus the context representation is 512-dimensional. We apply one 512-dimensional hidden layer with ReLU non-linearity before the prediction layer. We apply dropout with probability of 0.1 after the hidden layer during training. The model is trained with averaged stochastic gradient descent with a learning rate of 0.1, momentum of 0.9 and mini-batch size of 100. We apply L2 regularization with a rate of INLINEFORM0 on all the parameters of the network except the embeddings. Data We use treebanks from version 1.2 of Universal Dependencies (UD), and in the case of several treebanks for one language, we only use the canonical treebank. There are in total 22 treebanks, as in Plank:2016. Each treebank splits into train, dev, and test sets, we use the dev sets for early stop, and test on the test sets. Tasks We evaluate our method on three tagging tasks: POS tagging (Pos), morphological tagging (Morph) and supertagging (Stag). For POS tagging we use Universal POS tags, which is an extension of Petrov:2012. The universal tag set tries to capture the “universal” properties of words and facilitate cross-lingual learning. Therefore the tag set is very coarse and leaves out most of the language-specific properties to morphological features. Morphological tags encode the language-specific morphological features of the words, e.g., number, gender, case. They are represented in the UD treebanks as one string which contains several key-value pairs of morphological features. Supertags BIBREF9 are tags that encode more syntactic information than standard POS tags, e.g. the head direction or the subcategorization frame. We use dependency-based supertags BIBREF10 which are extracted from the dependency treebanks. Adding such tags into feature models of statistical dependency parsers significantly improves their performance BIBREF11 , BIBREF12 . Supertags can be designed with different levels of granularity. We use the standard Model 1 from Ouchi:2014, where each tag consists of head direction, dependency label and dependent direction. Even with the basic supertag model, the Stag task is more difficult than Pos and Morph because it generally requires taking long-distance dependencies between words into consideration. We select these tasks as examples for tagging applications because they differ strongly in tag set sizes. Generally, the Pos set sizes for all the languages are no more than 17 and Stag set sizes are around 200. 
When treating morphological features as a string (i.e. not splitting into key-value pairs), the sizes of the Morph tag sets range from about 100 up to 2000. Setups As baselines to our models, we take the two state-of-the-art taggers MarMot (denoted as CRF) and bilstm-aux (denoted as LSTM). We train the taggers with the recommended hyper-parameters from the documentation. To ensure a fair comparison (especially between LSTM and CNN), we generally treat the three tasks equally, and do not apply task-specific tuning on them, i.e., using the same features and same model hyper-parameters in each single task. Also, we do not use any pre-trained word embeddings. For the LSTM tagger, we use the recommended hyper-parameters in the documentation including 64-dimensional word embeddings ( INLINEFORM0 ) and 100-dimensional composed vectors ( INLINEFORM1 ). We train the INLINEFORM2 , INLINEFORM3 and INLINEFORM4 models as in Plank:2016. We train the CNN taggers with the same dimensionalities for word representations. For the CRF tagger, we predict Pos and Morph jointly as in the standard setting for MarMot, which performs much better than with separate predictions, as shown in Mueller:2013 and in our preliminary experiments. Also, it splits the morphological tags into key-value pairs, whereas the neural taggers treat the whole string as a tag. We predict Stag as a separate task. Results The test results for the three tasks are shown in Table TABREF17 in three groups. The first group of seven columns are the results for Pos, where both LSTM and CNN have three variations of input features: word only ( INLINEFORM0 ), character only ( INLINEFORM1 ) and both ( INLINEFORM2 ). For Morph and Stag, we only use the INLINEFORM3 setting for both LSTM and CNN. On macro-average, three taggers perform close in the Pos task, with the CNN tagger being slightly better. In the Morph task, CNN is again slightly ahead of CRF, while LSTM is about 2 points behind. In the Stag task, CNN outperforms both taggers by a large margin: 2 points higher than LSTM and 8 points higher than CRF. While considering the input features of the LSTM and CNN taggers, both taggers perform close with only INLINEFORM0 as input, which suggests that the two taggers are comparable in encoding context for tagging Pos. However, with only INLINEFORM1 , CNN performs much better than LSTM (95.54 vs. 92.61), and close to INLINEFORM2 (96.18). Also, INLINEFORM3 consistently outperforms INLINEFORM4 for all languages. This suggests that the CNN model alone is capable of learning most of the information that the word-level model can learn, while the LSTM model is not. The more interesting cases are Morph and Stag, where CNN performs much higher than LSTM. We hypothesize three possible reasons to explain the considerably large difference. First, the LSTM tagger may be more sensitive to hyper-parameters and requires task specific tuning. We use the same setting which is tuned for the Pos task, thus it underperforms in the other tasks. Second, the LSTM tagger may not deal well with large tag sets. The tag set size for Morph are larger than Pos in orders of magnitudes, especially for Czech, Basque, Finnish and Slovene, all of which have more than 1000 distinct Morph tags in the training data, and the LSTM performs poorly on these languages. Third, the LSTM has theoretically unlimited access to all the tokens in the sentence, but in practice it might not learn the context as good as the CNN. 
In the LSTM model, the information of long-distance contexts will gradually fade away during the recurrence, whereas in the CNN model, all words are treated equally as long as they are in the context window. Therefore the LSTM underperforms in the Stag task, where the information from long-distance context is more important. Unnormalized Text It is a common scenario to use a model trained with news data to process text from social media, which could include intentional or unintentional misspellings. Unfortunately, we do not have social media data to test the taggers. However, we design an experiment to simulate unnormalized text, by systematically editing the words in the dev sets with three operations: insertion, deletion and substitution. For example, if we modify a word abcdef at position 2 (0-based), the modified words would be abxcdef, abdef, and abxdef, where x is a random character from the alphabet of the language. For each operation, we create a group of modified dev sets, where all words longer than two characters are edited by the operation with a probability of 0.25, 0.5, 0.75, or 1. For each language, we use the models trained on the normal training sets and predict Pos for the three groups of modified dev set. The average accuracies are shown in Figure FIGREF19 . Generally, all models suffer from the increasing degrees of unnormalized texts, but CNN always suffers the least. In the extreme case where almost all words are unnormalized, CNN performs 4 to 8 points higher than LSTM and 4 to 11 points higher than CRF. This suggests that the CNN is more robust to misspelt words. While looking into the specific cases of misspelling, CNN is more sensitive to insertion and deletion, while CRF and LSTM are more sensitive to substitution. Conclusion In this paper, we propose a general-purpose tagger that uses two CNNs for both character composition and context encoding. On the universal dependency treebanks (v1.2), the tagger achieves state-of-the-art results for POS tagging and morphological tagging, and to the best of our knowledge, it also performs best for supertagging. The tagger works well across different tagging tasks without tuning the hyper-parameters, and it is also robust against unnormalized text.
No
676c874266ee0388fe5b9a75e1006796c68c3c13
676c874266ee0388fe5b9a75e1006796c68c3c13_0
Q: How many parameters does their CNN have? Text: Introduction Recently, character composition models have shown great success in many NLP tasks, mainly because of their robustness in dealing with out-of-vocabulary (OOV) words by capturing sub-word informations. Among the character composition models, bidirectional long short-term memory (LSTM) models and convolutional neural networks (CNN) are widely applied in many tasks, e.g. part-of-speech (POS) tagging BIBREF0 , BIBREF1 , named entity recognition BIBREF2 , language modeling BIBREF3 , BIBREF4 , machine translation BIBREF5 and dependency parsing BIBREF6 , BIBREF7 . In this paper, we present a state-of-the-art general-purpose tagger that uses CNNs both to compose word representations from characters and to encode context information for tagging. We show that the CNN model is more capable than the LSTM model for both functions, and more stable for unseen or unnormalized words, which is the main benefit of character composition models. Yu:2017 compared the performance of CNN and LSTM as character composition model for dependency parsing, and concluded that CNN performs better than LSTM. In this paper, we show that this is also the case for POS tagging. Furthermore, we extend the scope to morphological tagging and supertagging, in which the tag set is much larger and long-distance dependencies between words are more important. In these three tagging tasks, we compare our tagger with the bilstm-aux tagger BIBREF1 and the CRF-based morphological tagger MarMot BIBREF8 . The CNN tagger shows robust performance accross the three tasks, and achieves the highest average accuracy in all tasks. It (significantly) outperforms LSTM in morphological tagging, and outperforms both baselines in supertagging by a large margin. To test the robustness of the taggers against the OOV problem, we also conduct experiments using artificially constructed unnormalized text by corrupting words in the normal dev set. Again, the CNN tagger outperforms the two baselines by a very large margin. Therefore we conclude that our CNN tagger is a robust state-of-the-art general-purpose tagger that can effectively compose word representation from characters and encode context information. Model Our proposed CNN tagger has two main components: the character composition model and the context encoding model. Both components are essentially CNN models, capturing different levels of information: the first CNN captures morphological information from character n-grams, the second one captures contextual information from word n-grams. Figure FIGREF2 shows a diagram of both models of the tagger. Character Composition Model The character composition model is similar to Yu:2017, where several convolution filters are used to capture character n-grams of different sizes. The outputs of each convolution filter are fed through a max pooling layer, and the pooling outputs are concatenated to represent the word. Context Encoding Model The context encoding model captures the context information of the target word by scanning through the word representations of its context window. The word representation could be only word embeddings ( INLINEFORM0 ), only composed vectors ( INLINEFORM1 ) or the concatenation of both ( INLINEFORM2 ) A context window consists of N words to both sides of the target word and the target word itself. To indicate the target word, we concatenate a binary feature to each of the word representations with 1 indicating the target and 0 otherwise, similar to Vu:2016. 
Additional to the binary feature, we also concatenate a position embedding to encode the relative position of each context word, similar to Gehring:2017. Hyper-parameters For the character composition model, we take a fixed input size of 32 characters for each word, with padding on both sides or cutting from the middle if needed. We apply four convolution filters with sizes of 3, 5, 7, and 9. Each filter has an output channel of 25 dimensions, thus the composed vector is 100-dimensional. We apply Gaussian noise with standard deviation of 0.1 is applied on the composed vector during training. For the context encoding model, we take a context window of 15 (7 words to both sides of the target word) as input and predict the tag of the target word. We also apply four convolution filters with sizes of 2, 3, 4 and 5, each filter is stacked by another filter with the same size, and the output has 128 dimensions, thus the context representation is 512-dimensional. We apply one 512-dimensional hidden layer with ReLU non-linearity before the prediction layer. We apply dropout with probability of 0.1 after the hidden layer during training. The model is trained with averaged stochastic gradient descent with a learning rate of 0.1, momentum of 0.9 and mini-batch size of 100. We apply L2 regularization with a rate of INLINEFORM0 on all the parameters of the network except the embeddings. Data We use treebanks from version 1.2 of Universal Dependencies (UD), and in the case of several treebanks for one language, we only use the canonical treebank. There are in total 22 treebanks, as in Plank:2016. Each treebank splits into train, dev, and test sets, we use the dev sets for early stop, and test on the test sets. Tasks We evaluate our method on three tagging tasks: POS tagging (Pos), morphological tagging (Morph) and supertagging (Stag). For POS tagging we use Universal POS tags, which is an extension of Petrov:2012. The universal tag set tries to capture the “universal” properties of words and facilitate cross-lingual learning. Therefore the tag set is very coarse and leaves out most of the language-specific properties to morphological features. Morphological tags encode the language-specific morphological features of the words, e.g., number, gender, case. They are represented in the UD treebanks as one string which contains several key-value pairs of morphological features. Supertags BIBREF9 are tags that encode more syntactic information than standard POS tags, e.g. the head direction or the subcategorization frame. We use dependency-based supertags BIBREF10 which are extracted from the dependency treebanks. Adding such tags into feature models of statistical dependency parsers significantly improves their performance BIBREF11 , BIBREF12 . Supertags can be designed with different levels of granularity. We use the standard Model 1 from Ouchi:2014, where each tag consists of head direction, dependency label and dependent direction. Even with the basic supertag model, the Stag task is more difficult than Pos and Morph because it generally requires taking long-distance dependencies between words into consideration. We select these tasks as examples for tagging applications because they differ strongly in tag set sizes. Generally, the Pos set sizes for all the languages are no more than 17 and Stag set sizes are around 200. When treating morphological features as a string (i.e. not splitting into key-value pairs), the sizes of the Morph tag sets range from about 100 up to 2000. 
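Before turning to the experimental setup, a minimal sketch may help make the character composition model of the Hyper-parameters section concrete: a fixed 32-character input, four convolution filters of widths 3, 5, 7 and 9 with 25 output channels each, max-over-time pooling into a 100-dimensional composed vector, and Gaussian noise with standard deviation 0.1 added during training. The sketch is illustrative only; PyTorch, the 50-dimensional character embedding and the padding scheme are assumptions, since the paper specifies none of them.

# Sketch of the character composition CNN (illustrative; see the hedges
# in the paragraph above).  Input: a batch of words, each padded/cut to
# 32 character indices.  Output: one 100-d composed vector per word.

import torch
import torch.nn as nn

class CharCNN(nn.Module):
    def __init__(self, n_chars, char_dim=50, widths=(3, 5, 7, 9), channels=25):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(char_dim, channels, kernel_size=w, padding=w // 2)
            for w in widths
        )

    def forward(self, char_ids):                   # (batch, 32) character indices
        x = self.embed(char_ids).transpose(1, 2)   # (batch, char_dim, 32)
        pooled = [conv(x).max(dim=2).values for conv in self.convs]   # 4 x (batch, 25)
        word_vec = torch.cat(pooled, dim=1)        # (batch, 100) composed vector
        if self.training:                          # Gaussian noise, std 0.1, training only
            word_vec = word_vec + 0.1 * torch.randn_like(word_vec)
        return word_vec

if __name__ == "__main__":
    model = CharCNN(n_chars=200)                   # character inventory size is assumed
    chars = torch.randint(1, 200, (4, 32))         # 4 words, 32 characters each
    print(model(chars).shape)                      # torch.Size([4, 100])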
Setups As baselines to our models, we take the two state-of-the-art taggers MarMot (denoted as CRF) and bilstm-aux (denoted as LSTM). We train the taggers with the recommended hyper-parameters from the documentation. To ensure a fair comparison (especially between LSTM and CNN), we generally treat the three tasks equally, and do not apply task-specific tuning on them, i.e., using the same features and same model hyper-parameters in each single task. Also, we do not use any pre-trained word embeddings. For the LSTM tagger, we use the recommended hyper-parameters in the documentation including 64-dimensional word embeddings ( INLINEFORM0 ) and 100-dimensional composed vectors ( INLINEFORM1 ). We train the INLINEFORM2 , INLINEFORM3 and INLINEFORM4 models as in Plank:2016. We train the CNN taggers with the same dimensionalities for word representations. For the CRF tagger, we predict Pos and Morph jointly as in the standard setting for MarMot, which performs much better than with separate predictions, as shown in Mueller:2013 and in our preliminary experiments. Also, it splits the morphological tags into key-value pairs, whereas the neural taggers treat the whole string as a tag. We predict Stag as a separate task. Results The test results for the three tasks are shown in Table TABREF17 in three groups. The first group of seven columns are the results for Pos, where both LSTM and CNN have three variations of input features: word only ( INLINEFORM0 ), character only ( INLINEFORM1 ) and both ( INLINEFORM2 ). For Morph and Stag, we only use the INLINEFORM3 setting for both LSTM and CNN. On macro-average, three taggers perform close in the Pos task, with the CNN tagger being slightly better. In the Morph task, CNN is again slightly ahead of CRF, while LSTM is about 2 points behind. In the Stag task, CNN outperforms both taggers by a large margin: 2 points higher than LSTM and 8 points higher than CRF. While considering the input features of the LSTM and CNN taggers, both taggers perform close with only INLINEFORM0 as input, which suggests that the two taggers are comparable in encoding context for tagging Pos. However, with only INLINEFORM1 , CNN performs much better than LSTM (95.54 vs. 92.61), and close to INLINEFORM2 (96.18). Also, INLINEFORM3 consistently outperforms INLINEFORM4 for all languages. This suggests that the CNN model alone is capable of learning most of the information that the word-level model can learn, while the LSTM model is not. The more interesting cases are Morph and Stag, where CNN performs much higher than LSTM. We hypothesize three possible reasons to explain the considerably large difference. First, the LSTM tagger may be more sensitive to hyper-parameters and requires task specific tuning. We use the same setting which is tuned for the Pos task, thus it underperforms in the other tasks. Second, the LSTM tagger may not deal well with large tag sets. The tag set size for Morph are larger than Pos in orders of magnitudes, especially for Czech, Basque, Finnish and Slovene, all of which have more than 1000 distinct Morph tags in the training data, and the LSTM performs poorly on these languages. Third, the LSTM has theoretically unlimited access to all the tokens in the sentence, but in practice it might not learn the context as good as the CNN. In the LSTM model, the information of long-distance contexts will gradually fade away during the recurrence, whereas in the CNN model, all words are treated equally as long as they are in the context window. 
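Continuing the illustration, a corresponding sketch of the context encoding model uses the dimensions just stated: a 15-word window of concatenated word representations, four branches of two stacked convolutions (widths 2, 3, 4 and 5, 128 output channels each), max pooling into a 512-dimensional context vector, a 512-unit ReLU hidden layer with dropout 0.1, and a prediction layer. The parameter count it prints is illustrative only; the paper does not report one, and a real count would depend on quantities left unspecified, such as the vocabulary and character inventory sizes, the position embedding dimension and the tag set. The nonlinearity between the stacked convolutions is likewise an assumption.

# Sketch of the context encoding CNN (illustrative; see hedges above).

import torch
import torch.nn as nn

class ContextCNN(nn.Module):
    def __init__(self, repr_dim, n_tags, widths=(2, 3, 4, 5), channels=128):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(                           # two stacked convolutions per width
                nn.Conv1d(repr_dim, channels, w, padding=w // 2), nn.ReLU(),
                nn.Conv1d(channels, channels, w, padding=w // 2), nn.ReLU(),
            )
            for w in widths
        )
        self.hidden = nn.Sequential(nn.Linear(len(widths) * channels, 512),
                                    nn.ReLU(), nn.Dropout(0.1))
        self.out = nn.Linear(512, n_tags)            # prediction layer

    def forward(self, window):                       # (batch, 15, repr_dim)
        x = window.transpose(1, 2)                   # (batch, repr_dim, 15)
        pooled = [b(x).max(dim=2).values for b in self.branches]   # 4 x (batch, 128)
        return self.out(self.hidden(torch.cat(pooled, dim=1)))     # (batch, n_tags)

if __name__ == "__main__":
    # Assumed word representation: 64-d word embedding + 100-d composed
    # vector + 1 target-indicator bit + a 10-d position embedding
    # (the position embedding size is not given in the paper).
    repr_dim = 64 + 100 + 1 + 10
    model = ContextCNN(repr_dim=repr_dim, n_tags=17)  # 17 = Pos-sized tag set, assumed
    window = torch.randn(4, 15, repr_dim)
    print(model(window).shape)                        # torch.Size([4, 17])
    print(sum(p.numel() for p in model.parameters()), "parameters (illustrative only)")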
Therefore the LSTM underperforms in the Stag task, where the information from long-distance context is more important. Unnormalized Text It is a common scenario to use a model trained with news data to process text from social media, which could include intentional or unintentional misspellings. Unfortunately, we do not have social media data to test the taggers. However, we design an experiment to simulate unnormalized text, by systematically editing the words in the dev sets with three operations: insertion, deletion and substitution. For example, if we modify a word abcdef at position 2 (0-based), the modified words would be abxcdef, abdef, and abxdef, where x is a random character from the alphabet of the language. For each operation, we create a group of modified dev sets, where all words longer than two characters are edited by the operation with a probability of 0.25, 0.5, 0.75, or 1. For each language, we use the models trained on the normal training sets and predict Pos for the three groups of modified dev set. The average accuracies are shown in Figure FIGREF19 . Generally, all models suffer from the increasing degrees of unnormalized texts, but CNN always suffers the least. In the extreme case where almost all words are unnormalized, CNN performs 4 to 8 points higher than LSTM and 4 to 11 points higher than CRF. This suggests that the CNN is more robust to misspelt words. While looking into the specific cases of misspelling, CNN is more sensitive to insertion and deletion, while CRF and LSTM are more sensitive to substitution. Conclusion In this paper, we propose a general-purpose tagger that uses two CNNs for both character composition and context encoding. On the universal dependency treebanks (v1.2), the tagger achieves state-of-the-art results for POS tagging and morphological tagging, and to the best of our knowledge, it also performs best for supertagging. The tagger works well across different tagging tasks without tuning the hyper-parameters, and it is also robust against unnormalized text.
Unanswerable
fc54736e67f748f804e8f66b3aaaea7f5e55b209
fc54736e67f748f804e8f66b3aaaea7f5e55b209_0
Q: How do they confirm their model working well on out-of-vocabulary problems? Text: Introduction Recently, character composition models have shown great success in many NLP tasks, mainly because of their robustness in dealing with out-of-vocabulary (OOV) words by capturing sub-word informations. Among the character composition models, bidirectional long short-term memory (LSTM) models and convolutional neural networks (CNN) are widely applied in many tasks, e.g. part-of-speech (POS) tagging BIBREF0 , BIBREF1 , named entity recognition BIBREF2 , language modeling BIBREF3 , BIBREF4 , machine translation BIBREF5 and dependency parsing BIBREF6 , BIBREF7 . In this paper, we present a state-of-the-art general-purpose tagger that uses CNNs both to compose word representations from characters and to encode context information for tagging. We show that the CNN model is more capable than the LSTM model for both functions, and more stable for unseen or unnormalized words, which is the main benefit of character composition models. Yu:2017 compared the performance of CNN and LSTM as character composition model for dependency parsing, and concluded that CNN performs better than LSTM. In this paper, we show that this is also the case for POS tagging. Furthermore, we extend the scope to morphological tagging and supertagging, in which the tag set is much larger and long-distance dependencies between words are more important. In these three tagging tasks, we compare our tagger with the bilstm-aux tagger BIBREF1 and the CRF-based morphological tagger MarMot BIBREF8 . The CNN tagger shows robust performance accross the three tasks, and achieves the highest average accuracy in all tasks. It (significantly) outperforms LSTM in morphological tagging, and outperforms both baselines in supertagging by a large margin. To test the robustness of the taggers against the OOV problem, we also conduct experiments using artificially constructed unnormalized text by corrupting words in the normal dev set. Again, the CNN tagger outperforms the two baselines by a very large margin. Therefore we conclude that our CNN tagger is a robust state-of-the-art general-purpose tagger that can effectively compose word representation from characters and encode context information. Model Our proposed CNN tagger has two main components: the character composition model and the context encoding model. Both components are essentially CNN models, capturing different levels of information: the first CNN captures morphological information from character n-grams, the second one captures contextual information from word n-grams. Figure FIGREF2 shows a diagram of both models of the tagger. Character Composition Model The character composition model is similar to Yu:2017, where several convolution filters are used to capture character n-grams of different sizes. The outputs of each convolution filter are fed through a max pooling layer, and the pooling outputs are concatenated to represent the word. Context Encoding Model The context encoding model captures the context information of the target word by scanning through the word representations of its context window. The word representation could be only word embeddings ( INLINEFORM0 ), only composed vectors ( INLINEFORM1 ) or the concatenation of both ( INLINEFORM2 ) A context window consists of N words to both sides of the target word and the target word itself. 
To indicate the target word, we concatenate a binary feature to each of the word representations with 1 indicating the target and 0 otherwise, similar to Vu:2016. Additional to the binary feature, we also concatenate a position embedding to encode the relative position of each context word, similar to Gehring:2017. Hyper-parameters For the character composition model, we take a fixed input size of 32 characters for each word, with padding on both sides or cutting from the middle if needed. We apply four convolution filters with sizes of 3, 5, 7, and 9. Each filter has an output channel of 25 dimensions, thus the composed vector is 100-dimensional. We apply Gaussian noise with standard deviation of 0.1 is applied on the composed vector during training. For the context encoding model, we take a context window of 15 (7 words to both sides of the target word) as input and predict the tag of the target word. We also apply four convolution filters with sizes of 2, 3, 4 and 5, each filter is stacked by another filter with the same size, and the output has 128 dimensions, thus the context representation is 512-dimensional. We apply one 512-dimensional hidden layer with ReLU non-linearity before the prediction layer. We apply dropout with probability of 0.1 after the hidden layer during training. The model is trained with averaged stochastic gradient descent with a learning rate of 0.1, momentum of 0.9 and mini-batch size of 100. We apply L2 regularization with a rate of INLINEFORM0 on all the parameters of the network except the embeddings. Data We use treebanks from version 1.2 of Universal Dependencies (UD), and in the case of several treebanks for one language, we only use the canonical treebank. There are in total 22 treebanks, as in Plank:2016. Each treebank splits into train, dev, and test sets, we use the dev sets for early stop, and test on the test sets. Tasks We evaluate our method on three tagging tasks: POS tagging (Pos), morphological tagging (Morph) and supertagging (Stag). For POS tagging we use Universal POS tags, which is an extension of Petrov:2012. The universal tag set tries to capture the “universal” properties of words and facilitate cross-lingual learning. Therefore the tag set is very coarse and leaves out most of the language-specific properties to morphological features. Morphological tags encode the language-specific morphological features of the words, e.g., number, gender, case. They are represented in the UD treebanks as one string which contains several key-value pairs of morphological features. Supertags BIBREF9 are tags that encode more syntactic information than standard POS tags, e.g. the head direction or the subcategorization frame. We use dependency-based supertags BIBREF10 which are extracted from the dependency treebanks. Adding such tags into feature models of statistical dependency parsers significantly improves their performance BIBREF11 , BIBREF12 . Supertags can be designed with different levels of granularity. We use the standard Model 1 from Ouchi:2014, where each tag consists of head direction, dependency label and dependent direction. Even with the basic supertag model, the Stag task is more difficult than Pos and Morph because it generally requires taking long-distance dependencies between words into consideration. We select these tasks as examples for tagging applications because they differ strongly in tag set sizes. Generally, the Pos set sizes for all the languages are no more than 17 and Stag set sizes are around 200. 
When treating morphological features as a string (i.e. not splitting into key-value pairs), the sizes of the Morph tag sets range from about 100 up to 2000. Setups As baselines to our models, we take the two state-of-the-art taggers MarMot (denoted as CRF) and bilstm-aux (denoted as LSTM). We train the taggers with the recommended hyper-parameters from the documentation. To ensure a fair comparison (especially between LSTM and CNN), we generally treat the three tasks equally, and do not apply task-specific tuning on them, i.e., using the same features and same model hyper-parameters in each single task. Also, we do not use any pre-trained word embeddings. For the LSTM tagger, we use the recommended hyper-parameters in the documentation including 64-dimensional word embeddings ( INLINEFORM0 ) and 100-dimensional composed vectors ( INLINEFORM1 ). We train the INLINEFORM2 , INLINEFORM3 and INLINEFORM4 models as in Plank:2016. We train the CNN taggers with the same dimensionalities for word representations. For the CRF tagger, we predict Pos and Morph jointly as in the standard setting for MarMot, which performs much better than with separate predictions, as shown in Mueller:2013 and in our preliminary experiments. Also, it splits the morphological tags into key-value pairs, whereas the neural taggers treat the whole string as a tag. We predict Stag as a separate task. Results The test results for the three tasks are shown in Table TABREF17 in three groups. The first group of seven columns are the results for Pos, where both LSTM and CNN have three variations of input features: word only ( INLINEFORM0 ), character only ( INLINEFORM1 ) and both ( INLINEFORM2 ). For Morph and Stag, we only use the INLINEFORM3 setting for both LSTM and CNN. On macro-average, three taggers perform close in the Pos task, with the CNN tagger being slightly better. In the Morph task, CNN is again slightly ahead of CRF, while LSTM is about 2 points behind. In the Stag task, CNN outperforms both taggers by a large margin: 2 points higher than LSTM and 8 points higher than CRF. While considering the input features of the LSTM and CNN taggers, both taggers perform close with only INLINEFORM0 as input, which suggests that the two taggers are comparable in encoding context for tagging Pos. However, with only INLINEFORM1 , CNN performs much better than LSTM (95.54 vs. 92.61), and close to INLINEFORM2 (96.18). Also, INLINEFORM3 consistently outperforms INLINEFORM4 for all languages. This suggests that the CNN model alone is capable of learning most of the information that the word-level model can learn, while the LSTM model is not. The more interesting cases are Morph and Stag, where CNN performs much higher than LSTM. We hypothesize three possible reasons to explain the considerably large difference. First, the LSTM tagger may be more sensitive to hyper-parameters and requires task specific tuning. We use the same setting which is tuned for the Pos task, thus it underperforms in the other tasks. Second, the LSTM tagger may not deal well with large tag sets. The tag set size for Morph are larger than Pos in orders of magnitudes, especially for Czech, Basque, Finnish and Slovene, all of which have more than 1000 distinct Morph tags in the training data, and the LSTM performs poorly on these languages. Third, the LSTM has theoretically unlimited access to all the tokens in the sentence, but in practice it might not learn the context as good as the CNN. 
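The contrast drawn above between the CRF tagger (which splits morphological tags into key-value pairs) and the neural taggers (which keep the whole string as one tag) can be made concrete with a small parser for the standard UD feature-string format; this is only a sketch of that format, not code from the paper.

```python
def parse_morph(tag):
    """Split a UD morphological feature string into key-value pairs,
    e.g. 'Case=Nom|Gender=Fem|Number=Sing'; '_' marks an empty feature set."""
    if tag in ("", "_"):
        return {}
    return dict(feat.split("=", 1) for feat in tag.split("|"))

parse_morph("Case=Nom|Gender=Fem|Number=Sing")
# {'Case': 'Nom', 'Gender': 'Fem', 'Number': 'Sing'}
```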
In the LSTM model, the information of long-distance contexts will gradually fade away during the recurrence, whereas in the CNN model, all words are treated equally as long as they are in the context window. Therefore the LSTM underperforms in the Stag task, where the information from long-distance context is more important. Unnormalized Text It is a common scenario to use a model trained with news data to process text from social media, which could include intentional or unintentional misspellings. Unfortunately, we do not have social media data to test the taggers. However, we design an experiment to simulate unnormalized text, by systematically editing the words in the dev sets with three operations: insertion, deletion and substitution. For example, if we modify a word abcdef at position 2 (0-based), the modified words would be abxcdef, abdef, and abxdef, where x is a random character from the alphabet of the language. For each operation, we create a group of modified dev sets, where all words longer than two characters are edited by the operation with a probability of 0.25, 0.5, 0.75, or 1. For each language, we use the models trained on the normal training sets and predict Pos for the three groups of modified dev set. The average accuracies are shown in Figure FIGREF19 . Generally, all models suffer from the increasing degrees of unnormalized texts, but CNN always suffers the least. In the extreme case where almost all words are unnormalized, CNN performs 4 to 8 points higher than LSTM and 4 to 11 points higher than CRF. This suggests that the CNN is more robust to misspelt words. While looking into the specific cases of misspelling, CNN is more sensitive to insertion and deletion, while CRF and LSTM are more sensitive to substitution. Conclusion In this paper, we propose a general-purpose tagger that uses two CNNs for both character composition and context encoding. On the universal dependency treebanks (v1.2), the tagger achieves state-of-the-art results for POS tagging and morphological tagging, and to the best of our knowledge, it also performs best for supertagging. The tagger works well across different tagging tasks without tuning the hyper-parameters, and it is also robust against unnormalized text.
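The corruption procedure used above to simulate unnormalized text (random insertion, deletion or substitution at a random position, applied to words longer than two characters with probability 0.25, 0.5, 0.75 or 1) can be sketched as follows; function and argument names are illustrative.

```python
import random

def corrupt(word, operation, alphabet, p=0.25, rng=random):
    """Edit `word` with probability p. Editing 'abcdef' at position 2 gives
    'abxcdef' (insertion), 'abdef' (deletion) or 'abxdef' (substitution),
    where x is a random character from the language's alphabet."""
    if len(word) <= 2 or rng.random() >= p:
        return word
    i = rng.randrange(len(word))
    x = rng.choice(alphabet)
    if operation == "insertion":
        return word[:i] + x + word[i:]
    if operation == "deletion":
        return word[:i] + word[i + 1:]
    if operation == "substitution":
        return word[:i] + x + word[i + 1:]
    return word
```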
conduct experiments using artificially constructed unnormalized text by corrupting words in the normal dev set
a53683d1a0647c80a4398ff8f4a03e11c0929be2
a53683d1a0647c80a4398ff8f4a03e11c0929be2_0
Q: What approach does this work propose for the new task? Text: Introduction With the popularity of shared videos, social networks, online course, etc, the quantity of multimedia or spoken content is growing much faster beyond what human beings can view or listen to. Accessing large collections of multimedia or spoken content is difficult and time-consuming for humans, even if these materials are more attractive for humans than plain text information. Hence, it will be great if the machine can automatically listen to and understand the spoken content, and even visualize the key information for humans. This paper presents an initial attempt towards the above goal: machine comprehension of spoken content. In an initial task, we wish the machine can listen to and understand an audio story, and answer the questions related to that audio content. TOEFL listening comprehension test is for human English learners whose native language is not English. This paper reports how today's machine can perform with such a test. The listening comprehension task considered here is highly related to Spoken Question Answering (SQA) BIBREF0 , BIBREF1 . In SQA, when the users enter questions in either text or spoken form, the machine needs to find the answer from some audio files. SQA usually worked with ASR transcripts of the spoken content, and used information retrieval (IR) techniques BIBREF2 or relied on knowledge bases BIBREF3 to find the proper answer. Sibyl BIBREF4 , a factoid SQA system, used some IR techniques and utilized several levels of linguistic information to deal with the task. Question Answering in Speech Transcripts (QAST) BIBREF5 , BIBREF6 , BIBREF7 has been a well-known evaluation program of SQA for years. However, most previous works on SQA mainly focused on factoid questions like “What is name of the highest mountain in Taiwan?”. Sometimes this kind of questions may be correctly answered by simply extracting the key terms from a properly chosen utterance without understanding the given spoken content. More difficult questions that cannot be answered without understanding the whole spoken content seemed rarely dealt with previously. With the fast development of deep learning, neural networks have successfully applied to speech recognition BIBREF8 , BIBREF9 , BIBREF10 or NLP tasks BIBREF11 , BIBREF12 . A number of recent efforts have explored various ways to understand multimedia in text form BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 . They incorporated attention mechanisms BIBREF16 with Long Short-Term Memory based networks BIBREF19 . In Question Answering field, most of the works focused on understanding text documents BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 . Even though BIBREF24 tried to answer the question related to the movie, they only used the text and image in the movie for that. It seems that none of them have studied and focused on comprehension of spoken content yet. Task Definition and Contributions In this paper, we develop and propose a new task of machine comprehension of spoken content which was never mentioned before to our knowledge. We take TOEFL listening comprehension test as an corpus for this work. TOEFL is an English examination which tests the knowledge and skills of academic English for English learners whose native languages is not English. In this examination, the subjects would first listen to an audio story around five minutes and then answer several question according to that story. 
The story is related to the college life such as conversation between the student and the professor or a lecture in the class. Each question has four choices where only one is correct. An real example in the TOEFL examination is shown in Fig. 1 . The upper part is the manual transcription of a small part of the audio story. The questions and four choices are listed too. The correct choice to the question in Fig. 1 is choice A. The questions in TOEFL are not simple even for a human with relatively good knowledge because the question cannot be answered by simply matching the words in the question and in the choices with those in the story, and key information is usually buried by many irrelevant utterances. To answer the questions like “Why does the student go to professor's office?", the listeners have to understand the whole audio story and draw the inferences to answer the question correctly. As a result, this task is believed to be very challenging for the state-of-the-art spoken language understanding technologies. We propose a listening comprehension model for the task defined above, the Attention-based Multi-hop Recurrent Neural Network (AMRNN) framework, and show that this model is able to perform reasonably well for the task. In the proposed approach, the audio of the stories is first transcribed into text by ASR, and the proposed model is developed to process the transcriptions for selecting the correct answer out of 4 choices given the question. The initial experiments showed that the proposed model achieves encouraging scores on the TOEFL listening comprehension test. The attention-mechanism proposed in this paper can be applied on either word or sentence levels. We found that sentence-level attention achieved better results on the manual transcriptions without ASR errors, but word-level attention outperformed the sentence-level on ASR transcriptions with errors. Proposed Approach The overall structure of the proposed model is in Fig 2 . The input of model includes the transcriptions of an audio story, a question and four answer choices, all represented as word sequences. The word sequence of the input question is first represented as a question vector $V_Q$ in Section "Experiments" . With the question vector $V_Q$ , the attention mechanism is applied to extract the question-related information from the story in Section "Story Attention Module" . The machine then goes through the story by the attention mechanism several times and obtain an answer selection vector $V_{Q_n}$ in Section "Hopping" . This answer selection vector $V_{Q_n}$ is finally used to evaluate the confidence of each choice in Section "Answer Selection" , and the choice with the highest score is taken as the output. All the model parameters in the above procedure are jointly trained with the target where 1 for the correct choice and 0 otherwise. Question Representation Fig. 3 (A) shows the procedure of encoding the input question into a vector representation $V_Q$ . The input question is a sequence of T words, $w_1,w_2,...,w_T$ , every word $W_{i}$ represented in 1-Of-N encoding. A bidirectional Gated Recurrent Unit (GRU) network BIBREF25 , BIBREF26 , BIBREF27 takes one word from the input question sequentially at a time. In Fig 3 (A), the hidden layer output of the forward GRU (green rectangle) at time index $t$ is denoted by $y_{f}(t)$ , and that of the backward GRU (blue rectangle) is by $y_{b}(t)$ . 
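A minimal sketch of this question encoder is given below, assuming PyTorch; the learned embedding layer stands in for the 1-of-N word input described in the text, the embedding size is an assumption, and the final concatenation anticipates the step described in the continuation of this section.

```python
import torch
import torch.nn as nn

class QuestionEncoder(nn.Module):
    """Encode a word sequence with forward and backward GRUs; the question vector
    concatenates the last forward state and the first backward state."""
    def __init__(self, vocab_size, emb_dim=300, hidden=128):
        super().__init__()
        self.hidden = hidden
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, word_ids):                         # (batch, T)
        outputs, _ = self.gru(self.emb(word_ids))        # (batch, T, 2 * hidden)
        y_f = outputs[..., : self.hidden]                # forward outputs y_f(t)
        y_b = outputs[..., self.hidden :]                # backward outputs y_b(t)
        return torch.cat([y_f[:, -1], y_b[:, 0]], dim=1) # V_Q = [y_f(T) || y_b(1)]
```

The same encoder would be reused for the story words and the four choices, since the training details state that all bidirectional GRU networks share one set of parameters.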
After looking through all the words in the question, the hidden layer output of forward GRU network at the last time index $y_{f}(T)$ , and that of backward GRU network at the first time index $y_{b}(1)$ , are concatenated to form the question vector representation $V_{Q}$ , or $V_{Q} = [y_{f}(T) \Vert y_{b}(1)]$ . Story Attention Module Fig. 3 (B) shows the attention mechanism which takes the question vector $V_Q$ obtained in Fig. 3 (A) and the story transcriptions as the input to encode the whole story into a story vector representation $V_{S}$ . The story transcription is a very long word sequence with many sentences, so we only show two sentences each with 4 words for simplicity. There is a bidirectional GRU in Fig 3 (B) encoding the whole story into a story vector representation $V_{S}$ . The word vector representation of the $t$ -th word $S_{t}$ is constructed by concatenating the hidden layer outputs of forward and backward GRU networks, that is $S_t = [y_{f}(t) \Vert y_{b}(t)]$ . Then the attention value $\alpha _t$ for each time index ${t}$ is the cosine similarity between the question vector $V_{Q}$ and the word vector representation $S_{t}$ of each word, $V_{S}$0 . With attention values $V_{S}$2 , there can be two different attention mechanisms, word-level and sentence-level, to encode the whole story into the story vector representations $V_{S}$3 . Word-level Attention: We normalize all the attention values $\alpha _t$ into $\alpha _t^\prime $ such that they sum to one over the whole story. Then all the word vector $S_{t}$ from the bidirectional GRU network for every word in the story are weighted with this normalized attention value $\alpha _{t}^\prime $ and sum to give the story vector, that is $V_{S} = \sum _{t}\alpha _{t}^{\prime }S_{t}$ . Sentence-level Attention: Sentence-level attention means the model collects the information only at the end of each sentence. Therefore, the normalization is only performed over those words at the end of the sentences to obtain $\alpha _t^{\prime \prime }$ . The story vector representation is then $V_{S} = \sum _{t=eos}\alpha _t^{\prime \prime }*S_{t}$ , where only those words at the end of sentences (eos) contribute to the weighted sum. So $V_{S} = \alpha _4^{\prime \prime }*S_4 + \alpha _8^{\prime \prime }*S_8$ in the example of the Fig. 3 Hopping The overall picture of the proposed model is shown in Fig 2 , in which Fig. 3 (A) and (B) are component modules (labeled as Fig. 3 (A) and (B)) of the complete proposed model. In the left of Fig. 2 , the input question is first converted into a question vector $V_{Q_0}$ by the module in Fig. 3 (A). This $V_{Q_0}$ is used to compute the attention values $\alpha _{t}$ to obtain the story vector $V_{S_1}$ by the module in Fig. 3 (B). Then $V_{Q_0}$ and $V_{S_1}$ are summed to form a new question vector $V_{Q_1}$ . This process is called the first hop (hop 1) in Fig. 2 . The output of the first hop $V_{Q_1}$ can be used to compute the new attention to obtain a new story vector $V_{S_1}$ . This can be considered as the machine going over the story again to re-focus the story with a new question vector. Again, $V_{Q_1}$ and $V_{Q_0}$0 are summed to form $V_{Q_0}$1 (hop 2). After $V_{Q_0}$2 hops ( $V_{Q_0}$3 should be pre-defined), the output of the last hop $V_{Q_0}$4 is used for the answer selection in the Section "Answer Selection" . Answer Selection As in the upper part of Fig. 2 , the same way previously used to encode the question into $V_Q$ in Fig. 
3 (A) is used here to encode four choice into choice vector representations $V_A$ , $V_B$ , $V_C$ , $V_D$ . Then the cosine similarity between the output of the last hop $V_{Q_n}$ and the choice vectors are computed, and the choice with highest similarity is chosen. Experimental Setup $\bullet $ Dataset Collection: The collected TOEFL dataset included 963 examples in total (717 for training, 124 for validation, 122 for testing). Each example included a story, a question and 4 choices. Besides the audio recording of each story, the manual transcriptions of the story are also available. We used a pydub library BIBREF28 to segment the full audio recording into utterances. Each audio recording has 57.9 utterances in average. There are in average 657.7 words in a story, 12.01 words in question and 10.35 words in each choice. $\bullet $ Speech Recognition: We used the CMU speech recognizer - Sphinx BIBREF29 to transcribe the audio story. The recognition word error rate (WER) was 34.32%. $\bullet $ Pre-processing: We used a pre-trained 300 dimension glove vector model BIBREF30 to obtain the vector representation for each word. Each utterance in the stories, question and each choice can be represented as a fixed length vector by adding the vectors of the all component words. Before training, we pruned the utterances in the story whose vector representation has cosine distance far from the question's. The percentage of the pruned utterances was determined by the performance of the model on the development set. The vector representations of utterances, questions and choices were only used in this pre-processing stage and the baseline approaches in Section "Baselines" , not used in the proposed model. $\bullet $ Training Details: The size of the hidden layer for both the forward and backward GRU networks were 128. All the bidirectional GRU networks in the proposed model shared the same set of parameters to avoid overfitting. We used RmsProp BIBREF31 with initial learning rate of 1e-5 with momentum 0.9. Dropout rate was 0.2. Batch size was 40. The number of hop was tuned from 1 to 3 by development set. Baselines We compared the proposed model with some commonly used simple baselines in BIBREF24 and the memory network BIBREF16 . $\bullet $ Choice Length: The most naive baseline is to select the choices based on the number of words in it without listening to the stories and looking at the questions. This included: (i) selecting the longest choice, (ii) selecting the shortest choice or (iii) selecting the choice with the length most different from the rest choices. $\bullet $ Within-Choices similarity: With the vector representations for the choices in pre-processing of Section "Experimental Setup" , we computed the cosine distance among the four choices and selected the one which is (i) the most similar to or (ii) the most different from the others. $\bullet $ Question and Choice Similarity: With the vector representations for the choices and questions in pre-processing of Section "Experimental Setup" , the choice with the highest cosine similarity to the question is selected. $\bullet $ Sliding Window BIBREF24 , BIBREF32 : This model try to found a window of $W$ utterances in the story with the maximum similarity to the question. The similarity between a window of utterances and a question was the averaged cosine similarity of the utterances in the window and the question by their glove vector representation. 
After obtaining the window with the largest cosine similarity to the question, the confidence score of each choice is the average cosine similarity between the utterances in the window and the choice. The choice with the highest score is selected as the answer. $\bullet $ Memory Network BIBREF16 : We implemented the memory network with some modifications for this task to find out if memory network was able to deal it. The original memory network didn't have the embedding module for the choices, so we used the module for question in the memory network to embed the choices. Besides, in order to have the memory network select the answer out of four choices, instead of outputting a word in its original version, we computed the cosine similarity between the the output of the last hop and the choices to select the closest choice as the answer. We shared all the parameters of embedding layers in the memory network for avoiding overfitting. Without this modification, very poor results were obtained on the testing set. The embedding size of the memory network was set 128, stochastic gradient descent was used as BIBREF16 with initial learning rate of 0.01. Batch size was 40. The size of hop was tuned from 1 to 3 by development set. Results We used the accuracy (number of question answered correctly / total number of questions) as our evaluation metric. The results are showed in Table 1 . We trained the model on the manual transcriptions of the stories, while tested the model on the testing set with both manual transcriptions (column labelled “Manual”) and ASR transcriptions (column labelled “ASR”). $\bullet $ Choice Length: Part (a) shows the performance of three models for selecting the answer with the longest, shortest or most different length, ranging from 23% to 35%. $\bullet $ Within Choices similarity: Part (b) shows the performance of two models for selecting the choice which is most similar to or the most different from the others. The accuracy are 36.09% and 27.87% respectively. $\bullet $ Question and Choice Similarity: In part (c), selecting the choice which is the most similar to the question only yielded 24.59%, very close to randomly guess. $\bullet $ Sliding Window: Part (d) for sliding window is the first baseline model considering the transcription of the stories. We tried the window size {1,2,3,5,10,15,20,30} and found the best window size to be 5 on the development set. This implied the useful information for answering the questions is probably within 5 sentences. The performance of 31.15% and 33.61% with and without ASR errors respectively tells how ASR errors affected the results, and the task here is too difficult for this approach to get good results. $\bullet $ Memory Network: The results of memory network in part (e) shows this task is relatively difficult for it, even though memory network was successful in some other tasks. However, the performance of 39.17% accuracy was clearly better than all approaches mentioned above, and it's interesting that this result was independent of the ASR errors and the reason is under investigation. The performance was 31% accuracy when we didn't use the shared embedding layer in the memory network. $\bullet $ AMRNN model: The results of the proposed model are listed in part (f), respectively for the attention mechanism on word-level and sentence-level. Without the ASR errors, the proposed model with sentence-level attention gave an accuracy as high as 51.67%, and slightly lower for word-level attention. 
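The two attention variants whose results are compared in part (f) can be sketched in plain NumPy as below; the softmax-style normalization is only one way to make the weights sum to one, since the paper states only that the attention values are normalized.

```python
import numpy as np

def story_vector(V_q, S, eos_mask=None):
    """Attention over story word vectors S (shape T x d) given question vector V_q:
    cosine similarity per position, weights normalized to sum to one, weighted sum.
    Word-level attention uses every position; an eos_mask restricts the weights to
    end-of-sentence positions (sentence-level attention)."""
    sims = S @ V_q / (np.linalg.norm(S, axis=1) * np.linalg.norm(V_q) + 1e-8)
    if eos_mask is not None:
        sims = np.where(eos_mask, sims, -np.inf)   # only eos positions contribute
    alpha = np.exp(sims - sims.max())              # softmax-style normalization
    alpha /= alpha.sum()
    return alpha @ S                               # V_S
```

In the hopping step, this story vector is added to the current question vector and the attention is repeated with the updated query.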
It's interesting that without ASR errors, sentence-level attention is about 2.5% higher than word-level attention. Very possibly because that getting the information from the whole sentence is more useful than listening carefully at every words, especially for the conceptual and high-level questions in this task. Paying too much attention to every single word may be a bit noisy. On the other hand, the 34.32% ASR errors affected the model on sentence-level more than on word-level. This is very possibly because the incorrectly recognized words may seriously change the meaning of the whole sentences. However, with attention on word-level, when a word is incorrectly recognized, the model may be able to pay attention on other correctly recognized words to compensate for ASR errors and still come up with correct answer. Analysis on a typical example Fig 4 shows the visualization of the attention weights obtained for a typical example story in the testing set, with the proposed AMRNN model using word-level or sentence-level attention on manual or ASR transcriptions respectively. The darker the color, the higher the weights. Only a small part of the story is shown where the response of the model made good difference. This story was mainly talking about the thick cloud and some mysteries on Venus. The question for this story is “What is a possible origin of Venus'clouds?" and the correct choice is “Gases released as a result of volcanic activity". In the manual transcriptions cases (left half of Fig 4 ), both models, with word-level or sentence-level attention, answered the question right and focused on the core and informative words/sentences to the question. The sentence-level model successfully captured the sentence including “...volcanic eruptions often omits gases.”; while the word-level model captured some important key words like “volcanic eruptions", “emit gases". However, in ASR cases (right half of Fig 4 ), the ASR errors misled both models to put some attention on some irrelevant words/sentences. The sentence-level model focus on the irrelevant sentence “In other area, you got canyons..."; while the word-level model focused on some irrelevant words “canyons", “rift malaise", but still capture some correct important words like “volcanic" or “eruptions" to answer correctly. By the darkness of the color, we can observe that the problem caused by ASR errors was more serious for the sentence-level attention when capturing the key concepts needed for the question. This may explain why in part (f) of Table 1 we find degradation caused by ASR errors was less for word-level model than for sentence-level model. Conclusions In this paper we create a new task with the TOEFL corpus. TOEFL is an English examination, where the English learner is asked to listen to a story up to 5 minutes and then answer some corresponding questions. The learner needs to do deduction, logic and summarization for answering the question. We built a model which is able to deal with this challenging task. On manual transcriptions, the proposed model achieved 51.56% accuracy, while the very capable memory network got only 39.17% accuracy. Even on ASR transcriptions with WER of 34.32%, the proposed model still yielded 48.33% accuracy. We also found that although sentence-level attention achieved the best results on the manual transcription, word-level attention outperformed the sentence-level when there were ASR errors.
We propose a listening comprehension model for the task defined above, the Attention-based Multi-hop Recurrent Neural Network (AMRNN) framework, and show that this model is able to perform reasonably well for the task. In the proposed approach, the audio of the stories is first transcribed into text by ASR, and the proposed model is developed to process the transcriptions for selecting the correct answer out of 4 choices given the question.
0fd7d12711dfe0e35467a7ee6525127378a1bacb
0fd7d12711dfe0e35467a7ee6525127378a1bacb_0
Q: What is the new task proposed in this work? Text: Introduction With the popularity of shared videos, social networks, online course, etc, the quantity of multimedia or spoken content is growing much faster beyond what human beings can view or listen to. Accessing large collections of multimedia or spoken content is difficult and time-consuming for humans, even if these materials are more attractive for humans than plain text information. Hence, it will be great if the machine can automatically listen to and understand the spoken content, and even visualize the key information for humans. This paper presents an initial attempt towards the above goal: machine comprehension of spoken content. In an initial task, we wish the machine can listen to and understand an audio story, and answer the questions related to that audio content. TOEFL listening comprehension test is for human English learners whose native language is not English. This paper reports how today's machine can perform with such a test. The listening comprehension task considered here is highly related to Spoken Question Answering (SQA) BIBREF0 , BIBREF1 . In SQA, when the users enter questions in either text or spoken form, the machine needs to find the answer from some audio files. SQA usually worked with ASR transcripts of the spoken content, and used information retrieval (IR) techniques BIBREF2 or relied on knowledge bases BIBREF3 to find the proper answer. Sibyl BIBREF4 , a factoid SQA system, used some IR techniques and utilized several levels of linguistic information to deal with the task. Question Answering in Speech Transcripts (QAST) BIBREF5 , BIBREF6 , BIBREF7 has been a well-known evaluation program of SQA for years. However, most previous works on SQA mainly focused on factoid questions like “What is name of the highest mountain in Taiwan?”. Sometimes this kind of questions may be correctly answered by simply extracting the key terms from a properly chosen utterance without understanding the given spoken content. More difficult questions that cannot be answered without understanding the whole spoken content seemed rarely dealt with previously. With the fast development of deep learning, neural networks have successfully applied to speech recognition BIBREF8 , BIBREF9 , BIBREF10 or NLP tasks BIBREF11 , BIBREF12 . A number of recent efforts have explored various ways to understand multimedia in text form BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 . They incorporated attention mechanisms BIBREF16 with Long Short-Term Memory based networks BIBREF19 . In Question Answering field, most of the works focused on understanding text documents BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 . Even though BIBREF24 tried to answer the question related to the movie, they only used the text and image in the movie for that. It seems that none of them have studied and focused on comprehension of spoken content yet. Task Definition and Contributions In this paper, we develop and propose a new task of machine comprehension of spoken content which was never mentioned before to our knowledge. We take TOEFL listening comprehension test as an corpus for this work. TOEFL is an English examination which tests the knowledge and skills of academic English for English learners whose native languages is not English. In this examination, the subjects would first listen to an audio story around five minutes and then answer several question according to that story. 
The story is related to the college life such as conversation between the student and the professor or a lecture in the class. Each question has four choices where only one is correct. An real example in the TOEFL examination is shown in Fig. 1 . The upper part is the manual transcription of a small part of the audio story. The questions and four choices are listed too. The correct choice to the question in Fig. 1 is choice A. The questions in TOEFL are not simple even for a human with relatively good knowledge because the question cannot be answered by simply matching the words in the question and in the choices with those in the story, and key information is usually buried by many irrelevant utterances. To answer the questions like “Why does the student go to professor's office?", the listeners have to understand the whole audio story and draw the inferences to answer the question correctly. As a result, this task is believed to be very challenging for the state-of-the-art spoken language understanding technologies. We propose a listening comprehension model for the task defined above, the Attention-based Multi-hop Recurrent Neural Network (AMRNN) framework, and show that this model is able to perform reasonably well for the task. In the proposed approach, the audio of the stories is first transcribed into text by ASR, and the proposed model is developed to process the transcriptions for selecting the correct answer out of 4 choices given the question. The initial experiments showed that the proposed model achieves encouraging scores on the TOEFL listening comprehension test. The attention-mechanism proposed in this paper can be applied on either word or sentence levels. We found that sentence-level attention achieved better results on the manual transcriptions without ASR errors, but word-level attention outperformed the sentence-level on ASR transcriptions with errors. Proposed Approach The overall structure of the proposed model is in Fig 2 . The input of model includes the transcriptions of an audio story, a question and four answer choices, all represented as word sequences. The word sequence of the input question is first represented as a question vector $V_Q$ in Section "Experiments" . With the question vector $V_Q$ , the attention mechanism is applied to extract the question-related information from the story in Section "Story Attention Module" . The machine then goes through the story by the attention mechanism several times and obtain an answer selection vector $V_{Q_n}$ in Section "Hopping" . This answer selection vector $V_{Q_n}$ is finally used to evaluate the confidence of each choice in Section "Answer Selection" , and the choice with the highest score is taken as the output. All the model parameters in the above procedure are jointly trained with the target where 1 for the correct choice and 0 otherwise. Question Representation Fig. 3 (A) shows the procedure of encoding the input question into a vector representation $V_Q$ . The input question is a sequence of T words, $w_1,w_2,...,w_T$ , every word $W_{i}$ represented in 1-Of-N encoding. A bidirectional Gated Recurrent Unit (GRU) network BIBREF25 , BIBREF26 , BIBREF27 takes one word from the input question sequentially at a time. In Fig 3 (A), the hidden layer output of the forward GRU (green rectangle) at time index $t$ is denoted by $y_{f}(t)$ , and that of the backward GRU (blue rectangle) is by $y_{b}(t)$ . 
After looking through all the words in the question, the hidden layer output of forward GRU network at the last time index $y_{f}(T)$ , and that of backward GRU network at the first time index $y_{b}(1)$ , are concatenated to form the question vector representation $V_{Q}$ , or $V_{Q} = [y_{f}(T) \Vert y_{b}(1)]$ . Story Attention Module Fig. 3 (B) shows the attention mechanism which takes the question vector $V_Q$ obtained in Fig. 3 (A) and the story transcriptions as the input to encode the whole story into a story vector representation $V_{S}$ . The story transcription is a very long word sequence with many sentences, so we only show two sentences each with 4 words for simplicity. There is a bidirectional GRU in Fig 3 (B) encoding the whole story into a story vector representation $V_{S}$ . The word vector representation of the $t$ -th word $S_{t}$ is constructed by concatenating the hidden layer outputs of forward and backward GRU networks, that is $S_t = [y_{f}(t) \Vert y_{b}(t)]$ . Then the attention value $\alpha _t$ for each time index ${t}$ is the cosine similarity between the question vector $V_{Q}$ and the word vector representation $S_{t}$ of each word, $V_{S}$0 . With attention values $V_{S}$2 , there can be two different attention mechanisms, word-level and sentence-level, to encode the whole story into the story vector representations $V_{S}$3 . Word-level Attention: We normalize all the attention values $\alpha _t$ into $\alpha _t^\prime $ such that they sum to one over the whole story. Then all the word vector $S_{t}$ from the bidirectional GRU network for every word in the story are weighted with this normalized attention value $\alpha _{t}^\prime $ and sum to give the story vector, that is $V_{S} = \sum _{t}\alpha _{t}^{\prime }S_{t}$ . Sentence-level Attention: Sentence-level attention means the model collects the information only at the end of each sentence. Therefore, the normalization is only performed over those words at the end of the sentences to obtain $\alpha _t^{\prime \prime }$ . The story vector representation is then $V_{S} = \sum _{t=eos}\alpha _t^{\prime \prime }*S_{t}$ , where only those words at the end of sentences (eos) contribute to the weighted sum. So $V_{S} = \alpha _4^{\prime \prime }*S_4 + \alpha _8^{\prime \prime }*S_8$ in the example of the Fig. 3 Hopping The overall picture of the proposed model is shown in Fig 2 , in which Fig. 3 (A) and (B) are component modules (labeled as Fig. 3 (A) and (B)) of the complete proposed model. In the left of Fig. 2 , the input question is first converted into a question vector $V_{Q_0}$ by the module in Fig. 3 (A). This $V_{Q_0}$ is used to compute the attention values $\alpha _{t}$ to obtain the story vector $V_{S_1}$ by the module in Fig. 3 (B). Then $V_{Q_0}$ and $V_{S_1}$ are summed to form a new question vector $V_{Q_1}$ . This process is called the first hop (hop 1) in Fig. 2 . The output of the first hop $V_{Q_1}$ can be used to compute the new attention to obtain a new story vector $V_{S_1}$ . This can be considered as the machine going over the story again to re-focus the story with a new question vector. Again, $V_{Q_1}$ and $V_{Q_0}$0 are summed to form $V_{Q_0}$1 (hop 2). After $V_{Q_0}$2 hops ( $V_{Q_0}$3 should be pre-defined), the output of the last hop $V_{Q_0}$4 is used for the answer selection in the Section "Answer Selection" . Answer Selection As in the upper part of Fig. 2 , the same way previously used to encode the question into $V_Q$ in Fig. 
3 (A) is used here to encode four choice into choice vector representations $V_A$ , $V_B$ , $V_C$ , $V_D$ . Then the cosine similarity between the output of the last hop $V_{Q_n}$ and the choice vectors are computed, and the choice with highest similarity is chosen. Experimental Setup $\bullet $ Dataset Collection: The collected TOEFL dataset included 963 examples in total (717 for training, 124 for validation, 122 for testing). Each example included a story, a question and 4 choices. Besides the audio recording of each story, the manual transcriptions of the story are also available. We used a pydub library BIBREF28 to segment the full audio recording into utterances. Each audio recording has 57.9 utterances in average. There are in average 657.7 words in a story, 12.01 words in question and 10.35 words in each choice. $\bullet $ Speech Recognition: We used the CMU speech recognizer - Sphinx BIBREF29 to transcribe the audio story. The recognition word error rate (WER) was 34.32%. $\bullet $ Pre-processing: We used a pre-trained 300 dimension glove vector model BIBREF30 to obtain the vector representation for each word. Each utterance in the stories, question and each choice can be represented as a fixed length vector by adding the vectors of the all component words. Before training, we pruned the utterances in the story whose vector representation has cosine distance far from the question's. The percentage of the pruned utterances was determined by the performance of the model on the development set. The vector representations of utterances, questions and choices were only used in this pre-processing stage and the baseline approaches in Section "Baselines" , not used in the proposed model. $\bullet $ Training Details: The size of the hidden layer for both the forward and backward GRU networks were 128. All the bidirectional GRU networks in the proposed model shared the same set of parameters to avoid overfitting. We used RmsProp BIBREF31 with initial learning rate of 1e-5 with momentum 0.9. Dropout rate was 0.2. Batch size was 40. The number of hop was tuned from 1 to 3 by development set. Baselines We compared the proposed model with some commonly used simple baselines in BIBREF24 and the memory network BIBREF16 . $\bullet $ Choice Length: The most naive baseline is to select the choices based on the number of words in it without listening to the stories and looking at the questions. This included: (i) selecting the longest choice, (ii) selecting the shortest choice or (iii) selecting the choice with the length most different from the rest choices. $\bullet $ Within-Choices similarity: With the vector representations for the choices in pre-processing of Section "Experimental Setup" , we computed the cosine distance among the four choices and selected the one which is (i) the most similar to or (ii) the most different from the others. $\bullet $ Question and Choice Similarity: With the vector representations for the choices and questions in pre-processing of Section "Experimental Setup" , the choice with the highest cosine similarity to the question is selected. $\bullet $ Sliding Window BIBREF24 , BIBREF32 : This model try to found a window of $W$ utterances in the story with the maximum similarity to the question. The similarity between a window of utterances and a question was the averaged cosine similarity of the utterances in the window and the question by their glove vector representation. 
After obtaining the window with the largest cosine similarity to the question, the confidence score of each choice is the average cosine similarity between the utterances in the window and the choice. The choice with the highest score is selected as the answer. $\bullet $ Memory Network BIBREF16 : We implemented the memory network with some modifications for this task to find out if memory network was able to deal it. The original memory network didn't have the embedding module for the choices, so we used the module for question in the memory network to embed the choices. Besides, in order to have the memory network select the answer out of four choices, instead of outputting a word in its original version, we computed the cosine similarity between the the output of the last hop and the choices to select the closest choice as the answer. We shared all the parameters of embedding layers in the memory network for avoiding overfitting. Without this modification, very poor results were obtained on the testing set. The embedding size of the memory network was set 128, stochastic gradient descent was used as BIBREF16 with initial learning rate of 0.01. Batch size was 40. The size of hop was tuned from 1 to 3 by development set. Results We used the accuracy (number of question answered correctly / total number of questions) as our evaluation metric. The results are showed in Table 1 . We trained the model on the manual transcriptions of the stories, while tested the model on the testing set with both manual transcriptions (column labelled “Manual”) and ASR transcriptions (column labelled “ASR”). $\bullet $ Choice Length: Part (a) shows the performance of three models for selecting the answer with the longest, shortest or most different length, ranging from 23% to 35%. $\bullet $ Within Choices similarity: Part (b) shows the performance of two models for selecting the choice which is most similar to or the most different from the others. The accuracy are 36.09% and 27.87% respectively. $\bullet $ Question and Choice Similarity: In part (c), selecting the choice which is the most similar to the question only yielded 24.59%, very close to randomly guess. $\bullet $ Sliding Window: Part (d) for sliding window is the first baseline model considering the transcription of the stories. We tried the window size {1,2,3,5,10,15,20,30} and found the best window size to be 5 on the development set. This implied the useful information for answering the questions is probably within 5 sentences. The performance of 31.15% and 33.61% with and without ASR errors respectively tells how ASR errors affected the results, and the task here is too difficult for this approach to get good results. $\bullet $ Memory Network: The results of memory network in part (e) shows this task is relatively difficult for it, even though memory network was successful in some other tasks. However, the performance of 39.17% accuracy was clearly better than all approaches mentioned above, and it's interesting that this result was independent of the ASR errors and the reason is under investigation. The performance was 31% accuracy when we didn't use the shared embedding layer in the memory network. $\bullet $ AMRNN model: The results of the proposed model are listed in part (f), respectively for the attention mechanism on word-level and sentence-level. Without the ASR errors, the proposed model with sentence-level attention gave an accuracy as high as 51.67%, and slightly lower for word-level attention. 
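The sliding-window baseline described above can be sketched as follows, assuming the utterances, question and choices have already been turned into fixed-length vectors (e.g., sums of GloVe word vectors as in the pre-processing step); this is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def cos(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def sliding_window_answer(utt_vecs, q_vec, choice_vecs, W=5):
    """Find the window of W consecutive utterances with the highest average cosine
    similarity to the question, then score each choice by its average cosine
    similarity to the utterances in that window."""
    best_start, best_score = 0, -np.inf
    for s in range(max(1, len(utt_vecs) - W + 1)):
        score = np.mean([cos(u, q_vec) for u in utt_vecs[s:s + W]])
        if score > best_score:
            best_start, best_score = s, score
    window = utt_vecs[best_start:best_start + W]
    choice_scores = [np.mean([cos(u, c) for u in window]) for c in choice_vecs]
    return int(np.argmax(choice_scores))
```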
It's interesting that without ASR errors, sentence-level attention is about 2.5% higher than word-level attention. Very possibly because that getting the information from the whole sentence is more useful than listening carefully at every words, especially for the conceptual and high-level questions in this task. Paying too much attention to every single word may be a bit noisy. On the other hand, the 34.32% ASR errors affected the model on sentence-level more than on word-level. This is very possibly because the incorrectly recognized words may seriously change the meaning of the whole sentences. However, with attention on word-level, when a word is incorrectly recognized, the model may be able to pay attention on other correctly recognized words to compensate for ASR errors and still come up with correct answer. Analysis on a typical example Fig 4 shows the visualization of the attention weights obtained for a typical example story in the testing set, with the proposed AMRNN model using word-level or sentence-level attention on manual or ASR transcriptions respectively. The darker the color, the higher the weights. Only a small part of the story is shown where the response of the model made good difference. This story was mainly talking about the thick cloud and some mysteries on Venus. The question for this story is “What is a possible origin of Venus'clouds?" and the correct choice is “Gases released as a result of volcanic activity". In the manual transcriptions cases (left half of Fig 4 ), both models, with word-level or sentence-level attention, answered the question right and focused on the core and informative words/sentences to the question. The sentence-level model successfully captured the sentence including “...volcanic eruptions often omits gases.”; while the word-level model captured some important key words like “volcanic eruptions", “emit gases". However, in ASR cases (right half of Fig 4 ), the ASR errors misled both models to put some attention on some irrelevant words/sentences. The sentence-level model focus on the irrelevant sentence “In other area, you got canyons..."; while the word-level model focused on some irrelevant words “canyons", “rift malaise", but still capture some correct important words like “volcanic" or “eruptions" to answer correctly. By the darkness of the color, we can observe that the problem caused by ASR errors was more serious for the sentence-level attention when capturing the key concepts needed for the question. This may explain why in part (f) of Table 1 we find degradation caused by ASR errors was less for word-level model than for sentence-level model. Conclusions In this paper we create a new task with the TOEFL corpus. TOEFL is an English examination, where the English learner is asked to listen to a story up to 5 minutes and then answer some corresponding questions. The learner needs to do deduction, logic and summarization for answering the question. We built a model which is able to deal with this challenging task. On manual transcriptions, the proposed model achieved 51.56% accuracy, while the very capable memory network got only 39.17% accuracy. Even on ASR transcriptions with WER of 34.32%, the proposed model still yielded 48.33% accuracy. We also found that although sentence-level attention achieved the best results on the manual transcription, word-level attention outperformed the sentence-level when there were ASR errors.
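For completeness, the GloVe-based pre-processing used above (fixed-length vectors obtained by summing the word vectors, and pruning story utterances whose vectors are far from the question's) can be sketched as follows; `glove` is assumed to be a word-to-vector mapping, and the `keep_ratio` argument stands in for the pruning percentage that is tuned on the development set.

```python
import numpy as np

def bag_of_glove(words, glove, dim=300):
    """Fixed-length representation: sum of the GloVe vectors of the words."""
    vec = np.zeros(dim)
    for w in words:
        vec += glove.get(w, np.zeros(dim))
    return vec

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def prune_story(utterances, question, glove, keep_ratio=0.8):
    """Keep the fraction of utterances closest (by cosine similarity) to the question."""
    q = bag_of_glove(question, glove)
    ranked = sorted(utterances,
                    key=lambda u: cosine(bag_of_glove(u, glove), q),
                    reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_ratio))]
```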
listening comprehension task
5dc2f79cd8078d5976f2df9ab128d4517e894257
5dc2f79cd8078d5976f2df9ab128d4517e894257_0
Q: Which news organisations are the headlines sourced from? Text: Introduction Several successful efforts have led to publishing huge RDF (Resource Description Framework) datasets on Linked Open Data (LOD) such as DBpedia BIBREF0 and LinkedGeoData BIBREF1 . However, these sources are limited to either structured or semi-structured data. So far, a significant portion of the Web content consists of textual data from social network feeds, blogs, news, logs, etc. Although the Natural Language Processing (NLP) community has developed approaches to extract essential information from plain text (e.g., BIBREF2 , BIBREF3 , BIBREF4 ), there is convenient support for knowledge graph construction. Further, several lexical analysis based approaches extract only a limited form of metadata that is inadequate for supporting applications such as question answering systems. For example, the query “Give me the list of reported events by BBC and CNN about the number of killed people in Yemen in the last four days”, about a recent event (containing restrictions such as location and time) poses several challenges to the current state of Linked Data and relevant information extraction techniques. The query seeks “fresh” information (e.g., last four days) whereas the current version of Linked Data is encyclopedic and historical, and does not contain appropriate information present in a temporally annotated data stream. Further, the query specifies provenance (e.g., published by BBC and CNN) that might not always be available on Linked Data. Crucially, the example query asks about a specific type of event (i.e., reports of war caused killing people) with multiple arguments (e.g., in this case, location argument occurred in Yemen). In spite of recent progress BIBREF5 , BIBREF6 , BIBREF7 , there is still no standardized mechanism for (i) selecting background data model, (ii) recognizing and classifying specific event types, (iii) identifying and labeling associated arguments (i.e., entities as well as relations), (iv) interlinking events, and (v) representing events. In fact, most of the state-of-the-art solutions are ad hoc and limited. In this paper, we provide a systematic pipeline for developing knowledge graph of interlinked events. As a proof-of-concept, we show a case study of headline news on Twitter. The main contributions of this paper include: The remainder of this paper is organized as follows. Section SECREF2 is dedicated to notation and problem statement. Section SECREF3 outlines the required steps for developing a knowledge graph of interlinked events. Section SECREF4 frames our contribution in the context of related work. Section SECREF5 concludes the paper with suggestions for future work. Notation and Problem Statement A tweet of a news headline contains a sequence of words INLINEFORM0 . tab:tweetsamples provides samples of news headlines on Twitter with provenance information such as publisher and publishing date. These were sampled for the type of embedded event discussed below. We aim to create an RDF knowledge base for such news headlines. An RDF knowledge base INLINEFORM1 consists of a set of triples INLINEFORM2 , where INLINEFORM3 is the union of all RDF resources ( INLINEFORM4 are respectively a set of classes, properties and instances), and INLINEFORM5 is a set of literals ( INLINEFORM6 ). We aim to extract rich set of triples INLINEFORM7 from each tweet INLINEFORM8 in the stream of news headline tweets (as discussed below), and populate an event knowledge graph INLINEFORM9 . 
Formally, the extraction task can be captured as INLINEFORM10 where INLINEFORM11 is the stream of news headline tweets and INLINEFORM12 is a knowledge graph of events (where a tweet INLINEFORM13 is mapped to a single event). We address three main challenges on the way: (1) agreeing upon a background data model (either by developing or reusing one), (2) annotating events, associated entities as well as relations, (3) interlinking events across time and media, and (4) publishing triples on the event knowledge graph according to the principles of Linked Open Data. Outline of The Required Steps Here, we outline the required steps for developing a knowledge graph of interlinked events. Figure FIGREF2 illustrates the high-level overview of the full pipeline. This pipeline contains the following main steps, to be discussed in detail later. (1) Collecting tweets from the stream of several news channels such as BBC and CNN on Twitter. (2) Agreeing upon background data model. (3) Event annotation potentially contains two subtasks (i) event recognition and (ii) event classification. (4) Entity/relation annotation possibly comprises a series of tasks as (i) entity recognition, (ii) entity linking, (iii) entity disambiguation, (iv) semantic role labeling of entities and (v) inferring implicit entities. (5) Interlinking events across time and media. (6) Publishing event knowledge graph based on the best practices of Linked Open Data. Background Data Model An initial key question is “What is the suitable background data model (serving as the pivot) for extracting triples associated to an event?” Contemporary approaches to extracting RDF triples capture entities and relations in terms of binary relations BIBREF8 , BIBREF9 , BIBREF10 . We divide the current triple-based extraction approaches into two categories: (i) those that (e.g., BIBREF8 ) follow the pattern INLINEFORM0 to leverage existing relations (i.e., properties) INLINEFORM1 in the knowledge base to find the entities INLINEFORM2 and INLINEFORM3 for which the relation INLINEFORM4 holds. For example, for the relation plays holds between an athlete and his/her favorite sport, and NELL extracts the triple seve ballesteros plays golf for two entities seve ballesteros and golf, and (ii) others that (e.g., BIBREF11 , BIBREF9 ) utilize the pattern INLINEFORM5 to leverage the entities available in the knowledge graph (i.e., INLINEFORM6 ) to infer new relations (e.g., INLINEFORM7 ) that either did not exist in the knowledge base or did not hold between the entities INLINEFORM8 . For example, BIBREF11 initially recognizes named entities in a given sentence and then, by inferring over domains and ranges of properties in DBpedia, assigns an appropriate property between the recognized entities. Given an entity (e.g. Garry Marshall) with type director associated with a known movie (e.g. Pretty woman), it infers the property dbpedia:director from background ontology between the two recognized entities Garry Marshall and Pretty woman. So far, supervised and unsupervised learning approaches have been applied for these extractions, which rely on the use of a large number of specific lexical, syntactical and semantic features. We assume that each news headline maps to an event modeled by an n-ary relation that can be captured by generating multiple triples. An INLINEFORM9 -ary relation is a relation with n arguments INLINEFORM10 . For example, a binary relation triple INLINEFORM11 can be rewritten as INLINEFORM12 . 
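To make the n-ary reading concrete: a common way to simulate an n-ary relation in RDF is to mint an event instance and attach each argument to it through its own property. The hedged rdflib sketch below does this for a headline-style meet event; the namespace, class and property names (and the choice of publisher) are illustrative, not the paper's vocabulary.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/news/")   # illustrative namespace
g = Graph()

# The headline's core verb becomes an event instance of a specific event class ...
event = EX.event_2                            # the "meet" event of tweet no. 2
g.add((event, RDF.type, EX.MeetEvent))

# ... and each associated entity becomes one binary link to that instance.
g.add((event, EX.hasAgent, EX.Instagram_CEO))
g.add((event, EX.hasCoAgent, EX.Pontifex))
g.add((event, EX.hasTopic, Literal("the power of images to unite people")))
g.add((event, EX.publishedBy, EX.CNN))        # provenance; publisher chosen for illustration

print(g.serialize(format="turtle"))
```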
Thus, the first challenge concerns the suitable background data model for representing various types of events and their associated entities by simulating n-ary relationships in terms of binary relationships. Considering our case study, news headlines are often one single sentence (potentially accompanied by subordinate clauses) along with a link directing to the body of the news report. In spite of its brevity, headline tweets provide dense and significant information. Various entities appear in the embedded core message (the latter commonly as verb phrase), including aspects that indicate temporal properties, location and agent. For example, consider the tweet no.2 in tab:tweetsamples that will serve as a running example: Instagram CEO meets with @Pontifex to discuss "the power of images to unite people" that contains several entities related to the verb phrase `meet' and are distinguished by separating boxes as [baseline=(X.base)] X) [draw, shape=rectangle, inner sep=0] Instagram CEO; [baseline=(X.base)] X) [draw, shape=rectangle, inner sep=0] meets with; [baseline=(X.base)] X) [draw, shape=rectangle, inner sep=0] @Pontifex; [baseline=(X.base)] X) [draw, shape=rectangle, inner sep=0] to discuss "the power of images to unite people";. The general intuition is that a core verb (i.e., relation) heads each headline tweet accompanied by multiple arguments (i.e., entities). The number of entities INLINEFORM0 depends on the type of relation but location and time are generic default arguments for any relation INLINEFORM1 . Thus, the core chunk (verb phrase) corresponds to the meet event and the remaining chunks of the given tweet likely function as dependent entities of this event. For instance, in the running example, the chunk [baseline=(X.base)] X) [draw, shape=rectangle, inner sep=0] meets; corresponds to the event INLINEFORM2 with the following recognized entities as associated arguments: DISPLAYFORM0 In this example, the temporal, as well as location arguments of INLINEFORM0 , are absent. Consistent with linguistic theory, not all arguments are always present for each occurrence of an event. The RDF and OWL (Web Ontology language) primarily allow binary relations, defined as a link between either two entities or an entity and its associated property value. However, in the domain of news, we often encounter events that involve more than two entities, and hence require n-ary relations. The W3C Working group Note suggests two patterns for dealing with n-ary relations. We prefer the first pattern that creates INLINEFORM0 classes and INLINEFORM1 new properties to represent an n-ary relation. We formally define a generic event class representing all categories of events (n-ary relations) and then, use a template-based definition for any subclass of the generic event. This enables the representation of specific types of events (e.g. meet event). Definition 1 (Class of Generic Event) A generic event class refers to any event that can involve n multiple entities. In other words, the Generic Event Class denoted by INLINEFORM0 abstracts a relation among n entities. Definition 2 (Class of `X' Event) `X' Event denoted by INLINEFORM0 is a subclass (i.e. specific type) of the class INLINEFORM1 , i.e., INLINEFORM2 . Conceptually it refers to events sharing common behavior, semantics, and consequences. In the following, we provide requirements on the data model for developing a knowledge graph of interlinked events. 
Requirement 1 (Inclusion of Generic Event) An event data model minimally includes the definition of the generic event, while including specific events is optional. Requirement 2 (Inclusion of Provenance) The provenance of each event must be represented within the data model. Requirement 3 (Inclusion of Entity Type) The type of each entity associated with a given event must be represented within the data model. This type can be fine-grained or coarse-grained. Requirement 4 (Inclusion of Properties) For any given entity $e$ associated with a given event $ev$, a property (i.e., binary relation) $p$ between the entity $e$ and the event $ev$ must be represented within the data model. Thus, for the given pair $(e, ev)$, either the triple $(e, p, ev)$ or the triple $(ev, p, e)$ is entailed in the RDF graph of the event $ev$.

Using Existing Data Models In this part, we review a number of state-of-the-art event ontologies. In 2009 UC Berkeley introduced the LODE ontology. In this ontology, an event is defined as an action which takes place at a certain time at a specific location. It can be a historical action as well as a scheduled action. There were previous models BIBREF12, BIBREF13 for representing historical events and scheduled events. Some of them represent both types of events (i.e., historical and scheduled), e.g., EventsML-G2. The LODE ontology proposed to build an interlingua model, i.e., a model which encapsulates the overlap among different ontologies, e.g., CIDOC CRM, ABC Ontology, Event Ontology, and EventsML-G2. This encapsulation is utilized to create a mapping among existing ontologies. LODE was introduced to publish historical events in a fine-grained manner, as it assumes each event is a unique event even if it is part of a series. Because the concept of sub-events does not exist in LODE, related events can be interlinked instead. This ontology helps us to link the factual aspects of a historical event. A factual aspect is given by 'What happened' (event), 'Where did it happen' (atPlace), 'When did it happen' (atTime), and 'Who was involved' (involvedAgent) BIBREF14. A visualization of the LODE ontology is shown in Figure 2. We conclude that LODE meets (i) Requirement 1 as it defines a generic concept of the historical event, (ii) loosely Requirement 3 as it contains generic types for entities, e.g., Agent, SpatialThing, TemporalEntity, and (iii) Requirement 4 as it includes the necessary relations. But the LODE ontology fails to meet Requirement 2 as it does not include the publisher of the event (provenance). Figure 3 depicts our running example in LODE. In 2011, the SEM ontology was introduced by the Vrije Universiteit and Delft. This ontology describes events as the central element in representing historical data, cultural heritage BIBREF15, BIBREF16 and multimedia BIBREF17. SEM is combined with a Prolog API to create event instances without background knowledge. This API also helps in connecting the created event instances to Linked Open Data. SEM proposes a method to attain interoperability among datasets from different domains. SEM strives to remove constraints to make it reusable by supporting weak semantics. Thus, in SEM, the concept of an event is specified as everything that happens BIBREF18. A schematic representation of the SEM model is shown in fig:sem (summarized version).
We conclude that SEM meets (i) Requirement 1 as it defines a generic event, (ii) Requirement 3 as it specifies a type for entities, e.g., Actor, and (iii) Requirement 4 as it includes the required properties. Similar to the LODE ontology, the SEM model fails to meet Requirement 2 as it does not include the publisher of events (provenance). The DBpedia ontology defines the generic concept of event with a broader hierarchy, including lifecycle events (e.g., birth, death), natural events (e.g., earthquake, storm surge), and societal events (e.g., concert, election). We conclude that DBpedia meets (i) Requirement 1 as it defines a generic event, (ii) Requirement 3 as it specifies a type for entities, and (iii) Requirement 4 as it includes the required properties. All of these can be imported from other datasets present on the Web, as DBpedia links to other datasets in an easy manner. Similar to the LODE ontology and the SEM model, DBpedia fails to meet Requirement 2 as it does not include the publisher of events (provenance). Schema.org, a product of collaborative efforts by major companies (i.e., Google, Bing, Yahoo and Yandex), presents a similar generic concept of event. It considers temporal as well as location aspects and additionally provides a limited hierarchy. This hierarchy introduces types of events such as business events, sale events, and social events. The schemas in Schema.org are sets of these types, which are associated with a set of properties. Furthermore, it considers multiple labels between the associated entity and the concept of the event (represented in fig:schema.org), such as actor and contributor, which distinguish the role of the associated entity. Schema.org introduces hundreds of schemas for categories like movies, music, organizations, TV shows, products, places, etc. For Schema.org, an event is an instance taking place at a certain time and at a certain location. Like LODE, repeated events are classified as different events, thus keeping all events unique even if one is a sub-event. A schematic representation of Schema.org (summarized version) is shown in fig:schema.org. We conclude that Schema.org meets (i) Requirement 1 as it defines a generic event, (ii) Requirement 3 as it specifies a type for entities, e.g., Actor (as type Person), Location (as type Place), Organizer (as type Person), StartDate (as type Date or DateTime), etc., and (iii) Requirement 4 as it includes the required properties for all entities defined above. Like LODE, SEM and DBpedia, Schema.org also fails to meet Requirement 2, as it cannot define or import the publisher of the event (provenance). The CEVO ontology relies on an abstract conceptualization of English verbs provided by Beth Levin BIBREF19. Levin categorizes English verbs according to shared meaning and behavior. The CEVO ontology, which is a machine-readable (i.e., RDF) format of Levin's categorization, presents more than 230 event classes for over 3,000 English verb individuals. It organizes verbs into semantically coherent event classes and an event hierarchy and, notably, has an inventory of the corresponding lexical items. For example, the first column of tab:threeVerbClasses presents three event classes: (i) the Communication event, which corresponds to events that cause the transfer of a message, (ii) the Meet event, which relates to group activities, and (iii) the Murder event, which refers to events describing a killing.
The second column of tab:threeVerbClasses presents the lexical items (i.e., verbs) that share meaning and fall under the umbrella of a common event. In other words, an appearance of one of these verbs signals the occurrence of its associated event. For example, w.r.t. the running example, the appearance of the verb meet in the given tweet indicates the occurrence of an event with the specific type `meet'. The CEVO ontology can be employed for recognizing events and, more interestingly, for classifying them w.r.t. their specific type. Specifically, it unifies apparently disparate lexical items under a single event class. More importantly, this can prove critical in reducing the number of apparent features for classifiers and in supporting the inference necessary for query response.

Developing a Data Model The existing data models are basically coarse-grained. In case the domain or application requires a fine-grained data model, the existing data models can be extended. For example, here we extend the event data model from the CEVO ontology for three specific events. We take into account three subclasses (shown in Figure FIGREF49(a)): (i) the class communication $E_{communication}$, which refers to any event transferring a message, (ii) the class meet $E_{meet}$, which ranges over all group activities, and finally (iii) the class murder $E_{murder}$, which includes any reports of killing. Furthermore, as Figure FIGREF49(a) shows, the provenance information (e.g., publisher or date) is represented within the data model (as default arguments for all events), to meet Requirement 2. Figure FIGREF49(b-d) presents parts of the data model for the sub-event classes (i.e., $E_{communication}$, $E_{meet}$, $E_{murder}$) in detail. The types of all possible associated entities as well as their necessary relationships are represented within the data model. This meets Requirements 3 and 4. For example, the meet event is associated with entities of type Participant and Topic (i.e., the topic discussed in the meeting). Considering the sample of tweets in Table TABREF9, the tweets no.1, no.4, and no.7 are instances of the event Communication with the mentions tell, say, announce. The tweets no.2, no.5, no.8 are instances of the event Meet with the mentions meet, visit. The tweets no.3, no.6, no.9 are instances of the event Murder with the mention kill. fig:exam demonstrates the running example within the developed data model. This event has two participants (i.e., Instagram CEO and Pontifex) along with a specific topic.

Using Singleton Property We can adopt the concept of a singleton property, introduced in BIBREF20, for modeling n-ary relations in the background data model. Singleton properties replace RDF reification and enable an efficient representation of statements about statements. Since news headlines contain both provenance information and multiple associated entities, the singleton property (SP) is a suitable choice; furthermore, it enables a systematic encoding of n-ary relations in terms of binary relations. Example 1 (Input/Output) Consider our running example, which is about the occurrence of a meet event with the two participant entities Instagram CEO and Pontifex and the topic $t_1$. The triples generated using the singleton property are as follows:
1. :Meet#1 singletonPropertyOf :Meet.
2. :Instagram_CEO :Meet#1 :Pontifex.
3. :Meet#1 :about :t1.
4. :Meet#1 :hasSource :CNN.
5. :Meet#1 :extractedOn `26/2/2016'.
6. :t1 a :Topic.
7. :t1 :body `to discuss the power of images to unite people'.

Event Annotation Events can be represented at different levels of granularity.
The event annotation task potentially comprises two subsequent tasks, as follows:

Event recognition: Typically, event recognition utilizes phrases and their parts of speech. Although verbs are the most common indicator of an event (e.g., `Obama met Merkel in Berlin'), other parts of speech might also reveal an event (e.g., `G8 meeting in Berlin'). Furthermore, the event recognition task can be either open domain or closed domain. In the former, collecting a lexicon of event phrases is more challenging than in the latter. In any case, a learning approach (either supervised or semi-supervised) can be applied for determining whether or not a piece of text contains an event phrase.

Event classification: This task is necessary in case the employed background data model considers the specific type of events as part of event annotation. In this case, event phrases have to be labeled with specific types of events using a multi-class classifier trained to distinguish the specific type of a given event. For example, the tweets no.2, no.5, no.8 of tab:tweetsamples have the specific type “meet”.

Entity Annotation Entity annotation is a significant task for creating a knowledge graph of events. It can be challenging when we have a fine-grained background data model, which makes the task of semantic role labeling of entities necessary. Overall, the required tasks for fulfilling entity annotation are as follows:

Entity recognition: This task marks a chunk of text as an individual entity which plays a role in the event that occurred. An entity mention can be explicit or implicit. Regarding explicit entities, Named Entity Recognition (NER) tools can be used for open domain scenarios, whereas alternatives such as knowledge graphs, gazetteers, and domain dictionaries are necessary for closed domain scenarios. E.g., for the tweet no.1 in tab:tweetsamples, the chunk `Michelle Obama' is recognized as a named entity with the type person.

Entity linking: Entity linking can be divided into two tasks. The first one BIBREF21, which is required in our case, is about associating entity mentions in a given text with their appropriate corresponding entities in a given knowledge graph; thus, it removes ambiguity. A textual mention of an entity might have a matching entity in the knowledge graph or not. In the former case, the entity linking task reduces to hooking up a suitable entity, whereas in the latter case a new IRI (i.e., Internationalized Resource Identifier) has to be minted, typed and then linked to the textual mention of the given entity. E.g., in the tweet no.1 of tab:tweetsamples, the named entity `Michelle Obama' should be linked to the entity dbr:Michelle_Obama when DBpedia is employed as the background knowledge graph. The second type of entity linking is about linking entities across knowledge graphs using owl:sameAs links. While the first task is required in the pipeline of developing an event knowledge graph, the second one is optional but can enhance the quality and visibility of the underlying knowledge graph.

Semantic role labeling: Most of the existing event ontologies consider generic roles such as actor or agent for involved entities. For a fine-grained background data model, semantic role labeling can be performed. E.g., w.r.t. the tweet no.1 in tab:tweetsamples, the entity `Michelle Obama' can be labeled with the generic role actor when employing the LODE ontology, or with the specific role giver when applying the data model illustrated in fig:communicationpattern.
Entity disambiguation: An entity mention in a text might be polysemous; thus, linking it to the correct entity in the underlying knowledge graph requires a disambiguation phase. Furthermore, a single entity might have various representations in multiple knowledge graphs. Thus, interlinking them is challenging and requires a disambiguation phase as well BIBREF22, BIBREF23, BIBREF24. E.g., w.r.t. the tweet no.7 in tab:tweetsamples, the named entity `Obama' is ambiguous as to whether it refers to `Michelle Obama' or `Barack Obama'. Given the context (i.e., the remaining part of the tweet), it likely refers to `Barack Obama'.

Implicit entity linking: As mentioned before, not all mentions of entities are explicit. For example, w.r.t. the running example, the chunk `Instagram CEO' refers to the implicit entity `Kevin Systrom', who is the CEO of Instagram. The experiment performed in BIBREF25 shows that 21% of entity mentions in the movie domain and 40% of entity mentions in the book domain are implicit. Inferring implicit entities depends on capturing context as well as respecting time intervals.

Interlinking Events The tasks described above have been considered independently before. The interlinking requirement, which has not yet been adequately explored, stems from two inherent characteristics of events: A single event might be reported by various publisher sources using different expressions. Thus, it is necessary to identify the same event across various publisher sources and then interlink the corresponding descriptions using owl:sameAs or skos:related links. Events have an evolutionary nature in the sense that more information is added over time. Thus, it is essential to spot an event and the subsequent events reported to either complement the original event or reflect its causes or consequences. To interlink such events, skos:related can be utilized. The recognized events, entities and relations have to be published according to the principles of LOD, RDF and the employed background data model. To maintain the knowledge graph's consistency and coherence, the generated triples must be de-duplicated and validated, and the assigned URIs disambiguated. The minted URIs should be dereferenceable and interlinked with external RDF data sources.

Related Work Overall, there is a lack of a holistic view on event extraction from free text and on subsequently developing a knowledge graph from it. In this paper, we presented the full pipeline containing the required tasks, such as (i) agreeing upon a data model, (ii) event annotation, (iii) entity annotation and (iv) interlinking events. The majority of previous research is either domain-specific or event-specific and does not undertake the full pipeline (e.g., it is limited to only event and entity extraction). We have provided a visionary review of the full pipeline, which is applicable to virtually any domain. In the following, we initially refer to research approaches for n-ary relation extraction in particular domains, then we refer to the prominent approaches for binary relation extraction. We end by citing successful attempts at triple extraction from structured and semi-structured data sources. The work presented in BIBREF26 introduces complex relations as n-ary relations between n-typed entities. It proposes to factorize all complex relations into a set of binary relations. Then, a classifier is trained to recognize related entities of binary relations. After identifying all pairs of related entities for binary relations, it reconstructs the complex relation using a simple graph creation approach.
Another domain for extracting n-ary relations is protein-protein interactions in the biomedical literature BIBREF27 , BIBREF28 , BIBREF29 . They first identify protein mentions in text and then recognize interaction relations before finally extracting interactions. The approaches employed for protein-protein interactions can be divided into three groups: (i) graph-based approaches (e.g. co-occurrence graph), (ii) rule-based approaches and (iii) learning approaches (e.g. maximum entropy). The other category of event extraction is based on binary relation extraction. NELL: Never-Ending Language Learning BIBREF8 is a learning agent that extracts new facts using existing binary relations in its knowledge base. It was initiated in 2010 using a couple of seed binary relations but after years of running has become self-learning. A notable feature of NELL is its dynamic approach for extracting facts, as it refreshes beliefs in its knowledge base and removes the incorrect or old ones. Linked Open Data as a valuable source of diverse ontologies also can be employed for extracting either new facts or new relations. The framework proposed in BIBREF11 , BIBREF9 extracts facts using binary relations from DBpedia as background knowledge. In contrast to NELL, it initially identifies Named Entities and their type on plain text, then it tries to infer mentions of relation expression to properties in DBpedia (e.g. taking the domain and range of properties into account). Open Information Extraction BIBREF10 is another extraction framework that is not limited to any predefined relation set. Furthermore, extracting triples from structured as well as semi-structured data sources has received adequate attention in the past, especially, DBpedia BIBREF0 and LinkedGeo Data BIBREF1 that leverage the loose structure of data for extraction. Another example is the work BIBREF30 which presents a holistic approach for extraction of RDF from templated websites. Conclusion and Future Work In this paper, we presented the initial version of our framework for the real-time extraction of events. This framework is part of our project HeadEx for developing a knowledge graph of interlinked events. We presented the requirements for choosing a data model representing events and their arguments. We reviewed the existing data models which have been employed by the state-of-the-art applications. Furthermore, we outlined the required tasks for annotating events as well entities. Then, the interlinking strategies were discussed. As a proof-of-concept, we followed a case study of news headlines on Twitter. For our future agenda, we plan to develop the envisioned pipeline containing all the required tasks by either implementing new components or integrating the existing ones.
BBC and CNN
4226a1830266ed5bde1b349205effafe7a0e2337
4226a1830266ed5bde1b349205effafe7a0e2337_0
Q: What meta-information is being transferred? Text: Introduction A knowledge graph is composed by a large amount of triples in the form of $(head\; entity,\, relation,\, tail\; entity)$ ( $(h, r, t)$ in short), encoding knowledge and facts in the world. Many KGs have been proposed BIBREF0 , BIBREF1 , BIBREF2 and applied to various applications BIBREF3 , BIBREF4 , BIBREF5 . Although with huge amount of entities, relations and triples, many KGs still suffer from incompleteness, thus knowledge graph completion is vital for the development of KGs. One of knowledge graph completion tasks is link prediction, predicting new triples based on existing ones. For link prediction, KG embedding methods BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 are promising ways. They learn latent representations, called embeddings, for entities and relations in continuous vector space and accomplish link prediction via calculation with embeddings. The effectiveness of KG embedding methods is promised by sufficient training examples, thus results are much worse for elements with a few instances during training BIBREF10 . However, few-shot problem widely exists in KGs. For example, about 10% of relations in Wikidata BIBREF0 have no more than 10 triples. Relations with a few instances are called few-shot relations. In this paper, we devote to discuss few-shot link prediction in knowledge graphs, predicting tail entity $t$ given head entity $h$ and relation $r$ by only observing $K$ triples about $r$ , usually $K$ is small. Figure 1 depicts an example of 3-shot link prediction in KGs. To do few-shot link prediction, BIBREF11 made the first trial and proposed GMatching, learning a matching metric by considering both learned embeddings and one-hop graph structures, while we try to accomplish few-shot link prediction from another perspective based on the intuition that the most important information to be transferred from a few existing instances to incomplete triples should be the common and shared knowledge within one task. We call such information relation-specific meta information and propose a new framework Meta Relational Learning (MetaR) for few-shot link prediction. For example, in Figure 1 , relation-specific meta information related to the relation CEOof or CountryCapital will be extracted and transferred by MetaR from a few existing instances to incomplete triples. The relation-specific meta information is helpful in the following two perspectives: 1) transferring common relation information from observed triples to incomplete triples, 2) accelerating the learning process within one task by observing only a few instances. Thus we propose two kinds of relation-specific meta information: relation meta and gradient meta corresponding to afore mentioned two perspectives respectively. In our proposed framework MetaR, relation meta is the high-order representation of a relation connecting head and tail entities. Gradient meta is the loss gradient of relation meta which will be used to make a rapid update before transferring relation meta to incomplete triples during prediction. Compared with GMatching BIBREF11 which relies on a background knowledge graph, our MetaR is independent with them, thus is more robust as background knowledge graphs might not be available for few-shot link prediction in real scenarios. We evaluate MetaR with different settings on few-shot link prediction datasets. 
MetaR achieves state-of-the-art results, indicating the success of transferring relation-specific meta information in few-shot link prediction tasks. In summary, main contributions of our work are three-folds: Related Work One target of MetaR is to learn the representation of entities fitting the few-shot link prediction task and the learning framework is inspired by knowledge graph embedding methods. Furthermore, using loss gradient as one kind of meta information is inspired by MetaNet BIBREF12 and MAML BIBREF13 which explore methods for few-shot learning by meta-learning. From these two points, we regard knowledge graph embedding and meta-learning as two main kinds of related work. Knowledge Graph Embedding Knowledge graph embedding models map relations and entities into continuous vector space. They use a score function to measure the truth value of each triple $(h, r, t)$ . Same as knowledge graph embedding, our MetaR also need a score function, and the main difference is that representation for $r$ is the learned relation meta in MetaR rather than embedding of $r$ as in normal knowledge graph embedding methods. One line of work is started by TransE BIBREF6 with distance score function. TransH BIBREF14 and TransR BIBREF15 are two typical models using different methods to connect head, tail entities and their relations. DistMult BIBREF9 and ComplEx BIBREF8 are derived from RESCAL BIBREF7 , trying to mine latent semantics in different ways. There are also some others like ConvE BIBREF16 using convolutional structure to score triples and models using additional information such as entity types BIBREF17 and relation paths BIBREF18 . BIBREF19 comprehensively summarize the current popular knowledge graph embedding methods. Traditional embedding models are heavily rely on rich training instances BIBREF20 , BIBREF11 , thus are limited to do few-shot link prediction. Our MetaR is designed to fill this vulnerability of existing embedding models. Meta-Learning Meta-learning seeks for the ability of learning quickly from only a few instances within the same concept and adapting continuously to more concepts, which are actually the rapid and incremental learning that humans are very good at. Several meta-learning models have been proposed recently. Generally, there are three kinds of meta-learning methods so far: (1) Metric-based meta-learning BIBREF21 , BIBREF22 , BIBREF23 , BIBREF11 , which tries to learn a matching metric between query and support set generalized to all tasks, where the idea of matching is similar to some nearest neighbors algorithms. Siamese Neural Network BIBREF21 is a typical method using symmetric twin networks to compute the metric of two inputs. GMatching BIBREF11 , the first trial on one-shot link prediction in knowledge graphs, learns a matching metric based on entity embeddings and local graph structures which also can be regarded as a metric-based method. (2) Model-based method BIBREF24 , BIBREF12 , BIBREF25 , which uses a specially designed part like memory to achieve the ability of learning rapidly by only a few training instances. MetaNet BIBREF12 , a kind of memory augmented neural network (MANN), acquires meta information from loss gradient and generalizes rapidly via its fast parameterization. (3) Optimization-based approach BIBREF13 , BIBREF26 , which gains the idea of learning faster by changing the optimization algorithm. Model-Agnostic Meta-Learning BIBREF13 abbreviated as MAML is a model-agnostic algorithm. 
It firstly updates parameters of task-specific learner, and meta-optimization across tasks is performed over parameters by using above updated parameters, it's like “a gradient through a gradient". As far as we know, work proposed by BIBREF11 is the first research on few-shot learning for knowledge graphs. It's a metric-based model which consists of a neighbor encoder and a matching processor. Neighbor encoder enhances the embedding of entities by their one-hop neighbors, and matching processor performs a multi-step matching by a LSTM block. Task Formulation In this section, we present the formal definition of a knowledge graph and few-shot link prediction task. A knowledge graph is defined as follows: Definition 3.1 (Knowledge Graph $\mathcal {G}$ ) A knowledge graph $\mathcal {G} = \lbrace \mathcal {E}, \mathcal {R}, \mathcal {TP}\rbrace $ . $\mathcal {E}$ is the entity set. $\mathcal {R}$ is the relation set. And $\mathcal {TP} = \lbrace (h, r, t)\in \mathcal {E} \times \mathcal {R} \times \mathcal {E}\rbrace $ is the triple set. And a few-shot link prediction task in knowledge graphs is defined as: Definition 3.2 (Few-shot link prediction task $\mathcal {T}$ ) With a knowledge graph $\mathcal {G} = \lbrace \mathcal {E}, \mathcal {R}, \mathcal {TP}\rbrace $ , given a support set $\mathcal {S}_r = \lbrace (h_i, t_i)\in \mathcal {E} \times \mathcal {E} | (h_i, r, t_i) \in \mathcal {TP} \rbrace $ about relation $r\in \mathcal {R}$ , where $|\mathcal {S}_r | = K$ , predicting the tail entity linked with relation $r$ to head entity $h_j$ , formulated as $r:(h_j, ?)$ , is called K-shot link prediction. As defined above, a few-shot link prediction task is always defined for a specific relation. During prediction, there usually is more than one triple to be predicted, and with support set $\mathcal {S}_r$ , we call the set of all triples to be predicted as query set $\mathcal {Q}_r = \lbrace r:(h_j, ?)\rbrace $ . The goal of a few-shot link prediction method is to gain the capability of predicting new triples about a relation $r$ with only observing a few triples about $r$ . Thus its training process is based on a set of tasks $\mathcal {T}_{train}=\lbrace \mathcal {T}_{i}\rbrace _{i=1}^{M}$ where each task $\mathcal {T}_{i} = \lbrace \mathcal {S}_i, \mathcal {Q}_i\rbrace $ corresponds to an individual few-shot link prediction task with its own support and query set. Its testing process is conducted on a set of new tasks $\mathcal {T}_{test} = \lbrace \mathcal {T}_{j}\rbrace _{j=1}^{N}$ which is similar to $\mathcal {T}_{train}$ , other than that $\mathcal {T}_{j} \in \mathcal {T}_{test}$ should be about relations that have never been seen in $\mathcal {T}_{train}$ . Table 1 gives a concrete example of the data during learning and testing for few-shot link prediction. Method To make one model gain the few-shot link prediction capability, the most important thing is transferring information from support set to query set and there are two questions for us to think about: (1) what is the most transferable and common information between support set and query set and (2) how to learn faster by only observing a few instances within one task. For question (1), within one task, all triples in support set and query set are about the same relation, thus it is naturally to suppose that relation is the key common part between support and query set. 
For question (2), the learning process is usually conducted by minimizing a loss function via gradient descent, thus gradients reveal how the model's parameters should be changed. Intuitively, we believe that gradients are a valuable source for accelerating the learning process. Based on these thoughts, we propose two kinds of meta information which are shared between support set and query set to deal with the above problems: relation meta and gradient meta. In order to extract relation meta and gradient meta and incorporate them with knowledge graph embedding to solve few-shot link prediction, our proposal, MetaR, mainly contains two modules: a relation-meta learner and an embedding learner. The overview and algorithm of MetaR are shown in Figure 2 and Algorithm 1. Next, we introduce each module of MetaR via one few-shot link prediction task $\mathcal {T}_r = \lbrace \mathcal {S}_r, \mathcal {Q}_r\rbrace $.

Algorithm 1: Learning of MetaR
Input: training tasks $\mathcal {T}_{train}$; embedding layer $emb$; parameters of the relation-meta learner $\phi $
while not done do
    Sample a task $\mathcal {T}_r={\lbrace \mathcal {S}_r, \mathcal {Q}_r\rbrace }$ from $\mathcal {T}_{train}$
    Get $\mathit {R}$ from $\mathcal {S}_{r}$ (Eq. 18, Eq. 19)
    Compute the loss on $\mathcal {S}_{r}$ (Eq. 22)
    Get $\mathit {G}$ from the gradient of $\mathit {R}$ (Eq. 23)
    Update $\mathit {R}$ by $\mathit {G}$ (Eq. 24)
    Compute the loss on $\mathcal {Q}_{r}$ (Eq. 26)
    Update $emb$ and $\phi $ by the loss on $\mathcal {Q}_{r}$
end while

Relation-Meta Learner To extract the relation meta from the support set, we design a relation-meta learner to learn a mapping from head and tail entities in the support set to relation meta. The structure of this relation-meta learner can be implemented as a simple neural network. In task $\mathcal {T}_r$, the input of the relation-meta learner is the head and tail entity pairs in the support set $\lbrace (h_i, t_i) \in \mathcal {S}_r\rbrace $. We firstly extract entity-pair specific relation meta via an $L$-layer fully connected neural network, $$\begin{aligned} \mathbf {x}^0 &= \mathbf {h}_i \oplus \mathbf {t}_i \\ \mathbf {x}^l &= \sigma ({\mathbf {W}^{l}\mathbf {x}^{l-1} + b^l}) \\ \mathit {R}_{(h_i, t_i)} &= {\mathbf {W}^{L}\mathbf {x}^{L-1} + b^L} \end{aligned}$$ (Eq. 18) where $\mathbf {h}_i \in \mathbb {R}^{d}$ and $\mathbf {t}_i \in \mathbb {R}^{d}$ are the embeddings of head entity $h_i$ and tail entity $t_i$ with dimension $d$, respectively. $L$ is the number of layers in the neural network, and $l \in \lbrace 1, \dots , L-1 \rbrace $. $\mathbf {W}^l$ and $\mathbf {b}^l$ are the weights and bias in layer $l$. We use LeakyReLU as the activation $\sigma $. $\oplus $ represents the concatenation of the vectors $\mathbf {h}_i$ and $\mathbf {t}_i$. Finally, $\mathit {R}_{(h_i, t_i)}$ represents the relation meta from the specific entity pair $h_i$ and $t_i$. With multiple entity-pair specific relation meta, we generate the final relation meta in the current task via averaging all entity-pair specific relation meta in the current task, $$\mathit {R}_{\mathcal {T}_r} = \frac{\sum _{i=1}^{K}\mathit {R}_{(h_i, t_i)}}{K}$$ (Eq. 19)

Embedding Learner As we want to get gradient meta to make a rapid update on relation meta, we need a score function to evaluate the truth value of entity pairs under specific relations and also the loss function for the current task.
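Before turning to the score function, the relation-meta learner of Eqs. 18–19 can be sketched in a few lines of PyTorch. The following is a minimal illustrative sketch, not the authors' released code: the hidden-layer sizes (500, 200) follow the NELL-One settings reported in the experiments, while the class and variable names are assumptions made for this sketch.

    # Sketch of the relation-meta learner (Eqs. 18-19): map each support pair
    # (h_i, t_i) to an entity-pair-specific relation meta, then average over
    # the K support pairs. Layer sizes and names are illustrative assumptions.
    import torch
    import torch.nn as nn

    class RelationMetaLearner(nn.Module):
        def __init__(self, embed_dim=100, hidden_dims=(500, 200)):
            super().__init__()
            dims = [2 * embed_dim, *hidden_dims, embed_dim]
            layers = []
            for i in range(len(dims) - 1):
                layers.append(nn.Linear(dims[i], dims[i + 1]))
                if i < len(dims) - 2:   # no activation after the last layer (Eq. 18)
                    layers.append(nn.LeakyReLU())
            self.net = nn.Sequential(*layers)

        def forward(self, head_emb, tail_emb):
            # head_emb, tail_emb: [K, d] embeddings of the K support pairs
            x = torch.cat([head_emb, tail_emb], dim=-1)   # x^0 = h_i concat t_i
            pair_meta = self.net(x)                       # R_(h_i, t_i), shape [K, d]
            return pair_meta.mean(dim=0)                  # R_T_r: average over K (Eq. 19)

    # Example: a 3-shot task with 100-dimensional embeddings.
    learner = RelationMetaLearner(embed_dim=100)
    relation_meta = learner(torch.randn(3, 100), torch.randn(3, 100))
    print(relation_meta.shape)   # torch.Size([100])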
We apply the key idea of knowledge graph embedding methods in our embedding learner, as they have proved to be effective at evaluating the truth values of triples in knowledge graphs. In task $\mathcal {T}_r$, we firstly calculate the score for each entity pair $(h_i, t_i)$ in the support set $\mathcal {S}_r$ as follows: $$s_{(h_i, t_i)} = \Vert \mathbf {h}_i + {\mathit {R}_{\mathcal {T}_r}} - \mathbf {t}_i \Vert $$ (Eq. 21) where $\Vert \mathbf {x}\Vert $ represents the L2 norm of vector $\mathbf {x}$. We design the score function inspired by TransE BIBREF6, which assumes that the head entity embedding $\mathbf {h}$, relation embedding $\mathbf {r}$ and tail entity embedding $\mathbf {t}$ of a true triple $(h, r, t)$ satisfy $\mathbf {h} + \mathbf {r} = \mathbf {t}$. Thus the score function is defined according to the distance between $\mathbf {h} + \mathbf {r} $ and $\mathbf {t}$. Transferring this to our few-shot link prediction task, we replace the relation embedding $\mathbf {r}$ with the relation meta $\mathit {R}_{\mathcal {T}_r}$, as there are no direct general relation embeddings in our task, and $\mathit {R}_{\mathcal {T}_r}$ can be regarded as the relation embedding for the current task $\mathcal {T}_r$. With a score function for each triple, we set the following loss, $$L(\mathcal {S}_r) = \sum _{(h_i, t_i)\in \mathcal {S}_r} [\gamma +s_{(h_i, t_i)}-s_{(h_i, t_i^{\prime })}]_{+}$$ (Eq. 22) where $[x]_{+}$ represents the positive part of $x$ and $\gamma $ represents the margin, which is a hyperparameter. $s_{(h_i, t_i^{\prime })}$ is the score of the negative sample $(h_i, t_i^{\prime })$ corresponding to the current positive entity pair $(h_i, t_i) \in \mathcal {S}_r$, where $(h_i, r, t_i^{\prime }) \notin \mathcal {G}$. $L(\mathcal {S}_r)$ should be small for task $\mathcal {T}_r$, which indicates that the model properly encodes the truth values of triples, and the gradients of its parameters indicate how the parameters should be updated. We therefore regard the gradient of $\mathit {R}_{\mathcal {T}_r}$ based on $L(\mathcal {S}_r)$ as the gradient meta $\mathit {G}_{\mathcal {T}_r}$: $$\mathit {G}_{\mathcal {T}_r} = \nabla _{\mathit {R}_{\mathcal {T}_r}} L(\mathcal {S}_r)$$ (Eq. 23) Following the gradient update rule, we make a rapid update on relation meta as follows: $$\mathit {R}^\prime _{\mathcal {T}_r} = \mathit {R}_{\mathcal {T}_r} - \beta \mathit {G}_{\mathcal {T}_r}$$ (Eq. 24) where $\beta $ indicates the step size of gradient meta when operating on relation meta. When scoring the query set with the embedding learner, we use the updated relation meta. After getting the updated relation meta $\mathit {R}^\prime _{\mathcal {T}_r}$, we transfer it to the samples in the query set $\mathcal {Q}_r = \lbrace (h_j, t_j) \rbrace $ and calculate their scores and the loss of the query set in the same way as for the support set: $$s_{(h_j, t_j)} = \Vert \mathbf {h}_j + \mathit {R}_{\mathcal {T}_r}^\prime - \mathbf {t}_j \Vert $$ (Eq. 25) $$L(\mathcal {Q}_r) = \sum _{(h_j, t_j)\in \mathcal {Q}_r}[\gamma +s_{(h_j, t_j)}-s_{(h_j, t_j^{\prime })}]_{+}$$ (Eq. 26) where $L(\mathcal {Q}_r)$ is our training objective to be minimized. We use this loss to update the whole model.

Training Objective During training, our objective is to minimize the following loss $L$, which is the sum of the query losses for all tasks in one minibatch: $$L = \sum _{(\mathcal {S}_r, \mathcal {Q}_r)\in \mathcal {T}_{train}} L(\mathcal {Q}_r)$$ (Eq. 28)

Experiments With MetaR, we want to figure out the following things: 1) can MetaR accomplish the few-shot link prediction task and even perform better than the previous model?
2) how much relation-specific meta information contributes to few-shot link prediction? 3) is there any requirement for MetaR to work on few-shot link prediction? To do these, we conduct the experiments on two few-shot link prediction datasets and deeply analyze the experiment results . Datasets and Evaluation Metrics We use two datasets, NELL-One and Wiki-One which are constructed by BIBREF11 . NELL-One and Wiki-One are derived from NELL BIBREF2 and Wikidata BIBREF0 respectively. Furthermore, because these two benchmarks are firstly tested on GMatching which consider both learned embeddings and one-hop graph structures, a background graph is constructed with relations out of training/validation/test sets for obtaining the pre-train entity embeddings and providing the local graph for GMatching. Unlike GMatching using background graph to enhance the representations of entities, our MetaR can be trained without background graph. For NELL-One and Wiki-One which have background graph originally, we can make use of such background graph by fitting it into training tasks or using it to train embeddings to initialize entity representations. Overall, we have three kinds of dataset settings, shown in Table 3 . For setting of BG:In-Train, in order to make background graph included in training tasks, we sample tasks from triples in background graph and original training set, rather than sampling from only original training set. Note that these three settings don't violate the task formulation of few-shot link prediction in KGs. The statistics of NELL-One and Wiki-One are shown in Table 2 . We use two traditional metrics to evaluate different methods on these datasets, MRR and Hits@N. MRR is the mean reciprocal rank and Hits@N is the proportion of correct entities ranked in the top N in link prediction. Implementation During training, mini-batch gradient descent is applied with batch size set as 64 and 128 for NELL-One and Wiki-One respectively. We use Adam BIBREF27 with the initial learning rate as 0.001 to update parameters. We set $\gamma = 1$ and $\beta = 1$ . The number of positive and negative triples in query set is 3 and 10 in NELL-One and Wiki-One. Trained model will be applied on validation tasks each 1000 epochs, and the current model parameters and corresponding performance will be recorded, after stopping, the model that has the best performance on Hits@10 will be treated as final model. For number of training epoch, we use early stopping with 30 patient epochs, which means that we stop the training when the performance on Hits@10 drops 30 times continuously. Following GMatching, the embedding dimension of NELL-One is 100 and Wiki-One is 50. The sizes of two hidden layers in relation-meta learner are 500, 200 and 250, 100 for NELL-One and Wiki-One. Results The results of two few-shot link prediction tasks, including 1-shot and 5-shot, on NELL-One and Wiki-One are shown in Table 4 . The baseline in our experiment is GMatching BIBREF11 , which made the first trial on few-shot link prediction task and is the only method that we can find as baseline. In this table, results of GMatching with different KG embedding initialization are copied from the original paper. Our MetaR is tested on different settings of datasets introduced in Table 3 . In Table 4 , our model performs better with all evaluation metrics on both datasets. 
Specifically, for 1-shot link prediction, MetaR increases by 33%, 28.1%, 29.2% and 27.8% on MRR, Hits@10, Hits@5 and Hits@1 on NELL-One, and 41.4%, 18.8%, 37.9% and 62.2% on Wiki-One, with average improvement of 29.53% and 40.08% respectively. For 5-shot, MetaR increases by 29.9%, 40.5%, 32.6% and 17.5% on MRR, Hits@10, Hits@5 and Hits@1 on NELL-One with average improvement of 30.13%. Thus for the first question we want to explore, the results of MetaR are no worse than GMatching, indicating that MetaR has the capability of accomplishing few-shot link prediction. In parallel, the impressive improvement compared with GMatching demonstrates that the key idea of MetaR, transferring relation-specific meta information from support set to query set, works well on few-shot link prediction task. Furthermore, compared with GMatching, our MetaR is independent with background knowledge graphs. We test MetaR on 1-shot link prediction in partial NELL-One and Wiki-One which discard the background graph, and get the results of 0.279 and 0.348 on Hits@10 respectively. Such results are still comparable with GMatching in fully datasets with background. Ablation Study We have proved that relation-specific meta information, the key point of MetaR, successfully contributes to few-shot link prediction in previous section. As there are two kinds of relation-specific meta information in this paper, relation meta and gradient meta, we want to figure out how these two kinds of meta information contribute to the performance. Thus, we conduct an ablation study with three settings. The first one is our complete MetaR method denoted as standard. The second one is removing the gradient meta by transferring un-updated relation meta directly from support set to query set without updating it via gradient meta, denoted as -g. The third one is removing the relation meta further which makes the model rebase to a simple TransE embedding model, denoted as -g -r. The result under the third setting is copied from BIBREF11 . It uses the triples from background graph, training tasks and one-shot training triples from validation/test set, so it's neither BG:Pre-Train nor BG:In-Train. We conduct the ablation study on NELL-one with metric Hit@10 and results are shown in Table 5 . Table 5 shows that removing gradient meta decreases 29.3% and 15% on two dataset settings, and further removing relation meta continuous decreases the performance with 55% and 72% compared to the standard results. Thus both relation meta and gradient meta contribute significantly and relation meta contributes more than gradient meta. Without gradient meta and relation meta, there is no relation-specific meta information transferred in the model and it almost doesn't work. This also illustrates that relation-specific meta information is important and effective for few-shot link prediction task. Facts That Affect MetaR's Performance We have proved that both relation meta and gradient meta surely contribute to few-shot link prediction. But is there any requirement for MetaR to ensure the performance on few-shot link prediction? We analyze this from two points based on the results, one is the sparsity of entities and the other is the number of tasks in training set. The sparsity of entities We notice that the best result of NELL-One and Wiki-One appears in different dataset settings. With NELL-One, MetaR performs better on BG:In-Train dataset setting, while with Wiki-One, it performs better on BG:Pre-Train. 
The performance difference between the two dataset settings is more significant on Wiki-One. Most datasets for few-shot tasks are sparse, as are NELL-One and Wiki-One, but the entity sparsity in these two datasets is still significantly different, which is especially reflected in the proportion of entities that appear in only one triple in the training set: $82.8$% in Wiki-One and $37.1$% in NELL-One. Entities that have only one triple during training make MetaR unable to learn good representations for them, because in MetaR entity embeddings rely heavily on the triples related to them. Based on only one triple, the learned entity embeddings will include a lot of bias. Knowledge graph embedding methods can learn better embeddings than MetaR for those one-shot entities, because entity embeddings can be corrected by the embeddings of relations connected to them, whereas in MetaR they cannot. This is why the best performance occurs in the BG:Pre-Train setting on Wiki-One: pre-trained entity embeddings help MetaR overcome the low quality of one-shot entities. The number of tasks From the comparison of MetaR's performance with and without the background dataset setting on NELL-One, we find that the number of tasks affects MetaR's performance significantly. With BG:In-Train, there are 321 tasks during training and MetaR achieves 0.401 on Hits@10, while without background knowledge there are 51 tasks, 270 fewer, and MetaR achieves 0.279. This explains why MetaR achieves its best performance on BG:In-Train with NELL-One: even though NELL-One has $37.1$% one-shot entities, adding background knowledge into the dataset increases the number of training tasks significantly, which compensates for the sparsity problem and contributes more to the task. Thus we conclude that both the sparsity of entities and the number of tasks affect the performance of MetaR. Generally, with more training tasks MetaR performs better, and for extremely sparse datasets pre-trained entity embeddings are preferred. Conclusion We propose a meta relational learning framework to do few-shot link prediction in KGs, and we design our model to transfer relation-specific meta information from the support set to the query set; specifically, it uses relation meta to transfer common and important information, and gradient meta to accelerate learning. Compared to GMatching, which is the only existing method for this task, our method MetaR achieves better performance and is also independent of background knowledge graphs. Based on the experimental results, we find that the performance of MetaR is affected by the number of training tasks and the sparsity of entities. We may consider obtaining more valuable information about sparse entities in few-shot link prediction in KGs in the future. Acknowledgments We want to express gratitude to the anonymous reviewers for their hard work and kind comments, which will further improve our work in the future. This work is funded by NSFC 91846204/61473260, national key research program YS2018YFB140004, and the Alibaba CangJingGe (Knowledge Engine) Research Plan.
high-order representation of a relation, loss gradient of relation meta
5fb348b2d7b012123de93e79fd46a7182fd062bd
5fb348b2d7b012123de93e79fd46a7182fd062bd_0
Q: What datasets are used to evaluate the approach? Text: Introduction A knowledge graph is composed by a large amount of triples in the form of $(head\; entity,\, relation,\, tail\; entity)$ ( $(h, r, t)$ in short), encoding knowledge and facts in the world. Many KGs have been proposed BIBREF0 , BIBREF1 , BIBREF2 and applied to various applications BIBREF3 , BIBREF4 , BIBREF5 . Although with huge amount of entities, relations and triples, many KGs still suffer from incompleteness, thus knowledge graph completion is vital for the development of KGs. One of knowledge graph completion tasks is link prediction, predicting new triples based on existing ones. For link prediction, KG embedding methods BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 are promising ways. They learn latent representations, called embeddings, for entities and relations in continuous vector space and accomplish link prediction via calculation with embeddings. The effectiveness of KG embedding methods is promised by sufficient training examples, thus results are much worse for elements with a few instances during training BIBREF10 . However, few-shot problem widely exists in KGs. For example, about 10% of relations in Wikidata BIBREF0 have no more than 10 triples. Relations with a few instances are called few-shot relations. In this paper, we devote to discuss few-shot link prediction in knowledge graphs, predicting tail entity $t$ given head entity $h$ and relation $r$ by only observing $K$ triples about $r$ , usually $K$ is small. Figure 1 depicts an example of 3-shot link prediction in KGs. To do few-shot link prediction, BIBREF11 made the first trial and proposed GMatching, learning a matching metric by considering both learned embeddings and one-hop graph structures, while we try to accomplish few-shot link prediction from another perspective based on the intuition that the most important information to be transferred from a few existing instances to incomplete triples should be the common and shared knowledge within one task. We call such information relation-specific meta information and propose a new framework Meta Relational Learning (MetaR) for few-shot link prediction. For example, in Figure 1 , relation-specific meta information related to the relation CEOof or CountryCapital will be extracted and transferred by MetaR from a few existing instances to incomplete triples. The relation-specific meta information is helpful in the following two perspectives: 1) transferring common relation information from observed triples to incomplete triples, 2) accelerating the learning process within one task by observing only a few instances. Thus we propose two kinds of relation-specific meta information: relation meta and gradient meta corresponding to afore mentioned two perspectives respectively. In our proposed framework MetaR, relation meta is the high-order representation of a relation connecting head and tail entities. Gradient meta is the loss gradient of relation meta which will be used to make a rapid update before transferring relation meta to incomplete triples during prediction. Compared with GMatching BIBREF11 which relies on a background knowledge graph, our MetaR is independent with them, thus is more robust as background knowledge graphs might not be available for few-shot link prediction in real scenarios. We evaluate MetaR with different settings on few-shot link prediction datasets. 
MetaR achieves state-of-the-art results, indicating the success of transferring relation-specific meta information in few-shot link prediction tasks. In summary, main contributions of our work are three-folds: Related Work One target of MetaR is to learn the representation of entities fitting the few-shot link prediction task and the learning framework is inspired by knowledge graph embedding methods. Furthermore, using loss gradient as one kind of meta information is inspired by MetaNet BIBREF12 and MAML BIBREF13 which explore methods for few-shot learning by meta-learning. From these two points, we regard knowledge graph embedding and meta-learning as two main kinds of related work. Knowledge Graph Embedding Knowledge graph embedding models map relations and entities into continuous vector space. They use a score function to measure the truth value of each triple $(h, r, t)$ . Same as knowledge graph embedding, our MetaR also need a score function, and the main difference is that representation for $r$ is the learned relation meta in MetaR rather than embedding of $r$ as in normal knowledge graph embedding methods. One line of work is started by TransE BIBREF6 with distance score function. TransH BIBREF14 and TransR BIBREF15 are two typical models using different methods to connect head, tail entities and their relations. DistMult BIBREF9 and ComplEx BIBREF8 are derived from RESCAL BIBREF7 , trying to mine latent semantics in different ways. There are also some others like ConvE BIBREF16 using convolutional structure to score triples and models using additional information such as entity types BIBREF17 and relation paths BIBREF18 . BIBREF19 comprehensively summarize the current popular knowledge graph embedding methods. Traditional embedding models are heavily rely on rich training instances BIBREF20 , BIBREF11 , thus are limited to do few-shot link prediction. Our MetaR is designed to fill this vulnerability of existing embedding models. Meta-Learning Meta-learning seeks for the ability of learning quickly from only a few instances within the same concept and adapting continuously to more concepts, which are actually the rapid and incremental learning that humans are very good at. Several meta-learning models have been proposed recently. Generally, there are three kinds of meta-learning methods so far: (1) Metric-based meta-learning BIBREF21 , BIBREF22 , BIBREF23 , BIBREF11 , which tries to learn a matching metric between query and support set generalized to all tasks, where the idea of matching is similar to some nearest neighbors algorithms. Siamese Neural Network BIBREF21 is a typical method using symmetric twin networks to compute the metric of two inputs. GMatching BIBREF11 , the first trial on one-shot link prediction in knowledge graphs, learns a matching metric based on entity embeddings and local graph structures which also can be regarded as a metric-based method. (2) Model-based method BIBREF24 , BIBREF12 , BIBREF25 , which uses a specially designed part like memory to achieve the ability of learning rapidly by only a few training instances. MetaNet BIBREF12 , a kind of memory augmented neural network (MANN), acquires meta information from loss gradient and generalizes rapidly via its fast parameterization. (3) Optimization-based approach BIBREF13 , BIBREF26 , which gains the idea of learning faster by changing the optimization algorithm. Model-Agnostic Meta-Learning BIBREF13 abbreviated as MAML is a model-agnostic algorithm. 
It firstly updates parameters of task-specific learner, and meta-optimization across tasks is performed over parameters by using above updated parameters, it's like “a gradient through a gradient". As far as we know, work proposed by BIBREF11 is the first research on few-shot learning for knowledge graphs. It's a metric-based model which consists of a neighbor encoder and a matching processor. Neighbor encoder enhances the embedding of entities by their one-hop neighbors, and matching processor performs a multi-step matching by a LSTM block. Task Formulation In this section, we present the formal definition of a knowledge graph and few-shot link prediction task. A knowledge graph is defined as follows: Definition 3.1 (Knowledge Graph $\mathcal {G}$ ) A knowledge graph $\mathcal {G} = \lbrace \mathcal {E}, \mathcal {R}, \mathcal {TP}\rbrace $ . $\mathcal {E}$ is the entity set. $\mathcal {R}$ is the relation set. And $\mathcal {TP} = \lbrace (h, r, t)\in \mathcal {E} \times \mathcal {R} \times \mathcal {E}\rbrace $ is the triple set. And a few-shot link prediction task in knowledge graphs is defined as: Definition 3.2 (Few-shot link prediction task $\mathcal {T}$ ) With a knowledge graph $\mathcal {G} = \lbrace \mathcal {E}, \mathcal {R}, \mathcal {TP}\rbrace $ , given a support set $\mathcal {S}_r = \lbrace (h_i, t_i)\in \mathcal {E} \times \mathcal {E} | (h_i, r, t_i) \in \mathcal {TP} \rbrace $ about relation $r\in \mathcal {R}$ , where $|\mathcal {S}_r | = K$ , predicting the tail entity linked with relation $r$ to head entity $h_j$ , formulated as $r:(h_j, ?)$ , is called K-shot link prediction. As defined above, a few-shot link prediction task is always defined for a specific relation. During prediction, there usually is more than one triple to be predicted, and with support set $\mathcal {S}_r$ , we call the set of all triples to be predicted as query set $\mathcal {Q}_r = \lbrace r:(h_j, ?)\rbrace $ . The goal of a few-shot link prediction method is to gain the capability of predicting new triples about a relation $r$ with only observing a few triples about $r$ . Thus its training process is based on a set of tasks $\mathcal {T}_{train}=\lbrace \mathcal {T}_{i}\rbrace _{i=1}^{M}$ where each task $\mathcal {T}_{i} = \lbrace \mathcal {S}_i, \mathcal {Q}_i\rbrace $ corresponds to an individual few-shot link prediction task with its own support and query set. Its testing process is conducted on a set of new tasks $\mathcal {T}_{test} = \lbrace \mathcal {T}_{j}\rbrace _{j=1}^{N}$ which is similar to $\mathcal {T}_{train}$ , other than that $\mathcal {T}_{j} \in \mathcal {T}_{test}$ should be about relations that have never been seen in $\mathcal {T}_{train}$ . Table 1 gives a concrete example of the data during learning and testing for few-shot link prediction. Method To make one model gain the few-shot link prediction capability, the most important thing is transferring information from support set to query set and there are two questions for us to think about: (1) what is the most transferable and common information between support set and query set and (2) how to learn faster by only observing a few instances within one task. For question (1), within one task, all triples in support set and query set are about the same relation, thus it is naturally to suppose that relation is the key common part between support and query set. 
Method To make one model gain the few-shot link prediction capability, the most important thing is transferring information from the support set to the query set, and there are two questions to think about: (1) what is the most transferable and common information between support set and query set, and (2) how to learn faster by observing only a few instances within one task. For question (1), within one task, all triples in the support set and query set are about the same relation, so it is natural to suppose that the relation is the key common part between support and query set. For question (2), the learning process is usually conducted by minimizing a loss function via gradient descent, and gradients reveal how the model's parameters should be changed. Intuitively, we believe that gradients are a valuable source for accelerating the learning process. Based on these thoughts, we propose two kinds of meta information that are shared between support set and query set to deal with the above problems: relation meta and gradient meta. In order to extract relation meta and gradient meta and incorporate them with knowledge graph embedding to solve few-shot link prediction, our proposal, MetaR, mainly contains two modules: a relation-meta learner and an embedding learner. The overview and algorithm of MetaR are shown in Figure 2 and Algorithm 1. Next, we introduce each module of MetaR via one few-shot link prediction task $\mathcal{T}_r = \lbrace \mathcal{S}_r, \mathcal{Q}_r\rbrace$. Algorithm 1 (Learning of MetaR). Input: training tasks $\mathcal{T}_{train}$; embedding layer $emb$; parameters of the relation-meta learner $\phi$. While not done: sample a task $\mathcal{T}_r=\lbrace \mathcal{S}_r, \mathcal{Q}_r\rbrace$ from $\mathcal{T}_{train}$; get $\mathit{R}$ from $\mathcal{S}_{r}$ (Equ. 18, Equ. 19); compute the loss on $\mathcal{S}_{r}$ (Equ. 22); get $\mathit{G}$ from the gradient of $\mathit{R}$ (Equ. 23); update $\mathit{R}$ by $\mathit{G}$ (Equ. 24); compute the loss on $\mathcal{Q}_{r}$ (Equ. 26); update $emb$ and $\phi$ by the loss on $\mathcal{Q}_{r}$. Relation-Meta Learner To extract the relation meta from the support set, we design a relation-meta learner that learns a mapping from head and tail entities in the support set to relation meta. The structure of this relation-meta learner can be implemented as a simple neural network. In task $\mathcal{T}_r$, the input of the relation-meta learner is the set of head and tail entity pairs in the support set, $\lbrace (h_i, t_i) \in \mathcal{S}_r\rbrace$. We first extract entity-pair-specific relation meta via an $L$-layer fully connected neural network, $$\begin{aligned} \mathbf{x}^0 &= \mathbf{h}_i \oplus \mathbf{t}_i \\ \mathbf{x}^l &= \sigma (\mathbf{W}^{l}\mathbf{x}^{l-1} + b^l) \\ \mathit{R}_{(h_i, t_i)} &= \mathbf{W}^{L}\mathbf{x}^{L-1} + b^L \end{aligned}$$ (Eq. 18) where $\mathbf{h}_i \in \mathbb{R}^{d}$ and $\mathbf{t}_i \in \mathbb{R}^{d}$ are the embeddings of head entity $h_i$ and tail entity $t_i$ with dimension $d$, respectively. $L$ is the number of layers in the neural network, and $l \in \lbrace 1, \dots, L-1 \rbrace$. $\mathbf{W}^l$ and $\mathbf{b}^l$ are the weights and bias in layer $l$. We use LeakyReLU for the activation $\sigma$. $\mathbf{h}_i \oplus \mathbf{t}_i$ represents the concatenation of the vectors $\mathbf{h}_i$ and $\mathbf{t}_i$. Finally, $\mathit{R}_{(h_i, t_i)}$ represents the relation meta from the specific entity pair $h_i$ and $t_i$. With multiple entity-pair-specific relation metas, we generate the final relation meta for the current task by averaging all entity-pair-specific relation metas in the current task, $$\mathit{R}_{\mathcal{T}_r} = \frac{\sum_{i=1}^{K}\mathit{R}_{(h_i, t_i)}}{K}$$ (Eq. 19)
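As a rough illustration of Eq. 18 and Eq. 19, the following PyTorch-style sketch implements a relation-meta learner under assumptions of our own (two hidden layers, LeakyReLU, hidden sizes as free parameters); it is not the authors' released code.

```python
import torch
import torch.nn as nn

class RelationMetaLearner(nn.Module):
    """Maps concatenated (head, tail) embeddings to an entity-pair-specific
    relation meta, then averages over the K support pairs (Eq. 18-19)."""
    def __init__(self, embed_dim: int, hidden_dims=(500, 200)):
        super().__init__()
        dims = [2 * embed_dim, *hidden_dims, embed_dim]
        layers = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:            # no activation on the output layer
                layers.append(nn.LeakyReLU())
        self.net = nn.Sequential(*layers)

    def forward(self, head_emb: torch.Tensor, tail_emb: torch.Tensor) -> torch.Tensor:
        # head_emb, tail_emb: (K, d) embeddings of the K support pairs
        x = torch.cat([head_emb, tail_emb], dim=-1)    # x^0 = h ⊕ t
        pair_meta = self.net(x)                        # R_(h_i, t_i), shape (K, d)
        return pair_meta.mean(dim=0)                   # R_{T_r}, shape (d,)

# Usage with dummy data (K = 3 support pairs, d = 100):
learner = RelationMetaLearner(embed_dim=100)
relation_meta = learner(torch.randn(3, 100), torch.randn(3, 100))
```

The hidden sizes (500, 200) mirror the values reported later for NELL-One, but should be treated as placeholders.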
Embedding Learner As we want to obtain gradient meta to make a rapid update on relation meta, we need a score function to evaluate the truth value of entity pairs under a specific relation, as well as the loss function for the current task. We apply the key idea of knowledge graph embedding methods in our embedding learner, as they have proved effective at evaluating the truth value of triples in knowledge graphs. In task $\mathcal{T}_r$, we first calculate the score for each entity pair $(h_i, t_i)$ in the support set $\mathcal{S}_r$ as follows: $$s_{(h_i, t_i)} = \Vert \mathbf{h}_i + \mathit{R}_{\mathcal{T}_r} - \mathbf{t}_i \Vert$$ (Eq. 21) where $\Vert \mathbf{x}\Vert$ represents the L2 norm of vector $\mathbf{x}$. We design the score function inspired by TransE BIBREF6, which assumes that the head entity embedding $\mathbf{h}$, relation embedding $\mathbf{r}$, and tail entity embedding $\mathbf{t}$ of a true triple $(h, r, t)$ satisfy $\mathbf{h} + \mathbf{r} = \mathbf{t}$. Thus the score function is defined according to the distance between $\mathbf{h} + \mathbf{r}$ and $\mathbf{t}$. Transferring this to our few-shot link prediction task, we replace the relation embedding $\mathbf{r}$ with the relation meta $\mathit{R}_{\mathcal{T}_r}$, as there are no direct general relation embeddings in our task and $\mathit{R}_{\mathcal{T}_r}$ can be regarded as the relation embedding for the current task $\mathcal{T}_r$. With a score function for each triple, we set the following loss, $$L(\mathcal{S}_r) = \sum_{(h_i, t_i)\in \mathcal{S}_r} [\gamma +s_{(h_i, t_i)}-s_{(h_i, t_i^{\prime})}]_{+}$$ (Eq. 22) where $[x]_{+}$ represents the positive part of $x$ and $\gamma$ is the margin, a hyperparameter. $s_{(h_i, t_i^{\prime})}$ is the score of the negative sample $(h_i, t_i^{\prime})$ corresponding to the current positive entity pair $(h_i, t_i) \in \mathcal{S}_r$, where $(h_i, r, t_i^{\prime}) \notin \mathcal{G}$. $L(\mathcal{S}_r)$ should be small for task $\mathcal{T}_r$, which indicates that the model properly encodes the truth values of triples. The gradients of the parameters therefore indicate how the parameters should be updated. Thus we regard the gradient of $\mathit{R}_{\mathcal{T}_r}$ based on $L(\mathcal{S}_r)$ as the gradient meta $\mathit{G}_{\mathcal{T}_r}$: $$\mathit{G}_{\mathcal{T}_r} = \nabla_{\mathit{R}_{\mathcal{T}_r}} L(\mathcal{S}_r)$$ (Eq. 23) Following the gradient update rule, we make a rapid update on the relation meta as follows: $$\mathit{R}^\prime_{\mathcal{T}_r} = \mathit{R}_{\mathcal{T}_r} - \beta \mathit{G}_{\mathcal{T}_r}$$ (Eq. 24) where $\beta$ indicates the step size of the gradient meta when operating on the relation meta. When scoring the query set with the embedding learner, we use the updated relation meta. After getting the updated relation meta $\mathit{R}^\prime$, we transfer it to the samples in the query set $\mathcal{Q}_r = \lbrace (h_j, t_j) \rbrace$ and calculate their scores and the loss of the query set in the same way as for the support set: $$s_{(h_j, t_j)} = \Vert \mathbf{h}_j + \mathit{R}_{\mathcal{T}_r}^\prime - \mathbf{t}_j \Vert$$ (Eq. 25) $$L(\mathcal{Q}_r) = \sum_{(h_j, t_j)\in \mathcal{Q}_r}[\gamma +s_{(h_j, t_j)}-s_{(h_j, t_j^{\prime})}]_{+}$$ (Eq. 26) where $L(\mathcal{Q}_r)$ is our training objective to be minimized. We use this loss to update the whole model. Training Objective During training, our objective is to minimize the following loss $L$, which is the sum of the query losses over all tasks in one minibatch: $$L = \sum_{(\mathcal{S}_r, \mathcal{Q}_r)\in \mathcal{T}_{train}} L(\mathcal{Q}_r)$$ (Eq. 28)
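The following sketch (our own approximation, not the authors' code) shows how the TransE-style score (Eq. 21), the hinge loss (Eq. 22), and the gradient-meta update (Eq. 23-24) could be wired together in PyTorch; the negative tails and tensor shapes are illustrative placeholders, while the margin and step size of 1.0 follow the hyperparameters reported later.

```python
import torch
import torch.nn.functional as F

def score(head, tail, rel_meta):
    # s = || h + R - t ||  (Eq. 21), computed per pair
    return torch.norm(head + rel_meta - tail, p=2, dim=-1)

def hinge_loss(head, tail, neg_tail, rel_meta, margin=1.0):
    # sum of [gamma + s(pos) - s(neg)]_+ over the set (Eq. 22 / Eq. 26)
    pos = score(head, tail, rel_meta)
    neg = score(head, neg_tail, rel_meta)
    return F.relu(margin + pos - neg).sum()

def adapt_relation_meta(rel_meta, sup_h, sup_t, sup_neg_t, beta=1.0):
    """Gradient meta G = dL(S_r)/dR (Eq. 23) and rapid update R' = R - beta*G (Eq. 24)."""
    support_loss = hinge_loss(sup_h, sup_t, sup_neg_t, rel_meta)
    grad_meta = torch.autograd.grad(support_loss, rel_meta, create_graph=True)[0]
    return rel_meta - beta * grad_meta  # keep the graph so the query loss can train everything

# Dummy usage: d = 100, K = 3 support pairs, 2 query pairs
rel_meta = torch.randn(100, requires_grad=True)
sup_h, sup_t, sup_neg_t = (torch.randn(3, 100) for _ in range(3))
rel_meta_prime = adapt_relation_meta(rel_meta, sup_h, sup_t, sup_neg_t)
query_loss = hinge_loss(torch.randn(2, 100), torch.randn(2, 100),
                        torch.randn(2, 100), rel_meta_prime)
query_loss.backward()  # in the full model, gradients flow into the embeddings and the relation-meta learner
```

Because `create_graph=True` keeps the adaptation step differentiable, minimizing the query loss (Eq. 26/28) also trains the embedding layer and the relation-meta learner, matching the last step of Algorithm 1.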
Experiments With MetaR, we want to answer the following questions: 1) can MetaR accomplish the few-shot link prediction task and even perform better than previous models? 2) how much does relation-specific meta information contribute to few-shot link prediction? 3) are there any requirements for MetaR to work on few-shot link prediction? To answer these, we conduct experiments on two few-shot link prediction datasets and analyze the experimental results in depth. Datasets and Evaluation Metrics We use two datasets, NELL-One and Wiki-One, which are constructed by BIBREF11. NELL-One and Wiki-One are derived from NELL BIBREF2 and Wikidata BIBREF0, respectively. Furthermore, because these two benchmarks were first tested on GMatching, which considers both learned embeddings and one-hop graph structures, a background graph is constructed with relations outside the training/validation/test sets for obtaining pre-trained entity embeddings and providing the local graph for GMatching. Unlike GMatching, which uses the background graph to enhance the representations of entities, our MetaR can be trained without a background graph. For NELL-One and Wiki-One, which originally have a background graph, we can make use of it by fitting it into training tasks or by using it to train embeddings that initialize entity representations. Overall, we have three kinds of dataset settings, shown in Table 3. For the BG:In-Train setting, in order to include the background graph in training tasks, we sample tasks from triples in the background graph and the original training set, rather than sampling from only the original training set. Note that these three settings do not violate the task formulation of few-shot link prediction in KGs. The statistics of NELL-One and Wiki-One are shown in Table 2. We use two traditional metrics to evaluate the different methods on these datasets, MRR and Hits@N. MRR is the mean reciprocal rank and Hits@N is the proportion of correct entities ranked in the top N in link prediction. Implementation During training, mini-batch gradient descent is applied with the batch size set to 64 and 128 for NELL-One and Wiki-One, respectively. We use Adam BIBREF27 with an initial learning rate of 0.001 to update parameters. We set $\gamma = 1$ and $\beta = 1$. The number of positive and negative triples in the query set is 3 and 10 in NELL-One and Wiki-One. The trained model is applied to validation tasks every 1000 epochs, and the current model parameters and corresponding performance are recorded; after stopping, the model with the best performance on Hits@10 is treated as the final model. For the number of training epochs, we use early stopping with a patience of 30, which means that we stop training when the performance on Hits@10 drops 30 times in a row. Following GMatching, the embedding dimension is 100 for NELL-One and 50 for Wiki-One. The sizes of the two hidden layers in the relation-meta learner are 500, 200 and 250, 100 for NELL-One and Wiki-One.
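For reference, here is a minimal sketch of how MRR and Hits@N can be computed from the rank of the correct tail entity among all candidates; this is a generic evaluation routine, not code from the paper.

```python
def mrr_and_hits(ranks, ns=(1, 5, 10)):
    """ranks: 1-based rank of the correct entity for each test query."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = {n: sum(r <= n for r in ranks) / len(ranks) for n in ns}
    return mrr, hits

# Example: three queries whose correct tails were ranked 1st, 4th, and 12th.
mrr, hits = mrr_and_hits([1, 4, 12])
print(mrr)        # (1/1 + 1/4 + 1/12) / 3 ≈ 0.444
print(hits[10])   # 2/3 of the queries had the correct entity in the top 10
```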
Results The results of the two few-shot link prediction tasks, 1-shot and 5-shot, on NELL-One and Wiki-One are shown in Table 4. The baseline in our experiments is GMatching BIBREF11, which made the first attempt at the few-shot link prediction task and is the only method we could find as a baseline. In this table, the results of GMatching with different KG embedding initializations are copied from the original paper. Our MetaR is tested on the different dataset settings introduced in Table 3. In Table 4, our model performs better on all evaluation metrics on both datasets. Specifically, for 1-shot link prediction, MetaR improves MRR, Hits@10, Hits@5 and Hits@1 by 33%, 28.1%, 29.2% and 27.8% on NELL-One, and by 41.4%, 18.8%, 37.9% and 62.2% on Wiki-One, with average improvements of 29.53% and 40.08%, respectively. For 5-shot, MetaR improves MRR, Hits@10, Hits@5 and Hits@1 by 29.9%, 40.5%, 32.6% and 17.5% on NELL-One, with an average improvement of 30.13%. Thus, regarding the first question we want to explore, the results of MetaR are no worse than those of GMatching, indicating that MetaR is capable of accomplishing few-shot link prediction. Moreover, the impressive improvement compared with GMatching demonstrates that the key idea of MetaR, transferring relation-specific meta information from the support set to the query set, works well on the few-shot link prediction task. Furthermore, compared with GMatching, our MetaR is independent of background knowledge graphs. We test MetaR on 1-shot link prediction on partial versions of NELL-One and Wiki-One that discard the background graph, and obtain 0.279 and 0.348 on Hits@10, respectively. These results are still comparable with GMatching on the full datasets with background. Ablation Study We have shown in the previous section that relation-specific meta information, the key point of MetaR, contributes to few-shot link prediction. As there are two kinds of relation-specific meta information in this paper, relation meta and gradient meta, we want to figure out how each of them contributes to the performance. Thus, we conduct an ablation study with three settings. The first is our complete MetaR method, denoted as standard. The second removes the gradient meta by transferring the un-updated relation meta directly from the support set to the query set, denoted as -g. The third further removes the relation meta, which reduces the model to a simple TransE embedding model, denoted as -g -r. The result under the third setting is copied from BIBREF11. It uses triples from the background graph, training tasks, and one-shot training triples from the validation/test sets, so it is neither BG:Pre-Train nor BG:In-Train. We conduct the ablation study on NELL-One with the Hits@10 metric, and the results are shown in Table 5. Table 5 shows that removing the gradient meta decreases performance by 29.3% and 15% on the two dataset settings, and further removing the relation meta decreases performance by 55% and 72% compared to the standard results. Thus both relation meta and gradient meta contribute significantly, and relation meta contributes more than gradient meta. Without gradient meta and relation meta, no relation-specific meta information is transferred in the model and it almost does not work. This also illustrates that relation-specific meta information is important and effective for the few-shot link prediction task. Facts That Affect MetaR's Performance We have shown that both relation meta and gradient meta contribute to few-shot link prediction. But are there any requirements for MetaR to ensure its performance on few-shot link prediction? We analyze this from two angles based on the results: one is the sparsity of entities and the other is the number of tasks in the training set. The sparsity of entities We notice that the best results on NELL-One and Wiki-One appear in different dataset settings. With NELL-One, MetaR performs better in the BG:In-Train setting, while with Wiki-One, it performs better with BG:Pre-Train.
The performance difference between the two dataset settings is more significant on Wiki-One. Most datasets for few-shot tasks are sparse, and the same is true for NELL-One and Wiki-One, but the entity sparsity in these two datasets is still significantly different. This is especially reflected in the proportion of entities that appear in only one triple in the training set, $82.8$% and $37.1$% in Wiki-One and NELL-One, respectively. Entities with only one triple during training prevent MetaR from learning good representations for them, because entity embeddings in MetaR rely heavily on the triples related to them. Based on only one triple, the learned entity embeddings will include a lot of bias. Knowledge graph embedding methods can learn better embeddings than MetaR for those one-shot entities, because their entity embeddings can be corrected by the embeddings of the relations that connect to them, while in MetaR they cannot. This is why the best performance on Wiki-One occurs in the BG:Pre-Train setting: pre-trained entity embeddings help MetaR overcome the low quality of one-shot entities. The number of tasks From the comparison of MetaR's performance between the settings with and without a background graph on NELL-One, we find that the number of tasks affects MetaR's performance significantly. With BG:In-Train, there are 321 tasks during training and MetaR achieves 0.401 on Hits@10, while without background knowledge there are 51 tasks, 270 fewer, and MetaR achieves 0.279. This explains why MetaR achieves its best performance on BG:In-Train with NELL-One. Even though NELL-One has $37.1$% one-shot entities, adding background knowledge to the dataset increases the number of training tasks significantly, which compensates for the sparsity problem and contributes more to the task. Thus we conclude that both the sparsity of entities and the number of tasks affect the performance of MetaR. Generally, with more training tasks, MetaR performs better, and for extremely sparse datasets, pre-trained entity embeddings are preferred. Conclusion We propose a meta relational learning framework for few-shot link prediction in KGs, and we design our model to transfer relation-specific meta information from the support set to the query set: relation meta transfers common and important information, and gradient meta accelerates learning. Compared to GMatching, which is the only existing method for this task, our method MetaR achieves better performance and is also independent of background knowledge graphs. Based on the experimental results, we find that the performance of MetaR is affected by the number of training tasks and the sparsity of entities. In the future, we may consider obtaining more valuable information about sparse entities for few-shot link prediction in KGs. Acknowledgments We want to express our gratitude to the anonymous reviewers for their hard work and kind comments, which will further improve our work in the future. This work is funded by NSFC 91846204/61473260, national key research program YS2018YFB140004, and the Alibaba CangJingGe (Knowledge Engine) Research Plan.
A: NELL-One, Wiki-One
Q: Does their solution involve connecting images and text? Text: Introduction Illiteracy has been one of the most serious pervasive problems all over the world. According to the U. S. Department of Education, the National Center for Education Statistics, approximately 32 million adults in the United States are not able to read, which is about 14% of the entire adult population BIBREF0 . Additionally, 44% of the 2.4 million students in the U. S. federally funded adult education programs are English as a second language (ESL) students, and about 185,000 of them are at the lowest ESL level, beginning literacy BIBREF1 . While low-literate adults lack the ability to read and to understand text, particularly, the low-literate ESL adult learners also face the dual challenge of developing basic literacy skills which includes decoding, comprehending, and producing print, along with English proficiency, represent different nationalities and cultural backgrounds BIBREF2 . Hence, illiteracy is shown as a significant barrier that results in a person's struggling in every aspect of his or her daily life activity. While there have not been any solutions to completely solve the illiteracy problem, recent developments of data science and artificial intelligence have brought a great opportunity to study how to support low-literate people in their lives. In this work, we propose SimplerVoice: a system that is able to generate key messages, and visual description for illiteracy. SimplerVoice could present easier-to-understand representations of complex objects to low-literate adult users, which helps them gain more confidence in navigating their own daily lives. While the recent technology such as Google Goggles, Amazon's Flow, etc. proposed methods to parse the complex objects using image recognition, augmented reality techniques into the objects names, then to search for URLs of the objects information, the main challenges of SimplerVoice are to generate and retrieve simple, yet informative text, and visual description for illiterate people. This includes supporting adult basic education (ABE), and the English as a second language acquisition (SLA) training by performing natural language processing, and information retrieval techniques, such as: automatically generating sensible texts, word-sense-disambiguation and image-sense-disambiguation mechanism, and retrieving the optimal visual components. We propose the overall framework, and demonstrate the system in a case study of grocery shopping where SimplerVoice generates key text, and visual manual of how to use grocery products. The system prototype are also provided, and the empirical evaluation shows that SimplerVoice is able to provide users with simple text, and visual components which adequately convey the product's usage. The organization of the paper is as follows. First, we have a quick review of previous works of text-to-image synthesis field in Section SECREF2 . In Section SECREF3 , we show our system design, including 4 parts as Input Retrieval, Object2Text, Text2Visual, and Output Display, along with the challenges of each components, and the proposed solution. We report the empirical evaluation of the proposed methods using real-world datasets for a case study in Section SECREF4 . Finally, Section SECREF5 concludes this paper, and states future work directions. 
Related Work In the field of ABE and SLA, researchers have conducted a number of studies to assist low-literate learners in their efforts to acquire literacy and language skills by reading interventions, and providing specific instructions through local education agencies, community colleges and educational organizations BIBREF3 , BIBREF1 . In augmentative and alternative communication (AAC) study, text-to-picture systems were proposed in BIBREF4 , BIBREF5 . BIBREF4 used a lookup table to transliterate each word in a sentence into an icon which resulted in a sequence of icons. Because the resulting icons sequence might be difficult to comprehend, the authors in BIBREF5 introduced a system using a concatenative or ”collage” approach to select and display the pictures corresponding to the text. To generate images from text, the authors in BIBREF6 proposed an approach to automatically generate a large number of images for specified object classes that downloads all contents from a Web search query, then, removes irrelevant components, and re-ranks the remainder. However, the study did not work on action-object interaction classes, which might be needed to describe an object. Another direction is to link the text to a database of pictographs. BIBREF7 introduced a text-to-pictograph translation system that is used in an on-line platform for augmentative and alternative communication. The text-to-pictograph was built, and evaluated on email text messages. Furthermore, an extended study of this work was provided in BIBREF8 which improved the Dutch text-to-pictograph through word sense disambiguation. Recently, there have been studies that proposed to use deep generative adversarial networks to perform text-to-image synthesis BIBREF9 , BIBREF10 . However, these techniques might still have the limitation of scalability, or image resolution restriction. System Design In this section, we describe the system design, and workflow of SimplerVoice (Figure FIGREF1 ). SimplerVoice has 4 main components: input retrieval, object2text, text2visual, and output display. Figure FIGREF1 provides the overall structure of SimplerVoice system. Overview Given an object as the target, SimplerVoice, first, retrieves the target input in either of 3 representations: (1) object's title as text, (2) object's shape as image, or (3) other forms, e.g. object's information from scanned barcode, speech from users, etc. Based on the captured input, the system, then, generates a query string/sequence of text which is the key message describing the object's usage. Due to low-literates' lack of reading capability, the generated text requires not only informativeness, but also simplicity, and clarity. Therefore, we propose to use the "S-V-O" query's canonical representation as below: [Subject] + [Verb-ing] + (with) + [Object Type/Category] The intuition of this query representation is that the generated key message should be able to describe the action of a person using, or interacting with the target object. Moreover, the simple "S-V-O" model has been proposed to use in other studies BIBREF11 , BIBREF12 , BIBREF13 since it is able to provide adequate semantics meaning. The detail of generating the S-V-O query is provided in Section SECREF3 . Once the query is constructed, SimplerVoice converts the query text into visual forms. There is a variety of visual formats to provide users: photos, icons, pictographs, etc. 
These visual components can be obtained by different means, such as using a search engine or mapping the query/ontology to a database of images. However, the key point is to choose the optimal display for illiteracy, which is described in Section SECREF12. The results of SimplerVoice are provided further in Section SECREF4. Object2Text This section discusses the process of generating the key message from the object's input. Based on the retrieved input, we can easily obtain the object's title by searching a database or using a search engine; hence, we assume that the input of object2text is the object's title. The workflow of object2text is provided in Figure FIGREF4. The S-V-O query is constructed by the 3 steps below. In order to find the object type, SimplerVoice first builds an ontology-based knowledge tree. Then, the system maps the object to one of the tree's leaf nodes based on the object's title. For instance, given the object's title “Thomas' Plain Mini Bagels", SimplerVoice automatically determines that the object category is “bagel". Note that both the knowledge tree and the mapping between object and object category are obtained through text-based searching / web crawling, or through semantic web content. Figure FIGREF6 shows an example of the sub-tree for the object category "bagel". While the mapped leaf node is the O in our S-V-O model, the parent nodes describe the more general object categories, and the neighbors indicate other object types which are similar to the input object. The input object's type, its direct parent category, and its neighbors are then passed to the next step: generating verbs (V). We propose to use 2 methods to generate suitable verbs for the target object: heuristics and an n-grams model. In detail, SimplerVoice has a set of rule-based heuristics for the objects. For instance, if the object belongs to a "food | drink" category, the verb is generated as "eat | drink". Another example is the retrieved "play" verb if the input object falls into the "toy" category. However, due to the complexity of object types, the heuristics-based approach might not cover all contexts of an object. To solve this, an n-grams model is applied to generate a set of verbs for the target object. An n-gram is a contiguous sequence of n items from a given speech or text string. N-grams models have been extensively used for various tasks in the text mining and natural language processing fields BIBREF14, BIBREF15. Here, we use the Google Books n-grams database BIBREF16, BIBREF17 to generate a set of verbs corresponding to the input object's usage. Given a noun, the n-grams model can provide the set of words that are most frequently followed by the noun in the Google Books database. For example, "eaten", "toasted", "are", etc. are words which are usually used with "bagel". To get the right verb form, after retrieving the words from the n-grams model, SimplerVoice performs word stemming BIBREF18 on the n-grams' output. Word-sense disambiguation: In the real world, a word can have multiple meanings. This fact may affect the process of retrieving the right verb set. Indeed, word-sense disambiguation has been a challenging problem in the field of natural language processing. An example of the ambiguity is the object "cookie".
The word "cookie" has 2 meanings: one is "a small, flat, sweet food made from flour and sugar" (the biscuit context), the other is "a piece of information stored on your computer about Internet documents that you have looked at" (the computing context). Each meaning results in a different verb list, such as "eat", "bake" for the biscuit cookie, and "use", "store" for the computing cookie. In order to resolve the ambiguity, we propose to take advantage of the built ontology tree. In detail, SimplerVoice uses the joint verb set of 3 types of nouns: the input object, the parents, and the neighbors, as these 3 noun types are always in the same ontology context. Equation EQREF8 shows the word-sense disambiguation mechanism, where INLINEFORM0 (Object) indicates the verb set of an object generated by heuristics and the n-grams model: DISPLAYFORM0 Low-informative verbs: In order to ensure the quality of the generated verbs, SimplerVoice maintains a list of restricted verbs that need to be filtered out. There are many general, low-informative verbs generated by the n-grams model, such as "be", "have", "use", etc., as these verbs are heavily used in daily sentences and conversation. The restricted verb list helps ensure the right level of specificity. Hence, we modify (EQREF8) into (EQREF9). The process of word-sense disambiguation and low-informative verb filtering is shown in Figure FIGREF10: DISPLAYFORM0 The approach to generating the subject (S) is similar to that for the verb (V). SimplerVoice also uses heuristics and the n-grams model to find a suitable actor S. Regarding the heuristics method, we apply a rule-based method on the object's title and category to generate S, since some objects are only used by a specific group of subjects. For example, if the object's title contains the word "woman, women", the S will be "Woman"; or if the object belongs to the "baby" product category, the S will be "Baby". Additionally, the n-grams model also generates pronouns that frequently appear with the noun O. The pronoun output can help identify the right subject S, e.g. "she" - "woman, girl", "he" - "man, boy", etc. If both "she" and "he" exist in the generated pronoun set, the system picks either of them. Text2Visual Once the S-V-O is generated, Text2Visual provides users with visual components that convey the S-V-O text meaning. One simple way to perform Text2Visual is to utilize existing conventional Web search engines. SimplerVoice retrieves the top image results using the S-V-O as the search query. However, there can be image-sense ambiguity in displaying the results from a search engine. For instance, if the object is "Swiss Cheese", users might not distinguish between "Swiss Cheese" and general "Cheese" images. To solve the image-sense ambiguity issue, the authors in BIBREF5 suggest displaying multiple images to guide human perception toward the right target object's meaning. Additionally, since SimplerVoice is designed for illiteracy, the system needs to display the optimal visual component for low-literate people. In BIBREF19, the authors study the effectiveness of different types of audio-visual representations for illiterate computer users. While there is no difference between dynamic and static imagery (mixed results in different use cases), hand-drawn images or cartoons are shown to be easier for low-literate users to understand than photorealistic representations. Therefore, SimplerVoice also provides users with a pictograph display along with images.
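Before turning to the pictograph mapping itself, here is a rough sketch of the Object2Text verb selection described above. It is our own illustration, not the SimplerVoice implementation: the ontology, the n-gram frequency table, and the heuristic rules are all hypothetical stand-ins (real counts would come from the Google Books n-grams data).

```python
# Hypothetical stand-ins for the ontology tree and Google Books n-gram counts.
ONTOLOGY = {"bagel": {"parent": "bread", "neighbors": ["muffin", "roll"]}}
NGRAM_VERBS = {  # verbs most often seen next to each noun, with toy counts
    "bagel": {"eat": 90, "toast": 70, "be": 60},
    "bread": {"eat": 95, "bake": 80, "have": 50},
    "muffin": {"eat": 85, "bake": 75},
    "roll": {"eat": 60, "be": 40},
}
HEURISTIC_VERBS = {"food": {"eat"}, "toy": {"play"}}
RESTRICTED = {"be", "have", "use"}  # low-informative verbs to filter out

def candidate_verbs(noun: str) -> set:
    return set(NGRAM_VERBS.get(noun, {}))

def select_verbs(obj: str, category: str) -> set:
    node = ONTOLOGY[obj]
    # Word-sense disambiguation: keep verbs shared by the object, its parent,
    # and its neighbors, since they all live in the same ontology context.
    joint = candidate_verbs(obj) & candidate_verbs(node["parent"])
    for nb in node["neighbors"]:
        joint &= candidate_verbs(nb)
    return (joint | HEURISTIC_VERBS.get(category, set())) - RESTRICTED

print(select_verbs("bagel", "food"))   # {'eat'} -> "Woman eating bagel"
```

The exact combination of sets in EQREF8/EQREF9 is not spelled out in the text, so the joint-set logic above should be read as one plausible interpretation rather than the paper's formula.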
We use the Sclera database of pictographs BIBREF20 . Each S-V-O word is mapped with a corresponding Sclera pictograph file. The detail of how to perform the mapping is discussed in BIBREF7 . Intuitively, the process is described as: first, the system manually links a subset of words with pictographs' filenames; then, if the manual link is missing, the word is linked to the close synset using WordNet (Figure FIGREF15 ). Evaluation In this section, we demonstrate the effectiveness of SimplerVoice system in a case study of grocery shopping. The section organization is as follows: first, we describe the real dataset, and setup that SimplerVoice uses; second, we provide the prototype system which is a built application for end-users; finally, we show the results of SimplerVoice along with users feedback. Case Study In the case study of grocery products shopping, we use a database of INLINEFORM0 products' description crawled from multiple sources. Each product description contains 4 fields: UPC code, product's title, ontology path of product category, and URL link of the product. Since it is recommended to utilize various devices of technology, such as computers or smart phones in adult ESL literacy education BIBREF21 , we build a mobile application of SimplerVoice for illiterate users. The goal of SimplerVoice is to support users with key message, & simple visual components of how to use the products given the scanned barcode (UPC code), or products' name retrieved from parsing products images taken by end-users' phone cameras. Section SECREF17 shows our SimplerVoice application description. Prototype System There are 2 means to retrieve the object's input through SimplerVoice application: text filling, or taking photos of barcode / products' labels (Figure FIGREF18 ). SimplerVoice automatically reads the target grocery product's name, and proceeds to the next stage. Based on the built-in ontology tree, SimplerVoice, then, finds the object's category, the parent, and the neighboring nodes. The next step is to generate the S-V-O message (e.g. Table TABREF19 ), and visual description (e.g. Figure FIGREF20 ) of product's usage. Figure FIGREF22 shows an example of the result of SimplerVoice system for product "H-E-B Bakery Cookies by the Pound" from a grocery store: (1) the product description, (2) key messages, and (3) visual components. The product description includes the product's categories searched on the grocery store's website BIBREF22 , the parent node's, and the neighbors - similar products' categories. The S-V-O query, or key message for "H-E-B Bakery Cookies by the Pound" is generated as "Woman eating cookies". Additionally, we support users with language translation into Spanish for convenience, and provides different levels of reading. Each reading level has a different level of difficulty: The higher the reading level is, the more advanced the texts are. The reason of breaking the texts into levels is to encourage low-literate users learning how to read. Next to the key messages are the images, and pictographs. Experiment To evaluate our system, we compared SimplerVoice to the original product description / package (baseline 1) and the top images result from search engines of the same product (baseline 2). Given a set of products, we generated the key message & visual description of each product using 3 approaches below. An example of the 3 approaches is provided in Fig. FIGREF23 . 
Baseline 1: We captured and displayed the product package photos and the product title text as the product description. Baseline 2: The product description was retrieved by a search engine using the product titles, and then presented to the users as the top image results from Google and Bing. We also provided the product title along with the images. SimplerVoice: We showed the generated key messages (Tab. TABREF19) and the visual description, consisting of 2 components, photorealistic images and pictographs (Fig. FIGREF20), from the SimplerVoice system. Intuitively, baseline 1 shows how much information a user would receive from the products' packages without prior knowledge of the products, while baseline 2 might provide additional information by showing top images from search engines. With baseline 2, we attempt to measure whether merely adding "relevant" or "similar" product images would be sufficient to improve the end-users' ability to comprehend the product's intended use. Moreover, with SimplerVoice, we test whether our system can provide users with the proper visual components to help them understand the products' usage based on the proposed techniques, and we measure the usefulness of SimplerVoice's generated description. We evaluated the effectiveness & interpretability of the 3 approaches above by conducting a controlled user study with 15 subjects who were native Vietnamese speakers and did not speak or comprehend English. A dataset of 20 random U.S. products, including product titles, UPC codes, and product package images, was chosen to be displayed in the user study. Note that the 15 participating subjects had not used the 20 products before and were also not familiar with the packaged products, including the chosen 20 products; hence, they were "illiterate" in terms of comprehending English and in terms of having used any of the products, although they might be literate in Vietnamese. Each participating user was shown the product description generated by each approach and was asked to identify what the products were and how to use them. The users' responses were recorded in Vietnamese and were assigned a score based on whether they "matched" the correct answer by 3 experts who were bilingual in English and Vietnamese. In this study, we used the "mean opinion score" (MOS) BIBREF23, BIBREF24 to measure effectiveness: how similar a response was to the correct product usage. The MOS range is 1-5 (1-Bad, 2-Poor, 3-Fair, 4-Good, 5-Excellent), with 1 meaning incorrect product usage interpretability, the lowest level of effectiveness, and 5 meaning correct product usage interpretability, the highest effectiveness level. The assigned scores corresponding to the responses were aggregated over all participating subjects and over the 3 experts. The resulting scores are reported in the next section, Results. Table TABREF21 shows the MOS scores indicating the performance of the 3 approaches. The mean MOS score of baseline 1 is the lowest: 2.57 (the standard deviation (stdev) is 1.17); the baseline 2 mean score is slightly higher than baseline 1's: 2.86 (stdev 1.27); while the SimplerVoice score is the highest: 4.82 (stdev 0.35), which makes it the most effective approach for conveying products' usage to users. Additionally, a paired-samples t-test was conducted to compare the MOS scores of users' responses across all products for baseline 1 and the SimplerVoice system.
There was a significant difference in the scores for baseline 1 (Mean = 2.57, Stdev = 1.17) and SimplerVoice (Mean = 4.82, Stdev = 0.35); t = -8.18224, p = 1.19747e-07. These results show that there is a statistically significant difference in the MOS means between baseline 1 and SimplerVoice and that SimplerVoice performs more effectively than baseline 1 over different types of products. Baseline 1 scores range from 1 to 4.25 over all products, as some products are easy to guess from the product package images, such as bagels, pretzels, and soda, while some product packages can cause confusion, such as shoe dye, wax cubes, and vinegar. For example, all participating users were able to recognize the "Always Bagels Cinnamon Raisin Bagels" product as "a type of bread" and its usage as "eating" using baseline 1, while the "ScentSationals Wild Raspberry Fragrance Wax Cubes" product was mostly incorrectly recognized as a type of "candy" for "eating". Baseline 2 scores range from 1 to 4.7 over all products. Baseline 2 has a higher score than baseline 1 since users were provided with more information from the top product images returned by the search engine. For instance, given the "Fiesta Cinnamon Sticks" product, most users' responses based on baseline 1 were "a type of pastry - cannoli" for "eating". Since baseline 2 provided more photos of cinnamon sticks without the packaging, the users were able to recognize the product as cinnamon. However, the score of baseline 2 is only slightly higher than baseline 1 because search engines mostly return images similar to the product package and hence provide only a little additional information to the participants. SimplerVoice scores range from 3.75 to 5, which is higher than both baseline 1 and baseline 2. The SimplerVoice scores have a low standard deviation, indicating consistent effectiveness across different types of products. While performing the user study, we also noticed that cultural differences are an important factor in the result. For example, the product with the lowest score is the "Heinz Distilled White Vinegar", since some participating users had never used vinegar before. These participants are from rural Northern Vietnam, where people might not have known the vinegar product.
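For readers who want to reproduce this kind of analysis, a short sketch of the MOS aggregation and the paired-samples t-test is given below; the score arrays are made-up stand-ins for the per-product mean scores, not the study's actual data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-product mean MOS scores (one value per product, paired).
baseline1 = np.array([2.1, 3.0, 1.8, 4.0, 2.5])
simpler_voice = np.array([4.9, 4.7, 4.6, 5.0, 4.8])

print(baseline1.mean(), baseline1.std(ddof=1))        # mean and sample stdev
print(simpler_voice.mean(), simpler_voice.std(ddof=1))

# Paired-samples t-test: the same products are rated under both conditions.
t_stat, p_value = stats.ttest_rel(baseline1, simpler_voice)
print(t_stat, p_value)   # a large negative t with a small p mirrors the reported result
```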
Conclusion In this work, we introduce SimplerVoice: a key message & visual description generator system for illiteracy. To the best of our knowledge, SimplerVoice is the first system framework to combine multiple AI techniques, particularly from the fields of natural language processing and information retrieval, to support low-literate users, including low-literate ESL learners, in building confidence in their own lives, and to encourage them to improve their reading skills. Although awareness by itself does not solve the problem of illiteracy, the system can be used in different contexts for educational goals. SimplerVoice might be a valuable tool for both educational systems and daily usage. The SimplerVoice system was evaluated and shown to achieve a higher performance score compared to other approaches. Moreover, we also introduced the SimplerVoice mobile application and had the application used by participants in the Literacy Coalition of Central Texas's SPARK program BIBREF25. We received positive end-user feedback for the prototype, and we plan to add more features to SimplerVoice. One direction of future work is to improve the input retrieval of the system, so that SimplerVoice can automatically recognize an object through its shape. Another direction is to extend the work to other real-world use cases and demonstrate its effectiveness on those case studies. Acknowledgments This research was conducted under the auspices of the IBM Science for Social Good initiative. The authors would like to thank Christian O. Harris and Heng Luo for discussions.
A: Yes
Q: Which model do they use to generate key messages?
Related Work In the field of ABE and SLA, researchers have conducted a number of studies to assist low-literate learners in their efforts to acquire literacy and language skills by reading interventions, and providing specific instructions through local education agencies, community colleges and educational organizations BIBREF3 , BIBREF1 . In augmentative and alternative communication (AAC) study, text-to-picture systems were proposed in BIBREF4 , BIBREF5 . BIBREF4 used a lookup table to transliterate each word in a sentence into an icon which resulted in a sequence of icons. Because the resulting icons sequence might be difficult to comprehend, the authors in BIBREF5 introduced a system using a concatenative or ”collage” approach to select and display the pictures corresponding to the text. To generate images from text, the authors in BIBREF6 proposed an approach to automatically generate a large number of images for specified object classes that downloads all contents from a Web search query, then, removes irrelevant components, and re-ranks the remainder. However, the study did not work on action-object interaction classes, which might be needed to describe an object. Another direction is to link the text to a database of pictographs. BIBREF7 introduced a text-to-pictograph translation system that is used in an on-line platform for augmentative and alternative communication. The text-to-pictograph was built, and evaluated on email text messages. Furthermore, an extended study of this work was provided in BIBREF8 which improved the Dutch text-to-pictograph through word sense disambiguation. Recently, there have been studies that proposed to use deep generative adversarial networks to perform text-to-image synthesis BIBREF9 , BIBREF10 . However, these techniques might still have the limitation of scalability, or image resolution restriction. System Design In this section, we describe the system design, and workflow of SimplerVoice (Figure FIGREF1 ). SimplerVoice has 4 main components: input retrieval, object2text, text2visual, and output display. Figure FIGREF1 provides the overall structure of SimplerVoice system. Overview Given an object as the target, SimplerVoice, first, retrieves the target input in either of 3 representations: (1) object's title as text, (2) object's shape as image, or (3) other forms, e.g. object's information from scanned barcode, speech from users, etc. Based on the captured input, the system, then, generates a query string/sequence of text which is the key message describing the object's usage. Due to low-literates' lack of reading capability, the generated text requires not only informativeness, but also simplicity, and clarity. Therefore, we propose to use the "S-V-O" query's canonical representation as below: [Subject] + [Verb-ing] + (with) + [Object Type/Category] The intuition of this query representation is that the generated key message should be able to describe the action of a person using, or interacting with the target object. Moreover, the simple "S-V-O" model has been proposed to use in other studies BIBREF11 , BIBREF12 , BIBREF13 since it is able to provide adequate semantics meaning. The detail of generating the S-V-O query is provided in Section SECREF3 . Once the query is constructed, SimplerVoice converts the query text into visual forms. There is a variety of visual formats to provide users: photos, icons, pictographs, etc. 
These visual components can be obtained by different means, such as: using search engine, mapping query/ontology to a database of images. However, the key point is to choose the optimal display for illiteracy which is described in Section SECREF12 . The result of SimplerVoice is provided further in Section SECREF4 . Object2Text This section discusses the process of generating key message from the object's input. Based on the retrieved input, we can easily obtain the object's title through searching in database, or using search engine; hence, we assume that the input of object2text is the object's title. The workflow of object2text is provided in Figure FIGREF4 . S-V-O query is constructed by the 3 steps below. In order to find the object type, SimplerVoice, first, builds an ontology-based knowledge tree. Then, the system maps the object with a tree's leaf node based on the object's title. For instance, given the object's title as “Thomas' Plain Mini Bagels", SimplerVoice automatically defines that the object category is “bagel". Note that both the knowledge tree, and the mapping between object and object category are obtained based on text-based searching / crawling web, or through semantic webs' content. Figure FIGREF6 shows an example of the sub-tree for object category "bagel". While the mapped leaf node is the O in our S-V-O model, the parents nodes describe the more general object categories, and the neighbors indicate other objects' types which are similar to the input object. All the input object's type, the direct parents category, and the neighbors' are, then, put in the next step: generating verbs (V). We propose to use 2 methods to generate the suitable verbs for the target object: heuristics-based, and n-grams model. In detail, SimplerVoice has a set of rule-based heuristics for the objects. For instance, if the object belongs to a "food | drink" category, the verb is generated as "eat | drink". Another example is the retrieved "play" verb if input object falls into "toy" category. However, due to the complexity of object's type, heuristics-based approach might not cover all the contexts of object. As to solve this, an n-grams model is applied to generate a set of verbs for the target object. An n-gram is a contiguous sequence of n items from a given speech, or text string. N-grams model has been extensively used for various tasks in text mining, and natural language processing field BIBREF14 , BIBREF15 . Here, we use the Google Books n-grams database BIBREF16 , BIBREF17 to generate a set of verbs corresponding to the input object's usage. Given a noun, n-grams model can provide a set of words that have the highest frequency of appearance followed by the noun in the database of Google Books. For an example, "eaten", "toasted", "are", etc. are the words which are usually used with "bagel". To get the right verb form, after retrieving the words from n-grams model, SimplerVoice performs word stemming BIBREF18 on the n-grams' output. Word-sense disambiguation: In the real-world case, a word could have multiple meanings. This fact may affects the process of retrieving the right verb set. Indeed, word-sense disambiguation has been a challenging problem in the field of nature language processing. An example of the ambiguity is the object "cookie". 
The word "cookie" has 2 meanings: one is "a small, flat, sweet food made from flour and sugar" (context of biscuit), another is "a piece of information stored on your computer about Internet documents that you have looked at" (context of computing). Each meaning results in different verb lists, such as: "eat", "bake" for biscuit cookie, and "use", "store" for computing cookie. In order to solve the ambiguity, we propose to take advantage of the built ontology tree. In detail, SimplerVoice uses the joint verb set of 3 types of nouns: the input object, the parents, and the neighbors as the 3 noun types are always in the same context of ontology. Equation EQREF8 shows the word-sense disambiguation mechanism with INLINEFORM0 (Object) indicates the verb set of an object generated by heuristics, and n-grams model: DISPLAYFORM0 Low-informative verbs: In order to ensure the quality of generated verbs, SimplerVoice maintains a list of restricted verbs that need to be filtered out. There are a lot of general, and low-informative verbs generated by n-grams model such as "be", "have", "use", etc. as these verbs are highly used in daily sentences/conversation. The restricted verb list could help to ensure the right specificity aspect. Hence, we modify ( EQREF8 ) into ( EQREF9 ). The process of word-sense disambiguation, and low-informative verb filtering is provided in Figure FIGREF10 : DISPLAYFORM0 The approach to generate the subject (S) is similar to the verb (V). SimplerVoice also uses heuristics, and n-grams model to find the suitable actor S. In regard to heuristics method, we apply a rule-based method via the object's title, and object's category to generate S since there are objects only used by a specific group of S. For an example, if the object's title contains the word "woman, women", the S will be "Woman"; of if the object belongs to the "baby" product category, the S will be "Baby". Additionally, n-grams model also generates pronouns that frequently appear with the noun O. The pronouns output could help identify the right subject S, e.g. "she" - "woman, girl", "he" - "man, boy", etc. If there exists both "she", and "he" in the generated pronoun set, the system picks either of them. Text2Visual Once the S-V-O is generated, Text2Visual provides users with visual components that convey the S-V-O text meanings. One simple solution to perform Text2Visual is to utilize existing conventional Web search engines. SimplerVoice retrieves top image results using S-V-O as the search query. However, there could be image sense ambiguity in displaying the result from search engine. For instance, if the object is "Swiss Cheese", user might not distinguish between "Swiss Cheese", and the general "Cheese" images. To solve the image sense ambiguity issue, the authors in BIBREF5 suggests to display multiple images to guide human perception onto the right target object's meaning. Additionally, since SimplerVoice is designed for illiteracy, the system needs to display the optimal visual component suitable for low-literate people. In BIBREF19 , the authors study the effectiveness of different types of audio-visual representations for illiterate computer users. While there is no difference between dynamic and static imagery (mixed result in different use cases), hand-drawn or cartoons are shown to be easier for low-literate users to understand than photorealistic representations. Therefore, SimplerVoice also provides users with pictographs display along with images. 
We use the Sclera database of pictographs BIBREF20 . Each S-V-O word is mapped with a corresponding Sclera pictograph file. The detail of how to perform the mapping is discussed in BIBREF7 . Intuitively, the process is described as: first, the system manually links a subset of words with pictographs' filenames; then, if the manual link is missing, the word is linked to the close synset using WordNet (Figure FIGREF15 ). Evaluation In this section, we demonstrate the effectiveness of SimplerVoice system in a case study of grocery shopping. The section organization is as follows: first, we describe the real dataset, and setup that SimplerVoice uses; second, we provide the prototype system which is a built application for end-users; finally, we show the results of SimplerVoice along with users feedback. Case Study In the case study of grocery products shopping, we use a database of INLINEFORM0 products' description crawled from multiple sources. Each product description contains 4 fields: UPC code, product's title, ontology path of product category, and URL link of the product. Since it is recommended to utilize various devices of technology, such as computers or smart phones in adult ESL literacy education BIBREF21 , we build a mobile application of SimplerVoice for illiterate users. The goal of SimplerVoice is to support users with key message, & simple visual components of how to use the products given the scanned barcode (UPC code), or products' name retrieved from parsing products images taken by end-users' phone cameras. Section SECREF17 shows our SimplerVoice application description. Prototype System There are 2 means to retrieve the object's input through SimplerVoice application: text filling, or taking photos of barcode / products' labels (Figure FIGREF18 ). SimplerVoice automatically reads the target grocery product's name, and proceeds to the next stage. Based on the built-in ontology tree, SimplerVoice, then, finds the object's category, the parent, and the neighboring nodes. The next step is to generate the S-V-O message (e.g. Table TABREF19 ), and visual description (e.g. Figure FIGREF20 ) of product's usage. Figure FIGREF22 shows an example of the result of SimplerVoice system for product "H-E-B Bakery Cookies by the Pound" from a grocery store: (1) the product description, (2) key messages, and (3) visual components. The product description includes the product's categories searched on the grocery store's website BIBREF22 , the parent node's, and the neighbors - similar products' categories. The S-V-O query, or key message for "H-E-B Bakery Cookies by the Pound" is generated as "Woman eating cookies". Additionally, we support users with language translation into Spanish for convenience, and provides different levels of reading. Each reading level has a different level of difficulty: The higher the reading level is, the more advanced the texts are. The reason of breaking the texts into levels is to encourage low-literate users learning how to read. Next to the key messages are the images, and pictographs. Experiment To evaluate our system, we compared SimplerVoice to the original product description / package (baseline 1) and the top images result from search engines of the same product (baseline 2). Given a set of products, we generated the key message & visual description of each product using 3 approaches below. An example of the 3 approaches is provided in Fig. FIGREF23 . 
Baseline 1: We captured and displayed the product package photos and the product title text as the product description. Baseline 2: The product description was retrieved by a search engine using the product title and then presented to the users as the top image results from Google and Bing. We also provided the product title along with the images. SimplerVoice: We showed the key messages generated by the SimplerVoice system (Tab. TABREF19 ) and the visual description, consisting of two components: photorealistic images and pictographs (Fig. FIGREF20 ). Intuitively, baseline 1 shows how much information a user would receive from the product's package without prior knowledge of the product, while baseline 2 might provide additional information by showing top images from search engines. With baseline 2, we attempt to measure whether merely adding "relevant" or "similar" product images is sufficient to improve end-users' ability to comprehend a product's intended use. Moreover, with SimplerVoice, we test whether our system can provide users with the proper visual components to help them understand the product's usage based on the proposed techniques, and we measure the usefulness of SimplerVoice's generated description. We evaluated the effectiveness and interpretability of the three approaches above by conducting a controlled user study with 15 subjects who were native Vietnamese speakers and did not speak or comprehend English. A dataset of 20 randomly chosen U.S. products, including product titles, UPC codes, and product package images, was displayed in the user study. Note that the 15 participating subjects had not used the 20 products before and were also not familiar with packaged products of this kind; hence, they were "illiterate" both in terms of comprehending English and in terms of having used any of the products, although they might be literate in Vietnamese. Each participating user was shown the product description generated by each approach and was asked to identify what the products were and how to use them. The users' responses were recorded in Vietnamese and were assigned a score according to whether they "matched" the correct answer, as judged by three experts who were bilingual in English and Vietnamese. In this study, we used the "mean opinion score" (MOS) BIBREF23 , BIBREF24 to measure effectiveness: how closely a response matched the correct product usage. The MOS range is 1-5 (1-Bad, 2-Poor, 3-Fair, 4-Good, 5-Excellent), where 1 means incorrect product usage interpretability (the lowest level of effectiveness) and 5 means correct product usage interpretability (the highest level of effectiveness). The assigned scores were aggregated over all participating subjects and over the three experts. The resulting scores are reported in the next section. Result Table TABREF21 shows the MOS scores, indicating the performance of the three approaches. The mean MOS score of baseline 1 is the lowest: 2.57 (standard deviation (stdev) 1.17); the baseline 2 mean score is slightly higher: 2.86 (stdev 1.27); while the SimplerVoice score is the highest: 4.82 (stdev 0.35), making it the most effective approach for conveying products' usage to users. Additionally, a paired-samples t-test was conducted to compare the MOS scores of users' responses among all products using baseline 1 and the SimplerVoice system.
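As an illustration of how this comparison is computed, the sketch below shows the MOS aggregation and the paired-samples t-test; the per-product scores here are made-up placeholders, not the study's actual data, which is reported next.

```python
# Sketch of the MOS aggregation and paired-samples t-test (made-up scores).
import numpy as np
from scipy.stats import ttest_rel

# One MOS value per product, already averaged over subjects and experts.
mos_baseline1    = np.array([2.0, 3.5, 1.5, 4.0, 2.5, 2.0, 3.0, 1.0, 3.5, 2.5])
mos_simplervoice = np.array([4.5, 5.0, 4.5, 5.0, 4.8, 4.6, 5.0, 4.2, 4.9, 5.0])

print("baseline 1:   mean=%.2f stdev=%.2f"
      % (mos_baseline1.mean(), mos_baseline1.std(ddof=1)))
print("SimplerVoice: mean=%.2f stdev=%.2f"
      % (mos_simplervoice.mean(), mos_simplervoice.std(ddof=1)))

t, p = ttest_rel(mos_baseline1, mos_simplervoice)   # paired over the same products
print("t = %.3f, p = %.3g" % (t, p))
```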
There was a significant difference in the scores for baseline 1 (Mean = 2.57, Stdev = 1.17) and SimplerVoice (Mean = 4.82, Stdev = 0.35); t = -8.18224, p = 1.19747e-07. These results show that there is a statistically significant difference in the MOS means between baseline 1 and SimplerVoice, and that SimplerVoice performs more effectively than baseline 1 across different types of products. Baseline 1 scores range from 1 to 4.25 over all products, as some products are easy to guess from their package images (e.g. bagels, pretzels, soda), while some product packages cause confusion (e.g. shoe dye, wax cubes, vinegar). For example, all participating users were able to recognize the "Always Bagels Cinnamon Raisin Bagels" product as "a type of bread" and its usage as "eating" using baseline 1, while the "ScentSationals Wild Raspberry Fragrance Wax Cubes" product was mostly incorrectly recognized as a type of "candy" for "eating". Baseline 2 scores range from 1 to 4.7 over all products. Baseline 2 has a higher score than baseline 1 since users were provided with more information through the top product images returned by the search engines. For instance, given the "Fiesta Cinnamon Sticks" product, most users' responses under baseline 1 were "a type of pastry - cannoli" for "eating". Since baseline 2 provided more photos of cinnamon sticks without the packaging, the users were able to recognize the product as cinnamon. Moreover, the score of baseline 2 is only slightly higher than that of baseline 1, because search engines mostly return images similar to the product package and hence provide little additional information to the participants. SimplerVoice scores range from 3.75 to 5, which is higher than both baseline 1 and baseline 2. The SimplerVoice scores also have a low standard deviation, indicating consistent effectiveness across different types of products. While performing the user study, we also noticed that cultural differences are an important factor in the results. For example, the product with the lowest score is "Heinz Distilled White Vinegar", since some participating users had never used vinegar before. These participants are from rural Northern Vietnam, where people may not be familiar with the vinegar product. Conclusion In this work, we introduce SimplerVoice: a key message and visual description generator system for illiteracy. To the best of our knowledge, SimplerVoice is the first system framework to combine multiple AI techniques, particularly from natural language processing and information retrieval, to support low-literate users, including low-literate ESL learners, in building confidence in their own lives and to encourage them to improve their reading skills. Although awareness by itself does not solve the problem of illiteracy, the system can be put to use in different contexts for educational goals. SimplerVoice may be a valuable tool for both educational settings and daily use. The SimplerVoice system was evaluated and shown to achieve a higher performance score compared to other approaches. Moreover, we also introduced the SimplerVoice mobile application and had it used by participants in the Literacy Coalition of Central Texas's SPARK program BIBREF25 . We received positive end-user feedback for the prototype and plan to add more features to SimplerVoice. One direction for future work is to improve the input retrieval of the system, so that SimplerVoice can automatically recognize an object from its shape.
Another direction is to extend the work to other real-world use cases and demonstrate its effectiveness on those case studies. Acknowledgments This research was conducted under the auspices of the IBM Science for Social Good initiative. The authors would like to thank Christian O. Harris and Heng Luo for discussions.
ontology-based knowledge tree, heuristics-based, n-grams model
ecb680d79e847beb7c1aa590d288a7313908d64a
ecb680d79e847beb7c1aa590d288a7313908d64a_0
Q: What experiments they perform to demonstrate that their approach leads more accurate region based representations? Text: Introduction Vector space embeddings are commonly used to represent entities in fields such as machine learning (ML) BIBREF0, natural language processing (NLP) BIBREF1, information retrieval (IR) BIBREF2 and cognitive science BIBREF3. An important point, however, is that such representations usually represent both individuals and categories as vectors BIBREF4, BIBREF5, BIBREF6. Note that in this paper, we use the term category to denote natural groupings of individuals, as it is used in cognitive science, with individuals referring to the objects from the considered domain of discourse. For example, the individuals carrot and cucumber belong to the vegetable category. We use the term entities as an umbrella term covering both individuals and categories. Given that a category corresponds to a set of individuals (i.e. its instances), modelling them as (possibly imprecise) regions in the embedding space seems more natural than using vectors. In fact, it has been shown that the vector representations of individuals that belong to the same category are indeed often clustered together in learned vector space embeddings BIBREF7, BIBREF8. The view of categories being regions is also common in cognitive science BIBREF3. However, learning region representations of categories is a challenging problem, because we typically only have a handful of examples of individuals that belong to a given category. One common assumption is that natural categories can be modelled using convex regions BIBREF3, which simplifies the estimation problem. For instance, based on this assumption, BIBREF9 modelled categories using Gaussian distributions and showed that these distributions can be used for knowledge base completion. Unfortunately, this strategy still requires a relatively high number of training examples to be successful. However, when learning categories, humans do not only rely on examples. For instance, there is evidence that when learning the meaning of nouns, children rely on the default assumption that these nouns denote mutually exclusive categories BIBREF10. In this paper, we will in particular take advantage of the fact that many natural categories are organized into so-called contrast sets BIBREF11. These are sets of closely related categories which exhaustively cover some sub-domain, and which are assumed to be mutually exclusive; e.g. the set of all common color names, the set $\lbrace \text{fruit},\text{vegetable}\rbrace $ or the set $\lbrace \text{NLP}, \text{IR}, \text{ML}\rbrace $. Categories from the same contrast set often compete for coverage. For instance, we can think of the NLP domain as consisting of research topics that involve processing textual information which are not covered by the IR and ML domains. Categories which compete for coverage in this way are known as conceptual neighbors BIBREF12; e.g. NLP and IR, red and orange, fruit and vegetable. Note that the exact boundary between two conceptual neighbors may be vague (e.g. tomato can be classified as fruit or as vegetable). In this paper, we propose a method for learning region representations of categories which takes advantage of conceptual neighborhood, especially in scenarios where the number of available training examples is small. 
The main idea is illustrated in Figure FIGREF2, which depicts a situation where we are given some examples of a target category $C$ as well as some related categories $N_1,N_2,N_3,N_4$. If we have to estimate a region from the examples of $C$ alone, the small elliptical region shown in red would be a reasonable choice. More generally, a standard approach would be to estimate a Gaussian distribution from the given examples. However, vector space embeddings typically have hundreds of dimensions, while the number of known examples of the target category is often far lower (e.g. 2 or 3). In such settings we will almost inevitably underestimate the coverage of the category. However, in the example from Figure FIGREF2, if we take into account the knowledge that $N_1,N_2,N_3,N_4$ are conceptual neighbors of $C$, the much larger, shaded region becomes a more natural choice for representing $C$. Indeed, the fact that e.g. $C$ and $N_1$ are conceptual neighbors suggests that any point in between the examples of these categories needs to be contained either in the region representing $C$ or the region representing $N_1$. In the spirit of prototype approaches to categorization BIBREF13, without any further information it makes sense to assume that their boundary is more or less half-way in between the known examples. The contribution of this paper is two-fold. First, we propose a method for identifying conceptual neighbors from text corpora. We essentially treat this problem as a standard text classification problem, by relying on categories with large numbers of training examples to generate a suitable distant supervision signal. Second, we show that the predicted conceptual neighbors can effectively be used to learn better category representations. Related Work In distributional semantics, categories are frequently modelled as vectors. For example, BIBREF14 study the problem of deciding for a word pair $(i,c)$ whether $i$ denotes an instance of the category $c$, which they refer to as instantiation. They treat this problem as a binary classification problem, where e.g. the pair (AAAI, conference) would be a positive example, while (conference, AAAI) and (New York, conference) would be negative examples. Different from our setting, their aim is thus essentially to model the instantiation relation itself, similar in spirit to how hypernymy has been modelled in NLP BIBREF15, BIBREF16. To predict instantiation, they use a simple neural network model which takes as input the word vectors of the input pair $(i,c)$. They also experiment with an approach that instead models a given category as the average of the word vectors of its known instances and found that this led to better results. A few authors have already considered the problem of learning region representations of categories. Most closely related, BIBREF17 model ontology concepts using Gaussian distributions. In BIBREF18 DBLP:conf/ecai/JameelS16, a model is presented which embeds Wikipedia entities such that entities which have the same WikiData type are characterized by some region within a low-dimensional subspace of the embedding. Within the context of knowledge graph embedding, several approaches have been proposed that essentially model semantic types as regions BIBREF19, BIBREF20. A few approaches have also been proposed for modelling word meaning using regions BIBREF21, BIBREF22 or Gaussian distributions BIBREF23. 
Along similar lines, several authors have proposed approaches inspired by probabilistic topic modelling, which model latent topics using Gaussians BIBREF24 or related distributions BIBREF25. On the other hand, the notion of conceptual neighborhood has been covered in most detail in the field of spatial cognition, starting with the influential work of BIBREF12. In computational linguistics, moreover, this representation framework aligns with lexical semantics traditions where word meaning is constructed in terms of semantic decomposition, i.e. lexical items being minimally decomposed into structured forms (or templates) rather than sets of features BIBREF26, effectively mimicking a sort of conceptual neighbourhood. In Pustejovsky's generative lexicon, a set of “semantic devices” is proposed such that they behave in semantics similarly as grammars do in syntax. Specifically, this framework considers the qualia structure of a lexical unit as a set of expressive semantic distinctions, the most relevant for our purposes being the so-called formal role, which is defined as “that which distinguishes the object within a larger domain”, e.g. shape or color. This semantic interplay between cognitive science and computational linguistics gave way to the term lexical coherence, which has been used for contextualizing the meaning of words in terms of how they relate to their conceptual neighbors BIBREF27, or by providing expressive lexical semantic resources in the form of ontologies BIBREF28. Model Description Our aim is to introduce a model for learning region-based category representations which can take advantage of knowledge about the conceptual neighborhood of that category. Throughout the paper, we focus in particular on modelling categories from the BabelNet taxonomy BIBREF29, although the proposed method can be applied to any resource which (i) organizes categories in a taxonomy and (ii) provides examples of individuals that belong to these categories. Selecting BabelNet as our use case is a natural choice, however, given its large scale and the fact that it integrates many lexical and ontological resources. As the possible conceptual neighbors of a given BabelNet category $C$, we consider all its siblings in the taxonomy, i.e. all categories $C_1,...,C_k$ which share a direct parent with $C$. To select which of these siblings are most likely to be conceptual neighbors, we look at mentions of these categories in a text corpus. As an illustrative example, consider the pair (hamlet,village) and the following sentence: In British geography, a hamlet is considered smaller than a village and ... From this sentence, we can derive that hamlet and village are disjoint but closely related categories, thus suggesting that they are conceptual neighbors. However, training a classifier that can identify conceptual neighbors from such sentences is complicated by the fact that conceptual neighborhood is not covered in any existing lexical resource, to the best of our knowledge, which means that large sets of training examples are not readily available. To address this lack of training data, we rely on a distant supervision strategy. The central insight is that for categories with a large number of known instances, we can use the embeddings of these instances to check whether two categories are conceptual neighbors. 
In particular, our approach involves the following three steps: Identify pairs of categories that are likely to be conceptual neighbors, based on the vector representations of their known instances. Use the pairs from Step 1 to train a classifier that can recognize sentences which indicate that two categories are conceptual neighbors. Use the classifier from Step 2 to predict which pairs of BabelNet categories are conceptual neighbors and use these predictions to learn category representations. Note that in Step 1 we can only consider BabelNet categories with a large number of instances, while the end result in Step 3 is that we can predict conceptual neighborhood for categories with only few known instances. We now discuss the three aforementioned steps one by one. Model Description ::: Step 1: Predicting Conceptual Neighborhood from Embeddings Our aim here is to generate distant supervision labels for pairs of categories, indicating whether they are likely to be conceptual neighbors. These labels will then be used in Section SECREF12 to train a classifier for predicting conceptual neighborhood from text. Let $A$ and $B$ be siblings in the BabelNet taxonomy. If enough examples of individuals belonging to these categories are provided in BabelNet, we can use these instances to estimate high-quality representations of $A$ and $B$, and thus estimate whether they are likely to be conceptual neighbors. In particular, we split the known instances of $A$ into a training set $I^A_{\textit {train}}$ and test set $I^A_{\textit {test}}$, and similar for $B$. We then train two types of classifiers. The first classifier estimates a Gaussian distribution for each category, using the training instances in $I^A_{\textit {train}}$ and $I^B_{\textit {train}}$ respectively. This should provide us with a reasonable representation of $A$ and $B$ regardless of whether they are conceptual neighbors. In the second approach, we first learn a Gaussian distribution from the joint set of training examples $I^A_{\textit {train}} \cup I^B_{\textit {train}}$ and then train a logistic regression classifier to separate instances from $A$ and $B$. In particular, note that in this way, we directly impose the requirement that the regions modelling $A$ and $B$ are adjacent in the embedding space (intuitively corresponding to two halves of a Gaussian distribution). We can thus expect that the second approach should lead to better predictions than the first approach if $A$ and $B$ are conceptual neighbors and to worse predictions if they are not. In particular, we propose to use the relative performance of the two classifiers as the required distant supervision signal for predicting conceptual neighborhood. We now describe the two classification models in more detail, after which we explain how these models are used to generate the distant supervision labels. Gaussian Classifier The first classifier follows the basic approach from BIBREF17, where Gaussian distributions were similarly used to model WikiData categories. In particular, we estimate the probability that an individual $e$ with vector representation $\mathbf {e}$ is an instance of the category $A$ as follows: where $\lambda _A$ is the prior probability of belonging to category $A$, the likelihood $f(\mathbf {e} | A)$ is modelled as a Gaussian distribution and $f(\mathbf {e})$ will also be modelled as a Gaussian distribution. Intuitively, we think of the Gaussian $f(. | A)$ as defining a soft region, modelling the category $A$. 
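Before the Bayesian mean-field treatment developed next, the basic soft-region classification rule can be sketched as below. This is a deliberately simplified version: it fits maximum-likelihood diagonal Gaussians where the text instead derives a Student-t posterior predictive, and the toy embeddings and prior value are illustrative only.

```python
# Simplified sketch of the soft-region rule P(A|e) = prior_A * f(e|A) / f(e),
# with maximum-likelihood diagonal Gaussians standing in for the Bayesian estimates.
import numpy as np
from scipy.stats import norm

def diag_gaussian_logpdf(e, X):
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-6
    return norm.logpdf(e, mu, sigma).sum()

def p_category(e, instances_A, instances_all, prior_A=0.5):
    log_num = np.log(prior_A) + diag_gaussian_logpdf(e, instances_A)
    log_den = diag_gaussian_logpdf(e, instances_all)
    return np.exp(log_num - log_den)

# Toy 3-dimensional "embeddings": category A plus the rest of the taxonomy.
rng = np.random.default_rng(0)
A_train   = rng.normal(loc=0.0, size=(20, 3))
all_train = np.vstack([A_train, rng.normal(loc=3.0, size=(200, 3))])
e = rng.normal(loc=0.0, size=3)
print("P(A|e) ~", p_category(e, A_train, all_train))   # classify as A if > 0.5
```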
Given the high-dimensional nature of typical vector space embeddings, we use a mean field approximation: Where $d$ is the number of dimensions in the vector space embedding, $e_i$ is the $i^{\textit {th}}$ coordinate of $\mathbf {e}$, and $f_i(. | A)$ is a univariate Gaussian. To estimate the parameters $\mu _i$ and $\sigma _i^2$ of this Gaussian, we use a Bayesian approach with a flat prior: where $G(e_i;\mu _i,\sigma _i^2)$ represents the Gaussian distribution with mean $\mu _i$ and variance $\sigma _i^2$ and NI$\chi ^{2}$ is the normal inverse-$\chi ^{2}$ distribution. In other words, instead of using a single estimate of the mean $\mu $ and variance $\sigma _2$ we average over all plausible choices of these parameters. The use of the normal inverse-$\chi ^{2}$ distribution for the prior on $\mu _i$ and $\sigma _i^2$ is a common choice, which has the advantage that the above integral simplifies to a Student-t distribution. In particular, we have: where we assume $I^A_{\textit {train}}= \lbrace a_1,...,a_n\rbrace $, $a_i^j$ denotes the $i^{\textit {th}}$ coordinate of the vector embedding of $a_j$, $\overline{x_i} = \frac{1}{n}\sum _{j=1}^n a_i^j$ and $t_{n-1}$ is the Student t-distribution with $n-1$ degrees of freedom. The probability $f(\mathbf {e})$ is estimated in a similar way, but using all BabelNet instances. The prior $\lambda _A$ is tuned based on a validation set. Finally, we classify $e$ as a positive example if $P(A|\mathbf {e}) > 0.5$. GLR Classifier. We first train a Gaussian classifier as in Section UNKREF9, but now using the training instances of both $A$ and $B$. Let us denote the probability predicted by this classifier as $P(A\cup B | \textbf {e})$. The intuition is that entities for which this probability is high should either be instances of $A$ or of $B$, provided that $A$ and $B$ are conceptual neighbors. If, on the other hand, $A$ and $B$ are not conceptual neighbors, relying on this assumption is likely to lead to errors (i.e. there may be individuals whose representation is in between $A$ and $B$ which are not instances of either), which is what we need for generating the distant supervision labels. If $P(A\cup B | \textbf {e}) > 0.5$, we assume that $e$ either belongs to $A$ or to $B$. To distinguish between these two cases, we train a logistic regression classifier, using the instances from $I^A_{\textit {train}}$ as positive examples and those from $I^B_{\textit {train}}$ as negative examples. Putting everything together, we thus classify $e$ as a positive example for $A$ if $P(A\cup B | \textbf {e})>0.5$ and $e$ is classified as a positive example by the logistic regression classifier. Similarly, we classfiy $e$ as a positive example for $B$ if $P(A\cup B | \textbf {e})>0.5$ and $e$ is classified as a negative example by the logistic regression classifier. We will refer to this classification model as GLR (Gaussian Logistic Regression). Model Description ::: Step 1: Predicting Conceptual Neighborhood from Embeddings ::: Generating Distant Supervision Labels To generate the distant supervision labels, we consider a ternary classification problem for each pair of siblings $A$ and $B$. In particular, the task is to decide for a given individual $e$ whether it is an instance of $A$, an instance of $B$, or an instance of neither (where only disjoint pairs $A$ and $B$ are considered). For the Gaussian classifier, we predict $A$ iff $P(A|\mathbf {e})>0.5$ and $P(A|\mathbf {e}) > P(B|\mathbf {e})$. 
For the GLR classifier, we predict $A$ if $P(A\cup B|\mathbf {e}) >0.5$ and the associated logistic regression classifier predicts $A$. The condition for predicting $B$ is analogous. The test examples for this ternary classification problem consist of the elements from $I^A_{\textit {test}}$ and $I^B_{\textit {test}}$, as well as some negative examples (i.e. individuals that are neither instances of $A$ nor $B$). To select these negative examples, we first sample instances from categories that have the same parent as $A$ and $B$, choosing as many such negative examples as we have positive examples. Second, we also sample the same number of negative examples from randomly selected categories in the taxonomy. Let $F^1_{AB}$ be the F1 score achieved by the Gaussian classifier and $F^2_{AB}$ the F1 score of the GLR classifier. Our hypothesis is that $F^1_{AB} \ll F^2_{AB}$ suggests that $A$ and $B$ are conceptual neighbors, while $F^1_{AB} \gg F^2_{AB}$ suggests that they are not. This intuition is captured in the following score: where we consider $A$ and $B$ to be conceptual neighbors if $s_{AB}\gg 0.5$. Model Description ::: Step 2: Predicting Conceptual Neighborhood from Text We now consider the following problem: given two BabelNet categories $A$ and $B$, predict whether they are likely to be conceptual neighbors based on the sentences from a text corpus in which they are both mentioned. To train such a classifier, we use the distant supervision labels from Section SECREF8 as training data. Once this classifier has been trained, we can then use it to predict conceptual neighborhood for categories for which only few instances are known. To find sentences in which both $A$ and $B$ are mentioned, we rely on a disambiguated text corpus in which mentions of BabelNet categories are explicitly tagged. Such a disambiguated corpus can be automatically constructed, using methods such as the one proposed by BIBREF30 mancini-etal-2017-embedding, for instance. For each pair of candidate categories, we thus retrieve all sentences where they co-occur. Next, we represent each extracted sentence as a vector. To this end, we considered two possible strategies: Word embedding averaging: We compute a sentence embedding by simply averaging the word embeddings of each word within the sentence. Despite its simplicity, this approach has been shown to provide competitive results BIBREF31, in line with more expensive and sophisticated methods e.g. based on LSTMs. Contextualized word embeddings: The recently proposed contextualized embeddings BIBREF32, BIBREF33 have already proven successful in a wide range of NLP tasks. Instead of providing a single vector representation for all words irrespective of the context, contextualized embeddings predict a representation for each word occurrence which depends on its context. These representations are usually based on pre-trained language models. In our setting, we extract the contextualized embeddings for the two candidate categories within the sentence. To obtain this contextualized embedding, we used the last layer of the pre-trained language model, which has been shown to be most suitable for capturing semantic information BIBREF34, BIBREF35. We then use the concatenation of these two contextualized embeddings as the representation of the sentence. For both strategies, we average their corresponding sentence-level representations across all sentences in which the same two candidate categories are mentioned. 
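A minimal sketch of the word-embedding-averaging strategy, aggregated over all co-occurrence sentences of a category pair, is given below; the tokenizer and the `embeddings` lookup (e.g. a dict of pre-trained GloVe vectors) are placeholders rather than the exact pipeline used here.

```python
# Sketch of the averaging-based representation for a category pair (A, B).
# `embeddings` maps a token to its vector (e.g. 300-d GloVe); names are illustrative.
import numpy as np

def sentence_vector(tokens, embeddings, dim=300):
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def pair_representation(cooccurrence_sentences, embeddings):
    """Average sentence vectors over all sentences mentioning both categories."""
    sent_vecs = [sentence_vector(s.lower().split(), embeddings)
                 for s in cooccurrence_sentences]
    return np.mean(sent_vecs, axis=0)

# These pair-level vectors are the inputs on which the SVM of the next step is
# trained, with the distant supervision label (s_AB > 0.5) as the target.
```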
Finally, we train an SVM classifier on the resulting vectors to predict for the pair of siblings $(A,B)$ whether $s_{AB}> 0.5$ holds. Model Description ::: Step 3: Category Induction Let $C$ be a category and assume that $N_1,...,N_k$ are conceptual neighbors of this category. Then we can model $C$ by generalizing the idea underpinning the GLR classifier. In particular, we first learn a Gaussian distribution from all the instances of $C$ and $N_1,...,N_k$. This Gaussian model allows us to estimate the probability $P(C\cup N_1\cup ...\cup N_k \,|\, \mathbf {e})$ that $e$ belongs to one of $C,N_1,...,N_k$. If this probability is sufficiently high (i.e. higher than 0.5), we use a multinomial logistic regression classifier to decide which of these categories $e$ is most likely to belong to. Geometrically, we can think of the Gaussian model as capturing the relevant local domain, while the multinomial logistic regression model carves up this local domain, similar as in Figure FIGREF2. In practice, we do not know with certainty which categories are conceptual neighbors of $C$. Instead, we select the $k$ categories (for some fixed constant $k$), among all the siblings of $C$, which are most likely to be conceptual neighbors, according to the text classifier from Section SECREF12. Experiments The central problem we consider is category induction: given some instances of a category, predict which other individuals are likely to be instances of that category. When enough instances are given, standard approaches such as the Gaussian classifier from Section UNKREF9, or even a simple SVM classifier, can perform well on this task. For many categories, however, we only have access to a few instances, either because the considered ontology is highly incomplete or because the considered category only has few actual instances. The main research question which we want to analyze is whether (predicted) conceptual neighborhood can help to obtain better category induction models in such cases. In Section SECREF16, we first provide more details about the experimental setting that we followed. Section SECREF23 then discusses our main quantitative results. Finally, in Section SECREF26 we present a qualitative analysis. Experiments ::: Experimental setting ::: Taxonomy As explained in Section SECREF3, we used BabelNet BIBREF29 as our reference taxonomy. BabelNet is a large-scale full-fledged taxonomy consisting of heterogeneous sources such as WordNet BIBREF36, Wikidata BIBREF37 and WiBi BIBREF38, making it suitable to test our hypothesis in a general setting. Vector space embeddings. Both the distant labelling method from Section SECREF8 and the category induction model itself need access to vector representations of the considered instances. To this end, we used the NASARI vectors, which have been learned from Wikipedia and are already linked to BabelNet BIBREF1. BabelNet category selection. To test our proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances. This is motivated by the view that conceptual neighborhood is mostly useful in cases where the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing. To tune the prior probability $\lambda _A$ for these categories, we hold out 10% from the training set as a validation set. The conceptual neighbors among the considered test categories are predicted using the classifier from Section SECREF12. 
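Schematically, the Step 3 induction model described above (a shared Gaussian over the target category and its predicted neighbors, followed by a multinomial logistic regression that carves up that local domain) could be assembled as in the sketch below. The gate used here is a crude stand-in for the probability $P(C\cup N_1\cup ...\cup N_k \,|\, \mathbf {e})>0.5$, and all names and thresholds are illustrative, not the paper's implementation.

```python
# Sketch of category induction with conceptual neighbors: Gaussian "gate" over
# C and its neighbors, then multinomial logistic regression to pick a category.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression

class NeighborhoodInduction:
    def fit(self, X, y):                 # X: embeddings of C, N1..Nk; y: category ids
        self.mu, self.sigma = X.mean(axis=0), X.std(axis=0) + 1e-6
        self.clf = LogisticRegression(max_iter=1000).fit(X, y)   # multinomial
        self.gate_ref = np.percentile(self._loglik(X), 5)        # illustrative gate
        return self

    def _loglik(self, X):
        return norm.logpdf(X, self.mu, self.sigma).sum(axis=1)

    def predict_is_C(self, X, c_label=0):
        in_domain = self._loglik(X) >= self.gate_ref   # stand-in for P(C∪N1..Nk|e)>0.5
        return in_domain & (self.clf.predict(X) == c_label)
```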
To obtain the distant supervision labels needed to train that classifier, we consider all BabelNet categories with at least 50 instances. This ensures that the distant supervision labels are sufficiently accurate and that there is no overlap with the categories which are used for evaluating the model. Text classifier training. As the text corpus to extract sentences for category pairs we used the English Wikipedia. In particular, we used the dump of November 2014, for which a disambiguated version is available online. This disambiguated version was constructed using the shallow disambiguation algorithm of BIBREF30 mancini-etal-2017-embedding. As explained in Section SECREF12, for each pair of categories we extracted all the sentences where they co-occur, including a maximum window size of 10 tokens between their occurrences, and 10 tokens to the left and right of the first and second category within the sentence, respectively. For the averaging-based sentence representations we used the 300-dimensional pre-trained GloVe word embeddings BIBREF39. To obtain the contextualized representations we used the pre-trained 768-dimensional BERT-base model BIBREF33.. The text classifier is trained on 3,552 categories which co-occur at least once in the same sentence in the Wikipedia corpus, using the corresponding scores $s_{AB}$ as the supervision signal (see Section SECREF12). To inspect how well conceptual neighborhood can be predicted from text, we performed a 10-fold cross validation over the training data, removing for this experiment the unclear cases (i.e., those category pairs with $s_{AB}$ scores between $0.4$ and $0.6$). We also considered a simple baselineWE based on the number of co-occurring sentences for each pairs, which we might expect to be a reasonably strong indicator of conceptual neighborhood, i.e. the more often two categories are mentiond in the same sentence, the more likely that they are conceptual neighbors. The results for this cross-validation experiment are summarized in Table TABREF22. Surprisingly, perhaps, the word vector averaging method seems more robust overall, while being considerably faster than the method using BERT. The results also confirm the intuition that the number of co-occurring sentences is positively correlated with conceptual neighborhood, although the results for this baseline are clearly weaker than those for the proposed classifiers. Baselines. To put the performance of our model in perspective, we consider three baseline methods for category induction. First, we consider the performance of the Gaussian classifier from Section UNKREF9, as a representative example of how well we can model each category when only considering their given instances; this model will be referred to as Gauss. Second, we consider a variant of the proposed model in which we assume that all siblings of the category are conceptual neighbors; this model will be referred to as Multi. Third, we consider a variant of our model in which the neighbors are selected based on similarity. To this end, we represent each BabelNet as their vector from the NASARI space. From the set of siblings of the target category $C$, we then select the $k$ categories whose vector representation is most similar to that of $C$, in terms of cosine similarity. This baseline will be referred to as Similarity$_k$, with $k$ the number of selected neighbors. 
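The Similarity$_k$ baseline just described amounts to a top-$k$ cosine-similarity lookup over the category vectors; a small sketch with made-up vectors follows.

```python
# Sketch of the Similarity_k baseline: pick the k siblings whose NASARI-style
# category vectors are most cosine-similar to the target category C (toy vectors).
import numpy as np

def top_k_similar(target_vec, sibling_vecs, k):
    sims = {name: np.dot(target_vec, v) /
                  (np.linalg.norm(target_vec) * np.linalg.norm(v))
            for name, v in sibling_vecs.items()}
    return sorted(sims, key=sims.get, reverse=True)[:k]

C = np.array([0.2, 0.9, 0.1])
siblings = {"N1": np.array([0.1, 0.8, 0.2]),
            "N2": np.array([0.9, 0.1, 0.0]),
            "N3": np.array([0.3, 0.7, 0.3])}
print(top_k_similar(C, siblings, k=2))   # -> ['N1', 'N3']
```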
We refer to our model as SECOND-WEA$_k$ or SECOND-BERT$_k$ (SEmantic categories with COnceptual NeighborhooD), depending on whether the word embedding averaging strategy is used or the method using BERT. Experiments ::: Quantitative Results Our main results for the category induction task are summarized in Table TABREF24. In this table, we show results for different choices of the number of selected conceptual neighbors $k$, ranging from 1 to 5. As can be seen from the table, our approach substantially outperforms all baselines, with Multi being the most competitive baseline. Interestingly, for the Similarity baseline, the higher the number of neighbors, the more the performance approaches that of Multi. The relatively strong performance of Multi shows that using the siblings of a category in the BabelNet taxonomy is in general useful. However, as our results show, better results can be obtained by focusing on the predicted conceptual neighbors only. It is interesting to see that even selecting a single conceptual neighbor is already sufficient to substantially outperform the Gaussian model, although the best results are obtained for $k=4$. Comparing the WEA and BERT variants, it is notable that BERT is more successful at selecting the single best conceptual neighbor (reflected in an F1 score of 47.0 compared to 41.9). However, for $k \ge 2$, the results of the WEA and BERT are largely comparable. Experiments ::: Qualitative Analysis To illustrate how conceptual neighborhood can improve classification results, Fig. FIGREF25 shows the two first principal components of the embeddings of the instances of three BabelNet categories: Songbook, Brochure and Guidebook. All three categories can be considered to be conceptual neighbors. Brochure and Guidebook are closely related categories, and we may expect there to exist borderline cases between them. This can be clearly seen in the figure, where some instances are located almost exactly on the boundary between the two categories. On the other hand, Songbook is slightly more separated in the space. Let us now consider the left-most data point from the Songbook test set, which is essentially an outlier, being more similar to instances of Guidebook than typical Songbook instances. When using a Gaussian model, this data point would not be recognised as a plausible instance. When incorporating the fact that Brochure and Guidebook are conceptual neighbors of Songbook, however, it is more likely to be classified correctly. To illustrate the notion of conceptual neighborhood itself, Table TABREF27 displays some selected category pairs from the training set (i.e. the category pairs that were used to train the text classifier), which intuitively correspond to conceptual neighbors. The left column contains some selected examples of category pairs with a high $s_{AB}$ score of at least 0.9. As these examples illustrate, we found that a high $s_{AB}$ score was indeed often predictive of conceptual neighborhood. As the right column of this table illustrates, there are several category pairs with a lower $s_{AB}$ score of around 0.5 which intuitively still seem to correspond to conceptual neighbors. When looking at category pairs with even lower scores, however, conceptual neighborhood becomes rare. Moreover, while there are several pairs with high scores which are not actually conceptual neighbors (e.g. the pair Actor – Makup Artist), they tend to be categories which are still closely related. 
This means that the impact of incorrectly treating them as conceptual neighbors on the performance of our method is likely to be limited. On the other hand, when looking at category pairs with a very low confidence score we find many unrelated pairs, which we can expect to be more harmful when considered as conceptual neighbors, as the combined Gaussian will then cover a much larger part of the space. Some examples of such pairs include Primary school – Financial institution, Movie theatre – Housing estate, Corporate title – Pharaoh and Fraternity – Headquarters. Finally, in Tables TABREF28 and TABREF29, we show examples of the top conceptual neighbors that were selected for some categories from the test set. Table TABREF28 shows examples of BabelNet categories for which the F1 score of our SECOND-WEA$_1$ classifier was rather low. As can be seen, the conceptual neighbors that were chosen in these cases are not suitable. For instance, Bachelor's degree is a near-synonym of Undergraduate degree, hence assuming them to be conceptual neighbors would clearly be detrimental. In contrast, when looking at the examples in Table TABREF29, where categories are shown with a higher F1 score, we find examples of conceptual neighbors that are intuitively much more meaningful. Conclusions We have studied the role of conceptual neighborhood for modelling categories, focusing especially on categories with a relatively small number of instances, for which standard modelling approaches are challenging. To this end, we have first introduced a method for predicting conceptual neighborhood from text, by taking advantage of BabelNet to implement a distant supervision strategy. We then used the resulting classifier to identify the most likely conceptual neighbors of a given target category, and empirically showed that incorporating these conceptual neighbors leads to a better performance in a category induction task. In terms of future work, it would be interesting to look at other types of lexical relations that can be predicted from text. One possible strategy would be to predict conceptual betweenness, where a category $B$ is said to be between $A$ and $C$ if $B$ has all the properties that $A$ and $C$ have in common BIBREF40 (e.g. we can think of wine as being conceptually between beer and rum). In particular, if $B$ is predicted to be conceptually between $A$ and $C$ then we would also expect the region modelling $B$ to be between the regions modelling $A$ and $C$. Acknowledgments. Jose Camacho-Collados, Luis Espinosa-Anke and Steven Schockaert were funded by ERC Starting Grant 637277. Zied Bouraoui was supported by CNRS PEPS INS2I MODERN.
To test our proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances. This is motivated by the view that conceptual neighborhood is mostly useful in cases where the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing.
b622f57c4e429b458978cb8863978d7facab7cfe
b622f57c4e429b458978cb8863978d7facab7cfe_0
Q: How they indentify conceptual neighbours? Text: Introduction Vector space embeddings are commonly used to represent entities in fields such as machine learning (ML) BIBREF0, natural language processing (NLP) BIBREF1, information retrieval (IR) BIBREF2 and cognitive science BIBREF3. An important point, however, is that such representations usually represent both individuals and categories as vectors BIBREF4, BIBREF5, BIBREF6. Note that in this paper, we use the term category to denote natural groupings of individuals, as it is used in cognitive science, with individuals referring to the objects from the considered domain of discourse. For example, the individuals carrot and cucumber belong to the vegetable category. We use the term entities as an umbrella term covering both individuals and categories. Given that a category corresponds to a set of individuals (i.e. its instances), modelling them as (possibly imprecise) regions in the embedding space seems more natural than using vectors. In fact, it has been shown that the vector representations of individuals that belong to the same category are indeed often clustered together in learned vector space embeddings BIBREF7, BIBREF8. The view of categories being regions is also common in cognitive science BIBREF3. However, learning region representations of categories is a challenging problem, because we typically only have a handful of examples of individuals that belong to a given category. One common assumption is that natural categories can be modelled using convex regions BIBREF3, which simplifies the estimation problem. For instance, based on this assumption, BIBREF9 modelled categories using Gaussian distributions and showed that these distributions can be used for knowledge base completion. Unfortunately, this strategy still requires a relatively high number of training examples to be successful. However, when learning categories, humans do not only rely on examples. For instance, there is evidence that when learning the meaning of nouns, children rely on the default assumption that these nouns denote mutually exclusive categories BIBREF10. In this paper, we will in particular take advantage of the fact that many natural categories are organized into so-called contrast sets BIBREF11. These are sets of closely related categories which exhaustively cover some sub-domain, and which are assumed to be mutually exclusive; e.g. the set of all common color names, the set $\lbrace \text{fruit},\text{vegetable}\rbrace $ or the set $\lbrace \text{NLP}, \text{IR}, \text{ML}\rbrace $. Categories from the same contrast set often compete for coverage. For instance, we can think of the NLP domain as consisting of research topics that involve processing textual information which are not covered by the IR and ML domains. Categories which compete for coverage in this way are known as conceptual neighbors BIBREF12; e.g. NLP and IR, red and orange, fruit and vegetable. Note that the exact boundary between two conceptual neighbors may be vague (e.g. tomato can be classified as fruit or as vegetable). In this paper, we propose a method for learning region representations of categories which takes advantage of conceptual neighborhood, especially in scenarios where the number of available training examples is small. The main idea is illustrated in Figure FIGREF2, which depicts a situation where we are given some examples of a target category $C$ as well as some related categories $N_1,N_2,N_3,N_4$. 
If we have to estimate a region from the examples of $C$ alone, the small elliptical region shown in red would be a reasonable choice. More generally, a standard approach would be to estimate a Gaussian distribution from the given examples. However, vector space embeddings typically have hundreds of dimensions, while the number of known examples of the target category is often far lower (e.g. 2 or 3). In such settings we will almost inevitably underestimate the coverage of the category. However, in the example from Figure FIGREF2, if we take into account the knowledge that $N_1,N_2,N_3,N_4$ are conceptual neighbors of $C$, the much larger, shaded region becomes a more natural choice for representing $C$. Indeed, the fact that e.g. $C$ and $N_1$ are conceptual neighbors suggests that any point in between the examples of these categories needs to be contained either in the region representing $C$ or the region representing $N_1$. In the spirit of prototype approaches to categorization BIBREF13, without any further information it makes sense to assume that their boundary is more or less half-way in between the known examples. The contribution of this paper is two-fold. First, we propose a method for identifying conceptual neighbors from text corpora. We essentially treat this problem as a standard text classification problem, by relying on categories with large numbers of training examples to generate a suitable distant supervision signal. Second, we show that the predicted conceptual neighbors can effectively be used to learn better category representations. Related Work In distributional semantics, categories are frequently modelled as vectors. For example, BIBREF14 study the problem of deciding for a word pair $(i,c)$ whether $i$ denotes an instance of the category $c$, which they refer to as instantiation. They treat this problem as a binary classification problem, where e.g. the pair (AAAI, conference) would be a positive example, while (conference, AAAI) and (New York, conference) would be negative examples. Different from our setting, their aim is thus essentially to model the instantiation relation itself, similar in spirit to how hypernymy has been modelled in NLP BIBREF15, BIBREF16. To predict instantiation, they use a simple neural network model which takes as input the word vectors of the input pair $(i,c)$. They also experiment with an approach that instead models a given category as the average of the word vectors of its known instances and found that this led to better results. A few authors have already considered the problem of learning region representations of categories. Most closely related, BIBREF17 model ontology concepts using Gaussian distributions. In BIBREF18 DBLP:conf/ecai/JameelS16, a model is presented which embeds Wikipedia entities such that entities which have the same WikiData type are characterized by some region within a low-dimensional subspace of the embedding. Within the context of knowledge graph embedding, several approaches have been proposed that essentially model semantic types as regions BIBREF19, BIBREF20. A few approaches have also been proposed for modelling word meaning using regions BIBREF21, BIBREF22 or Gaussian distributions BIBREF23. Along similar lines, several authors have proposed approaches inspired by probabilistic topic modelling, which model latent topics using Gaussians BIBREF24 or related distributions BIBREF25. 
On the other hand, the notion of conceptual neighborhood has been covered in most detail in the field of spatial cognition, starting with the influential work of BIBREF12. In computational linguistics, moreover, this representation framework aligns with lexical semantics traditions where word meaning is constructed in terms of semantic decomposition, i.e. lexical items being minimally decomposed into structured forms (or templates) rather than sets of features BIBREF26, effectively mimicking a sort of conceptual neighbourhood. In Pustejovsky's generative lexicon, a set of “semantic devices” is proposed such that they behave in semantics similarly as grammars do in syntax. Specifically, this framework considers the qualia structure of a lexical unit as a set of expressive semantic distinctions, the most relevant for our purposes being the so-called formal role, which is defined as “that which distinguishes the object within a larger domain”, e.g. shape or color. This semantic interplay between cognitive science and computational linguistics gave way to the term lexical coherence, which has been used for contextualizing the meaning of words in terms of how they relate to their conceptual neighbors BIBREF27, or by providing expressive lexical semantic resources in the form of ontologies BIBREF28. Model Description Our aim is to introduce a model for learning region-based category representations which can take advantage of knowledge about the conceptual neighborhood of that category. Throughout the paper, we focus in particular on modelling categories from the BabelNet taxonomy BIBREF29, although the proposed method can be applied to any resource which (i) organizes categories in a taxonomy and (ii) provides examples of individuals that belong to these categories. Selecting BabelNet as our use case is a natural choice, however, given its large scale and the fact that it integrates many lexical and ontological resources. As the possible conceptual neighbors of a given BabelNet category $C$, we consider all its siblings in the taxonomy, i.e. all categories $C_1,...,C_k$ which share a direct parent with $C$. To select which of these siblings are most likely to be conceptual neighbors, we look at mentions of these categories in a text corpus. As an illustrative example, consider the pair (hamlet,village) and the following sentence: In British geography, a hamlet is considered smaller than a village and ... From this sentence, we can derive that hamlet and village are disjoint but closely related categories, thus suggesting that they are conceptual neighbors. However, training a classifier that can identify conceptual neighbors from such sentences is complicated by the fact that conceptual neighborhood is not covered in any existing lexical resource, to the best of our knowledge, which means that large sets of training examples are not readily available. To address this lack of training data, we rely on a distant supervision strategy. The central insight is that for categories with a large number of known instances, we can use the embeddings of these instances to check whether two categories are conceptual neighbors. In particular, our approach involves the following three steps: Identify pairs of categories that are likely to be conceptual neighbors, based on the vector representations of their known instances. Use the pairs from Step 1 to train a classifier that can recognize sentences which indicate that two categories are conceptual neighbors. 
Use the classifier from Step 2 to predict which pairs of BabelNet categories are conceptual neighbors and use these predictions to learn category representations. Note that in Step 1 we can only consider BabelNet categories with a large number of instances, while the end result in Step 3 is that we can predict conceptual neighborhood for categories with only few known instances. We now discuss the three aforementioned steps one by one. Model Description ::: Step 1: Predicting Conceptual Neighborhood from Embeddings Our aim here is to generate distant supervision labels for pairs of categories, indicating whether they are likely to be conceptual neighbors. These labels will then be used in Section SECREF12 to train a classifier for predicting conceptual neighborhood from text. Let $A$ and $B$ be siblings in the BabelNet taxonomy. If enough examples of individuals belonging to these categories are provided in BabelNet, we can use these instances to estimate high-quality representations of $A$ and $B$, and thus estimate whether they are likely to be conceptual neighbors. In particular, we split the known instances of $A$ into a training set $I^A_{\textit {train}}$ and test set $I^A_{\textit {test}}$, and similar for $B$. We then train two types of classifiers. The first classifier estimates a Gaussian distribution for each category, using the training instances in $I^A_{\textit {train}}$ and $I^B_{\textit {train}}$ respectively. This should provide us with a reasonable representation of $A$ and $B$ regardless of whether they are conceptual neighbors. In the second approach, we first learn a Gaussian distribution from the joint set of training examples $I^A_{\textit {train}} \cup I^B_{\textit {train}}$ and then train a logistic regression classifier to separate instances from $A$ and $B$. In particular, note that in this way, we directly impose the requirement that the regions modelling $A$ and $B$ are adjacent in the embedding space (intuitively corresponding to two halves of a Gaussian distribution). We can thus expect that the second approach should lead to better predictions than the first approach if $A$ and $B$ are conceptual neighbors and to worse predictions if they are not. In particular, we propose to use the relative performance of the two classifiers as the required distant supervision signal for predicting conceptual neighborhood. We now describe the two classification models in more detail, after which we explain how these models are used to generate the distant supervision labels. Gaussian Classifier The first classifier follows the basic approach from BIBREF17, where Gaussian distributions were similarly used to model WikiData categories. In particular, we estimate the probability that an individual $e$ with vector representation $\mathbf {e}$ is an instance of the category $A$ as follows: where $\lambda _A$ is the prior probability of belonging to category $A$, the likelihood $f(\mathbf {e} | A)$ is modelled as a Gaussian distribution and $f(\mathbf {e})$ will also be modelled as a Gaussian distribution. Intuitively, we think of the Gaussian $f(. | A)$ as defining a soft region, modelling the category $A$. Given the high-dimensional nature of typical vector space embeddings, we use a mean field approximation: Where $d$ is the number of dimensions in the vector space embedding, $e_i$ is the $i^{\textit {th}}$ coordinate of $\mathbf {e}$, and $f_i(. | A)$ is a univariate Gaussian. 
To estimate the parameters $\mu _i$ and $\sigma _i^2$ of this Gaussian, we use a Bayesian approach with a flat prior: where $G(e_i;\mu _i,\sigma _i^2)$ represents the Gaussian distribution with mean $\mu _i$ and variance $\sigma _i^2$ and NI$\chi ^{2}$ is the normal inverse-$\chi ^{2}$ distribution. In other words, instead of using a single estimate of the mean $\mu $ and variance $\sigma _2$ we average over all plausible choices of these parameters. The use of the normal inverse-$\chi ^{2}$ distribution for the prior on $\mu _i$ and $\sigma _i^2$ is a common choice, which has the advantage that the above integral simplifies to a Student-t distribution. In particular, we have: where we assume $I^A_{\textit {train}}= \lbrace a_1,...,a_n\rbrace $, $a_i^j$ denotes the $i^{\textit {th}}$ coordinate of the vector embedding of $a_j$, $\overline{x_i} = \frac{1}{n}\sum _{j=1}^n a_i^j$ and $t_{n-1}$ is the Student t-distribution with $n-1$ degrees of freedom. The probability $f(\mathbf {e})$ is estimated in a similar way, but using all BabelNet instances. The prior $\lambda _A$ is tuned based on a validation set. Finally, we classify $e$ as a positive example if $P(A|\mathbf {e}) > 0.5$. GLR Classifier. We first train a Gaussian classifier as in Section UNKREF9, but now using the training instances of both $A$ and $B$. Let us denote the probability predicted by this classifier as $P(A\cup B | \textbf {e})$. The intuition is that entities for which this probability is high should either be instances of $A$ or of $B$, provided that $A$ and $B$ are conceptual neighbors. If, on the other hand, $A$ and $B$ are not conceptual neighbors, relying on this assumption is likely to lead to errors (i.e. there may be individuals whose representation is in between $A$ and $B$ which are not instances of either), which is what we need for generating the distant supervision labels. If $P(A\cup B | \textbf {e}) > 0.5$, we assume that $e$ either belongs to $A$ or to $B$. To distinguish between these two cases, we train a logistic regression classifier, using the instances from $I^A_{\textit {train}}$ as positive examples and those from $I^B_{\textit {train}}$ as negative examples. Putting everything together, we thus classify $e$ as a positive example for $A$ if $P(A\cup B | \textbf {e})>0.5$ and $e$ is classified as a positive example by the logistic regression classifier. Similarly, we classfiy $e$ as a positive example for $B$ if $P(A\cup B | \textbf {e})>0.5$ and $e$ is classified as a negative example by the logistic regression classifier. We will refer to this classification model as GLR (Gaussian Logistic Regression). Model Description ::: Step 1: Predicting Conceptual Neighborhood from Embeddings ::: Generating Distant Supervision Labels To generate the distant supervision labels, we consider a ternary classification problem for each pair of siblings $A$ and $B$. In particular, the task is to decide for a given individual $e$ whether it is an instance of $A$, an instance of $B$, or an instance of neither (where only disjoint pairs $A$ and $B$ are considered). For the Gaussian classifier, we predict $A$ iff $P(A|\mathbf {e})>0.5$ and $P(A|\mathbf {e}) > P(B|\mathbf {e})$. For the GLR classifier, we predict $A$ if $P(A\cup B|\mathbf {e}) >0.5$ and the associated logistic regression classifier predicts $A$. The condition for predicting $B$ is analogous. 
The test examples for this ternary classification problem consist of the elements from $I^A_{\textit {test}}$ and $I^B_{\textit {test}}$, as well as some negative examples (i.e. individuals that are neither instances of $A$ nor $B$). To select these negative examples, we first sample instances from categories that have the same parent as $A$ and $B$, choosing as many such negative examples as we have positive examples. Second, we also sample the same number of negative examples from randomly selected categories in the taxonomy. Let $F^1_{AB}$ be the F1 score achieved by the Gaussian classifier and $F^2_{AB}$ the F1 score of the GLR classifier. Our hypothesis is that $F^1_{AB} \ll F^2_{AB}$ suggests that $A$ and $B$ are conceptual neighbors, while $F^1_{AB} \gg F^2_{AB}$ suggests that they are not. This intuition is captured in the following score: where we consider $A$ and $B$ to be conceptual neighbors if $s_{AB}\gg 0.5$. Model Description ::: Step 2: Predicting Conceptual Neighborhood from Text We now consider the following problem: given two BabelNet categories $A$ and $B$, predict whether they are likely to be conceptual neighbors based on the sentences from a text corpus in which they are both mentioned. To train such a classifier, we use the distant supervision labels from Section SECREF8 as training data. Once this classifier has been trained, we can then use it to predict conceptual neighborhood for categories for which only few instances are known. To find sentences in which both $A$ and $B$ are mentioned, we rely on a disambiguated text corpus in which mentions of BabelNet categories are explicitly tagged. Such a disambiguated corpus can be automatically constructed, using methods such as the one proposed by BIBREF30 mancini-etal-2017-embedding, for instance. For each pair of candidate categories, we thus retrieve all sentences where they co-occur. Next, we represent each extracted sentence as a vector. To this end, we considered two possible strategies: Word embedding averaging: We compute a sentence embedding by simply averaging the word embeddings of each word within the sentence. Despite its simplicity, this approach has been shown to provide competitive results BIBREF31, in line with more expensive and sophisticated methods e.g. based on LSTMs. Contextualized word embeddings: The recently proposed contextualized embeddings BIBREF32, BIBREF33 have already proven successful in a wide range of NLP tasks. Instead of providing a single vector representation for all words irrespective of the context, contextualized embeddings predict a representation for each word occurrence which depends on its context. These representations are usually based on pre-trained language models. In our setting, we extract the contextualized embeddings for the two candidate categories within the sentence. To obtain this contextualized embedding, we used the last layer of the pre-trained language model, which has been shown to be most suitable for capturing semantic information BIBREF34, BIBREF35. We then use the concatenation of these two contextualized embeddings as the representation of the sentence. For both strategies, we average their corresponding sentence-level representations across all sentences in which the same two candidate categories are mentioned. Finally, we train an SVM classifier on the resulting vectors to predict for the pair of siblings $(A,B)$ whether $s_{AB}> 0.5$ holds. 
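The averaging-based variant of the Step 2 text classifier can be sketched as follows. The snippet assumes that the co-occurrence sentences for each candidate pair have already been extracted and tokenized, that a word-embedding dictionary (GloVe-style vectors) is loaded, and that the distant-supervision scores $s_{AB}$ from Step 1 are given; the RBF kernel is our choice, since the text only specifies an SVM.

```python
import numpy as np
from sklearn.svm import SVC


def sentence_vector(tokens, word_vectors, dim=300):
    """Average the word embeddings of one tokenized sentence."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)


def pair_representation(sentences, word_vectors):
    """Average sentence vectors over all sentences mentioning both categories."""
    return np.mean([sentence_vector(s, word_vectors) for s in sentences], axis=0)


def train_neighbor_classifier(pairs, word_vectors):
    """pairs: list of (sentences, s_ab) tuples from Step 1.
    The label is 1 when the distant-supervision score exceeds 0.5."""
    X = np.stack([pair_representation(s, word_vectors) for s, _ in pairs])
    y = np.array([1 if s_ab > 0.5 else 0 for _, s_ab in pairs])
    return SVC(kernel="rbf", probability=True).fit(X, y)


# Usage sketch for a new sibling pair:
# p = clf.predict_proba(pair_representation(new_sentences, word_vectors)[None, :])
```

The BERT-based variant would only change how the sentence vectors are produced (concatenating the contextualized vectors of the two category mentions), leaving the SVM unchanged.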
Model Description ::: Step 3: Category Induction Let $C$ be a category and assume that $N_1,...,N_k$ are conceptual neighbors of this category. Then we can model $C$ by generalizing the idea underpinning the GLR classifier. In particular, we first learn a Gaussian distribution from all the instances of $C$ and $N_1,...,N_k$. This Gaussian model allows us to estimate the probability $P(C\cup N_1\cup ...\cup N_k \,|\, \mathbf {e})$ that $e$ belongs to one of $C,N_1,...,N_k$. If this probability is sufficiently high (i.e. higher than 0.5), we use a multinomial logistic regression classifier to decide which of these categories $e$ is most likely to belong to. Geometrically, we can think of the Gaussian model as capturing the relevant local domain, while the multinomial logistic regression model carves up this local domain, similar as in Figure FIGREF2. In practice, we do not know with certainty which categories are conceptual neighbors of $C$. Instead, we select the $k$ categories (for some fixed constant $k$), among all the siblings of $C$, which are most likely to be conceptual neighbors, according to the text classifier from Section SECREF12. Experiments The central problem we consider is category induction: given some instances of a category, predict which other individuals are likely to be instances of that category. When enough instances are given, standard approaches such as the Gaussian classifier from Section UNKREF9, or even a simple SVM classifier, can perform well on this task. For many categories, however, we only have access to a few instances, either because the considered ontology is highly incomplete or because the considered category only has few actual instances. The main research question which we want to analyze is whether (predicted) conceptual neighborhood can help to obtain better category induction models in such cases. In Section SECREF16, we first provide more details about the experimental setting that we followed. Section SECREF23 then discusses our main quantitative results. Finally, in Section SECREF26 we present a qualitative analysis. Experiments ::: Experimental setting ::: Taxonomy As explained in Section SECREF3, we used BabelNet BIBREF29 as our reference taxonomy. BabelNet is a large-scale full-fledged taxonomy consisting of heterogeneous sources such as WordNet BIBREF36, Wikidata BIBREF37 and WiBi BIBREF38, making it suitable to test our hypothesis in a general setting. Vector space embeddings. Both the distant labelling method from Section SECREF8 and the category induction model itself need access to vector representations of the considered instances. To this end, we used the NASARI vectors, which have been learned from Wikipedia and are already linked to BabelNet BIBREF1. BabelNet category selection. To test our proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances. This is motivated by the view that conceptual neighborhood is mostly useful in cases where the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing. To tune the prior probability $\lambda _A$ for these categories, we hold out 10% from the training set as a validation set. The conceptual neighbors among the considered test categories are predicted using the classifier from Section SECREF12. To obtain the distant supervision labels needed to train that classifier, we consider all BabelNet categories with at least 50 instances. 
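Returning to the Step 3 model described at the start of this section, the induction classifier can be sketched by reusing the DiagonalGaussian and gaussian_posterior helpers from the Step 1 sketch; the predicted neighbors, the background density and the prior are assumed to be given.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


class NeighborhoodInduction:
    """Gaussian over C and its predicted neighbors N_1..N_k models the local
    domain; a multiclass logistic regression carves it up (cf. Figure FIGREF2)."""

    def fit(self, target_X, neighbor_Xs, bg_density, prior):
        groups = [target_X] + list(neighbor_Xs)   # class 0 is the target C
        X = np.vstack(groups)
        self.domain = DiagonalGaussian().fit(X)   # from the Step 1 sketch
        self.bg, self.prior = bg_density, prior
        y = np.concatenate([np.full(len(g), i) for i, g in enumerate(groups)])
        self.lr = LogisticRegression(max_iter=1000).fit(X, y)  # multiclass over C, N_1..N_k
        return self

    def is_instance(self, E):
        """True where e lies in the local domain and is assigned to C."""
        in_domain = gaussian_posterior(E, self.domain, self.bg, self.prior) > 0.5
        return in_domain & (self.lr.predict(E) == 0)
```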
This ensures that the distant supervision labels are sufficiently accurate and that there is no overlap with the categories which are used for evaluating the model. Text classifier training. As the text corpus to extract sentences for category pairs we used the English Wikipedia. In particular, we used the dump of November 2014, for which a disambiguated version is available online. This disambiguated version was constructed using the shallow disambiguation algorithm of BIBREF30 mancini-etal-2017-embedding. As explained in Section SECREF12, for each pair of categories we extracted all the sentences where they co-occur, including a maximum window size of 10 tokens between their occurrences, and 10 tokens to the left and right of the first and second category within the sentence, respectively. For the averaging-based sentence representations we used the 300-dimensional pre-trained GloVe word embeddings BIBREF39. To obtain the contextualized representations we used the pre-trained 768-dimensional BERT-base model BIBREF33. The text classifier is trained on 3,552 categories which co-occur at least once in the same sentence in the Wikipedia corpus, using the corresponding scores $s_{AB}$ as the supervision signal (see Section SECREF12). To inspect how well conceptual neighborhood can be predicted from text, we performed a 10-fold cross validation over the training data, removing for this experiment the unclear cases (i.e., those category pairs with $s_{AB}$ scores between $0.4$ and $0.6$). We also considered a simple baseline based on the number of co-occurring sentences for each pair, which we might expect to be a reasonably strong indicator of conceptual neighborhood, i.e. the more often two categories are mentioned in the same sentence, the more likely it is that they are conceptual neighbors. The results for this cross-validation experiment are summarized in Table TABREF22. Surprisingly, perhaps, the word vector averaging method seems more robust overall, while being considerably faster than the method using BERT. The results also confirm the intuition that the number of co-occurring sentences is positively correlated with conceptual neighborhood, although the results for this baseline are clearly weaker than those for the proposed classifiers. Baselines. To put the performance of our model in perspective, we consider three baseline methods for category induction. First, we consider the performance of the Gaussian classifier from Section UNKREF9, as a representative example of how well we can model each category when only considering its given instances; this model will be referred to as Gauss. Second, we consider a variant of the proposed model in which we assume that all siblings of the category are conceptual neighbors; this model will be referred to as Multi. Third, we consider a variant of our model in which the neighbors are selected based on similarity. To this end, we represent each BabelNet category by its vector from the NASARI space. From the set of siblings of the target category $C$, we then select the $k$ categories whose vector representation is most similar to that of $C$, in terms of cosine similarity. This baseline will be referred to as Similarity$_k$, with $k$ the number of selected neighbors. We refer to our model as SECOND-WEA$_k$ or SECOND-BERT$_k$ (SEmantic categories with COnceptual NeighborhooD), depending on whether the word embedding averaging strategy is used or the method using BERT.
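For comparison, the Similarity$_k$ baseline only requires cosine similarity between category vectors. A small sketch, assuming the NASARI-style category vectors are available as a dictionary of NumPy arrays (names are placeholders):

```python
import numpy as np


def most_similar_siblings(target_vec, sibling_vecs, k):
    """Similarity_k baseline: return the k siblings whose category vectors are
    most cosine-similar to the target category's vector."""
    names = list(sibling_vecs)
    M = np.stack([sibling_vecs[name] for name in names])
    sims = M @ target_vec / (
        np.linalg.norm(M, axis=1) * np.linalg.norm(target_vec) + 1e-12
    )
    return [names[i] for i in np.argsort(-sims)[:k]]


# e.g. neighbors = most_similar_siblings(cat_vec["Songbook"], sibling_vectors, k=4)
```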
Experiments ::: Quantitative Results Our main results for the category induction task are summarized in Table TABREF24. In this table, we show results for different choices of the number of selected conceptual neighbors $k$, ranging from 1 to 5. As can be seen from the table, our approach substantially outperforms all baselines, with Multi being the most competitive baseline. Interestingly, for the Similarity baseline, the higher the number of neighbors, the more the performance approaches that of Multi. The relatively strong performance of Multi shows that using the siblings of a category in the BabelNet taxonomy is in general useful. However, as our results show, better results can be obtained by focusing on the predicted conceptual neighbors only. It is interesting to see that even selecting a single conceptual neighbor is already sufficient to substantially outperform the Gaussian model, although the best results are obtained for $k=4$. Comparing the WEA and BERT variants, it is notable that BERT is more successful at selecting the single best conceptual neighbor (reflected in an F1 score of 47.0 compared to 41.9). However, for $k \ge 2$, the results of WEA and BERT are largely comparable. Experiments ::: Qualitative Analysis To illustrate how conceptual neighborhood can improve classification results, Fig. FIGREF25 shows the first two principal components of the embeddings of the instances of three BabelNet categories: Songbook, Brochure and Guidebook. All three categories can be considered to be conceptual neighbors. Brochure and Guidebook are closely related categories, and we may expect there to exist borderline cases between them. This can be clearly seen in the figure, where some instances are located almost exactly on the boundary between the two categories. On the other hand, Songbook is slightly more separated in the space. Let us now consider the left-most data point from the Songbook test set, which is essentially an outlier, being more similar to instances of Guidebook than to typical Songbook instances. When using a Gaussian model, this data point would not be recognised as a plausible instance. When incorporating the fact that Brochure and Guidebook are conceptual neighbors of Songbook, however, it is more likely to be classified correctly. To illustrate the notion of conceptual neighborhood itself, Table TABREF27 displays some selected category pairs from the training set (i.e. the category pairs that were used to train the text classifier), which intuitively correspond to conceptual neighbors. The left column contains some selected examples of category pairs with a high $s_{AB}$ score of at least 0.9. As these examples illustrate, we found that a high $s_{AB}$ score was indeed often predictive of conceptual neighborhood. As the right column of this table illustrates, there are several category pairs with a lower $s_{AB}$ score of around 0.5 which intuitively still seem to correspond to conceptual neighbors. When looking at category pairs with even lower scores, however, conceptual neighborhood becomes rare. Moreover, while there are several pairs with high scores which are not actually conceptual neighbors (e.g. the pair Actor – Makeup Artist), they tend to be categories which are still closely related. This means that the impact of incorrectly treating them as conceptual neighbors on the performance of our method is likely to be limited.
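The kind of projection shown in Fig. FIGREF25 can be reproduced with a few lines of scikit-learn and matplotlib, assuming the instance embeddings of each category are available as NumPy arrays (a sketch; the category names are just the ones discussed above):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA


def plot_categories(instance_vectors):
    """instance_vectors maps a category name to an array of instance embeddings,
    e.g. {'Songbook': X1, 'Brochure': X2, 'Guidebook': X3}."""
    X = np.vstack(list(instance_vectors.values()))
    coords = PCA(n_components=2).fit_transform(X)
    start = 0
    for name, vecs in instance_vectors.items():
        end = start + len(vecs)
        plt.scatter(coords[start:end, 0], coords[start:end, 1], label=name, s=12)
        start = end
    plt.legend()
    plt.xlabel("PC 1")
    plt.ylabel("PC 2")
    plt.show()
```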
On the other hand, when looking at category pairs with a very low confidence score we find many unrelated pairs, which we can expect to be more harmful when considered as conceptual neighbors, as the combined Gaussian will then cover a much larger part of the space. Some examples of such pairs include Primary school – Financial institution, Movie theatre – Housing estate, Corporate title – Pharaoh and Fraternity – Headquarters. Finally, in Tables TABREF28 and TABREF29, we show examples of the top conceptual neighbors that were selected for some categories from the test set. Table TABREF28 shows examples of BabelNet categories for which the F1 score of our SECOND-WEA$_1$ classifier was rather low. As can be seen, the conceptual neighbors that were chosen in these cases are not suitable. For instance, Bachelor's degree is a near-synonym of Undergraduate degree, hence assuming them to be conceptual neighbors would clearly be detrimental. In contrast, when looking at the examples in Table TABREF29, where categories are shown with a higher F1 score, we find examples of conceptual neighbors that are intuitively much more meaningful. Conclusions We have studied the role of conceptual neighborhood for modelling categories, focusing especially on categories with a relatively small number of instances, for which standard modelling approaches are challenging. To this end, we have first introduced a method for predicting conceptual neighborhood from text, by taking advantage of BabelNet to implement a distant supervision strategy. We then used the resulting classifier to identify the most likely conceptual neighbors of a given target category, and empirically showed that incorporating these conceptual neighbors leads to a better performance in a category induction task. In terms of future work, it would be interesting to look at other types of lexical relations that can be predicted from text. One possible strategy would be to predict conceptual betweenness, where a category $B$ is said to be between $A$ and $C$ if $B$ has all the properties that $A$ and $C$ have in common BIBREF40 (e.g. we can think of wine as being conceptually between beer and rum). In particular, if $B$ is predicted to be conceptually between $A$ and $C$ then we would also expect the region modelling $B$ to be between the regions modelling $A$ and $C$. Acknowledgments. Jose Camacho-Collados, Luis Espinosa-Anke and Steven Schockaert were funded by ERC Starting Grant 637277. Zied Bouraoui was supported by CNRS PEPS INS2I MODERN.
Once this classifier has been trained, we can then use it to predict conceptual neighborhood for categories for which only few instances are known.
f9c5799091e7e35a8133eee4d95004e1b35aea00
f9c5799091e7e35a8133eee4d95004e1b35aea00_0
Q: What experiment result led to conclussion that reducing the number of layers of the decoder does not matter much? Text: Introduction The performance of state-of-the-art MT systems is not perfect, thus, human interventions are still required to correct machine translated texts into publishable quality translations BIBREF0. Automatic post-editing (APE) is a method that aims to automatically correct errors made by MT systems before performing actual human post-editing (PE) BIBREF1, thereby reducing the translators' workload and increasing productivity BIBREF2. APE systems trained on human PE data serve as MT post-processing modules to improve the overall performance. APE can therefore be viewed as a 2nd-stage MT system, translating predictable error patterns in MT output to their corresponding corrections. APE training data minimally involves MT output ($mt$) and the human post-edited ($pe$) version of $mt$, but additionally using the source ($src$) has been shown to provide further benefits BIBREF3, BIBREF4, BIBREF5. To provide awareness of errors in $mt$ originating from $src$, attention mechanisms BIBREF6 allow modeling of non-local dependencies in the input or output sequences, and importantly also global dependencies between them (in our case $src$, $mt$ and $pe$). The transformer architecture BIBREF7 is built solely upon such attention mechanisms completely replacing recurrence and convolutions. The transformer uses positional encoding to encode the input and output sequences, and computes both self- and cross-attention through so-called multi-head attentions, which are facilitated by parallelization. Such multi-head attention allows to jointly attend to information at different positions from different representation subspaces, e.g. utilizing and combining information from $src$, $mt$, and $pe$. In this paper, we present a multi-source neural APE architecture called transference. Our model contains a source encoder which encodes $src$ information, a second encoder ($enc_{src \rightarrow mt}$) which takes the encoded representation from the source encoder ($enc_{src}$), combines this with the self-attention-based encoding of $mt$ ($enc_{mt}$), and prepares a representation for the decoder ($dec_{pe}$) via cross-attention. Our second encoder ($enc_{src \rightarrow mt}$) can also be viewed as a standard transformer decoding block, however, without masking, which acts as an encoder. We thus recombine the different blocks of the transformer architecture and repurpose them for the APE task in a simple yet effective way. The suggested architecture is inspired by the two-step approach professional translators tend to use during post-editing: first, the source segment is compared to the corresponding translation suggestion (similar to what our $enc_{src \rightarrow mt}$ is doing), then corrections to the MT output are applied based on the encountered errors (in the same way that our $dec_{pe}$ uses the encoded representation of $enc_{src \rightarrow mt}$ to produce the final translation). 
The paper makes the following contributions: (i) we propose a new multi-encoder model for APE that consists only of standard transformer encoding and decoding blocks, (ii) by using a mix of self- and cross-attention we provide a representation of both $src$ and $mt$ for the decoder, allowing it to better capture errors in $mt$ originating from $src$; this advances the state-of-the-art in APE in terms of BLEU and TER, and (iii), we analyze the effect of varying the number of encoder and decoder layers BIBREF8, indicating that the encoders contribute more than decoders in transformer-based neural APE. Related Research Recent advances in APE research are directed towards neural APE, which was first proposed by Pal:2016:ACL and junczysdowmunt-grundkiewicz:2016:WMT for the single-source APE scenario which does not consider $src$, i.e. $mt \rightarrow pe$. In their work, junczysdowmunt-grundkiewicz:2016:WMT also generated a large synthetic training dataset through back translation, which we also use as additional training data. Exploiting source information as an additional input can help neural APE to disambiguate corrections applied at each time step; this naturally leads to multi-source APE ($\lbrace src, mt\rbrace \rightarrow pe$). A multi-source neural APE system can be configured either by using a single encoder that encodes the concatenation of $src$ and $mt$ BIBREF9 or by using two separate encoders for $src$ and $mt$ and passing the concatenation of both encoders' final states to the decoder BIBREF10. A few approaches to multi-source neural APE were proposed in the WMT 2017 APE shared task. Junczysdowmunt:2017:WMT combine both $mt$ and $src$ in a single neural architecture, exploring different combinations of attention mechanisms including soft attention and hard monotonic attention. Chatterjee-EtAl:2017:WMT2 built upon the two-encoder architecture of multi-source models BIBREF10 by means of concatenating both weighted contexts of encoded $src$ and $mt$. Varis-bojar:2017:WMT compared two multi-source models, one using a single encoder with concatenation of $src$ and $mt$ sentences, and a second one using two character-level encoders for $mt$ and $src$ along with a character-level decoder. Recently, in the WMT 2018 APE shared task, several adaptations of the transformer architecture have been presented for multi-source APE. pal-EtAl:2018:WMT proposed an APE model that uses three self-attention-based encoders. They introduce an additional joint encoder that attends over a combination of the two encoded sequences from $mt$ and $src$. tebbifakhr-EtAl:2018:WMT, the NMT-subtask winner of WMT 2018 ($wmt18^{nmt}_{best}$), employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics. shin-lee:2018:WMT propose that each encoder has its own self-attention and feed-forward layer to process each input separately. On the decoder side, they add two additional multi-head attention layers, one for $src \rightarrow mt$ and another for $src \rightarrow pe$. Thereafter another multi-head attention between the output of those attention layers helps the decoder to capture common words in $mt$ which should remain in $pe$. The APE PBSMT-subtask winner of WMT 2018 ($wmt18^{smt}_{best}$) BIBREF11 also presented another transformer-based multi-source APE which uses two encoders and stacks an additional cross-attention component for $src \rightarrow pe$ above the previous cross-attention for $mt \rightarrow pe$. 
Comparing shin-lee:2018:WMT's approach with the winner system, there are only two differences in the architecture: (i) the cross-attention order of $src \rightarrow mt$ and $src \rightarrow pe$ in the decoder, and (ii) $wmt18^{smt}_{best}$ additionally shares parameters between two encoders. Transference Model for APE We propose a multi-source transformer model called transference ($\lbrace src,mt\rbrace _{tr} \rightarrow pe$, Figure FIGREF1), which takes advantage of both the encodings of $src$ and $mt$ and attends over a combination of both sequences while generating the post-edited sentence. The second encoder, $enc_{src \rightarrow mt}$, makes use of the first encoder $enc_{src}$ and a sub-encoder $enc_{mt}$ for considering $src$ and $mt$. Here, the $enc_{src}$ encoder and the $dec_{pe}$ decoder are equivalent to the original transformer for neural MT. Our $enc_{src \rightarrow mt}$ follows an architecture similar to the transformer's decoder, the difference being that no masked multi-head self-attention is used to process $mt$. One self-attended encoder for $src$, $\mathbf {s}$ = $(s_1, s_2, \ldots , s_k)$, returns a sequence of continuous representations, $enc_{src}$, and a second self-attended sub-encoder for $mt$, $\mathbf {m}$ = $(m_1, m_2, \ldots , m_l)$, returns another sequence of continuous representations, $enc_{mt}$. Self-attention at this point provides the advantage of aggregating information from all of the words, including $src$ and $mt$, and successively generates a new representation per word informed by the entire $src$ and $mt$ context. The internal $enc_{mt}$ representation performs cross-attention over $enc_{src}$ and prepares a final representation ($enc_{src \rightarrow mt}$) for the decoder ($dec_{pe}$). The decoder then generates the $pe$ output in sequence, $\mathbf {p}$ = $(p_1, p_2, \ldots , p_n)$, one word at a time from left to right by attending to previously generated words as well as the final representations ($enc_{src \rightarrow mt}$) generated by the encoder. To summarize, our multi-source APE implementation extends Vaswani:NIPS2017 by introducing an additional encoding block by which $src$ and $mt$ communicate with the decoder. Our proposed approach differs from the WMT 2018 PBSMT winner system in several ways: (i) we use the original transformer's decoder without modifications; (ii) one of our encoder blocks ($enc_{src \rightarrow mt}$) is identical to the transformer's decoder block but uses no masking in the self-attention layer, thus having one self-attention layer and an additional cross-attention for $src \rightarrow mt$; and (iii) in the decoder layer, the cross-attention is performed between the encoded representation from $enc_{src \rightarrow mt}$ and $pe$. Our approach also differs from the WMT 2018 NMT winner system: (i) $wmt18^{nmt}_{best}$ concatenates the encoded representation of two encoders and passes it as the key to the attention layer of the decoder, and (ii), the system additionally employs sequence-level loss functions based on maximum likelihood estimation and minimum risk training in order to avoid exposure bias during training. The main intuition is that our $enc_{src \rightarrow mt}$ attends over the $src$ and $mt$ and informs the $pe$ to better capture, process, and share information between $src$-$mt$-$pe$, which efficiently models error patterns and the corresponding corrections. Our model performs better than past approaches, as the experiment section will show. 
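A compact PyTorch sketch of the three blocks is given below. It is not the authors' implementation: positional encodings, padding masks, label smoothing and the output-projection details are omitted, the single shared embedding over the joint vocabulary is a simplification, and the hyper-parameters are placeholders. The point it illustrates is that $enc_{src \rightarrow mt}$ is simply a transformer decoder stack used without a causal mask, while $dec_{pe}$ is a standard masked decoder attending to its output.

```python
import torch
import torch.nn as nn


class Transference(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, n_src=6, n_mt=6, n_pe=6):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)  # joint BPE vocabulary
        self.enc_src = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, 2048, batch_first=True), n_src)
        # enc_{src->mt}: a decoder stack used as an encoder; its self-attention
        # over mt stays unmasked, its cross-attention looks at enc_src.
        self.enc_src_mt = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, 2048, batch_first=True), n_mt)
        # dec_pe: a standard transformer decoder with causal masking.
        self.dec_pe = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, 2048, batch_first=True), n_pe)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src, mt, pe_in):
        h_src = self.enc_src(self.emb(src))
        h_src_mt = self.enc_src_mt(self.emb(mt), h_src)   # no tgt_mask: unmasked
        L = pe_in.size(1)
        causal = torch.triu(
            torch.full((L, L), float("-inf"), device=pe_in.device), diagonal=1)
        h_pe = self.dec_pe(self.emb(pe_in), h_src_mt, tgt_mask=causal)
        return self.out(h_pe)                             # logits over pe tokens
```

Varying n_src, n_mt and n_pe in this sketch corresponds to the 6-6-6, 6-6-4 and 6-4-6 configurations analyzed in the experiments.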
Experiments We explore our approach on both APE sub-tasks of WMT 2018, where the 1st-stage MT system to which APE is applied is either a phrase-based statistical machine translation (PBSMT) or a neural machine translation (NMT) model. For the PBSMT task, we compare against four baselines: the raw SMT output provided by the 1st-stage PBSMT system, the best-performing systems from WMT APE 2018 ($\mathbf {wmt18^{smt}_{best}}$), which are a single model and an ensemble model by junczysdowmunt-grundkiewicz:2018:WMT, as well as a transformer trying to directly translate from $src$ to $pe$ (Transformer ($\mathbf {src \rightarrow pe}$)), thus performing translation instead of APE. We evaluate the systems using BLEU BIBREF12 and TER BIBREF13. For the NMT task, we consider two baselines: the raw NMT output provided by the 1st-stage NMT system and the best-performing system from the WMT 2018 NMT APE task ($\mathbf {wmt18^{nmt}_{best}}$) BIBREF14. Apart from the multi-encoder transference architecture described above ($\lbrace src,mt\rbrace _{tr} \rightarrow pe$) and ensembling of this architecture, two simpler versions are also analyzed: first, a `mono-lingual' ($\mathbf {mt \rightarrow pe}$) APE model using only parallel $mt$–$pe$ data and therefore only a single encoder, and second, an identical single-encoder architecture, however, using the concatenated $src$ and $mt$ text as input ($\mathbf {\lbrace src+mt\rbrace \rightarrow pe}$) BIBREF9. Experiments ::: Data For our experiments, we use the English–German WMT 2016 BIBREF4, 2017 BIBREF5 and 2018 BIBREF15 APE task data. All these released APE datasets consist of English–German triplets containing source English text ($src$) from the IT domain, the corresponding German translations ($mt$) from a 1st-stage MT system, and the corresponding human-post-edited version ($pe$). The sizes of the datasets (train; dev; test), in terms of number of sentences, are (12,000; 1,000; 2,000), (11,000; 0; 2,000), and (13,442; 1,000; 1,023), for the 2016 PBSMT, the 2017 PBSMT, and the 2018 NMT data, respectively. One should note that for WMT 2018, we carried out experiments only for the NMT sub-task and ignored the data for the PBSMT task. Since the WMT APE datasets are small in size, we use `artificial training data' BIBREF16 containing 4.5M sentences as additional resources, 4M of which are weakly similar to the WMT 2016 training data, while 500K are very similar according to TER statistics. For experimenting on the NMT data, we additionally use the synthetic eScape APE corpus BIBREF17, consisting of $\sim $7M triples. For cleaning this noisy eScape dataset containing many unrelated language words (e.g. Chinese), we perform the following two steps: (i) we use the cleaning process described in tebbifakhr-EtAl:2018:WMT, and (ii) we use the Moses BIBREF18 corpus cleaning scripts with minimum and maximum number of tokens set to 1 and 100, respectively. After cleaning, we perform punctuation normalization, and then use the Moses tokenizer BIBREF18 to tokenize the eScape corpus with `no-escape' option. Finally, we apply true-casing. The cleaned version of the eScape corpus contains $\sim $6.5M triplets. Experiments ::: Experiment Setup To build models for the PBSMT tasks from 2016 and 2017, we first train a generic APE model using all the training data (4M + 500K + 12K + 11K) described in Section SECREF2. Afterwards, we fine-tune the trained model using the 500K artificial and 23K (12K + 11K) real PE training data. 
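The eScape cleaning pipeline above relies on the original Moses scripts; as a rough Python equivalent, the sketch below uses sacremoses (a port of the Moses punctuation normalizer and tokenizer) and reimplements the 1–100 token length filter directly. The final truecasing step is only indicated in a comment, and all names are illustrative.

```python
from sacremoses import MosesPunctNormalizer, MosesTokenizer

norm = {"en": MosesPunctNormalizer(lang="en"), "de": MosesPunctNormalizer(lang="de")}
tok = {"en": MosesTokenizer(lang="en"), "de": MosesTokenizer(lang="de")}


def prep(text, lang):
    """Punctuation-normalize and tokenize with the Moses 'no-escape' behaviour."""
    return tok[lang].tokenize(norm[lang].normalize(text), escape=False, return_str=True)


def clean_triplets(triplets, min_tokens=1, max_tokens=100):
    """Keep (src, mt, pe) triples whose token counts fall within [min, max],
    mirroring the Moses corpus-cleaning bounds, then preprocess each side.
    Truecasing (Moses truecase.perl) would be applied as a final step."""
    kept = []
    for src, mt, pe in triplets:
        sides = (prep(src, "en"), prep(mt, "de"), prep(pe, "de"))
        if all(min_tokens <= len(s.split()) <= max_tokens for s in sides):
            kept.append(sides)
    return kept
```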
We use the WMT 2016 development data (dev2016) containing 1,000 triplets to validate the models during training. To test our system performance, we use the WMT 2016 and 2017 test data (test2016, test2017) as two sub-experiments, each containing 2,000 triplets ($src$, $mt$ and $pe$). We compare the performance of our system with the four different baseline systems described above: raw MT, $wmt18^{smt}_{best}$ single and ensemble, as well as Transformer ($src \rightarrow pe$). Additionally, we check the performance of our model on the WMT 2018 NMT APE task (where unlike in previous tasks, the 1st-stage MT system is provided by NMT): for this, we explore two experimental setups: (i) we use the PBSMT task's APE model as a generic model which is then fine-tuned to a subset (12k) of the NMT data ($\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{generic, smt}}_{{}}$). One should note that it has been argued that the inclusion of SMT-specific data could be harmful when training NMT APE models BIBREF11. (ii), we train a completely new generic model on the cleaned eScape data ($\sim $6.5M) along with a subset (12K) of the original training data released for the NMT task ($\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{generic, nmt}}_{{}}$). The aforementioned 12K NMT data are the first 12K of the overall 13.4K NMT data. The remaining 1.4K are used as validation data. The released development set (dev2018) is used as test data for our experiment, alongside the test2018, for which we could only obtain results for a few models by the WMT 2019 task organizers. We also explore an additional fine-tuning step of $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{generic, nmt}}_{{}}$ towards the 12K NMT data (called $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{ft}}_{{}}$), and a model averaging the 8 best checkpoints of $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{ft}}_{{}}$, which we call $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{ft}}_{{avg}}$. Last, we analyze the importance of our second encoder ($enc_{src \rightarrow mt}$), compared to the source encoder ($enc_{src}$) and the decoder ($dec_{pe}$), by reducing and expanding the amount of layers in the encoders and the decoder. Our standard setup, which we use for fine-tuning, ensembling etc., is fixed to 6-6-6 for $N_{src}$-$N_{mt}$-$N_{pe}$ (cf. Figure FIGREF1), where 6 is the value that was proposed by Vaswani:NIPS2017 for the base model. We investigate what happens in terms of APE performance if we change this setting to 6-6-4 and 6-4-6. To handle out-of-vocabulary words and reduce the vocabulary size, instead of considering words, we consider subword units BIBREF19 by using byte-pair encoding (BPE). In the preprocessing step, instead of learning an explicit mapping between BPEs in the $src$, $mt$ and $pe$, we define BPE tokens by jointly processing all triplets. Thus, $src$, $mt$ and $pe$ derive a single BPE vocabulary. Since $mt$ and $pe$ belong to the same language (German) and $src$ is a close language (English), they naturally share a good fraction of BPE tokens, which reduces the vocabulary size to 28k. Experiments ::: Hyper-parameter Setup We follow a similar hyper-parameter setup for all reported systems. All encoders (for $\lbrace src,mt\rbrace _{tr} \rightarrow pe$), and the decoder, are composed of a stack of $N_{src} = N_{mt} = N_{pe} = 6$ identical layers followed by layer normalization. 
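Returning to the joint BPE vocabulary described above: as a stand-in for the original BPE tooling, the same idea can be sketched with sentencepiece in BPE mode, concatenating all three sides before training so that $src$, $mt$ and $pe$ share one subword vocabulary (file names are placeholders; the paper reports roughly 28k types).

```python
import sentencepiece as spm

# Train one BPE model on the concatenation of all three sides so that
# src, mt and pe share a single subword vocabulary.
with open("joint.txt", "w", encoding="utf-8") as out:
    for path in ("train.src", "train.mt", "train.pe"):
        with open(path, encoding="utf-8") as f:
            out.writelines(f)

spm.SentencePieceTrainer.train(
    input="joint.txt",
    model_prefix="joint_bpe",
    vocab_size=28000,
    model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="joint_bpe.model")
pieces = sp.encode("Das ist ein Beispiel .", out_type=str)
```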
The learning rate is varied throughout the training process, and increasing for the first training steps $warmup_{steps} = 8000$ and afterwards decreasing as described in BIBREF7. All remaining hyper-parameters are set analogously to those of the transformer's base model, except that we do not perform checkpoint averaging. At training time, the batch size is set to 25K tokens, with a maximum sentence length of 256 subwords. After each epoch, the training data is shuffled. During decoding, we perform beam search with a beam size of 4. We use shared embeddings between $mt$ and $pe$ in all our experiments. Results The results of our four models, single-source ($\mathbf {mt \rightarrow pe}$), multi-source single encoder ($\mathbf {\lbrace src + pe\rbrace \rightarrow pe}$), transference ($\mathbf {\lbrace src,mt\rbrace ^{smt}_{tr} \rightarrow pe}$), and ensemble, in comparison to the four baselines, raw SMT, $\mathbf {wmt18^{smt}_{best}}$ BIBREF11 single and ensemble, as well as Transformer ($\mathbf {src \rightarrow pe}$), are presented in Table TABREF5 for test2016 and test2017. Table TABREF9 reports the results obtained by our transference model ($\mathbf {\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{}}_{{}}}$) on the WMT 2018 NMT data for dev2018 (which we use as a test set) and test2018, compared to the baselines raw NMT and $\mathbf {wmt18^{nmt}_{best}}$. Results ::: Baselines The raw SMT output in Table TABREF5 is a strong black-box PBSMT system (i.e., 1st-stage MT). We report its performance observed with respect to the ground truth ($pe$), i.e., the post-edited version of $mt$. The original PBSMT system scores over 62 BLEU points and below 25 TER on test2016 and test2017. Using a Transformer ($src \rightarrow pe$), we test if APE is really useful, or if potential gains are only achieved due to the good performance of the transformer architecture. While we cannot do a full training of the transformer on the data that the raw MT engine was trained on due to the unavailability of the data, we use our PE datasets in an equivalent experimental setup as for all other models. The results of this system (Exp. 1.2 in Table TABREF5) show that the performance is actually lower across both test sets, -5.52/-9.43 absolute points in BLEU and +5.21/+7.72 absolute in TER, compared to the raw SMT baseline. We report four results from $\mathbf {wmt18^{smt}_{best}}$, (i) $wmt18^{smt}_{best}$ ($single$), which is the core multi-encoder implementation without ensembling but with checkpoint averaging, (ii) $wmt18^{smt}_{best}$ ($x4$) which is an ensemble of four identical `single' models trained with different random initializations. The results of $wmt18^{smt}_{best}$ ($single$) and $wmt18^{smt}_{best}$ ($x4$) (Exp. 1.3 and 1.4) reported in Table TABREF5 are from junczysdowmunt-grundkiewicz:2018:WMT. Since their training procedure slightly differs from ours, we also trained the $wmt18^{smt}_{best}$ system using exactly our experimental setup in order to make a fair comparison. This yields the baselines (iii) $wmt18^{smt,generic}_{best}$ ($single$) (Exp. 1.5), which is similar to $wmt18^{smt}_{best}$ ($single$), however, the training parameters and data are kept in line with our transference general model (Exp. 2.3) and (iv) $wmt18^{smt,ft}_{best}$ ($single$) (Exp. 1.6), which is also trained maintaining the equivalent experimental setup compared to the fine tuned version of the transference general model (Exp. 3.3). 
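For reference, the warm-up/decay schedule from BIBREF7 mentioned at the beginning of this section can be written compactly as follows (a sketch; $d_{model}=512$ is assumed from the transformer base configuration):

```python
def transformer_lr(step, d_model=512, warmup_steps=8000):
    """Inverse-square-root schedule from Vaswani et al. (2017):
    linear warm-up for the first `warmup_steps`, then decay with step**-0.5."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)


# e.g. set the optimizer learning rate each update:
# for group in optimizer.param_groups:
#     group["lr"] = transformer_lr(global_step)
```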
Compared to both raw SMT and Transformer ($src \rightarrow pe$) we see strong improvements for this state-of-the-art model, with BLEU scores of at least 68.14 and TER scores of at most 20.98 across the PBSMT testsets. $wmt18^{smt}_{best}$, however, performs better in its original setup (Exp. 1.3 and 1.4) compared to our experimental setup (Exp. 1.5 and 1.6). Results ::: Single-Encoder Transformer for APE The two transformer architectures $\mathbf {mt \rightarrow pe}$ and $\mathbf {\lbrace src+mt\rbrace \rightarrow pe}$ use only a single encoder. Table TABREF5 shows that $\mathbf {mt \rightarrow pe}$ (Exp. 2.1) provides better performance (+4.42 absolute BLEU on test2017) compared to the original SMT, while $\mathbf {\lbrace src+mt\rbrace \rightarrow pe}$ (Exp. 2.2) provides further improvements by additionally using the $src$ information. $\mathbf {\lbrace src+mt\rbrace \rightarrow pe}$ improves over $\mathbf {mt \rightarrow pe}$ by +1.62/+1.35 absolute BLEU points on test2016/test2017. After fine-tuning, both single encoder transformers (Exp. 3.1 and 3.2 in Table TABREF5) show further improvements, +0.87 and +0.31 absolute BLEU points, respectively, for test2017 and a similar improvement for test2016. Results ::: Transference Transformer for APE In contrast to the two models above, our transference architecture uses multiple encoders. To fairly compare to $wmt18^{smt}_{best}$, we retrain the $wmt18^{smt}_{best}$ system with our experimental setup (cf. Exp. 1.5 and 1.6 in Table TABREF5). $wmt18^{smt,generic}_{best}$ (single) is a generic model trained on all the training data; which is afterwards fine-tuned with 500K artificial and 23K real PE data ($wmt18^{smt,ft}_{best}$ (single)). It is to be noted that in terms of performance the data processing method described in junczysdowmunt-grundkiewicz:2018:WMT reported in Exp. 1.3 is better than ours (Exp. 1.6). The fine-tuned version of the $\lbrace src,mt\rbrace ^{smt}_{tr} \rightarrow pe$ model (Exp. 3.3 in Table TABREF5) outperforms $wmt18^{smt}_{best}$ (single) (Exp. 1.3) in BLEU on both test sets, however, the TER score for test2016 increases. One should note that $wmt18^{smt}_{best}$ (single) follows the transformer base model, which is an average of five checkpoints, while our Exp. 3.3 is not. When ensembling the 4 best checkpoints of our $\lbrace src,mt\rbrace ^{smt}_{tr} \rightarrow pe$ model (Exp. 4.1), the result beats the $wmt18^{smt}_{best}$ (x4) system, which is an ensemble of four different randomly initialized $wmt18^{smt}_{best}$ (single) systems. Our $\mathbf {ensemble^{smt} (x3)}$ combines two $\lbrace src,mt\rbrace ^{smt}_{tr} \rightarrow pe$ (Exp. 2.3) models initialized with different random weights with the ensemble of the fine-tuned transference model Exp3.3$^{smt}_{ens4ckpt}$(Exp. 4.1). This ensemble provides the best results for all datasets, providing roughly +1 BLEU point and -0.5 TER when comparing against $wmt18^{smt}_{best}$ (x4). The results on the WMT 2018 NMT datasets (dev2018 and test2018) are presented in Table TABREF9. The raw NMT system serves as one baseline against which we compare the performance of the different models. We evaluate the system hypotheses with respect to the ground truth ($pe$), i.e., the post-edited version of $mt$. The baseline original NMT system scores 76.76 BLEU points and 15.08 TER on dev2018, and 74.73 BLEU points and 16.84 TER on test2018. 
For the WMT 2018 NMT data we first test our $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{generic,smt}}_{{}}$ model, which is the model from Exp. 3.3 fine-tuned towards NMT data as described in Section SECREF3. Table TABREF9 shows that our PBSMT APE model fine-tuned towards NMT (Exp. 7) can even slightly improve over the already very strong NMT system by about +0.3 BLEU and -0.1 TER, although these improvements are not statistically significant. The overall results improve when we train our model on eScape and NMT data instead of using the PBSMT model as a basis. Our proposed generic transference model (Exp. 8, $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{generic,nmt}}_{{}}$ shows statistically significant improvements in terms of BLEU and TER compared to the baseline even before fine-tuning, and further improvements after fine-tuning (Exp. 9, $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{ft}}_{{}}$). Finally, after averaging the 8 best checkpoints, our $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{ft}}_{{avg}}$ model (Exp. 10) also shows consistent improvements in comparison to the baseline and other experimental setups. Overall our fine-tuned model averaging the 8 best checkpoints achieves +1.02 absolute BLEU points and -0.69 absolute TER improvements over the baseline on test2018. Table TABREF9 also shows the performance of our model compared to the winner system of WMT 2018 ($wmt18^{nmt}_{best}$) for the NMT task BIBREF14. $wmt18^{nmt}_{best}$ scores 14.78 in TER and 77.74 in BLEU on the dev2018 and 16.46 in TER and 75.53 in BLEU on the test2018. In comparison to $wmt18^{nmt}_{best}$, our model (Exp. 10) achieves better scores in TER on both the dev2018 and test2018, however, in terms of BLEU our model scores slightly lower for dev2018, while some improvements are achieved on test2018. The number of layers ($N_{src}$-$N_{mt}$-$N_{pe}$) in all encoders and the decoder for these results is fixed to 6-6-6. In Exp. 5.1, and 5.2 in Table TABREF5, we see the results of changing this setting to 6-6-4 and 6-4-6. This can be compared to the results of Exp. 2.3, since no fine-tuning or ensembling was performed for these three experiments. Exp. 5.1 shows that decreasing the number of layers on the decoder side does not hurt the performance. In fact, in the case of test2016, we got some improvement, while for test2017, the scores got slightly worse. In contrast, reducing the $enc_{src \rightarrow mt}$ encoder block's depth (Exp. 5.2) does indeed reduce the performance for all four scores, showing the importance of this second encoder. Results ::: Analysis of Error Patterns In Table TABREF11, we analyze and compare the best performing SMT ($ensemble^{smt} (x3)$) and NMT ($\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{ft}}_{{avg}}$) model outputs with the original MT outputs on the WMT 2017 (SMT) APE test set and on the WMT 2018 (NMT) development set. Improvements are measured in terms of number of words which need to be (i) inserted (In), (ii) deleted (De), (iii) substituted (Su), and (iv) shifted (Sh), as per TER BIBREF13, in order to turn the MT outputs into reference translations. Our model provides promising results by significantly reducing the required number of edits (24% overall for PBSMT task and 3.6% for NMT task) across all edit operations, thereby leading to reduced post-editing effort and hence improving human post-editing productivity. 
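The per-operation breakdown reported above can be approximated with a standard Levenshtein alignment, as in the sketch below; it counts insertions, deletions and substitutions only and ignores shifts, so it is not the full TER computation used for the reported numbers.

```python
def edit_operations(hyp, ref):
    """Count insertions, deletions and substitutions needed to turn the
    hypothesis token list into the reference (Levenshtein; no shifts)."""
    m, n = len(hyp), len(ref)
    # dp[i][j] = (cost, ins, dele, sub) for hyp[:i] vs ref[:j]
    dp = [[None] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = (0, 0, 0, 0)
    for i in range(1, m + 1):
        c = dp[i - 1][0]
        dp[i][0] = (c[0] + 1, c[1], c[2] + 1, c[3])          # delete hyp token
    for j in range(1, n + 1):
        c = dp[0][j - 1]
        dp[0][j] = (c[0] + 1, c[1] + 1, c[2], c[3])          # insert ref token
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if hyp[i - 1] == ref[j - 1]:
                cands = [dp[i - 1][j - 1]]                    # match, no cost
            else:
                c = dp[i - 1][j - 1]
                cands = [(c[0] + 1, c[1], c[2], c[3] + 1)]    # substitution
            c = dp[i - 1][j]
            cands.append((c[0] + 1, c[1], c[2] + 1, c[3]))    # deletion
            c = dp[i][j - 1]
            cands.append((c[0] + 1, c[1] + 1, c[2], c[3]))    # insertion
            dp[i][j] = min(cands)
    _, ins, dele, sub = dp[m][n]
    return {"insertions": ins, "deletions": dele, "substitutions": sub}


# e.g. edit_operations("das ist Beispiel".split(), "das ist ein gutes Beispiel".split())
# -> {'insertions': 2, 'deletions': 0, 'substitutions': 0}
```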
When comparing PBSMT to NMT, we see that stronger improvements are achieved for PBSMT, probably because the raw SMT is worse than the raw NMT. For PBSMT, similar results are achieved for In, De, and Sh, while less gains are obtained in terms of Su. For NMT, In is improved most, followed by Su, De, and last Sh. For shifts in NMT, the APE system even creates further errors, instead of reducing them, which is an issue we aim to prevent in the future. Results ::: Discussion The proposed transference architecture ($\lbrace src,mt\rbrace ^{smt}_{tr} \rightarrow pe$, Exp. 2.3) shows slightly worse results than $wmt18^{smt}_{best}$ (single) (Exp. 1.3) before fine-tuning, and roughly similar results after fine-tuning (Exp. 3.3). After ensembling, however, our transference model (Exp. 4.2) shows consistent improvements when comparing against the best baseline ensemble $wmt18^{smt}_{best}$ (x4) (Exp. 1.4). Due to the unavailability of the sentence-level scores of $wmt18^{smt}_{best}$ (x4), we could not test if the improvements (roughly +1 BLEU, -0.5 TER) are statistically significant. Interestingly, our approach of taking the model optimized for PBSMT and fine-tuning it to the NMT task (Exp. 7) does not hurt the performance as was reported in the previous literature BIBREF11. In contrast, some small, albeit statistically insignificant improvements over the raw NMT baseline were achieved. When we train the transference architecture directly for the NMT task (Exp. 8), we get slightly better and statistically significant improvements compared to raw NMT. Fine-tuning this NMT model further towards the actual NMT data (Exp. 9), as well as performing checkpoint averaging using the 8 best checkpoints improves the results even further. The reasons for the effectiveness of our approach can be summarized as follows. (1) Our $enc_{src \rightarrow mt}$ contains two attention mechanisms: one is self-attention and another is cross-attention. The self-attention layer is not masked here; therefore, the cross-attention layer in $enc_{src \rightarrow mt}$ is informed by both previous and future time-steps from the self-attended representation of $mt$ ($enc_{mt}$) and additionally from $enc_{src}$. As a result, each state representation of $enc_{src \rightarrow mt}$ is learned from the context of $src$ and $mt$. This might produce better representations for $dec_{pe}$ which can access the combined context. In contrast, in $wmt18^{smt}_{best}$, the $dec_{pe}$ accesses representations from $src$ and $mt$ independently, first using the representation from $mt$ and then using that of $src$. (2) The position-wise feed-forward layer in our $enc_{src \rightarrow mt}$ of the transference model requires processing information from two attention modules, while in the case of $wmt18^{smt}_{best}$, the position-wise feed-forward layer in $dec_{pe}$ needs to process information from three attention modules, which may increase the learning difficulty of the feed-forward layer. (3) Since $pe$ is a post-edited version of $mt$, sharing the same language, $mt$ and $pe$ are quite similar compared to $src$. Therefore, attending over a fine-tuned representation from $mt$ along with $src$, which is what we have done in this work, might be a reason for the better results than those achieved by attending over $src$ directly. 
Evaluating the influence of the depth of our encoders and decoder show that while the decoder depth appears to have limited importance, reducing the encoder depth indeed hurts performance which is in line with domhan-2018-much. Conclusions In this paper, we presented a multi-encoder transformer-based APE model that repurposes the standard transformer blocks in a simple and effective way for the APE task: first, our transference architecture uses a transformer encoder block for $src$, followed by a decoder block without masking on $mt$ that effectively acts as a second encoder combining $src \rightarrow mt$, and feeds this representation into a final decoder block generating $pe$. The proposed model outperforms the best-performing system of WMT 2018 on the test2016, test2017, dev2018, and test2018 data and provides a new state-of-the-art in APE. Taking a departure from traditional transformer-based encoders, which perform self-attention only, our second encoder also performs cross-attention to produce representations for the decoder based on both $src$ and $mt$. We also show that the encoder plays a more pivotal role than the decoder in transformer-based APE, which could also be the case for transformer-based generation tasks in general. Our architecture is generic and can be used for any multi-source task, e.g., multi-source translation or summarization, etc.
Exp. 5.1
04012650a45d56c0013cf45fd9792f43916eaf83
04012650a45d56c0013cf45fd9792f43916eaf83_0
Q: How much is performance hurt when using too small amount of layers in encoder? Text: Introduction The performance of state-of-the-art MT systems is not perfect, thus, human interventions are still required to correct machine translated texts into publishable quality translations BIBREF0. Automatic post-editing (APE) is a method that aims to automatically correct errors made by MT systems before performing actual human post-editing (PE) BIBREF1, thereby reducing the translators' workload and increasing productivity BIBREF2. APE systems trained on human PE data serve as MT post-processing modules to improve the overall performance. APE can therefore be viewed as a 2nd-stage MT system, translating predictable error patterns in MT output to their corresponding corrections. APE training data minimally involves MT output ($mt$) and the human post-edited ($pe$) version of $mt$, but additionally using the source ($src$) has been shown to provide further benefits BIBREF3, BIBREF4, BIBREF5. To provide awareness of errors in $mt$ originating from $src$, attention mechanisms BIBREF6 allow modeling of non-local dependencies in the input or output sequences, and importantly also global dependencies between them (in our case $src$, $mt$ and $pe$). The transformer architecture BIBREF7 is built solely upon such attention mechanisms completely replacing recurrence and convolutions. The transformer uses positional encoding to encode the input and output sequences, and computes both self- and cross-attention through so-called multi-head attentions, which are facilitated by parallelization. Such multi-head attention allows to jointly attend to information at different positions from different representation subspaces, e.g. utilizing and combining information from $src$, $mt$, and $pe$. In this paper, we present a multi-source neural APE architecture called transference. Our model contains a source encoder which encodes $src$ information, a second encoder ($enc_{src \rightarrow mt}$) which takes the encoded representation from the source encoder ($enc_{src}$), combines this with the self-attention-based encoding of $mt$ ($enc_{mt}$), and prepares a representation for the decoder ($dec_{pe}$) via cross-attention. Our second encoder ($enc_{src \rightarrow mt}$) can also be viewed as a standard transformer decoding block, however, without masking, which acts as an encoder. We thus recombine the different blocks of the transformer architecture and repurpose them for the APE task in a simple yet effective way. The suggested architecture is inspired by the two-step approach professional translators tend to use during post-editing: first, the source segment is compared to the corresponding translation suggestion (similar to what our $enc_{src \rightarrow mt}$ is doing), then corrections to the MT output are applied based on the encountered errors (in the same way that our $dec_{pe}$ uses the encoded representation of $enc_{src \rightarrow mt}$ to produce the final translation). 
The paper makes the following contributions: (i) we propose a new multi-encoder model for APE that consists only of standard transformer encoding and decoding blocks, (ii) by using a mix of self- and cross-attention we provide a representation of both $src$ and $mt$ for the decoder, allowing it to better capture errors in $mt$ originating from $src$; this advances the state-of-the-art in APE in terms of BLEU and TER, and (iii), we analyze the effect of varying the number of encoder and decoder layers BIBREF8, indicating that the encoders contribute more than decoders in transformer-based neural APE. Related Research Recent advances in APE research are directed towards neural APE, which was first proposed by Pal:2016:ACL and junczysdowmunt-grundkiewicz:2016:WMT for the single-source APE scenario which does not consider $src$, i.e. $mt \rightarrow pe$. In their work, junczysdowmunt-grundkiewicz:2016:WMT also generated a large synthetic training dataset through back translation, which we also use as additional training data. Exploiting source information as an additional input can help neural APE to disambiguate corrections applied at each time step; this naturally leads to multi-source APE ($\lbrace src, mt\rbrace \rightarrow pe$). A multi-source neural APE system can be configured either by using a single encoder that encodes the concatenation of $src$ and $mt$ BIBREF9 or by using two separate encoders for $src$ and $mt$ and passing the concatenation of both encoders' final states to the decoder BIBREF10. A few approaches to multi-source neural APE were proposed in the WMT 2017 APE shared task. Junczysdowmunt:2017:WMT combine both $mt$ and $src$ in a single neural architecture, exploring different combinations of attention mechanisms including soft attention and hard monotonic attention. Chatterjee-EtAl:2017:WMT2 built upon the two-encoder architecture of multi-source models BIBREF10 by means of concatenating both weighted contexts of encoded $src$ and $mt$. Varis-bojar:2017:WMT compared two multi-source models, one using a single encoder with concatenation of $src$ and $mt$ sentences, and a second one using two character-level encoders for $mt$ and $src$ along with a character-level decoder. Recently, in the WMT 2018 APE shared task, several adaptations of the transformer architecture have been presented for multi-source APE. pal-EtAl:2018:WMT proposed an APE model that uses three self-attention-based encoders. They introduce an additional joint encoder that attends over a combination of the two encoded sequences from $mt$ and $src$. tebbifakhr-EtAl:2018:WMT, the NMT-subtask winner of WMT 2018 ($wmt18^{nmt}_{best}$), employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics. shin-lee:2018:WMT propose that each encoder has its own self-attention and feed-forward layer to process each input separately. On the decoder side, they add two additional multi-head attention layers, one for $src \rightarrow mt$ and another for $src \rightarrow pe$. Thereafter another multi-head attention between the output of those attention layers helps the decoder to capture common words in $mt$ which should remain in $pe$. The APE PBSMT-subtask winner of WMT 2018 ($wmt18^{smt}_{best}$) BIBREF11 also presented another transformer-based multi-source APE which uses two encoders and stacks an additional cross-attention component for $src \rightarrow pe$ above the previous cross-attention for $mt \rightarrow pe$. 
Comparing shin-lee:2018:WMT's approach with the winner system, there are only two differences in the architecture: (i) the cross-attention order of $src \rightarrow mt$ and $src \rightarrow pe$ in the decoder, and (ii) $wmt18^{smt}_{best}$ additionally shares parameters between two encoders. Transference Model for APE We propose a multi-source transformer model called transference ($\lbrace src,mt\rbrace _{tr} \rightarrow pe$, Figure FIGREF1), which takes advantage of both the encodings of $src$ and $mt$ and attends over a combination of both sequences while generating the post-edited sentence. The second encoder, $enc_{src \rightarrow mt}$, makes use of the first encoder $enc_{src}$ and a sub-encoder $enc_{mt}$ for considering $src$ and $mt$. Here, the $enc_{src}$ encoder and the $dec_{pe}$ decoder are equivalent to the original transformer for neural MT. Our $enc_{src \rightarrow mt}$ follows an architecture similar to the transformer's decoder, the difference being that no masked multi-head self-attention is used to process $mt$. One self-attended encoder for $src$, $\mathbf {s}$ = $(s_1, s_2, \ldots , s_k)$, returns a sequence of continuous representations, $enc_{src}$, and a second self-attended sub-encoder for $mt$, $\mathbf {m}$ = $(m_1, m_2, \ldots , m_l)$, returns another sequence of continuous representations, $enc_{mt}$. Self-attention at this point provides the advantage of aggregating information from all of the words, including $src$ and $mt$, and successively generates a new representation per word informed by the entire $src$ and $mt$ context. The internal $enc_{mt}$ representation performs cross-attention over $enc_{src}$ and prepares a final representation ($enc_{src \rightarrow mt}$) for the decoder ($dec_{pe}$). The decoder then generates the $pe$ output in sequence, $\mathbf {p}$ = $(p_1, p_2, \ldots , p_n)$, one word at a time from left to right by attending to previously generated words as well as the final representations ($enc_{src \rightarrow mt}$) generated by the encoder. To summarize, our multi-source APE implementation extends Vaswani:NIPS2017 by introducing an additional encoding block by which $src$ and $mt$ communicate with the decoder. Our proposed approach differs from the WMT 2018 PBSMT winner system in several ways: (i) we use the original transformer's decoder without modifications; (ii) one of our encoder blocks ($enc_{src \rightarrow mt}$) is identical to the transformer's decoder block but uses no masking in the self-attention layer, thus having one self-attention layer and an additional cross-attention for $src \rightarrow mt$; and (iii) in the decoder layer, the cross-attention is performed between the encoded representation from $enc_{src \rightarrow mt}$ and $pe$. Our approach also differs from the WMT 2018 NMT winner system: (i) $wmt18^{nmt}_{best}$ concatenates the encoded representation of two encoders and passes it as the key to the attention layer of the decoder, and (ii), the system additionally employs sequence-level loss functions based on maximum likelihood estimation and minimum risk training in order to avoid exposure bias during training. The main intuition is that our $enc_{src \rightarrow mt}$ attends over the $src$ and $mt$ and informs the $pe$ to better capture, process, and share information between $src$-$mt$-$pe$, which efficiently models error patterns and the corresponding corrections. Our model performs better than past approaches, as the experiment section will show. 
Experiments We explore our approach on both APE sub-tasks of WMT 2018, where the 1st-stage MT system to which APE is applied is either a phrase-based statistical machine translation (PBSMT) or a neural machine translation (NMT) model. For the PBSMT task, we compare against four baselines: the raw SMT output provided by the 1st-stage PBSMT system, the best-performing systems from WMT APE 2018 ($\mathbf {wmt18^{smt}_{best}}$), which are a single model and an ensemble model by junczysdowmunt-grundkiewicz:2018:WMT, as well as a transformer trying to directly translate from $src$ to $pe$ (Transformer ($\mathbf {src \rightarrow pe}$)), thus performing translation instead of APE. We evaluate the systems using BLEU BIBREF12 and TER BIBREF13. For the NMT task, we consider two baselines: the raw NMT output provided by the 1st-stage NMT system and the best-performing system from the WMT 2018 NMT APE task ($\mathbf {wmt18^{nmt}_{best}}$) BIBREF14. Apart from the multi-encoder transference architecture described above ($\lbrace src,mt\rbrace _{tr} \rightarrow pe$) and ensembling of this architecture, two simpler versions are also analyzed: first, a `mono-lingual' ($\mathbf {mt \rightarrow pe}$) APE model using only parallel $mt$–$pe$ data and therefore only a single encoder, and second, an identical single-encoder architecture, however, using the concatenated $src$ and $mt$ text as input ($\mathbf {\lbrace src+mt\rbrace \rightarrow pe}$) BIBREF9. Experiments ::: Data For our experiments, we use the English–German WMT 2016 BIBREF4, 2017 BIBREF5 and 2018 BIBREF15 APE task data. All these released APE datasets consist of English–German triplets containing source English text ($src$) from the IT domain, the corresponding German translations ($mt$) from a 1st-stage MT system, and the corresponding human-post-edited version ($pe$). The sizes of the datasets (train; dev; test), in terms of number of sentences, are (12,000; 1,000; 2,000), (11,000; 0; 2,000), and (13,442; 1,000; 1,023), for the 2016 PBSMT, the 2017 PBSMT, and the 2018 NMT data, respectively. One should note that for WMT 2018, we carried out experiments only for the NMT sub-task and ignored the data for the PBSMT task. Since the WMT APE datasets are small in size, we use `artificial training data' BIBREF16 containing 4.5M sentences as additional resources, 4M of which are weakly similar to the WMT 2016 training data, while 500K are very similar according to TER statistics. For experimenting on the NMT data, we additionally use the synthetic eScape APE corpus BIBREF17, consisting of $\sim $7M triples. For cleaning this noisy eScape dataset containing many unrelated language words (e.g. Chinese), we perform the following two steps: (i) we use the cleaning process described in tebbifakhr-EtAl:2018:WMT, and (ii) we use the Moses BIBREF18 corpus cleaning scripts with minimum and maximum number of tokens set to 1 and 100, respectively. After cleaning, we perform punctuation normalization, and then use the Moses tokenizer BIBREF18 to tokenize the eScape corpus with `no-escape' option. Finally, we apply true-casing. The cleaned version of the eScape corpus contains $\sim $6.5M triplets. Experiments ::: Experiment Setup To build models for the PBSMT tasks from 2016 and 2017, we first train a generic APE model using all the training data (4M + 500K + 12K + 11K) described in Section SECREF2. Afterwards, we fine-tune the trained model using the 500K artificial and 23K (12K + 11K) real PE training data. 
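As a concrete illustration of the second cleaning step, the sketch below re-expresses the minimum/maximum token-length filter (1 and 100 tokens) as plain Python over whitespace-tokenized triplets; the actual experiments use the Moses corpus-cleaning scripts, and tokenization, punctuation normalization and true-casing are not shown here.

```python
def filter_triplets(triplets, min_tokens=1, max_tokens=100):
    """Keep only (src, mt, pe) triplets whose sides all contain between
    min_tokens and max_tokens whitespace-separated tokens (a simplified
    stand-in for the Moses clean-corpus step described above)."""
    kept = []
    for src, mt, pe in triplets:
        lengths = [len(side.split()) for side in (src, mt, pe)]
        if all(min_tokens <= n <= max_tokens for n in lengths):
            kept.append((src, mt, pe))
    return kept


# The second triplet is dropped because its mt side is empty.
sample = [("Open the file .", "Öffnen Sie die Datei .", "Öffnen Sie die Datei ."),
          ("Save as", "", "Speichern unter")]
print(len(filter_triplets(sample)))  # -> 1
```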
We use the WMT 2016 development data (dev2016) containing 1,000 triplets to validate the models during training. To test our system performance, we use the WMT 2016 and 2017 test data (test2016, test2017) as two sub-experiments, each containing 2,000 triplets ($src$, $mt$ and $pe$). We compare the performance of our system with the four different baseline systems described above: raw MT, $wmt18^{smt}_{best}$ single and ensemble, as well as Transformer ($src \rightarrow pe$). Additionally, we check the performance of our model on the WMT 2018 NMT APE task (where unlike in previous tasks, the 1st-stage MT system is provided by NMT): for this, we explore two experimental setups: (i) we use the PBSMT task's APE model as a generic model which is then fine-tuned to a subset (12k) of the NMT data ($\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{generic, smt}}_{{}}$). One should note that it has been argued that the inclusion of SMT-specific data could be harmful when training NMT APE models BIBREF11. (ii), we train a completely new generic model on the cleaned eScape data ($\sim $6.5M) along with a subset (12K) of the original training data released for the NMT task ($\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{generic, nmt}}_{{}}$). The aforementioned 12K NMT data are the first 12K of the overall 13.4K NMT data. The remaining 1.4K are used as validation data. The released development set (dev2018) is used as test data for our experiment, alongside the test2018, for which we could only obtain results for a few models by the WMT 2019 task organizers. We also explore an additional fine-tuning step of $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{generic, nmt}}_{{}}$ towards the 12K NMT data (called $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{ft}}_{{}}$), and a model averaging the 8 best checkpoints of $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{ft}}_{{}}$, which we call $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{ft}}_{{avg}}$. Last, we analyze the importance of our second encoder ($enc_{src \rightarrow mt}$), compared to the source encoder ($enc_{src}$) and the decoder ($dec_{pe}$), by reducing and expanding the amount of layers in the encoders and the decoder. Our standard setup, which we use for fine-tuning, ensembling etc., is fixed to 6-6-6 for $N_{src}$-$N_{mt}$-$N_{pe}$ (cf. Figure FIGREF1), where 6 is the value that was proposed by Vaswani:NIPS2017 for the base model. We investigate what happens in terms of APE performance if we change this setting to 6-6-4 and 6-4-6. To handle out-of-vocabulary words and reduce the vocabulary size, instead of considering words, we consider subword units BIBREF19 by using byte-pair encoding (BPE). In the preprocessing step, instead of learning an explicit mapping between BPEs in the $src$, $mt$ and $pe$, we define BPE tokens by jointly processing all triplets. Thus, $src$, $mt$ and $pe$ derive a single BPE vocabulary. Since $mt$ and $pe$ belong to the same language (German) and $src$ is a close language (English), they naturally share a good fraction of BPE tokens, which reduces the vocabulary size to 28k. Experiments ::: Hyper-parameter Setup We follow a similar hyper-parameter setup for all reported systems. All encoders (for $\lbrace src,mt\rbrace _{tr} \rightarrow pe$), and the decoder, are composed of a stack of $N_{src} = N_{mt} = N_{pe} = 6$ identical layers followed by layer normalization. 
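The joint subword vocabulary described above can be obtained by training a single BPE model on all three sides of the triplets. The snippet below shows one way to do this with the sentencepiece toolkit, used here purely as an illustrative stand-in (the paper follows the BPE approach of BIBREF19); the file names and exact settings are assumptions.

```python
import sentencepiece as spm

# Learn one BPE model over src, mt and pe together so that all three sides
# share a single subword vocabulary (roughly 28k pieces in the paper).
spm.SentencePieceTrainer.train(
    input="src.txt,mt.txt,pe.txt",   # one sentence per line for each side
    model_prefix="joint_bpe",
    vocab_size=28000,
    model_type="bpe",
    character_coverage=1.0,
)

sp = spm.SentencePieceProcessor(model_file="joint_bpe.model")
print(sp.encode("Öffnen Sie die Datei", out_type=str))
```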
The learning rate is varied throughout the training process, increasing during the first $warmup_{steps} = 8000$ training steps and decreasing afterwards, as described in BIBREF7. All remaining hyper-parameters are set analogously to those of the transformer's base model, except that we do not perform checkpoint averaging. At training time, the batch size is set to 25K tokens, with a maximum sentence length of 256 subwords. After each epoch, the training data is shuffled. During decoding, we perform beam search with a beam size of 4. We use shared embeddings between $mt$ and $pe$ in all our experiments. Results The results of our four models, single-source ($\mathbf {mt \rightarrow pe}$), multi-source single encoder ($\mathbf {\lbrace src + mt\rbrace \rightarrow pe}$), transference ($\mathbf {\lbrace src,mt\rbrace ^{smt}_{tr} \rightarrow pe}$), and ensemble, in comparison to the four baselines, raw SMT, $\mathbf {wmt18^{smt}_{best}}$ BIBREF11 single and ensemble, as well as Transformer ($\mathbf {src \rightarrow pe}$), are presented in Table TABREF5 for test2016 and test2017. Table TABREF9 reports the results obtained by our transference model ($\mathbf {\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{}}_{{}}}$) on the WMT 2018 NMT data for dev2018 (which we use as a test set) and test2018, compared to the baselines raw NMT and $\mathbf {wmt18^{nmt}_{best}}$. Results ::: Baselines The raw SMT output in Table TABREF5 is a strong black-box PBSMT system (i.e., 1st-stage MT). We report its performance observed with respect to the ground truth ($pe$), i.e., the post-edited version of $mt$. The original PBSMT system scores over 62 BLEU points and below 25 TER on test2016 and test2017. Using a Transformer ($src \rightarrow pe$), we test if APE is really useful, or if potential gains are only achieved due to the good performance of the transformer architecture. While we cannot do a full training of the transformer on the data that the raw MT engine was trained on due to the unavailability of the data, we use our PE datasets in an equivalent experimental setup as for all other models. The results of this system (Exp. 1.2 in Table TABREF5) show that the performance is actually lower across both test sets, -5.52/-9.43 absolute points in BLEU and +5.21/+7.72 absolute in TER, compared to the raw SMT baseline. We report four results from $\mathbf {wmt18^{smt}_{best}}$: (i) $wmt18^{smt}_{best}$ ($single$), which is the core multi-encoder implementation without ensembling but with checkpoint averaging, and (ii) $wmt18^{smt}_{best}$ ($x4$), which is an ensemble of four identical `single' models trained with different random initializations. The results of $wmt18^{smt}_{best}$ ($single$) and $wmt18^{smt}_{best}$ ($x4$) (Exp. 1.3 and 1.4) reported in Table TABREF5 are from junczysdowmunt-grundkiewicz:2018:WMT. Since their training procedure slightly differs from ours, we also trained the $wmt18^{smt}_{best}$ system using exactly our experimental setup in order to make a fair comparison. This yields the baselines (iii) $wmt18^{smt,generic}_{best}$ ($single$) (Exp. 1.5), which is similar to $wmt18^{smt}_{best}$ ($single$); however, the training parameters and data are kept in line with our transference general model (Exp. 2.3), and (iv) $wmt18^{smt,ft}_{best}$ ($single$) (Exp. 1.6), which is also trained maintaining the equivalent experimental setup compared to the fine-tuned version of the transference general model (Exp. 3.3). 
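For reference, the warm-up schedule used in the hyper-parameter setup above (the schedule of BIBREF7 with $warmup_{steps} = 8000$) can be written as the following small function; the constants are taken from the text and the transformer base model, and the function itself is only a sketch of the standard formula.

```python
def transformer_lr(step: int, d_model: int = 512, warmup_steps: int = 8000) -> float:
    """Learning rate of Vaswani et al. (2017): linear warm-up for the first
    `warmup_steps` updates, then decay proportional to 1/sqrt(step)."""
    step = max(step, 1)  # guard against step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)


# The rate grows until step 8000 and shrinks afterwards.
for s in (1000, 4000, 8000, 32000):
    print(s, round(transformer_lr(s), 6))
```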
Compared to both raw SMT and Transformer ($src \rightarrow pe$) we see strong improvements for this state-of-the-art model, with BLEU scores of at least 68.14 and TER scores of at most 20.98 across the PBSMT testsets. $wmt18^{smt}_{best}$, however, performs better in its original setup (Exp. 1.3 and 1.4) compared to our experimental setup (Exp. 1.5 and 1.6). Results ::: Single-Encoder Transformer for APE The two transformer architectures $\mathbf {mt \rightarrow pe}$ and $\mathbf {\lbrace src+mt\rbrace \rightarrow pe}$ use only a single encoder. Table TABREF5 shows that $\mathbf {mt \rightarrow pe}$ (Exp. 2.1) provides better performance (+4.42 absolute BLEU on test2017) compared to the original SMT, while $\mathbf {\lbrace src+mt\rbrace \rightarrow pe}$ (Exp. 2.2) provides further improvements by additionally using the $src$ information. $\mathbf {\lbrace src+mt\rbrace \rightarrow pe}$ improves over $\mathbf {mt \rightarrow pe}$ by +1.62/+1.35 absolute BLEU points on test2016/test2017. After fine-tuning, both single encoder transformers (Exp. 3.1 and 3.2 in Table TABREF5) show further improvements, +0.87 and +0.31 absolute BLEU points, respectively, for test2017 and a similar improvement for test2016. Results ::: Transference Transformer for APE In contrast to the two models above, our transference architecture uses multiple encoders. To fairly compare to $wmt18^{smt}_{best}$, we retrain the $wmt18^{smt}_{best}$ system with our experimental setup (cf. Exp. 1.5 and 1.6 in Table TABREF5). $wmt18^{smt,generic}_{best}$ (single) is a generic model trained on all the training data; which is afterwards fine-tuned with 500K artificial and 23K real PE data ($wmt18^{smt,ft}_{best}$ (single)). It is to be noted that in terms of performance the data processing method described in junczysdowmunt-grundkiewicz:2018:WMT reported in Exp. 1.3 is better than ours (Exp. 1.6). The fine-tuned version of the $\lbrace src,mt\rbrace ^{smt}_{tr} \rightarrow pe$ model (Exp. 3.3 in Table TABREF5) outperforms $wmt18^{smt}_{best}$ (single) (Exp. 1.3) in BLEU on both test sets, however, the TER score for test2016 increases. One should note that $wmt18^{smt}_{best}$ (single) follows the transformer base model, which is an average of five checkpoints, while our Exp. 3.3 is not. When ensembling the 4 best checkpoints of our $\lbrace src,mt\rbrace ^{smt}_{tr} \rightarrow pe$ model (Exp. 4.1), the result beats the $wmt18^{smt}_{best}$ (x4) system, which is an ensemble of four different randomly initialized $wmt18^{smt}_{best}$ (single) systems. Our $\mathbf {ensemble^{smt} (x3)}$ combines two $\lbrace src,mt\rbrace ^{smt}_{tr} \rightarrow pe$ (Exp. 2.3) models initialized with different random weights with the ensemble of the fine-tuned transference model Exp3.3$^{smt}_{ens4ckpt}$(Exp. 4.1). This ensemble provides the best results for all datasets, providing roughly +1 BLEU point and -0.5 TER when comparing against $wmt18^{smt}_{best}$ (x4). The results on the WMT 2018 NMT datasets (dev2018 and test2018) are presented in Table TABREF9. The raw NMT system serves as one baseline against which we compare the performance of the different models. We evaluate the system hypotheses with respect to the ground truth ($pe$), i.e., the post-edited version of $mt$. The baseline original NMT system scores 76.76 BLEU points and 15.08 TER on dev2018, and 74.73 BLEU points and 16.84 TER on test2018. 
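The checkpoint and model ensembles discussed above can be realized at decoding time by averaging the member models' predictive distributions at every step. The sketch below shows this for greedy decoding over models that follow the `forward(src, mt, pe_in)` interface of the earlier architecture sketch; the experiments use beam search, so this is a simplified illustration rather than the actual decoding setup.

```python
import torch


@torch.no_grad()
def ensemble_greedy_decode(models, src, mt, bos_id, eos_id, max_len=200):
    """Greedy decoding with an ensemble: the next-token distributions of all
    member models are averaged before the argmax is taken."""
    pe = torch.full((src.size(0), 1), bos_id, dtype=torch.long, device=src.device)
    for _ in range(max_len):
        # Average the probabilities of the last position across all models.
        probs = torch.stack(
            [m(src, mt, pe)[:, -1].softmax(dim=-1) for m in models]
        ).mean(dim=0)
        next_tok = probs.argmax(dim=-1, keepdim=True)
        pe = torch.cat([pe, next_tok], dim=1)
        if (next_tok == eos_id).all():
            break
    return pe
```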
For the WMT 2018 NMT data we first test our $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{generic,smt}}_{{}}$ model, which is the model from Exp. 3.3 fine-tuned towards NMT data as described in Section SECREF3. Table TABREF9 shows that our PBSMT APE model fine-tuned towards NMT (Exp. 7) can even slightly improve over the already very strong NMT system by about +0.3 BLEU and -0.1 TER, although these improvements are not statistically significant. The overall results improve when we train our model on eScape and NMT data instead of using the PBSMT model as a basis. Our proposed generic transference model (Exp. 8, $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{generic,nmt}}_{{}}$ shows statistically significant improvements in terms of BLEU and TER compared to the baseline even before fine-tuning, and further improvements after fine-tuning (Exp. 9, $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{ft}}_{{}}$). Finally, after averaging the 8 best checkpoints, our $\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{ft}}_{{avg}}$ model (Exp. 10) also shows consistent improvements in comparison to the baseline and other experimental setups. Overall our fine-tuned model averaging the 8 best checkpoints achieves +1.02 absolute BLEU points and -0.69 absolute TER improvements over the baseline on test2018. Table TABREF9 also shows the performance of our model compared to the winner system of WMT 2018 ($wmt18^{nmt}_{best}$) for the NMT task BIBREF14. $wmt18^{nmt}_{best}$ scores 14.78 in TER and 77.74 in BLEU on the dev2018 and 16.46 in TER and 75.53 in BLEU on the test2018. In comparison to $wmt18^{nmt}_{best}$, our model (Exp. 10) achieves better scores in TER on both the dev2018 and test2018, however, in terms of BLEU our model scores slightly lower for dev2018, while some improvements are achieved on test2018. The number of layers ($N_{src}$-$N_{mt}$-$N_{pe}$) in all encoders and the decoder for these results is fixed to 6-6-6. In Exp. 5.1, and 5.2 in Table TABREF5, we see the results of changing this setting to 6-6-4 and 6-4-6. This can be compared to the results of Exp. 2.3, since no fine-tuning or ensembling was performed for these three experiments. Exp. 5.1 shows that decreasing the number of layers on the decoder side does not hurt the performance. In fact, in the case of test2016, we got some improvement, while for test2017, the scores got slightly worse. In contrast, reducing the $enc_{src \rightarrow mt}$ encoder block's depth (Exp. 5.2) does indeed reduce the performance for all four scores, showing the importance of this second encoder. Results ::: Analysis of Error Patterns In Table TABREF11, we analyze and compare the best performing SMT ($ensemble^{smt} (x3)$) and NMT ($\lbrace src,mt\rbrace ^{nmt}_{tr} \rightarrow pe^{{ft}}_{{avg}}$) model outputs with the original MT outputs on the WMT 2017 (SMT) APE test set and on the WMT 2018 (NMT) development set. Improvements are measured in terms of number of words which need to be (i) inserted (In), (ii) deleted (De), (iii) substituted (Su), and (iv) shifted (Sh), as per TER BIBREF13, in order to turn the MT outputs into reference translations. Our model provides promising results by significantly reducing the required number of edits (24% overall for PBSMT task and 3.6% for NMT task) across all edit operations, thereby leading to reduced post-editing effort and hence improving human post-editing productivity. 
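Checkpoint averaging, as used for the model that averages the 8 best checkpoints above, simply averages the parameter tensors of several saved models into one set of weights. A minimal sketch, assuming plain PyTorch state dicts with identical keys and floating-point parameters:

```python
import torch


def average_checkpoints(paths):
    """Average the parameters of several checkpoints into a single state dict."""
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.float().clone() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}


# model.load_state_dict(average_checkpoints([f"ckpt_{i}.pt" for i in range(8)]))
```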
When comparing PBSMT to NMT, we see that stronger improvements are achieved for PBSMT, probably because the raw SMT is worse than the raw NMT. For PBSMT, similar results are achieved for In, De, and Sh, while less gains are obtained in terms of Su. For NMT, In is improved most, followed by Su, De, and last Sh. For shifts in NMT, the APE system even creates further errors, instead of reducing them, which is an issue we aim to prevent in the future. Results ::: Discussion The proposed transference architecture ($\lbrace src,mt\rbrace ^{smt}_{tr} \rightarrow pe$, Exp. 2.3) shows slightly worse results than $wmt18^{smt}_{best}$ (single) (Exp. 1.3) before fine-tuning, and roughly similar results after fine-tuning (Exp. 3.3). After ensembling, however, our transference model (Exp. 4.2) shows consistent improvements when comparing against the best baseline ensemble $wmt18^{smt}_{best}$ (x4) (Exp. 1.4). Due to the unavailability of the sentence-level scores of $wmt18^{smt}_{best}$ (x4), we could not test if the improvements (roughly +1 BLEU, -0.5 TER) are statistically significant. Interestingly, our approach of taking the model optimized for PBSMT and fine-tuning it to the NMT task (Exp. 7) does not hurt the performance as was reported in the previous literature BIBREF11. In contrast, some small, albeit statistically insignificant improvements over the raw NMT baseline were achieved. When we train the transference architecture directly for the NMT task (Exp. 8), we get slightly better and statistically significant improvements compared to raw NMT. Fine-tuning this NMT model further towards the actual NMT data (Exp. 9), as well as performing checkpoint averaging using the 8 best checkpoints improves the results even further. The reasons for the effectiveness of our approach can be summarized as follows. (1) Our $enc_{src \rightarrow mt}$ contains two attention mechanisms: one is self-attention and another is cross-attention. The self-attention layer is not masked here; therefore, the cross-attention layer in $enc_{src \rightarrow mt}$ is informed by both previous and future time-steps from the self-attended representation of $mt$ ($enc_{mt}$) and additionally from $enc_{src}$. As a result, each state representation of $enc_{src \rightarrow mt}$ is learned from the context of $src$ and $mt$. This might produce better representations for $dec_{pe}$ which can access the combined context. In contrast, in $wmt18^{smt}_{best}$, the $dec_{pe}$ accesses representations from $src$ and $mt$ independently, first using the representation from $mt$ and then using that of $src$. (2) The position-wise feed-forward layer in our $enc_{src \rightarrow mt}$ of the transference model requires processing information from two attention modules, while in the case of $wmt18^{smt}_{best}$, the position-wise feed-forward layer in $dec_{pe}$ needs to process information from three attention modules, which may increase the learning difficulty of the feed-forward layer. (3) Since $pe$ is a post-edited version of $mt$, sharing the same language, $mt$ and $pe$ are quite similar compared to $src$. Therefore, attending over a fine-tuned representation from $mt$ along with $src$, which is what we have done in this work, might be a reason for the better results than those achieved by attending over $src$ directly. 
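To make the edit-operation analysis above reproducible in spirit, the sketch below counts insertions, deletions and substitutions from a standard Levenshtein alignment between an MT output and its reference. TER additionally permits block shifts (Sh), which this simplification does not model, so a real analysis should rely on a TER implementation such as tercom.

```python
def edit_operation_counts(hyp: str, ref: str) -> dict:
    """Count In/De/Su edits needed to turn `hyp` into `ref` (no shifts)."""
    hyp, ref = hyp.split(), ref.split()
    # dp[i][j] = minimal edits to turn hyp[:i] into ref[:j]
    dp = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(1, len(hyp) + 1):
        dp[i][0] = i
    for j in range(1, len(ref) + 1):
        dp[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            sub = dp[i - 1][j - 1] + (hyp[i - 1] != ref[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    # Backtrace to attribute the edits to operation types.
    counts = {"In": 0, "De": 0, "Su": 0}
    i, j = len(hyp), len(ref)
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (hyp[i - 1] != ref[j - 1]):
            if hyp[i - 1] != ref[j - 1]:
                counts["Su"] += 1
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            counts["De"] += 1   # a word in hyp that must be deleted
            i -= 1
        else:
            counts["In"] += 1   # a word missing from hyp that must be inserted
            j -= 1
    return counts


print(edit_operation_counts("öffnen die Datei jetzt", "öffnen Sie die Datei"))
# -> {'In': 1, 'De': 1, 'Su': 0}  (insert "Sie", delete "jetzt")
```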
Evaluating the influence of the depth of our encoders and decoder shows that, while the decoder depth appears to have limited importance, reducing the encoder depth indeed hurts performance, which is in line with domhan-2018-much. Conclusions In this paper, we presented a multi-encoder transformer-based APE model that repurposes the standard transformer blocks in a simple and effective way for the APE task: first, our transference architecture uses a transformer encoder block for $src$, followed by a decoder block without masking on $mt$ that effectively acts as a second encoder combining $src \rightarrow mt$, and feeds this representation into a final decoder block generating $pe$. The proposed model outperforms the best-performing system of WMT 2018 on the test2016, test2017, dev2018, and test2018 data and provides a new state of the art in APE. Taking a departure from traditional transformer-based encoders, which perform self-attention only, our second encoder also performs cross-attention to produce representations for the decoder based on both $src$ and $mt$. We also show that the encoder plays a more pivotal role than the decoder in transformer-based APE, which could also be the case for transformer-based generation tasks in general. Our architecture is generic and can be used for any multi-source task, e.g., multi-source translation or summarization.
Compared to the results from reducing the number of layers in the decoder, the BLEU score was 69.93, which is less than 1% lower in the case of test2016, while in the case of test2017 it was lower by 0.2%. In terms of TER, the score was higher by 0.7 for test2016 and by 0.1 for test2017.
7889ec45b996be0b8bf7360d08f84daf3644f115
7889ec45b996be0b8bf7360d08f84daf3644f115_0
Q: What was previous state of the art model for automatic post editing? Text: Introduction The performance of state-of-the-art MT systems is not perfect, thus, human interventions are still required to correct machine translated texts into publishable quality translations BIBREF0. Automatic post-editing (APE) is a method that aims to automatically correct errors made by MT systems before performing actual human post-editing (PE) BIBREF1, thereby reducing the translators' workload and increasing productivity BIBREF2. APE systems trained on human PE data serve as MT post-processing modules to improve the overall performance. APE can therefore be viewed as a 2nd-stage MT system, translating predictable error patterns in MT output to their corresponding corrections. APE training data minimally involves MT output ($mt$) and the human post-edited ($pe$) version of $mt$, but additionally using the source ($src$) has been shown to provide further benefits BIBREF3, BIBREF4, BIBREF5. To provide awareness of errors in $mt$ originating from $src$, attention mechanisms BIBREF6 allow modeling of non-local dependencies in the input or output sequences, and importantly also global dependencies between them (in our case $src$, $mt$ and $pe$). The transformer architecture BIBREF7 is built solely upon such attention mechanisms completely replacing recurrence and convolutions. The transformer uses positional encoding to encode the input and output sequences, and computes both self- and cross-attention through so-called multi-head attentions, which are facilitated by parallelization. Such multi-head attention allows to jointly attend to information at different positions from different representation subspaces, e.g. utilizing and combining information from $src$, $mt$, and $pe$. In this paper, we present a multi-source neural APE architecture called transference. Our model contains a source encoder which encodes $src$ information, a second encoder ($enc_{src \rightarrow mt}$) which takes the encoded representation from the source encoder ($enc_{src}$), combines this with the self-attention-based encoding of $mt$ ($enc_{mt}$), and prepares a representation for the decoder ($dec_{pe}$) via cross-attention. Our second encoder ($enc_{src \rightarrow mt}$) can also be viewed as a standard transformer decoding block, however, without masking, which acts as an encoder. We thus recombine the different blocks of the transformer architecture and repurpose them for the APE task in a simple yet effective way. The suggested architecture is inspired by the two-step approach professional translators tend to use during post-editing: first, the source segment is compared to the corresponding translation suggestion (similar to what our $enc_{src \rightarrow mt}$ is doing), then corrections to the MT output are applied based on the encountered errors (in the same way that our $dec_{pe}$ uses the encoded representation of $enc_{src \rightarrow mt}$ to produce the final translation). 
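Since the whole architecture discussed here rests on attention, the following is the standard scaled dot-product attention in a few lines; multi-head attention runs this primitive in parallel over several learned projections of the queries, keys and values and concatenates the results. This is the textbook formulation of BIBREF7, not code from the papers discussed here.

```python
import math
import torch


def scaled_dot_product_attention(q, k, v, mask=None):
    """softmax(Q K^T / sqrt(d_k)) V, with an optional additive mask
    (e.g. a causal mask in the decoder). Shapes: q (batch, m, d_k),
    k (batch, n, d_k), v (batch, n, d_v)."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores + mask
    return scores.softmax(dim=-1) @ v


# Cross-attention example: 3 query positions attend over 5 key/value positions.
q, k, v = torch.randn(1, 3, 64), torch.randn(1, 5, 64), torch.randn(1, 5, 64)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 3, 64])
```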
pal-EtAl:2018:WMT proposed an APE model that uses three self-attention-based encoders. tebbifakhr-EtAl:2018:WMT, the NMT-subtask winner of WMT 2018 ($wmt18^{nmt}_{best}$), employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics. shin-lee:2018:WMT propose that each encoder has its own self-attention and feed-forward layer to process each input separately. The APE PBSMT-subtask winner of WMT 2018 ($wmt18^{smt}_{best}$) BIBREF11 also presented another transformer-based multi-source APE which uses two encoders and stacks an additional cross-attention component for $src \rightarrow pe$ above the previous cross-attention for $mt \rightarrow pe$.
41e300acec35252e23f239772cecadc0ea986071
41e300acec35252e23f239772cecadc0ea986071_0
Q: What neural machine translation models can learn in terms of transfer learning? Text: Introduction and Motivation Our primary goal is to learn meaning representations of sentences and sentence fragments by looking at the distributional information that is available in parallel corpora of human translations. The basic idea is to use translations into other languages as “semantic mirrors” of the original text, assuming that they represent the same meaning but with different symbols, wordings and linguistic structures. For this we discard any meaning diversions that may happen in translation due to target audience adaptation or other processes that may influence the semantics of the translated texts. We also assume that the material can be divided into meaningful and self-contained units, Bible verses in our case, and focus on the global data-driven model that hopefully can cope with instances that violate our assumptions. Our model is based on the intuition that the huge amount of variation and the cross-lingual differences in language ambiguity make it possible to learn semantic distinctions purely from data. The translations are, thus, used as a naturally occurring signal (or cross-lingual grounding) that can be applied as a form of implicit supervision for the learning procedure, mapping sentences to semantic representations that resolve language-internal ambiguities. With this approach we hope to take a step forward in one of the main goals in artificial intelligence, namely the task of natural language understanding. In this paper, however, we emphasise the use of such models in the discovery of linguistic properties and relationships between languages in particular. Having that in mind, the study may open new directions for collaborations between language technology and general linguistics. But before coming back to this, let us first look at related work and the general principles of distributional semantics with cross-lingual grounding. The use of translations for disambiguation has been explored in various studies. Dyvik BIBREF0 proposes to use word translations to discover lexical semantic fields, Carpuat et al. BIBREF1 discuss the use of parallel corpora for word sense disambiguation, van der Plas and Tiedemann BIBREF2 present work on the extraction of synonyms and Villada and Tiedemann BIBREF3 explore multilingual word alignments to identify idiomatic expressions. The idea of cross-lingual disambiguation is simple. The following example illustrates the effect of disambiguation of idiomatic uses of “put off” through translation into German: Using the general idea of the distributional hypothesis that “you shall know a word by the company it keeps” BIBREF4 , we can now explore how cross-lingual context can serve as the source of information that defines the semantics of given sentences. As common in the field of distributional semantics, we will apply semantic vector space models that describe the meaning of a word or text by mapping it onto a position (a real-valued vector) in some high-dimensional Euclidean space. Various models and algorithms have been proposed in the literature (see, e.g., BIBREF5 , BIBREF6 ) and applied to a number of practical tasks. Predictive models based on neural network classifiers and neural language models BIBREF7 , BIBREF8 have superseded models that are purely based on co-occurrence counts (see BIBREF9 for a comparison of common approaches). 
Semantic vector spaces show even interesting algebraic properties that reflect semantic compositionality, support vector-based reasoning and can be mapped across languages BIBREF10 , BIBREF11 . Multilingual models have been proposed as well BIBREF12 , BIBREF13 . Neural language models are capable of integrating multiple languages BIBREF14 , which makes it possible to discover relations between them based on the language space learned purely from the data. Our framework will be neural machine translation (NMT) that applies an encoder-decoder architecture, which runs sequentially through a string of input symbols (for example words in a sentence) to map the information to dense vector representations, which will then be used to decode that information in another language. Figure 1 illustrates the general principle with respect to the classical Vauquois triangle of machine translation BIBREF15 . Translation models are precisely the kind of machinery that tries to transfer the meaning expressed in one language into another by analysing (understanding) the input and generating the output. NMT tries to learn that mapping from data and, thus, learns to “understand” some source language in order to produce proper translations in a target language from given examples. Our primary hypothesis is that we can increase the level of abstraction by including a larger diversity in the training data that pushes the model to improve compression of the growing variation and complexity of the task. We will test this hypothesis by training multilingual models over hundreds or even almost a thousand languages to force the MT model to abstract over a large proportion of the World's linguistic diversity. As a biproduct of multilingual models with shared parameters, we will obtain a mapping of languages to a continuous vector space depicting relations between individual languages by means of geometric distances. In this paper, we present our initial findings when training such a model with over 900 languages from a collection of Bible translations and focus on the ability of the model to pick up genetic relations between languages when being forced to cover many languages in one single model. In the following, we will first present the basic architecture of the neural translation model together with the setup for training multilingual models. After that we will discuss our experimental results before concluding the paper with some final comments and prospects for future work. Multilingual Neural Machine Translation Neural machine translation typically applies an end-to-end network architecture that includes one or several layers for encoding an input sentence into an internal dense real-valued vector representation and another layer for decoding that representation into the output of the target language. Various variants of that model have been proposed in the recent literature BIBREF16 , BIBREF17 with the same general idea of compressing a sentence into a representation that captures all necessary aspects of the input to enable proper translation in the decoder. An important requirement is that the model needs to support variable lengths of input and output. This is achieved using recurrent neural networks (RNNs) that naturally support sequences of arbitrary lengths. 
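To ground the encoder-decoder description in something runnable, here is a minimal, purely illustrative sketch of a bidirectional recurrent encoder that turns a variable-length sequence of token ids into dense context vectors; the class name and hyperparameters are assumptions, not the configuration used in this work.

```python
# Illustrative sketch (assumed hyperparameters): a bidirectional GRU encoder that
# maps discrete input symbols to a sequence of dense context vectors h.
import torch
import torch.nn as nn

class BiRNNEncoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 512, hidden: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # word embeddings E
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> context vectors: (batch, seq_len, 2*hidden)
        h, _ = self.rnn(self.embed(token_ids))
        return h  # consumed by an attention mechanism in the decoder
```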
A common architecture is illustrated in Figure 1 : Discrete input symbols are mapped via numeric word representations (embeddings $E$ ) onto a hidden layer ( $C$ ) of context vectors ( $h$ ), in this case by a bidirectional RNN that reads the sequence in a forward and a reverse mode. The encoding function is often modeled by special memory units and all model parameters are learned during training on example translations. In the simplest case, the final representation (returned after running through the encoding layer) is sent to the decoder, which unrolls the information captured by that internal representation. Note that the illustration in Figure 1 includes an important addition to the model, a so-called attention mechanism. Attention makes it possible to focus on particular regions from the encoded sentence when decoding BIBREF17 and, with this, the representation becomes much more flexible and dynamic and greatly improves the translation of sentences with variable lengths. All parameters of the network are trained on large collections of human translations (parallel corpora) typically by some form of gradient descent (iterative function optimisation) that is backpropagated through the network. The attractive property of such a model is the ability to learn representations that reflect semantic properties of the input language through the task of translation. However, one problem is that translation models can be “lazy” and avoid abstractions if the mapping between source and target language does not require any deep understanding. This is where the idea of multilinguality comes into the picture: If the learning algorithm is confronted with a large linguistic variety then it has to generalize and to forget about language-pair-specific shortcuts. Covering substantial amounts of the world's linguistic diversity as we propose pushes the limits of the approach and strong abstractions in $C$ can be expected. Figure 2 illustrates the intuition behind that idea. Various multilingual extensions of NMT have already been proposed in the literature. The authors of BIBREF18 , BIBREF19 apply multitask learning to train models for multiple languages. Zoph and Knight BIBREF20 propose a multi-source model and BIBREF21 introduces a character-level encoder that is shared across several source languages. In our setup, we will follow the main idea proposed by Johnson et al. BIBREF22 . The authors of that paper suggest a simple addition by means of a language flag on the source language side (see Figure 2 ) to indicate the target language that needs to be produced by the decoder. This flag will be mapped on a dense vector representation and can be used to trigger the generation of the selected language. The authors of the paper argue that the model enables transfer learning and supports the translation between languages that are not explicitly available in training. This ability gives a hint of some kind of vector-based “interlingua”, which is precisely what we are looking for. However, the original paper only looks at a small number of languages and we will scale it up to a larger variation using significantly more languages to train on. More details will be given in the following section. Experiments and Results Our question is whether we can use a standard NMT model with a much larger coverage of the linguistic diversity of the World in order to maximise the variation signalling semantic distinctions that can be picked up by the learning procedures. 
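The language-flag mechanism itself amounts to a one-line preprocessing step: prepend a token naming the desired target language to every source sentence, so that a single shared model can be steered at decoding time. The sketch below is illustrative; the `<2xxx>` tag format and the verse pairs are our assumptions, not the exact tokens used in the experiments.

```python
# Minimal sketch (assumed tag format): prepend a target-language flag to each
# source sentence so one shared encoder-decoder can serve all language pairs.
def add_language_flag(source_sentence: str, target_lang: str) -> str:
    return f"<2{target_lang}> {source_sentence}"

# A multilingual training corpus is then just a mixture of flagged pairs:
training_pairs = [
    (add_language_flag("In the beginning God created the heaven and the earth .", "deu"),
     "Am Anfang schuf Gott Himmel und Erde ."),
    (add_language_flag("Am Anfang schuf Gott Himmel und Erde .", "eng"),
     "In the beginning God created the heaven and the earth ."),
]

# Zero-shot directions (e.g., Swedish -> Portuguese) reuse the same mechanism at test time:
zero_shot_input = add_language_flag("I begynnelsen skapade Gud himmel och jord .", "por")
```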
Figure 3 illustrates our setup based on a model trained on over 900 languages from the multilingual Bible corpus BIBREF23 . We trained the model in various batches and observed the development of the model in terms of translation quality on some small heldout data. The heldout data refers to an unseen language pair, Swedish-Portuguese in our case (in both directions). We selected those languages in order to see the capabilities of the system to translate between rather distant languages for which a reasonable number of closely related languages are in the data collection to improve knowledge transfer. The results demonstrate so far that the network indeed picks up the information about the language to be produced. The decoder successfully switches to the selected language and produces relatively fluent Bible-style text. The adequacy of the translation, however, is rather limited and this is most probably due to the restricted capacity of the network with such a load of information to be covered. Nevertheless, it is exciting to see that such a diverse material can be used in one single model and that it learns to share parameters across all languages. One of the most interesting effects that we can observe is the emerging language space that relates to the language flags in the data. In Figure 4 we plot the language space (using t-SNE BIBREF24 for projecting to two dimensions) coloured by language family for the ten language families / groups with most members in our data set. We can see that languages roughly cluster according to the family they belong to. Note that this is purely learned from the data based on the objective to translate between all of them with a single model. The training procedure learns to map closely related languages near to each other in order to increase knowledge transfer between them. This development is very encouraging and demonstrates the ability of the neural network model to optimise parameter sharing to make most out of the model's capacity. An interesting question coming out of this study is whether such multilingual translation models can be used to learn linguistic properties of the languages involved. Making it possible to measure the distance between individual languages in the emerging structures could be useful in data-driven language typology and other cross-linguistic studies. The results so far, do not reveal a lot of linguistically interesting relations besides the projection of languages onto a global continuous space with real-values distances between them. Nevertheless, quantifying the distance is potentially valuable and provides a more fine-grained relation than discrete relations coming from traditional family trees. It is, however, still an open question what kind of properties are represented by the language embeddings and further studies are necessary to see whether specific linguistic features can be identified and isolated from the distributed representations. There is a growing interest in interpretability of emerging structures and related work already demonstrates the ability of predicting typological features with similar language representations BIBREF25 . Massively parallel data sets make it now possible to study specific typological structures with computational models, for example tense and aspect as in BIBREF26 , and we intend to follow up our initial investigations of NMT-based representations in future research along those lines. 
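The language-space visualisation described here (Figure 4) can be reproduced in outline with off-the-shelf t-SNE; the sketch assumes the language-flag embeddings and their family labels have already been extracted from the trained model, and all variable names are ours.

```python
# Sketch (hypothetical variable names): project the learned language-flag
# embeddings to 2D with t-SNE and colour them by language family.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_language_space(lang_embeddings: np.ndarray, lang_families: list) -> None:
    """lang_embeddings: (n_languages, d) matrix; lang_families: family label per language."""
    coords = TSNE(n_components=2, perplexity=30, init="pca",
                  random_state=0).fit_transform(lang_embeddings)
    for family in sorted(set(lang_families)):
        idx = [i for i, f in enumerate(lang_families) if f == family]
        plt.scatter(coords[idx, 0], coords[idx, 1], s=8, label=family)
    plt.legend(fontsize=6)
    plt.title("Language space learned from the language flags (t-SNE)")
    plt.show()
```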
We also plan to consider domains other than religious texts, but it is difficult to obtain the same coverage of the linguistic space with different material. Unbalanced mixtures are an option but difficult to train. Resources like the Universal Declaration of Human Rights are an option but, unfortunately, very sparse. Another direction is to explore inter-lingual variation and language development using, for example, the alternative translations that exist for some languages in the Bible corpus. However, even here the data is rather sparse and it remains to be seen how reliable any emerging pattern will be. Crucial for success will be a strong collaboration with scholars from the humanities, which shows the important role of digital humanities as a field. Conclusions In this paper, we present our experiments with highly multilingual translation models. We trained neural MT models on Bible translations of over 900 languages in order to see whether the system is capable of sharing parameters across a large and diverse sample of the World's languages. Our motivation is to learn language-independent meaning representations using translations as implicit semantic supervision and cross-lingual grounding. Our pilot study demonstrates that such a model can pick up the relationships between languages purely from the data and the translation objective. We hypothesise that such a data-driven setup can be interesting for cross-linguistic studies and language typology. In the future, we would like to investigate the emerging language space in more detail, also in connection with alternative network architectures and training procedures. We believe that empirical methods like this one, based on automatic representation learning, will have a significant impact on studies in linguistics by providing an objective way of investigating properties and structures of human languages emerging from data and distributional patterns. Acknowledgements We would like to thank the anonymous reviewers for their valuable comments and suggestions as well as the Academy of Finland for the support of the research presented in the paper with project 314062 from the ICT 2023 call on Computation, Machine Learning and Artificial Intelligence.
Multilingual Neural Machine Translation Models
e70236c876c94dbecd9a665d9ba8cefe7301dcfd
e70236c876c94dbecd9a665d9ba8cefe7301dcfd_0
Q: Did they experiment on the proposed task? Text: Introduction There are more and more NLP scholars focusing on the research of multi-party dialogues, such as multi-party dialogues discourse parsing and multi-party meeting summarization BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. However, the scale of the STAC dataset has limited the research of discourse parsing for multi-party dialogues. On the other hand, as we know, there is no literature working on machine reading comprehension for multi-party dialogues. Considering the connection between the relevance between machine reading comprehension and discourse parsing, we annotate the dataset for two tasks for multi-party dialogues understanding. Our dataset derives from the large scale multi-party dialogues dataset the Ubuntu Chat Corpus BIBREF6. For each dialogue in the corpus, we annotate the discourse structure of the dialogue and propose three questions and find the answer span in the input dialogues. To improve the difficulty of the task, we annotate $ \frac{1}{6}$ to $ \frac{1}{3}$ unanswerable questions and their plausible answers from dialogues. This is a real example from the Ubuntu dataset. Example 1 1. mjg59: Someone should suggest to Mark that the best way to get people to love you is to hire people to work on reverse-engineering closed drivers. 2. jdub: heh 3. daniels $\rightarrow $ mjg59: heh 4. daniels: HELLO 5. daniels $\rightarrow $ mjg59: your job is to entertain me so I don't fall asleep at 2pm and totally destroy my migration to AEST 6. bdale $\rightarrow $ daniels: see you next week? 7. daniels $\rightarrow $ bdale: oh, auug, right. rock. 8. daniels $\rightarrow $ bdale: just drop me an email, or call +61 403 505 896 9. bdale $\rightarrow $ daniels: I arrive Tuesday morning your time, depart Fri morning, will be staying at the Duxton There are mainly two contributions to our corpus: A first large scale multi-part dialogues dataset for discourse parsing. It is a challenging task to parse the discourse structure of multi-party dialogues. Enough training data will be essential to develop more powerful models. We firstly propose the task of machine reading comprehension for multi-party dialogues. Different from existing machine comprehension tasks, multi-party dialogues could be more difficult which needs a graph-based model for representing the dialogues and better understanding the discourse structure in the dialogue. In this paper, I will give a detailed description of our large scale dataset. In section 2, I will introduce Ubuntu corpus. In Section 3 and Section 4, I will introduce the annotation for discourse parsing and machine reading comprehension respectively. In Section 5, I will briefly list some related literature. Ubuntu Corpus Our dataset derives from the large scale multi-party dialogues dataset the Ubuntu Chat Corpus BIBREF6. The Ubuntu dataset is a large scale multi-party dialogues corpus. There are several reasons to choose the Ubuntu dataset as our raw data for annotation. First, Ubuntu dataset is a large multi-party dataset. Recently, BIBREF1 used Ubuntu as their dataset for learning dialogues graph representation. After some preprocessing, there are 38K sessions and 1.75M utterances. In each session, there are 3-10 utterances and 2-7 interlocutors. Second, it is easy to annotate the Ubuntu dataset. The Ubuntu dataset already contains Response-to relations that are discourse relations between different speakers' utterances. 
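For concreteness, Example 1 can be held in a simple turn-level structure that records the speaker, the optional addressee (the arrow notation above), and the text. This is an illustrative sketch of a possible working representation, not the corpus' actual release format.

```python
# Illustrative representation of Example 1 (not the corpus' release format):
# each turn keeps its index, speaker, optional addressee, and text.
example_dialogue = [
    {"id": 1, "speaker": "mjg59",   "addressee": None,      "text": "Someone should suggest to Mark that the best way to get people to love you is to hire people to work on reverse-engineering closed drivers."},
    {"id": 2, "speaker": "jdub",    "addressee": None,      "text": "heh"},
    {"id": 3, "speaker": "daniels", "addressee": "mjg59",   "text": "heh"},
    {"id": 4, "speaker": "daniels", "addressee": None,      "text": "HELLO"},
    {"id": 5, "speaker": "daniels", "addressee": "mjg59",   "text": "your job is to entertain me so I don't fall asleep at 2pm and totally destroy my migration to AEST"},
    {"id": 6, "speaker": "bdale",   "addressee": "daniels", "text": "see you next week?"},
    {"id": 7, "speaker": "daniels", "addressee": "bdale",   "text": "oh, auug, right. rock."},
    {"id": 8, "speaker": "daniels", "addressee": "bdale",   "text": "just drop me an email, or call +61 403 505 896"},
    {"id": 9, "speaker": "bdale",   "addressee": "daniels", "text": "I arrive Tuesday morning your time, depart Fri morning, will be staying at the Duxton"},
]
speakers = {turn["speaker"] for turn in example_dialogue}  # 4 speakers, 9 utterances
```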
For annotating discourse dependencies in dialogues, we only need to annotate relations between the same speaker's utterances and the specific sense of discourse relation. Third, there are many papers doing experiments on the Ubuntu dataset, and the dataset has been widely recognized. The discourse dependency structure of each multi-party dialogue can be regarded as a graph. To learn better graph representation of multi-party dialogues, we adopt the dialogues with 8-15 utterances and 3-7 speakers. To simplify the task, we filter the dialogues with long sentences (more than 20 words). Finally, we obtain 52,053 dialogues and 460,358 utterances. Annotation for discourse parsing in multi-party dialogues This section will explain how to annotate discourse structure in multi-party dialogues. The task of discourse parsing for multi-party dialogues aims to detect discourse relations among utterances. The discourse structure of a multi-party dialogue is a directed acyclic graph (DAG). In the process of annotation of discourse parsing for multi-party dialogues, there are two parts: edges annotation between utterances and specific sense type of discourse relations. The discourse structure of Example 1 is shown in Figure 1. There are four speakers and nine utterances in the sample dialogue. The left part shows the speakers and their utterances and the right part shows the discourse dependency relation arcs. The discourse structure can be seen as a discourse dependency graph. We adopt the same sense hierarchy with the STAC dataset which contains sixteen discourse relations. Annotation for discourse parsing in multi-party dialogues ::: Edges between utterances The edge between two utterances represents that there is the discourse dependency relations between these two utterances. The direction of the edge represents the direction of discourse dependency. In this subsection, what we need to do is to confirm whether two utterances have discourse relation. Like PDTB BIBREF7, we call two utterances as Arg1 and Arg2 respectively. The front utterance is Arg1 and the back utterance is Arg2. For example, there is a multi-party dialogue with 9 utterances in Example 1, utterances 1-9 respectively. The utterance 3 depends on utterance 1, we can draw an edge from utterance 1 to utterance 3. Otherwise, if utterance 1 depends on utterance 2, we can draw an edge from utterance 2 to utterance 1. In most cases, the direction of discourse relations in multi-party dialogues is from the front to the back. The biggest difference between discourse parsing for well-written document and dialogues is that discourse relations can exist on two nonadjacent utterances in dialogues. When we annotate dialogues, we should read dialogues from begin to the end. For each utterance, we should find its one parent node at least from all its previous utterances. We assume that the discourse structure is a connected graph and no utterance is isolated. Annotation for discourse parsing in multi-party dialogues ::: Sense of discourse relations When we find the discourse relation between two utterances, we need continue to confirm the specific relation sense. We adopt the same senses hierarchy with the STAC dataset. There are sixteen discourse relations in the STAC. All relations are listed as follows: Comment, Clarification_question, Elaboration, Acknowledgement, Continuation, Explanation, Conditional, Question-answer_pair, Alternation, Q-Elab, Result, Background, Narration, Correction, Parallel, Contrast. 
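The annotation scheme just described — directed dependency edges between utterances, each labelled with one of the sixteen STAC senses, and every utterance having at least one parent so that the graph stays connected — can be captured and sanity-checked with a small helper such as the sketch below; the field layout and the validation function are ours, not part of any released annotation tooling.

```python
# Sketch of a possible annotation record for a dialogue's discourse structure
# (hypothetical layout): directed edges (arg1 -> arg2) labelled with one of the
# sixteen STAC senses, plus the constraints described above.
SENSES = {
    "Comment", "Clarification_question", "Elaboration", "Acknowledgement",
    "Continuation", "Explanation", "Conditional", "Question-answer_pair",
    "Alternation", "Q-Elab", "Result", "Background", "Narration",
    "Correction", "Parallel", "Contrast",
}

def validate_annotation(num_utterances: int, edges: list) -> None:
    """edges: list of (arg1, arg2, sense) triples with 1-based utterance indices."""
    for arg1, arg2, sense in edges:
        assert sense in SENSES, f"unknown sense: {sense}"
    # every utterance after the first needs at least one parent, normally taken
    # from the preceding utterances, so the graph is connected and nothing is isolated
    for u in range(2, num_utterances + 1):
        assert any(a2 == u for _, a2, _ in edges), f"utterance {u} is isolated"
```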
Annotation for machine reading comprehension in multi-party dialogues The task of reading comprehension for multi-party dialogues aims to be beneficial for understanding multi-party dialogues. Different from existing machine reading comprehension tasks, the input of this task is a multi-party dialogue, and we should to answer some questions given the dialogue. We propose three questions for eache dialogue and annotate the span of answers in the input dialogue. As we know, our dataset is the first corpus for multi-party dialogues reading comprehension. We construct following questions and answers for the dialogue in Example 1: Q1: When does Bdale leave? A1: Fri morning Q2: How to get people love Mark in Mjg59's opinion. A2: Hire people to work on reverse-engineering closed drivers. On the other hand, to improve the difficulty of the task, we propose $ \frac{1}{6}$ to $ \frac{1}{3}$ unanswerable questions in our dataset. We annotate unanswerable questions and their plausible answers (PA). Each plausible answer comes from the input dialogue, but is not the answer for the plausible question. Q1: Whis is the email of daniels? PA: +61 403 505 896 Related work In this section, I will introduce several existing multi-party dialogues datasets, and explain why we need to annotated a new dataset. Related work ::: Discourse parsing for multi-party dialogues There is an only corpus of discourse parsing on multi-party chat dialogues: STAC BIBREF8. The corpus derives from online game The Settlers of Catan. The game Settlers is a multi-party, win-lose game. As mentioned above, an example in STAC is shown in Figure 1. More details for STAC corpus are described in BIBREF8. The overview of the STAC is shown in Table 1. From Table 1 we can know that there are about more 10K EDUs and relations and most of EDUs are weakly connected. Each EDU can be regarded as a message or sentence in the dialogues. There are sixteen types of discourse dependency relations in STAC as shown in Section 3.2. Related work ::: Machine reading comprehension Machine reading comprehension is a popular task which aims to help the machine better understand natural language. There are several types of datasets for machine comprehension, including extractive datasets BIBREF9, BIBREF10, answer sentence selection datasets BIBREF11, BIBREF12 and multiple choice datasets BIBREF13, BIBREF14. I will briefly introduce two datasets QuAC BIBREF15and CoQA BIBREF16. QuAC : Question Answering in Context is a two-party dialogues dataset for machine reading comprehension BIBREF15. The dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total). The dialogs involve two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts from the text. CoQA is a large dataset for building conversation question answering systems BIBREF16. Conclusion We propose the scheme for annotating large scale multi-party chat dialogues for discourse parsing and machine comprehension. The main goal of this project is to be beneficial for understanding multi-party dialogues. Our corpus are based on the Ubuntu Chat Corpus. For each multi-party dialogue, we annotate discourse structure and question-answer pairs for the dialogue. 
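Putting the answerable questions and the unanswerable question with its plausible answer together, a single dialogue's reading-comprehension annotation could be stored in a SQuAD-2.0-style record like the following sketch; the field names are assumptions rather than the dataset's actual schema.

```python
# Illustrative sketch of one reading-comprehension record (field names assumed),
# mirroring the answerable and unanswerable examples given for Example 1.
qa_record = {
    "dialogue_id": "example-1",
    "questions": [
        {"question": "When does Bdale leave?",
         "answer": "Fri morning",            # answer span from utterance 9
         "is_impossible": False},
        {"question": "How to get people to love Mark, in Mjg59's opinion?",
         "answer": "Hire people to work on reverse-engineering closed drivers.",
         "is_impossible": False},
        {"question": "What is the email of daniels?",
         "answer": None,
         "plausible_answer": "+61 403 505 896",  # a span from the dialogue, but not a true answer
         "is_impossible": True},
    ],
}
```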
To our knowledge, this is the first large-scale corpus for discourse parsing of multi-party dialogues, and we are the first to propose the task of machine reading comprehension for multi-party dialogues.
No
aa1f605619b2487cc914fc2594c8efe2598d8555
aa1f605619b2487cc914fc2594c8efe2598d8555_0
Q: Is annotation done manually? Text: Introduction There are more and more NLP scholars focusing on the research of multi-party dialogues, such as multi-party dialogues discourse parsing and multi-party meeting summarization BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. However, the scale of the STAC dataset has limited the research of discourse parsing for multi-party dialogues. On the other hand, as we know, there is no literature working on machine reading comprehension for multi-party dialogues. Considering the connection between the relevance between machine reading comprehension and discourse parsing, we annotate the dataset for two tasks for multi-party dialogues understanding. Our dataset derives from the large scale multi-party dialogues dataset the Ubuntu Chat Corpus BIBREF6. For each dialogue in the corpus, we annotate the discourse structure of the dialogue and propose three questions and find the answer span in the input dialogues. To improve the difficulty of the task, we annotate $ \frac{1}{6}$ to $ \frac{1}{3}$ unanswerable questions and their plausible answers from dialogues. This is a real example from the Ubuntu dataset. Example 1 1. mjg59: Someone should suggest to Mark that the best way to get people to love you is to hire people to work on reverse-engineering closed drivers. 2. jdub: heh 3. daniels $\rightarrow $ mjg59: heh 4. daniels: HELLO 5. daniels $\rightarrow $ mjg59: your job is to entertain me so I don't fall asleep at 2pm and totally destroy my migration to AEST 6. bdale $\rightarrow $ daniels: see you next week? 7. daniels $\rightarrow $ bdale: oh, auug, right. rock. 8. daniels $\rightarrow $ bdale: just drop me an email, or call +61 403 505 896 9. bdale $\rightarrow $ daniels: I arrive Tuesday morning your time, depart Fri morning, will be staying at the Duxton There are mainly two contributions to our corpus: A first large scale multi-part dialogues dataset for discourse parsing. It is a challenging task to parse the discourse structure of multi-party dialogues. Enough training data will be essential to develop more powerful models. We firstly propose the task of machine reading comprehension for multi-party dialogues. Different from existing machine comprehension tasks, multi-party dialogues could be more difficult which needs a graph-based model for representing the dialogues and better understanding the discourse structure in the dialogue. In this paper, I will give a detailed description of our large scale dataset. In section 2, I will introduce Ubuntu corpus. In Section 3 and Section 4, I will introduce the annotation for discourse parsing and machine reading comprehension respectively. In Section 5, I will briefly list some related literature. Ubuntu Corpus Our dataset derives from the large scale multi-party dialogues dataset the Ubuntu Chat Corpus BIBREF6. The Ubuntu dataset is a large scale multi-party dialogues corpus. There are several reasons to choose the Ubuntu dataset as our raw data for annotation. First, Ubuntu dataset is a large multi-party dataset. Recently, BIBREF1 used Ubuntu as their dataset for learning dialogues graph representation. After some preprocessing, there are 38K sessions and 1.75M utterances. In each session, there are 3-10 utterances and 2-7 interlocutors. Second, it is easy to annotate the Ubuntu dataset. The Ubuntu dataset already contains Response-to relations that are discourse relations between different speakers' utterances. 
For annotating discourse dependencies in dialogues, we only need to annotate relations between the same speaker's utterances and the specific sense of discourse relation. Third, there are many papers doing experiments on the Ubuntu dataset, and the dataset has been widely recognized. The discourse dependency structure of each multi-party dialogue can be regarded as a graph. To learn better graph representation of multi-party dialogues, we adopt the dialogues with 8-15 utterances and 3-7 speakers. To simplify the task, we filter the dialogues with long sentences (more than 20 words). Finally, we obtain 52,053 dialogues and 460,358 utterances. Annotation for discourse parsing in multi-party dialogues This section will explain how to annotate discourse structure in multi-party dialogues. The task of discourse parsing for multi-party dialogues aims to detect discourse relations among utterances. The discourse structure of a multi-party dialogue is a directed acyclic graph (DAG). In the process of annotation of discourse parsing for multi-party dialogues, there are two parts: edges annotation between utterances and specific sense type of discourse relations. The discourse structure of Example 1 is shown in Figure 1. There are four speakers and nine utterances in the sample dialogue. The left part shows the speakers and their utterances and the right part shows the discourse dependency relation arcs. The discourse structure can be seen as a discourse dependency graph. We adopt the same sense hierarchy with the STAC dataset which contains sixteen discourse relations. Annotation for discourse parsing in multi-party dialogues ::: Edges between utterances The edge between two utterances represents that there is the discourse dependency relations between these two utterances. The direction of the edge represents the direction of discourse dependency. In this subsection, what we need to do is to confirm whether two utterances have discourse relation. Like PDTB BIBREF7, we call two utterances as Arg1 and Arg2 respectively. The front utterance is Arg1 and the back utterance is Arg2. For example, there is a multi-party dialogue with 9 utterances in Example 1, utterances 1-9 respectively. The utterance 3 depends on utterance 1, we can draw an edge from utterance 1 to utterance 3. Otherwise, if utterance 1 depends on utterance 2, we can draw an edge from utterance 2 to utterance 1. In most cases, the direction of discourse relations in multi-party dialogues is from the front to the back. The biggest difference between discourse parsing for well-written document and dialogues is that discourse relations can exist on two nonadjacent utterances in dialogues. When we annotate dialogues, we should read dialogues from begin to the end. For each utterance, we should find its one parent node at least from all its previous utterances. We assume that the discourse structure is a connected graph and no utterance is isolated. Annotation for discourse parsing in multi-party dialogues ::: Sense of discourse relations When we find the discourse relation between two utterances, we need continue to confirm the specific relation sense. We adopt the same senses hierarchy with the STAC dataset. There are sixteen discourse relations in the STAC. All relations are listed as follows: Comment, Clarification_question, Elaboration, Acknowledgement, Continuation, Explanation, Conditional, Question-answer_pair, Alternation, Q-Elab, Result, Background, Narration, Correction, Parallel, Contrast. 
Annotation for machine reading comprehension in multi-party dialogues The task of reading comprehension for multi-party dialogues aims to be beneficial for understanding multi-party dialogues. Different from existing machine reading comprehension tasks, the input of this task is a multi-party dialogue, and we should to answer some questions given the dialogue. We propose three questions for eache dialogue and annotate the span of answers in the input dialogue. As we know, our dataset is the first corpus for multi-party dialogues reading comprehension. We construct following questions and answers for the dialogue in Example 1: Q1: When does Bdale leave? A1: Fri morning Q2: How to get people love Mark in Mjg59's opinion. A2: Hire people to work on reverse-engineering closed drivers. On the other hand, to improve the difficulty of the task, we propose $ \frac{1}{6}$ to $ \frac{1}{3}$ unanswerable questions in our dataset. We annotate unanswerable questions and their plausible answers (PA). Each plausible answer comes from the input dialogue, but is not the answer for the plausible question. Q1: Whis is the email of daniels? PA: +61 403 505 896 Related work In this section, I will introduce several existing multi-party dialogues datasets, and explain why we need to annotated a new dataset. Related work ::: Discourse parsing for multi-party dialogues There is an only corpus of discourse parsing on multi-party chat dialogues: STAC BIBREF8. The corpus derives from online game The Settlers of Catan. The game Settlers is a multi-party, win-lose game. As mentioned above, an example in STAC is shown in Figure 1. More details for STAC corpus are described in BIBREF8. The overview of the STAC is shown in Table 1. From Table 1 we can know that there are about more 10K EDUs and relations and most of EDUs are weakly connected. Each EDU can be regarded as a message or sentence in the dialogues. There are sixteen types of discourse dependency relations in STAC as shown in Section 3.2. Related work ::: Machine reading comprehension Machine reading comprehension is a popular task which aims to help the machine better understand natural language. There are several types of datasets for machine comprehension, including extractive datasets BIBREF9, BIBREF10, answer sentence selection datasets BIBREF11, BIBREF12 and multiple choice datasets BIBREF13, BIBREF14. I will briefly introduce two datasets QuAC BIBREF15and CoQA BIBREF16. QuAC : Question Answering in Context is a two-party dialogues dataset for machine reading comprehension BIBREF15. The dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total). The dialogs involve two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts from the text. CoQA is a large dataset for building conversation question answering systems BIBREF16. Conclusion We propose the scheme for annotating large scale multi-party chat dialogues for discourse parsing and machine comprehension. The main goal of this project is to be beneficial for understanding multi-party dialogues. Our corpus are based on the Ubuntu Chat Corpus. For each multi-party dialogue, we annotate discourse structure and question-answer pairs for the dialogue. 
To our knowledge, this is the first large-scale corpus for discourse parsing of multi-party dialogues, and we are the first to propose the task of machine reading comprehension for multi-party dialogues.
Yes
9f2634c142dc4ad2c68135dbb393ecdfd23af13f
9f2634c142dc4ad2c68135dbb393ecdfd23af13f_0
Q: How large is the proposed dataset? Text: Introduction There are more and more NLP scholars focusing on the research of multi-party dialogues, such as multi-party dialogues discourse parsing and multi-party meeting summarization BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. However, the scale of the STAC dataset has limited the research of discourse parsing for multi-party dialogues. On the other hand, as we know, there is no literature working on machine reading comprehension for multi-party dialogues. Considering the connection between the relevance between machine reading comprehension and discourse parsing, we annotate the dataset for two tasks for multi-party dialogues understanding. Our dataset derives from the large scale multi-party dialogues dataset the Ubuntu Chat Corpus BIBREF6. For each dialogue in the corpus, we annotate the discourse structure of the dialogue and propose three questions and find the answer span in the input dialogues. To improve the difficulty of the task, we annotate $ \frac{1}{6}$ to $ \frac{1}{3}$ unanswerable questions and their plausible answers from dialogues. This is a real example from the Ubuntu dataset. Example 1 1. mjg59: Someone should suggest to Mark that the best way to get people to love you is to hire people to work on reverse-engineering closed drivers. 2. jdub: heh 3. daniels $\rightarrow $ mjg59: heh 4. daniels: HELLO 5. daniels $\rightarrow $ mjg59: your job is to entertain me so I don't fall asleep at 2pm and totally destroy my migration to AEST 6. bdale $\rightarrow $ daniels: see you next week? 7. daniels $\rightarrow $ bdale: oh, auug, right. rock. 8. daniels $\rightarrow $ bdale: just drop me an email, or call +61 403 505 896 9. bdale $\rightarrow $ daniels: I arrive Tuesday morning your time, depart Fri morning, will be staying at the Duxton There are mainly two contributions to our corpus: A first large scale multi-part dialogues dataset for discourse parsing. It is a challenging task to parse the discourse structure of multi-party dialogues. Enough training data will be essential to develop more powerful models. We firstly propose the task of machine reading comprehension for multi-party dialogues. Different from existing machine comprehension tasks, multi-party dialogues could be more difficult which needs a graph-based model for representing the dialogues and better understanding the discourse structure in the dialogue. In this paper, I will give a detailed description of our large scale dataset. In section 2, I will introduce Ubuntu corpus. In Section 3 and Section 4, I will introduce the annotation for discourse parsing and machine reading comprehension respectively. In Section 5, I will briefly list some related literature. Ubuntu Corpus Our dataset derives from the large scale multi-party dialogues dataset the Ubuntu Chat Corpus BIBREF6. The Ubuntu dataset is a large scale multi-party dialogues corpus. There are several reasons to choose the Ubuntu dataset as our raw data for annotation. First, Ubuntu dataset is a large multi-party dataset. Recently, BIBREF1 used Ubuntu as their dataset for learning dialogues graph representation. After some preprocessing, there are 38K sessions and 1.75M utterances. In each session, there are 3-10 utterances and 2-7 interlocutors. Second, it is easy to annotate the Ubuntu dataset. The Ubuntu dataset already contains Response-to relations that are discourse relations between different speakers' utterances. 
For annotating discourse dependencies in dialogues, we only need to annotate relations between the same speaker's utterances and the specific sense of discourse relation. Third, there are many papers doing experiments on the Ubuntu dataset, and the dataset has been widely recognized. The discourse dependency structure of each multi-party dialogue can be regarded as a graph. To learn better graph representation of multi-party dialogues, we adopt the dialogues with 8-15 utterances and 3-7 speakers. To simplify the task, we filter the dialogues with long sentences (more than 20 words). Finally, we obtain 52,053 dialogues and 460,358 utterances. Annotation for discourse parsing in multi-party dialogues This section will explain how to annotate discourse structure in multi-party dialogues. The task of discourse parsing for multi-party dialogues aims to detect discourse relations among utterances. The discourse structure of a multi-party dialogue is a directed acyclic graph (DAG). In the process of annotation of discourse parsing for multi-party dialogues, there are two parts: edges annotation between utterances and specific sense type of discourse relations. The discourse structure of Example 1 is shown in Figure 1. There are four speakers and nine utterances in the sample dialogue. The left part shows the speakers and their utterances and the right part shows the discourse dependency relation arcs. The discourse structure can be seen as a discourse dependency graph. We adopt the same sense hierarchy with the STAC dataset which contains sixteen discourse relations. Annotation for discourse parsing in multi-party dialogues ::: Edges between utterances The edge between two utterances represents that there is the discourse dependency relations between these two utterances. The direction of the edge represents the direction of discourse dependency. In this subsection, what we need to do is to confirm whether two utterances have discourse relation. Like PDTB BIBREF7, we call two utterances as Arg1 and Arg2 respectively. The front utterance is Arg1 and the back utterance is Arg2. For example, there is a multi-party dialogue with 9 utterances in Example 1, utterances 1-9 respectively. The utterance 3 depends on utterance 1, we can draw an edge from utterance 1 to utterance 3. Otherwise, if utterance 1 depends on utterance 2, we can draw an edge from utterance 2 to utterance 1. In most cases, the direction of discourse relations in multi-party dialogues is from the front to the back. The biggest difference between discourse parsing for well-written document and dialogues is that discourse relations can exist on two nonadjacent utterances in dialogues. When we annotate dialogues, we should read dialogues from begin to the end. For each utterance, we should find its one parent node at least from all its previous utterances. We assume that the discourse structure is a connected graph and no utterance is isolated. Annotation for discourse parsing in multi-party dialogues ::: Sense of discourse relations When we find the discourse relation between two utterances, we need continue to confirm the specific relation sense. We adopt the same senses hierarchy with the STAC dataset. There are sixteen discourse relations in the STAC. All relations are listed as follows: Comment, Clarification_question, Elaboration, Acknowledgement, Continuation, Explanation, Conditional, Question-answer_pair, Alternation, Q-Elab, Result, Background, Narration, Correction, Parallel, Contrast. 
Annotation for machine reading comprehension in multi-party dialogues The task of reading comprehension for multi-party dialogues aims to be beneficial for understanding multi-party dialogues. Different from existing machine reading comprehension tasks, the input of this task is a multi-party dialogue, and we should to answer some questions given the dialogue. We propose three questions for eache dialogue and annotate the span of answers in the input dialogue. As we know, our dataset is the first corpus for multi-party dialogues reading comprehension. We construct following questions and answers for the dialogue in Example 1: Q1: When does Bdale leave? A1: Fri morning Q2: How to get people love Mark in Mjg59's opinion. A2: Hire people to work on reverse-engineering closed drivers. On the other hand, to improve the difficulty of the task, we propose $ \frac{1}{6}$ to $ \frac{1}{3}$ unanswerable questions in our dataset. We annotate unanswerable questions and their plausible answers (PA). Each plausible answer comes from the input dialogue, but is not the answer for the plausible question. Q1: Whis is the email of daniels? PA: +61 403 505 896 Related work In this section, I will introduce several existing multi-party dialogues datasets, and explain why we need to annotated a new dataset. Related work ::: Discourse parsing for multi-party dialogues There is an only corpus of discourse parsing on multi-party chat dialogues: STAC BIBREF8. The corpus derives from online game The Settlers of Catan. The game Settlers is a multi-party, win-lose game. As mentioned above, an example in STAC is shown in Figure 1. More details for STAC corpus are described in BIBREF8. The overview of the STAC is shown in Table 1. From Table 1 we can know that there are about more 10K EDUs and relations and most of EDUs are weakly connected. Each EDU can be regarded as a message or sentence in the dialogues. There are sixteen types of discourse dependency relations in STAC as shown in Section 3.2. Related work ::: Machine reading comprehension Machine reading comprehension is a popular task which aims to help the machine better understand natural language. There are several types of datasets for machine comprehension, including extractive datasets BIBREF9, BIBREF10, answer sentence selection datasets BIBREF11, BIBREF12 and multiple choice datasets BIBREF13, BIBREF14. I will briefly introduce two datasets QuAC BIBREF15and CoQA BIBREF16. QuAC : Question Answering in Context is a two-party dialogues dataset for machine reading comprehension BIBREF15. The dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total). The dialogs involve two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts from the text. CoQA is a large dataset for building conversation question answering systems BIBREF16. Conclusion We propose the scheme for annotating large scale multi-party chat dialogues for discourse parsing and machine comprehension. The main goal of this project is to be beneficial for understanding multi-party dialogues. Our corpus are based on the Ubuntu Chat Corpus. For each multi-party dialogue, we annotate discourse structure and question-answer pairs for the dialogue. 
To our knowledge, this is the first large-scale corpus for discourse parsing of multi-party dialogues, and we are the first to propose the task of machine reading comprehension for multi-party dialogues.
we obtain 52,053 dialogues and 460,358 utterances
77e57d19a0d48f46de8cbf857f5e5284bca0df2b
77e57d19a0d48f46de8cbf857f5e5284bca0df2b_0
Q: How large is the dataset? Text: Introduction Analyzing and generating natural language texts requires the capturing of two important aspects of language: what is said and how it is said. In the literature, much more attention has been paid to studies on what is said. However, recently, capturing how it is said, such as stylistic variations, has also proven to be useful for natural language processing tasks such as classification, analysis, and generation BIBREF1, BIBREF2, BIBREF3. This paper studies the stylistic variations of words in the context of the representation learning of words. The lack of subjective or objective definitions is a major difficulty in studying style BIBREF4. Previous attempts have been made to define a selected aspect of the notion of style (e.g., politeness) BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10; however, it is not straightforward to create strict guidelines for identifying the stylistic profile of a given text. The systematic evaluation of style-sensitive word representations and the learning of style-sensitive word representations in a supervised manner are hampered by this. In addition, there is another trend of research toward controlling style-sensitive utterance generation without defining the style dimensions BIBREF11, BIBREF12; however, this line of research considers style to be something associated with a given specific character, i.e., a persona, and does not aim to capture the stylistic variation space. The contributions of this paper are three-fold. (1) We propose a novel architecture that acquires style-sensitive word vectors (Figure 1) in an unsupervised manner. (2) We construct a novel dataset for style, which consists of pairs of style-sensitive words with each pair scored according to its stylistic similarity. (3) We demonstrate that our word vectors capture the stylistic similarity between two words successfully. In addition, our training script and dataset are available on https://jqk09a.github.io/style-sensitive-word-vectors/. Style-sensitive Word Vector The key idea is to extend the continuous bag of words (CBOW) BIBREF0 by distinguishing nearby contexts and wider contexts under the assumption that a style persists throughout every single utterance in a dialog. We elaborate on it in this section. Notation Let $w_{t}$ denote the target word (token) in the corpora and $\mathcal {U}_t = \lbrace w_1, \dots , w_{t-1}, w_t, w_{t+1},\dots , w_{\vert \mathcal {U}_t \vert }\rbrace $ denote the utterance (word sequence) including $w_t$. Here, $w_{t+d}$ or $w_{t-d} \in \mathcal {U}_t$ is a context word of $w_t$ (e.g., $w_{t+1}$ is the context word next to $w_{t}$), where $d\in \mathbb {N}_{>0}$ is the distance between the context words and the target word $w_t$. For each word (token) $w$, the bold-face $\mbox{$v$}_{w}$ and $\tilde{\mbox{$v$}}_{w}$ denote the vector of $w$ and the vector predicting the word $w$, respectively. Let $\mathcal {V}$ denote the vocabulary. Baseline Model (CBOW-near-ctx) First, we give an overview of CBOW, which is our baseline model. CBOW predicts the target word $w_t$ given the nearby context words in a window with width $\delta$: $\mathcal {C}^{\mathrm {near}}_t := \left\lbrace w_{t\pm d} \in \mathcal {U}_t \mid 1\le d \le \delta \right\rbrace$. The set $\mathcal {C}^{\mathrm {near}}_t$ contains in total at most $2\delta$ words, including $\delta$ words to the left and $\delta$ words to the right of the target word.
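The near-context set $\mathcal {C}^{\mathrm {near}}_t$ is straightforward to compute from a tokenized utterance; the following is a minimal sketch with a function name of our choosing.

```python
# Minimal sketch: the near-context window (at most delta words on each side of
# the target), matching the definition of the near-context set above.
def near_context(utterance: list, t: int, delta: int = 5) -> list:
    """utterance: list of tokens; t: 0-based index of the target word w_t."""
    left = utterance[max(0, t - delta):t]
    right = utterance[t + 1:t + 1 + delta]
    return left + right

tokens = "the best way to get people to love you".split()
print(near_context(tokens, t=5, delta=2))  # target 'people' -> ['to', 'get', 'to', 'love']
```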
Specifically, we train the word vectors $\tilde{\mbox{$v$}}_{w_t}$ and $\mbox{$v$}_c$ ($c\in \mathcal {C}^{\mathrm {near}}_t$) by maximizing the following prediction probability: $P(w_t \mid \mathcal {C}^{\mathrm {near}}_t) \propto \exp \bigl (\tilde{\mbox{$v$}}_{w_t} \cdot \frac{1}{\vert \mathcal {C}^{\mathrm {near}}_t \vert }\sum _{c\in \mathcal {C}^{\mathrm {near}}_t} \mbox{$v$}_c\bigr )$. CBOW captures both semantic and syntactic word similarity through the training using nearby context words. We refer to this form of CBOW as CBOW-near-ctx. Note that, in the implementation of BIBREF13, the window width $\delta$ is sampled from a uniform distribution; however, in this work, we fixed $\delta$ for simplicity. Hereafter, throughout our experiments, we turn off the random resizing of $\delta$. Learning Style with Utterance-size Context Window (CBOW-all-ctx) CBOW is designed to learn the semantic and syntactic aspects of words from their nearby context BIBREF13. However, an interesting problem is determining the location where the stylistic aspects of words can be captured. To address this problem, we start with the assumption that a style persists throughout each single utterance in a dialog, that is, the stylistic profile of a word in an utterance must be consistent with the other words in the same utterance. Based on this assumption, we propose extending CBOW to use all the words in an utterance as context, $\mathcal {C}^{\mathrm {all}}_t := \lbrace w_{t\pm d} \in \mathcal {U}_t \mid 1\le d\rbrace$, instead of only the nearby words. Namely, we expand the context window from a fixed width to the entire utterance. This training strategy is expected to lead to learned word vectors that are more sensitive to style rather than to other aspects. We refer to this version as CBOW-all-ctx. Learning the Style and Syntactic/Semantic Separately To learn the stylistic aspect more exclusively, we further extended the learning strategy. First, remember that using nearby context is effective for learning word vectors that capture semantic and syntactic similarities. However, this means that using the nearby context can lead the word vectors to capture some aspects other than style. Therefore, as the first extension, we propose excluding the nearby context $\mathcal {C}^{\mathrm {near}}_t$ from the full context $\mathcal {C}^{\mathrm {all}}_t$. In other words, we use the distant context words only: $\mathcal {C}^{\mathrm {dist}}_t := \mathcal {C}^{\mathrm {all}}_t \setminus \mathcal {C}^{\mathrm {near}}_t = \left\lbrace w_{t\pm d} \in \mathcal {U}_t \mid \delta < d \right\rbrace$. We expect that training with this type of context will lead to word vectors containing the style-sensitive information only. We refer to this method as CBOW-dist-ctx. As the second extension, to distill off aspects other than style, we use both the nearby and the full contexts ($\mathcal {C}^{\mathrm {near}}_t$ and $\mathcal {C}^{\mathrm {all}}_t$). As Figure 2 shows, both the vector $\mbox{$v$}_{w}$ and $\tilde{\mbox{$v$}}_w$ of each word $w\in \mathcal {V}$ are divided into two vectors: $\mbox{$v$}_w = \mbox{$x$}_w \oplus \mbox{$y$}_w$, $\tilde{\mbox{$v$}}_w = \tilde{\mbox{$x$}}_w \oplus \tilde{\mbox{$y$}}_w$, where $\oplus$ denotes vector concatenation. Vectors $\mbox{$x$}_{w}$ and $\tilde{\mbox{$x$}}_w$ indicate the style-sensitive part of $\mbox{$v$}_w$ and $\tilde{\mbox{$v$}}_w$, respectively. Vectors $\mbox{$y$}_w$ and $\tilde{\mbox{$y$}}_w$ indicate the syntactic/semantic-sensitive part of $\mbox{$v$}_w$ and $\tilde{\mbox{$v$}}_w$, respectively.
For training, when the context words are near the target word ($\mathcal {C}^{\mathrm {near}}_t$), we update both the style-sensitive vectors ($\tilde{\mbox{$x$}}_{w_t}$, $\mbox{$x$}_c$) and the syntactic/semantic-sensitive vectors ($\tilde{\mbox{$y$}}_{w_t}$, $\mbox{$y$}_c$), i.e., the full vectors $\tilde{\mbox{$v$}}_{w_t}$ and $\mbox{$v$}_c$. Conversely, when the context words are far from the target word ($\mathcal {C}^{\mathrm {dist}}_t$), we only update the style-sensitive vectors ($\tilde{\mbox{$x$}}_{w_t}$, $\mbox{$x$}_c$). Formally, the prediction probabilities are calculated as follows: $P_1(w_{t} \mid \mathcal {C}^{\mathrm {near}}_t) \propto \exp \bigl (\tilde{\mbox{$v$}}_{w_t} \cdot \frac{1}{\vert \mathcal {C}^{\mathrm {near}}_t \vert }\sum _{c\in \mathcal {C}^{\mathrm {near}}_t} \mbox{$v$}_c\bigr )$ and $P_2(w_{t} \mid \mathcal {C}^{\mathrm {all}}_t) \propto \exp \bigl (\tilde{\mbox{$x$}}_{w_t} \cdot \frac{1}{\vert \mathcal {C}^{\mathrm {all}}_t \vert }\sum _{c\in \mathcal {C}^{\mathrm {all}}_t} \mbox{$x$}_c\bigr )$. At training time, the two prediction probabilities (loss functions) are alternately computed, and the word vectors are updated. We refer to this method using the two-fold contexts separately as CBOW-sep-ctx. Experiments We investigated which word vectors capture the stylistic, syntactic, and semantic similarities. Settings We collected Japanese fictional stories from the Web to construct the dataset. The dataset contains approximately 30M utterances of fictional characters. We separated the data into a 99%–1% split for training and testing. In Japanese, the function words at the end of a sentence often exhibit style (e.g., desu+wa, desu+ze); therefore, we used an existing lexicon of multi-word functional expressions BIBREF14. Overall, the vocabulary size $\vert \mathcal {V} \vert$ was 100K. We chose the dimensions of both the style-sensitive and the syntactic/semantic-sensitive vectors to be 300, and the dimensions of the baseline CBOWs were 300. The learning rate was adjusted individually for each part in $\lbrace \mbox{$x$}_w, \mbox{$y$}_w, \tilde{\mbox{$x$}}_w, \tilde{\mbox{$y$}}_w\rbrace$ such that "the product of the learning rate and the expectation of the number of updates" was a fixed constant. We ran the optimizer with its default settings from the implementation of BIBREF0. The training stopped after 10 epochs. We fixed the nearby window width to $\delta =5$.
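Returning to the CBOW-sep-ctx objective, the parameter layout and the two scores behind $P_1$ and $P_2$ can be summarised in the rough conceptual sketch below; it is not the authors' implementation (which builds on the tooling of BIBREF0), and the dimensions and function names are assumptions.

```python
# Conceptual sketch (not the authors' implementation) of the CBOW-sep-ctx layout:
# each 600-d vector is the concatenation of a 300-d style part x and a 300-d
# syntactic/semantic part y. The near-context loss uses the full vectors; the
# all-context loss touches only the style halves.
import numpy as np

DIM_STYLE, DIM_SYNSEM = 300, 300

def split(v: np.ndarray):
    """v = x (+) y  ->  (style part, syntactic/semantic part)."""
    return v[:DIM_STYLE], v[DIM_STYLE:]

def score_near(v_tilde_target: np.ndarray, v_contexts: list) -> float:
    # dot product used inside P_1(w_t | C_near): full output vector vs. mean of full context vectors
    return float(v_tilde_target @ np.mean(v_contexts, axis=0))

def score_all(v_tilde_target: np.ndarray, v_contexts: list) -> float:
    # dot product used inside P_2(w_t | C_all): style halves only
    x_tilde, _ = split(v_tilde_target)
    x_contexts = np.stack([split(v)[0] for v in v_contexts])
    return float(x_tilde @ np.mean(x_contexts, axis=0))
```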
Then, we randomly sampled $1,000$ word pairs from the selected words and asked 15 workers to rate each of the pairs on five scales (from $-2$: "The style of the pair is different" to $+2$: "The style of the pair is similar"), inspired by the syntactic/semantic similarity dataset BIBREF17, BIBREF18. Finally, we picked only word pairs featuring clear worker agreement, in which more than 10 annotators rated the pair with the same sign, which resulted in random pairs of highly agreeing style-sensitive words. Consequently, we obtained 399 word pairs with similarity scores. To our knowledge, this is the first study that created an evaluation dataset to measure lexical stylistic similarity. In the task of selecting style-sensitive words, the pairwise inter-annotator agreement was moderate (Cohen's kappa $\kappa$ is $0.51$). In the rating task, the pairwise inter-annotator agreement for two classes ($\lbrace -2, -1\rbrace$ or $\lbrace +1, +2\rbrace$) was fair (Cohen's kappa $\kappa$ is $0.23$). These statistics suggest that, at least in Japanese, native speakers share a sense of the style-sensitivity of words and of the stylistic similarity between style-sensitive words. We used this evaluation dataset to compute the Spearman rank correlation ($\rho _{style}$) between the cosine similarity scores of the learned word vectors, $\cos (\mbox{$v$}_{w}, \mbox{$v$}_{w^{\prime }})$, and the human judgements. Table 1 shows the results on its left side. First, our proposed model, CBOW-all-ctx, outperformed the baseline CBOW-near-ctx. Furthermore, the $\mbox{$x$}$ of CBOW-dist-ctx and CBOW-sep-ctx demonstrated better correlations with the stylistic similarity judgments ($\rho _{style}=56.1$ and $51.3$, respectively). Even though the $\mbox{$x$}$ of CBOW-sep-ctx was trained with the same context window as CBOW-all-ctx, the style-sensitivity was boosted by introducing joint training with the near context. CBOW-dist-ctx, which uses only the distant context, slightly outperforms CBOW-sep-ctx. These results indicate the effectiveness of training using a wider context window. Syntactic and Semantic Evaluation We further investigated the properties of each model using the following criteria: (1) the model's ability to capture the syntactic aspect was assessed through a task predicting part of speech (POS), and (2) the model's ability to capture the semantic aspect was assessed through a task calculating the correlation with human judgments of semantic similarity. First, we tested the ability of each model to capture syntactic similarity by checking whether the POS of each word was the same as the POS of its neighboring words in the vector space. Specifically, we calculated SyntaxAcc@$N$, defined as follows: $\frac{1}{\vert \mathcal {V} \vert N}\sum _{w\in \mathcal {V}}\sum _{w^{\prime }\in \mathcal {N}(w)} \mathbb {I}[\mathrm {POS}(w) = \mathrm {POS}(w^{\prime })]$, where $\mathbb {I}[\text{condition}] = 1$ if the condition is true and $\mathbb {I}[\text{condition}] = 0$ otherwise, the function $\mathrm {POS}(w)$ returns the actual POS tag of the word $w$, and $\mathcal {N}(w)$ denotes the set of the $N$ most similar words $\lbrace w^{\prime }\rbrace$ to $w$ w.r.t. $\cos (\mbox{$v$}_w,\mbox{$v$}_{w^{\prime }})$ in each vector space. Table 1 shows SyntaxAcc@$N$ with $N = 5$ and 10. For both $N$, the $\mbox{$y$}$ (the syntactic/semantic part) of CBOW-near-ctx, CBOW-all-ctx and CBOW-sep-ctx achieved similarly good results.
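The SyntaxAcc@$N$ definition above translates almost directly into code; the sketch below assumes row-normalised word vectors and a POS lookup table, with all names ours.

```python
# Sketch implementing SyntaxAcc@N: the fraction of the N nearest neighbours
# (by cosine) of each vocabulary word that share its POS tag.
import numpy as np

def syntax_acc_at_n(vectors: np.ndarray, words: list, pos_tags: dict, n: int = 5) -> float:
    """vectors: (|V|, d) row-normalised matrix aligned with `words`; pos_tags: word -> POS."""
    sims = vectors @ vectors.T              # cosine similarities (rows are unit length)
    np.fill_diagonal(sims, -np.inf)         # exclude the word itself
    hits, total = 0, 0
    for i, w in enumerate(words):
        neighbours = np.argsort(-sims[i])[:n]   # indices of the N most similar words
        hits += sum(pos_tags[words[j]] == pos_tags[w] for j in neighbours)
        total += n
    return hits / total
```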
Interestingly, even though the $\mbox{$x$}$ of CBOW-sep-ctx used the same context as that of CBOW-all-ctx, the syntactic sensitivity of $\mbox{$x$}$ was suppressed. We speculate that the syntactic sensitivity was distilled off by the other part of the CBOW-sep-ctx vector, i.e., $\mbox{$y$}$ learned using only the near context, which captured more syntactic information. In the next section, we analyze CBOW-sep-ctx for the different characteristics of $\mbox{$x$}$ and $\mbox{$y$}$ . To test the model's ability to capture the semantic similarity, we also measured correlations with the Japanese Word Similarity Dataset (JWSD) BIBREF19 , which consists of $4,\!000$ Japanese word pairs annotated with semantic similarity scores by human workers. For each model, we calculate and show the Spearman rank correlation score ( $\rho _{sem}$ ) between the cosine similarity score $\cos (\mbox{$v$}_w, \mbox{$v$}_{w^{\prime }})$ and the human judgements on JWSD in Table 1 . CBOW-dist-ctx has the lowest score ( $\rho _{sem}\!=\!15.9$ ); however, surprisingly, the stylistic vector $\mbox{$x$}_{w_t}$ has the highest score ( $\rho _{sem}\!=\!28.9$ ), while both vectors have a high $\rho _{style}$ . This result indicates that the proposed stylistic vector $\mbox{$x$}_{w_t}$ captures not only the stylistic similarity but also the captures semantic similarity, contrary to our expectations (ideally, we want the stylistic vector to capture only the stylistic similarity). We speculate that this is because not only the style but also the topic is often consistent in single utterances. For example, “UTF8ipxmサンタ (Santa Clause)” and “UTF8ipxmトナカイ (reindeer)” are topically relevant words and these words tend to appear in a single utterance. Therefore, stylistic vectors $\lbrace \mbox{$x$}_{w}\rbrace $ using all the context words in an utterance also capture the topic relatedness. In addition, JWSD contains topic-related word pairs and synonym pairs; therefore the word vectors that capture the topic similarity have higher $\rho _{sem}$0 . We will discuss this point in the next section. Analysis of Trained Word Vectors Finally, to further understand what types of features our CBOW-sep-ctx model acquired, we show some words with the four most similar words in Table 2 . Here, for English readers, we also report a result for English. The English result also shows an example of the performance of our model on another language. The left side of Table 2 (for stylistic vector $\mbox{$x$}$ ) shows the results. We found that the Japanese word “UTF8ipxm拙者 (I; classical)” is similar to “UTF8ipxmござる (be; classical)” or words containing it (the second row of Table 2 ). The result looks reasonable, because words such as “UTF8ipxm拙者 (I; classical)” and “UTF8ipxmござる (be; classical)” are typically used by Japanese Samurai or Ninja. We can see that the vectors captured the similarity of these words, which are stylistically consistent across syntactic and semantic varieties. Conversely, the right side of the table (for the syntactic/semantic vector $\mbox{$y$}$ ) shows that the word “UTF8ipxm拙者 (I; classical)” is similar to the personal pronoun (e.g., “UTF8ipxm僕 (I; male, childish)”). We further confirmed that 15 the top similar words are also personal pronouns (even though they are not shown due to space limitations). These results indicate that the proposed CBOW-sep-ctx model jointly learns two different types of lexical similarities, i.e., the stylistic and syntactic/semantic similarities in the different parts of the vectors. 
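The qualitative comparison in Table 2 amounts to a nearest-neighbour query over either half of the concatenated vector; a small sketch is given below, where W, word2id, and id2word are assumed to come from training and the function name is illustrative:

```python
import numpy as np

def nearest_by_part(query, W, word2id, id2word, part, D=300, k=4):
    """Return the k most similar words to `query` using only one half of the vector."""
    sub = W[:, :D] if part == "style" else W[:, D:]          # x part vs. y part
    sub = sub / np.linalg.norm(sub, axis=1, keepdims=True)
    sims = sub @ sub[word2id[query]]
    sims[word2id[query]] = -np.inf                            # exclude the query itself
    top = np.argsort(-sims)[:k]
    return [(id2word[i], round(float(sims[i]), 3)) for i in top]

# e.g. nearest_by_part("拙者", W, word2id, id2word, part="style")
#      nearest_by_part("拙者", W, word2id, id2word, part="synsem")
```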
However, our stylistic vector also captured topic similarity, e.g., between “サンタ (Santa Claus)” and “トナカイ (reindeer)” (the fourth row of Table 2). Therefore, there is still room for improvement in capturing stylistic similarity alone. Conclusions and Future Work This paper presented the unsupervised learning of style-sensitive word vectors, which extends CBOW by distinguishing nearby contexts from wider contexts. We created a novel dataset for style, in which the stylistic similarity between word pairs was scored by humans. Our experiments demonstrated that our method leads word vectors to distinguish the stylistic aspect from other semantic and syntactic aspects. In addition, we found that our training cannot avoid conflating some styles and topics. A future direction will be to address this issue by introducing additional contexts, such as document- or dialog-level context windows, in which the topics are often consistent but the styles are not. Acknowledgments This work was supported by JSPS KAKENHI Grant Number 15H01702. We thank our anonymous reviewers for their helpful comments and suggestions.
30M utterances
50c8b821191339043306fd28e6cda2db400704f9
50c8b821191339043306fd28e6cda2db400704f9_0
Q: How is the dataset created? Text: Introduction Analyzing and generating natural language texts requires the capturing of two important aspects of language: what is said and how it is said. In the literature, much more attention has been paid to studies on what is said. However, recently, capturing how it is said, such as stylistic variations, has also proven to be useful for natural language processing tasks such as classification, analysis, and generation BIBREF1 , BIBREF2 , BIBREF3 . This paper studies the stylistic variations of words in the context of the representation learning of words. The lack of subjective or objective definitions is a major difficulty in studying style BIBREF4 . Previous attempts have been made to define a selected aspect of the notion of style (e.g., politeness) BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 ; however, it is not straightforward to create strict guidelines for identifying the stylistic profile of a given text. The systematic evaluations of style-sensitive word representations and the learning of style-sensitive word representations in a supervised manner are hampered by this. In addition, there is another trend of research forward controlling style-sensitive utterance generation without defining the style dimensions BIBREF11 , BIBREF12 ; however, this line of research considers style to be something associated with a given specific character, i.e., a persona, and does not aim to capture the stylistic variation space. The contributions of this paper are three-fold. (1) We propose a novel architecture that acquires style-sensitive word vectors (Figure 1 ) in an unsupervised manner. (2) We construct a novel dataset for style, which consists of pairs of style-sensitive words with each pair scored according to its stylistic similarity. (3) We demonstrate that our word vectors capture the stylistic similarity between two words successfully. In addition, our training script and dataset are available on https://jqk09a.github.io/style-sensitive-word-vectors/. Style-sensitive Word Vector The key idea is to extend the continuous bag of words (CBOW) BIBREF0 by distinguishing nearby contexts and wider contexts under the assumption that a style persists throughout every single utterance in a dialog. We elaborate on it in this section. Notation Let $w_{t}$ denote the target word (token) in the corpora and $\mathcal {U}_t = \lbrace w_1, \dots , w_{t-1}, w_t, w_{t+1},\dots , w_{\vert \mathcal {U}_t \vert }\rbrace $ denote the utterance (word sequence) including $w_t$ . Here, $w_{t+d}$ or $w_{t-d} \in \mathcal {U}_t$ is a context word of $w_t$ (e.g., $w_{t+1}$ is the context word next to $w_{t}$ ), where $d\in \mathbb {N}_{>0}$ is the distance between the context words and the target word $w_t$ . For each word (token) $w$ , bold face $\mbox{$v$}_{w}$ and $\tilde{\mbox{$v$}}_{w}$ denote the vector of $w$ and the vector predicting the word $w$ . Let $\mathcal {V}$ denote the vocabulary. Baseline Model (CBOW-near-ctx) First, we give an overview of CBOW, which is our baseline model. CBOW predicts the target word $w_t$ given nearby context words in a window with width $\delta $ : $$ := \left\lbrace w_{t\pm d} \in \mathcal {U}_t \mid 1\le d \le \delta \right\rbrace $$ (Eq. 4) The set $$ contains in total at most $2\delta $ words, including $\delta $ words to the left and $\delta $ words to the right of a target word. 
Specifically, we train the word vectors $\tilde{\mbox{$v$}}_{w_t}$ and $\mbox{$v$}_c$ ( $c\in $ ) by maximizing the following prediction probability: $$P(w_t|) \propto \exp \biggl (\!\tilde{\mbox{$v$}}_{w_t} \cdot \frac{1}{\vert \vert }\!\!\!\! [r]{\sum _{\;\;c\in }} \mbox{$v$}_c\!\biggr ) \text{.}$$ (Eq. 5) The CBOW captures both semantic and syntactic word similarity through the training using nearby context words. We refer to this form of CBOW as CBOW-near-ctx. Note that, in the implementation of BIBREF13 , the window width $\delta $ is sampled from a uniform distribution; however, in this work, we fixed $\delta $ for simplicity. Hereafter, throughout our experiments, we turn off the random resizing of $\delta $ . Learning Style with Utterance-size Context Window (CBOW-all-ctx) CBOW is designed to learn the semantic and syntactic aspects of words from their nearby context BIBREF13 . However, an interesting problem is determining the location where the stylistic aspects of words can be captured. To address this problem, we start with the assumption that a style persists throughout each single utterance in a dialog, that is, the stylistic profile of a word in an utterance must be consistent with other words in the same utterance. Based on this assumption, we propose extending CBOW to use all the words in an utterance as context, $$ := \lbrace w_{t\pm d} \in \mathcal {U}_t \mid 1\le d\rbrace \text{,}$$ (Eq. 7) instead of only the nearby words. Namely, we expand the context window from a fixed width to the entire utterance. This training strategy is expected to lead to learned word vectors that are more sensitive to style rather than to other aspects. We refer to this version as CBOW-all-ctx. Learning the Style and Syntactic/Semantic Separately To learn the stylistic aspect more exclusively, we further extended the learning strategy. First, remember that using nearby context is effective for learning word vectors that capture semantic and syntactic similarities. However, this means that using the nearby context can lead the word vectors to capture some aspects other than style. Therefore, as the first extension, we propose excluding the nearby context $$ from all the context $$ . In other words, we use the distant context words only: $$\! := \setminus = \left\lbrace w_{t\pm d} \in \mathcal {U}_t \mid \delta < d \right\rbrace \!\text{.}\!$$ (Eq. 9) We expect that training with this type of context will lead to word vectors containing the style-sensitive information only. We refer to this method as CBOW-dist-ctx. As the second extension to distill off aspects other than style, we use both nearby and all contexts ( $$ and $$ ). As Figure 2 shows, both the vector $\mbox{$v$}_{w}$ and $\tilde{\mbox{$v$}}_w$ of each word $w\in \mathcal {V}$ are divided into two vectors: $$\mbox{$v$}_w = \mbox{$x$}_w \oplus \mbox{$y$}_w,\;\; \tilde{\mbox{$v$}}_w = \tilde{\mbox{$x$}}_w \oplus \tilde{\mbox{$y$}}_w \text{,}$$ (Eq. 10) where $\oplus $ denotes vector concatenation. Vectors $\mbox{$x$}_{w}$ and $\tilde{\mbox{$x$}}_w$ indicate the style-sensitive part of $\mbox{$v$}_w$ and $\tilde{\mbox{$v$}}_w$ respectively. Vectors $\mbox{$y$}_w$ and $\tilde{\mbox{$y$}}_w$ indicate the syntactic/semantic-sensitive part of $\mbox{$v$}_w$ and $\tilde{\mbox{$v$}}_w$ respectively. 
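For clarity, the three context definitions above differ only in which positions of the utterance are kept; a minimal sketch (utterances as token lists, and $\delta = 5$ as in the experiments):

```python
def context_sets(utterance, t, delta=5):
    """Return the near, all, and distant context word lists for target index t."""
    near = [w for j, w in enumerate(utterance)
            if j != t and abs(j - t) <= delta]                 # CBOW-near-ctx window
    all_ctx = [w for j, w in enumerate(utterance) if j != t]   # CBOW-all-ctx
    dist = [w for j, w in enumerate(utterance)
            if abs(j - t) > delta]                             # CBOW-dist-ctx
    return near, all_ctx, dist
```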
For training, when the context words are near the target word ( $$ ), we update both the style-sensitive vectors ( $\mbox{$x$}_{w}$0 , $\mbox{$x$}_{w}$1 ) and the syntactic/semantic-sensitive vectors ( $\mbox{$x$}_{w}$2 , $\mbox{$x$}_{w}$3 ), i.e., $\mbox{$x$}_{w}$4 , $\mbox{$x$}_{w}$5 . Conversely, when the context words are far from the target word ( $\mbox{$x$}_{w}$6 ), we only update the style-sensitive vectors ( $\mbox{$x$}_{w}$7 , $\mbox{$x$}_{w}$8 ). Formally, the prediction probability is calculated as follows: $$P_1^{}(w_{t}|) &\propto \exp \biggl (\!\tilde{\mbox{$v$}}_{w_t} \cdot \frac{1}{\vert \vert }\!\!\!\! [r]{\sum _{\;\;c\in }} \mbox{$v$}_c\!\biggr ) \text{,} \\ P_2^{}(w_{t}|) &\propto \exp \biggl (\!\tilde{\mbox{$x$}}_{w_t} \cdot \frac{1}{\vert \vert }\!\!\!\! [r]{\sum _{\;\;c\in }} \mbox{$x$}_c\!\biggr ) \text{.}$$ (Eq. 11) At the time of learning, two prediction probabilities (loss functions) are alternately computed, and the word vectors are updated. We refer to this method using the two-fold contexts separately as the CBOW-sep-ctx. Experiments We investigated which word vectors capture the stylistic, syntactic, and semantic similarities. Settings We collected Japanese fictional stories from the Web to construct the dataset. The dataset contains approximately 30M utterances of fictional characters. We separated the data into a 99%–1% split for training and testing. In Japanese, the function words at the end of the sentence often exhibit style (e.g., desu+wa, desu+ze;) therefore, we used an existing lexicon of multi-word functional expressions BIBREF14 . Overall, the vocabulary size $\vert \mathcal {V} \vert $ was 100K. We chose the dimensions of both the style-sensitive and the syntactic/semantic-sensitive vectors to be 300, and the dimensions of the baseline CBOWs were 300. The learning rate was adjusted individually for each part in $\lbrace \mbox{$x$}_w, \mbox{$y$}_w, \tilde{\mbox{$x$}}_w, \tilde{\mbox{$y$}}_w\rbrace $ such that “the product of the learning rate and the expectation of the number of updates” was a fixed constant. We ran the optimizer with its default settings from the implementation of BIBREF0 . The training stopped after 10 epochs. We fixed the nearby window width to $\delta =5$ . Stylistic Similarity Evaluation To verify that our models capture the stylistic similarity, we evaluated our style-sensitive vector $\mbox{$x$}_{w_t}$ by comparing to other word vectors on a novel artificial task matching human stylistic similarity judgments. For this evaluation, we constructed a novel dataset with human judgments on the stylistic similarity between word pairs by performing the following two steps. First, we collected only style-sensitive words from the test corpus because some words are strongly associated with stylistic aspects BIBREF15 , BIBREF16 and, therefore, annotating random words for stylistic similarity is inefficient. We asked crowdsourced workers to select style-sensitive words in utterances. Specifically, for the crowdsourced task of picking “style-sensitive” words, we provided workers with a word-segmented utterance and asked them to pick words that they expected to be altered within different situational contexts (e.g., characters, moods, purposes, and the background cultures of the speaker and listener.). 
Then, we randomly sampled $1,000$ word pairs from the selected words and asked 15 workers to rate each of the pairs on five scales (from $-2$ : “The style of the pair is different” to $+2$ : “The style of the pair is similar”), inspired by the syntactic/semantic similarity dataset BIBREF17 , BIBREF18 . Finally, we picked only word pairs featuring clear worker agreement in which more than 10 annotators rated the pair with the same sign, which consisted of random pairs of highly agreeing style-sensitive words. Consequently, we obtained 399 word pairs with similarity scores. To our knowledge, this is the first study that created an evaluation dataset to measure the lexical stylistic similarity. In the task of selecting style-sensitive words, the pairwise inter-annotator agreement was moderate (Cohen's kappa $\kappa $ is $0.51$ ). In the rating task, the pairwise inter-annotator agreement for two classes ( $\lbrace -2, -1\rbrace $ or $\lbrace +1, +2\rbrace $ ) was fair (Cohen's kappa $\kappa $ is $0.23$ ). These statistics suggest that, at least in Japanese, native speakers share a sense of style-sensitivity of words and stylistic similarity between style-sensitive words. We used this evaluation dataset to compute the Spearman rank correlation ( $\rho _{style}$ ) between the cosine similarity scores between the learned word vectors $\cos (\mbox{$v$}_{w}, \mbox{$v$}_{w^{\prime }})$ and the human judgements. Table 1 shows the results on its left side. First, our proposed model, CBOW-all-ctx outperformed the baseline CBOW-near-ctx. Furthermore, the $\mbox{$x$}$ of CBOW-dist-ctx and CBOW-sep-ctx demonstrated better correlations for stylistic similarity judgments ( $\rho _{style}=56.1$ and $51.3$ , respectively). Even though the $\mbox{$x$}$ of CBOW-sep-ctx was trained with the same context window as CBOW-all-ctx, the style-sensitivity was boosted by introducing joint training with the near context. CBOW-dist-ctx, which uses only the distant context, slightly outperforms CBOW-sep-ctx. These results indicate the effectiveness of training using a wider context window. Syntactic and Semantic Evaluation We further investigated the properties of each model using the following criterion: (1) the model's ability to capture the syntactic aspect was assessed through a task predicting part of speech (POS) and (2) the model's ability to capture the semantic aspect was assessed through a task calculating the correlation with human judgments for semantic similarity. First, we tested the ability to capture syntactic similarity of each model by checking whether the POS of each word was the same as the POS of a neighboring word in the vector space. Specifically, we calculated SyntaxAcc@ $N$ defined as follows: $$\frac{1}{\vert \mathcal {V} \vert N}\sum _{w\in \mathcal {V}}\sum _{\,w^{\prime }\in \mathcal {N}(w)} \hspace{-4.0pt}\mathbb {I}[\mathrm {POS}(w) \!=\! \mathrm {POS}(w^{\prime })] \text{,}\!$$ (Eq. 24) where $\mathbb {I}[\text{condition}] = 1$ if the condition is true and $\mathbb {I}[\text{conditon}] = 0$ otherwise, the function $\mathrm {POS}(w)$ returns the actual POS tag of the word $w$ , and $\mathcal {N}(w)$ denotes the set of the $N$ top similar words $\lbrace w^{\prime }\rbrace $ to $w$ w.r.t. $\cos (\mbox{$v$}_w,\mbox{$v$}_{w^{\prime }})$ in each vector space. Table 1 shows SyntaxAcc@ $N$ with $N = 5$ and 10. For both $N$ , the $\mbox{$y$}$ (the syntactic/semantic part) of CBOW-near-ctx, CBOW-all-ctx and CBOW-sep-ctx achieved similarly good. 
Interestingly, even though the $\mbox{$x$}$ of CBOW-sep-ctx used the same context as that of CBOW-all-ctx, the syntactic sensitivity of $\mbox{$x$}$ was suppressed. We speculate that the syntactic sensitivity was distilled off by the other part of the CBOW-sep-ctx vector, i.e., $\mbox{$y$}$ learned using only the near context, which captured more syntactic information. In the next section, we analyze CBOW-sep-ctx for the different characteristics of $\mbox{$x$}$ and $\mbox{$y$}$ . To test the model's ability to capture the semantic similarity, we also measured correlations with the Japanese Word Similarity Dataset (JWSD) BIBREF19 , which consists of $4,\!000$ Japanese word pairs annotated with semantic similarity scores by human workers. For each model, we calculate and show the Spearman rank correlation score ( $\rho _{sem}$ ) between the cosine similarity score $\cos (\mbox{$v$}_w, \mbox{$v$}_{w^{\prime }})$ and the human judgements on JWSD in Table 1 . CBOW-dist-ctx has the lowest score ( $\rho _{sem}\!=\!15.9$ ); however, surprisingly, the stylistic vector $\mbox{$x$}_{w_t}$ has the highest score ( $\rho _{sem}\!=\!28.9$ ), while both vectors have a high $\rho _{style}$ . This result indicates that the proposed stylistic vector $\mbox{$x$}_{w_t}$ captures not only the stylistic similarity but also the captures semantic similarity, contrary to our expectations (ideally, we want the stylistic vector to capture only the stylistic similarity). We speculate that this is because not only the style but also the topic is often consistent in single utterances. For example, “UTF8ipxmサンタ (Santa Clause)” and “UTF8ipxmトナカイ (reindeer)” are topically relevant words and these words tend to appear in a single utterance. Therefore, stylistic vectors $\lbrace \mbox{$x$}_{w}\rbrace $ using all the context words in an utterance also capture the topic relatedness. In addition, JWSD contains topic-related word pairs and synonym pairs; therefore the word vectors that capture the topic similarity have higher $\rho _{sem}$0 . We will discuss this point in the next section. Analysis of Trained Word Vectors Finally, to further understand what types of features our CBOW-sep-ctx model acquired, we show some words with the four most similar words in Table 2 . Here, for English readers, we also report a result for English. The English result also shows an example of the performance of our model on another language. The left side of Table 2 (for stylistic vector $\mbox{$x$}$ ) shows the results. We found that the Japanese word “UTF8ipxm拙者 (I; classical)” is similar to “UTF8ipxmござる (be; classical)” or words containing it (the second row of Table 2 ). The result looks reasonable, because words such as “UTF8ipxm拙者 (I; classical)” and “UTF8ipxmござる (be; classical)” are typically used by Japanese Samurai or Ninja. We can see that the vectors captured the similarity of these words, which are stylistically consistent across syntactic and semantic varieties. Conversely, the right side of the table (for the syntactic/semantic vector $\mbox{$y$}$ ) shows that the word “UTF8ipxm拙者 (I; classical)” is similar to the personal pronoun (e.g., “UTF8ipxm僕 (I; male, childish)”). We further confirmed that 15 the top similar words are also personal pronouns (even though they are not shown due to space limitations). These results indicate that the proposed CBOW-sep-ctx model jointly learns two different types of lexical similarities, i.e., the stylistic and syntactic/semantic similarities in the different parts of the vectors. 
However, our stylistic vector also captured topic similarity, e.g., between “サンタ (Santa Claus)” and “トナカイ (reindeer)” (the fourth row of Table 2). Therefore, there is still room for improvement in capturing stylistic similarity alone. Conclusions and Future Work This paper presented the unsupervised learning of style-sensitive word vectors, which extends CBOW by distinguishing nearby contexts from wider contexts. We created a novel dataset for style, in which the stylistic similarity between word pairs was scored by humans. Our experiments demonstrated that our method leads word vectors to distinguish the stylistic aspect from other semantic and syntactic aspects. In addition, we found that our training cannot avoid conflating some styles and topics. A future direction will be to address this issue by introducing additional contexts, such as document- or dialog-level context windows, in which the topics are often consistent but the styles are not. Acknowledgments This work was supported by JSPS KAKENHI Grant Number 15H01702. We thank our anonymous reviewers for their helpful comments and suggestions.
We collected Japanese fictional stories from the Web
dee7383a92c78ea49859a2d5ff2a9d0a794c1f0f
dee7383a92c78ea49859a2d5ff2a9d0a794c1f0f_0
Q: What is binary variational dropout? Text: Introduction Recurrent neural networks (RNNs) are among the most powerful models for natural language processing, speech recognition, question-answering systems and other problems with sequential data BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . For complex tasks such as machine translation BIBREF5 or speech recognition BIBREF3 modern RNN architectures incorporate a huge number of parameters. To use these models on portable devices with limited memory, for instance, smartphones, the model compression is desired. High compression level may also lead to an acceleration of RNNs. In addition, compression regularizes RNNs and helps to avoid overfitting. There are a lot of RNNs compression methods based on specific weight matrix representations BIBREF7 , BIBREF8 or sparsification via pruning BIBREF9 . In this paper we focus on RNNs compression via sparsification. Most of the methods from this group are heuristic and require time-consuming hyperparameters tuning. Recently Molchanov et. al. dmolch proposed a principled method based on variational dropout for sparsification of fully connected and convolutional networks. A probabilistic model was described in which parameters controlling sparsity are tuned automatically during neural network training. This model called Sparse Variational Dropout (Sparse VD) leads to extremely sparse solutions without a significant quality drop. However, this technique was not previously investigated for RNNs. In this paper we apply Sparse VD to recurrent neural networks. To take into account the specifics of RNNs we rely on some insights underlined in the paper by Gal & Ghahramani gal where they explain the proper way to use binary dropout in RNNs from the Bayesian point of view. In the experiments we show that LSTMs with Sparse VD yield high sparsity level with just a slight drop in quality. We achieved 99.5% sparsity level on sentiment analysis task and up to 87.6% in character level language modeling experiment. Bayesian Neural Networks Consider a neural network with weights $\omega $ modeling the dependency of the target variables $y=\lbrace y^1, \dots , y^\ell \rbrace $ on the corresponding input objects $X = \lbrace x^1, \dots , x^\ell \rbrace $ . In a Bayesian neural network the weights $\omega $ are treated as random variables. With the prior distribution $p(\omega )$ we search for the posterior distribution $p(\omega |X, y)$ that will help to find expected target value during inference. In the case of neural networks, true posterior is usually intractable but it can be approximated by some parametric distribution $q_\lambda (\omega )$ . The quality of this approximation is measured by the KL-divergence $KL(q_\lambda (\omega )||p(\omega |X, y))$ . The optimal parameter $\lambda $ can be found by maximization of the variational lower bound w.r.t. $\lambda $ : $$\mathcal {L}=\sum _{i=1}^\ell \mathbb {E}_{q_\lambda (\omega )} \log p(y^i|x^i, \omega ) - KL(q_\lambda (\omega )||p(\omega ))$$ (Eq. 2) The expected log-likelihood term in ( 2 ) is usually approximated by Monte-Carlo sampling. To make the MC estimation unbiased, the weights are parametrized by a deterministic function: $\omega = g(\lambda , \xi )$ , where $\xi $ is sampled from some non-parametric distribution (the reparameterization trick BIBREF10 ). The KL-divergence term in ( 2 ) acts as a regularizer and is usually computed or approximated analytically. 
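A minimal sketch of how the lower bound (2) is typically estimated in practice, with the expected log-likelihood approximated by reparameterized Monte-Carlo sampling for a factorized Gaussian posterior; `log_lik` and `kl` are hypothetical callables standing in for the model-specific terms:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(m, log_sigma):
    """Reparameterization trick: w = g(lambda, xi) = m + sigma * xi, xi ~ N(0, I)."""
    return m + np.exp(log_sigma) * rng.standard_normal(m.shape)

def elbo_estimate(m, log_sigma, log_lik, kl, n_samples=1):
    """Monte-Carlo estimate of the variational lower bound in Eq. (2)."""
    ll = np.mean([log_lik(sample_weights(m, log_sigma)) for _ in range(n_samples)])
    return ll - kl(m, log_sigma)
```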
Sparse Variational Dropout Dropout BIBREF11 is a standard technique for regularization of neural networks. It implies that inputs of each layer are multiplied by a randomly generated noise vector. The elements of this vector are usually sampled from Bernoulli or Gaussian distribution with the parameters tuned using cross-validation. Kingma et al. kingma interpreted Gaussian dropout from a Bayesian perspective that allowed to tune dropout rate automatically during model training. Later this model was extended to sparsify fully connected and convolutional neural networks resulting in a model called Sparse Variational Dropout (Sparse VD) BIBREF0 . Consider one dense layer of a feed-forward neural network with an input of the size $n$ , an output of the size $m$ and a weight matrix $W$ . Following Kingma et al. kingma, in Sparse VD the prior on the weights is a fully factorized log-uniform distribution $p(|w_{ij}|) \propto \frac{1}{|w_{ij}|}$ and the posterior is searched in the form of fully factorized normal distribution: $$q(w_{ij}|m_{ij}, \alpha _{ij}) = \mathcal {N}(m_{ij}, \alpha _{ij} m^2_{ij}).$$ (Eq. 4) Employment of such form of the posterior distribution is equivalent to putting multiplicative BIBREF12 or additive BIBREF0 normal noise on the weights in the following manner: $$w_{ij} = m_{ij} \xi _{ij}, \quad \xi _{ij}\sim \mathcal {N}(1, \alpha _{ij}),$$ (Eq. 5) $$w_{ij} = m_{ij} + \epsilon _{ij}, \quad \epsilon _{ij}\sim \mathcal {N}(0, \sigma ^2_{ij}), \quad \alpha _{ij} = \frac{\sigma ^2_{ij}}{m^2_{ij}}.$$ (Eq. 6) The representation ( 6 ) is called additive reparameterization BIBREF0 . It reduces the variance of the gradients of $\mathcal {L}$ w. r. t. $m_{ij}$ . Moreover, since a sum of normal distributions is a normal distribution with computable parameters, the noise may be applied to the preactivation (input vector times weight matrix $W$ ) instead of $W$ . This trick is called the local reparameterization trick BIBREF13 , BIBREF12 and it reduces the variance of the gradients even further and makes training more efficient. In Sparse VD optimization of the variational lower bound ( 2 ) is performed w. r. t. $\lbrace M, \log \sigma \rbrace $ . The KL-divergence factorizes over the weights and its terms depend only on $\alpha _{ij}$ because of the specific choice of the prior BIBREF12 : $$KL(q(w_{ij}|m_{ij}, \alpha _{ij})||p(w_{ij}))=k(\alpha _{ij}).$$ (Eq. 7) Each term can be approximated as follows BIBREF0 : $${\begin{array}{c}k(\alpha ) \approx 0.64 \sigma (1.87 + 1.49\log \alpha )-\\ \:\:\:\,- 0.5 \log (1 + \alpha ^{-1}) + C. \end{array}}$$ (Eq. 8) KL-divergence term encourages large values of $\alpha _{ij}$ . If $\alpha _{ij} \rightarrow \infty $ for a weight $w_{ij}$ , the posterior over this weight is a high-variance normal distribution and it is beneficial for model to put $m_{ij} = 0$ as well as $\sigma _{ij}=\alpha _{ij} m^2_{ij}=0$ to avoid inaccurate predictions. As a result, the posterior over $w_{ij}$ approaches zero-centered $\delta $ -function, the weight does not affect the network's output and can be ignored. Dropout for Recurrent Neural Networks Yet another Bayesian model was proposed by Gal & Ghahramani bindrop to explain the binary dropout. On this base, a recipe how to apply a binary dropout to the RNNs properly was proposed by Gal & Ghahramani gal. 
The recurrent neural network takes a sequence $x = [x_0, \dots , x_T]$ , $x_t\in \mathbb {R}^n$ as an input and maps it into the sequence of hidden states: $${\begin{array}{c}h_{t} = f_h(x_t, h_{t-1}) = g_h(x_{t} W^x + h_{t-1} W^h + b_1)\\ h_i \in \mathbb {R}^m, \, h_0 = \bar{0} \end{array}}$$ (Eq. 10) Throughout the paper, we assume that the output of the RNN depends only on the last hidden state: $$y = f_y(h_T) = g_y(h_T W^y + b_2).$$ (Eq. 11) Here $g_h$ and $g_y$ are some nonlinear functions. However, all the techniques we discuss further can be easily applied to the more complex setting, e. g. language model with several outputs for one input sequence (one output for each time step). Gal & Ghahramani gal considered RNNs as Bayesian networks. The prior on the recurrent layer weights $\omega =\lbrace W^x, W^h\rbrace $ is a fully factorized standard normal distribution. The posterior is factorized over the rows of weights, and each factor is searched as a mixture of two normal distributions: $ q(w^x_k|m^x_k) = p^x \mathcal {N}(0, \sigma ^2 I) + (1-p^x) \mathcal {N}(m^x_k, \sigma ^2 I),\quad \: $ $$q(w^h_j|m^h_j) = p^h \mathcal {N}(0, \sigma ^2 I) + (1-p^h) \mathcal {N}(m^h_j, \sigma ^2 I),$$ (Eq. 12) Under assumption $\sigma \approx 0$ sampling the row of weights from such posterior means putting all the weights from this row either to 0 (drop the corresponding input neuron) or to some learned values. Thus this model is a probabilistic analog of binary dropout with dropout rates $p^x$ and $p^h$ . After unfolding the recurrence in the network, the maximization of the variational lower bound for such model looks as follows: $$\sum _{i=1}^\ell \int q(\omega |M) \log \Bigl (y^i\big |f_y\bigl (f_h(x^i_T, f_h(\dots f_h (x^i_1, h^i_0))\bigr )\Bigr ) d \omega - \\ -KL\Bigl (q(\omega |M)\big \Vert p(\omega )\Bigr ) \rightarrow \max _{M}$$ (Eq. 13) Each integral in the first part of ( 13 ) is estimated with MC integration with a single sample $\hat{\omega }_i \sim q(\omega |M)$ . To make this estimation unbiased: (a) the weights sample $\hat{\omega }_i$ should remain the same for all time steps $t=\overline{1, T}$ for a fixed object; (b) dropout rates $p^x$ and $p^h$ should be fixed because the distribution we are sampling from depends on them. The KL-divergence term from ( 13 ) is approximately equivalent to $L_2$ regularization of the variational parameters $M$ . Finally, this probabilistic model leads to the following dropout application in RNNs: we sample a binary mask for the input and hidden neurons, one mask per object for all moments of time, and optimize the $L_2$ -regularized log-likelihood with the dropout rates and the weight of $L_2$ -regularization chosen using cross-validation. Also, the same dropout technique may be applied to forward connections in RNNs, for example in embedding and dense layers BIBREF1 . The same technique can be applied to more complex architectures like LSTM in which the information flow between input and hidden units is controlled by the gate elements: $$i = sigm(h_{t-1}W^h_i + x_t W^x_i) \quad o = sigm(h_{t-1}W^h_o + x_t W^x_o)$$ (Eq. 14) $$f = sigm(h_{t-1}W^h_f + x_t W^x_f) \quad g = tanh(h_{t-1} W^h_g + x_t W^x_g)$$ (Eq. 15) Here binary dropout masks for input and hidden neurons are generated 4 times: individually for each of the gates $i,o,f$ and input modulation $g$ . Variational Dropout for RNN sparsification Dropout for RNNs proposed by Gal & Ghahramani gal helps to avoid overfitting but is very sensitive to the choice of the dropout rates. 
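For reference, the per-sequence (“locked”) mask sampling used by this baseline can be sketched as follows; a simple tanh RNN cell replaces the LSTM for brevity, and the inverted-dropout scaling is an implementation convention rather than part of the original formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def vbd_rnn_forward(x_seq, Wx, Wh, b, p_x=0.3, p_h=0.3):
    """x_seq: (T, n). One Bernoulli mask per sequence for inputs and one for the
    hidden state, reused at every time step (variational binary dropout)."""
    n, m = Wx.shape[0], Wh.shape[0]
    mask_x = (rng.random(n) >= p_x) / (1.0 - p_x)
    mask_h = (rng.random(m) >= p_h) / (1.0 - p_h)
    h = np.zeros(m)
    for x_t in x_seq:
        h = np.tanh((x_t * mask_x) @ Wx + (h * mask_h) @ Wh + b)
    return h
```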
On the other hand, Sparse VD allows automatic tuning of the Gaussian dropout parameters individually for each weight which results in the model sparsification. We combine these two techniques to sparsify and regularize RNNs. Following Molchanov et al. dmolch, we use the fully factorized log-uniform prior and approximate the posterior with a fully factorized normal distribution over the weights $\omega =\lbrace W^x, W^h\rbrace $ : $${\begin{array}{c}q(w^x_{ki}|m^x_{ki}, \sigma ^x_{ki}) = \mathcal {N}\bigl (m^x_{ki}, {\sigma ^x_{ki}}^2\bigr ), \:\\ q(w^h_{ji}|m^h_{ji}, \sigma ^h_{ji}) = \mathcal {N}\bigl (m^h_{ji}, {\sigma ^h_{ji}}^2\bigr ), \end{array}}$$ (Eq. 17) where $\sigma ^x_{ki}$ and $\sigma ^h_{ji}$ have the same meaning as in additive reparameterization ( 6 ). To train the model, we maximize the variational lower bound approximation $$\sum _{i=1}^\ell \int q(\omega |M, \sigma ) \log \Bigl (y^i\big |f_y\bigl (f_h(x^i_T, f_h(\dots f_h (x^i_1, h^i_0))\bigr )\Bigr ) d \omega - \\ - \sum _{k,i=1}^{n,m} k\biggl (\frac{{\sigma ^x_{ki}}^2}{{m^x_{ki}}^2}\biggr ) - \sum _{j,i=1}^{m,m} k\biggl (\frac{{\sigma ^h_{ji}}^2}{{m^h_{ji}}^2}\biggr )$$ (Eq. 18) w. r. t. $\lbrace M, \log \sigma \rbrace $ using stochastic mini-batch methods. Here the recurrence in the expected log-likelihood term is unfolded as in ( 13 ) and the KL is approximated using ( 8 ). The integral in ( 18 ) is estimated with a single sample $\hat{\omega }_i \sim q(\omega |M, \alpha )$ per input sequence. We use the reparameterization trick (for unbiased integral estimation) and additive reparameterization (for gradients variance reduction) to sample both input-to-hidden and hidden-to-hidden weight matrices $\widehat{W}^x, \widehat{W}^h$ . To reduce the variance of the gradients and for more computational efficiency we also apply the local reparameterization trick to input-to-hidden matrix $\widehat{W}^x$ moving the noise from the weights to the preactivations: $${\begin{array}{c}(x_t \widehat{W}^x)_j = \sum _{k=1}^n x_{t,k} m^x_{kj} + \epsilon _j \sqrt{\sum _{k=1}^n x^2_{t,k} {\sigma ^x_{kj}}^2}\:, \\ \epsilon _j \sim \mathcal {N}(0, 1). \end{array}}$$ (Eq. 19) As a result, only 2-dimensional noise on input-to-hidden connections is required for each mini-batch: we generate one noise vector of length $m$ for each object in a mini-batch. The local reparameterization trick cannot be applied to the hidden-to-hidden matrix $W^h$ . We use the same sample $\widehat{W}^h$ for all moments of time, therefore in the multiplication $h_{t-1} \widehat{W}^h$ the vector $h_{t-1}$ depends on $\widehat{W}^h$ and the rule about the sum of normally distributed random variables cannot be applied. Since usage of 3-dimensional noise (2 dimensions of $\widehat{W}^h$ and a mini-batch size) is too resource-consuming we sample one noise matrix for all objects in a mini-batch for efficiency: $$\hat{w}^h_{ji}=m^h_{ji}+\sigma ^j_{ji}\epsilon ^h_{ji},\quad \epsilon ^h_{ji} \sim \mathcal {N}(0, 1).$$ (Eq. 20) The final framework works as follows: we sample Gaussian additive noise on the input-to-hidden preactivations (one per input sequence) and hidden-to-hidden weight matrix (one per mini-batch), optimize the variational lower bound ( 18 ) w. r. t. $\lbrace M, \log \sigma \rbrace $ , and for many weights we obtain the posterior in the form of a zero-centered $\delta $ -function because the KL-divergence encourages sparsity. These weights can then be safely removed from the model. 
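A short sketch of the two noise-injection steps just described (Eqs. 19–20): the local reparameterization trick on the input-to-hidden preactivations, with one noise vector per object, and a single sampled hidden-to-hidden matrix shared by the whole mini-batch; variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def input_to_hidden(x_t, M_x, log_sigma_x):
    """Local reparameterization (Eq. 19): sample the preactivation x_t W^x directly.
    x_t: (batch, n); M_x, log_sigma_x: (n, m)."""
    mean = x_t @ M_x
    std = np.sqrt((x_t ** 2) @ np.exp(2.0 * log_sigma_x))
    return mean + std * rng.standard_normal(mean.shape)   # one noise vector per object

def sample_hidden_to_hidden(M_h, log_sigma_h):
    """Eq. (20): one W^h sample per mini-batch, reused for all objects and time steps."""
    return M_h + np.exp(log_sigma_h) * rng.standard_normal(M_h.shape)
```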
In LSTM the same prior-posterior pair is consisered for all input-to-hidden and hidden-to-hidden matrices and all computations stay the same. The noise matrices for input-to-hidden and hidden-to-hidden connections are generated individually for each of the gates $i,o,f$ and input modulation $g$ . Experiments We perform experiments with LSTM as the most popular recurrent architecture nowadays. We use Theano BIBREF14 and Lasagne BIBREF15 for implementation. The source code will be available soon at https://github.com/tipt0p/SparseBayesianRNN. We demonstrate the effectiveness of our approach on two diverse problems: Character Level Language Modeling and Sentiment Analysis. Our results show that Sparse Variational Dropout leads to a high level of sparsity in recurrent models without a significant quality drop. We use the dropout technique of Gal & Ghahramani gal as a baseline because it is the most similar dropout technique to our approach and denote it VBD (variational binary dropout). According to Molchanov et al. dmolch, training neural networks with Sparse Variational Dropout from a random initialization is troublesome, as a lot of weights may become pruned away before they could possibly learn something useful from the data. We observe the same effect in our experiments with LSTMs, especially with more complex models. LSTM trained from a random initialization may have high sparsity level, but also have a noticeable quality drop. To overcome this issue we start from pre-trained models that we obtain by training networks without Sparse Variational Dropout for several epochs. Weights in models with Sparse Variational Dropout cannot converge exactly to zero because of the stochastic nature of the training procedure. To obtain sparse networks we explicitly put weights with high corresponding dropout rates to 0 during testing as in Molchanov et al. dmolch. We use the value $\log \alpha = 3$ as a threshold. For all weights that we sparsify using Sparse Variational Dropout, we initialize $\log {\sigma ^2}$ with -6. We optimize our networks using Adam BIBREF16 . Networks without any dropout overfit for both our tasks, therefore, we present results for them with early stopping. Throughout experiments we use the mean values of the weights to evaluate the model quality (we do not sample weights from posterior on the evaluating phase). This is a common practice when working with dropout. Sentiment Analysis Following Gal & Ghahramani gal we evaluated our approach on the sentiment analysis regression task. The dataset is constructed based on Cornell film reviews corpus collected by Pang & Lee regrdata. It consists of approximately 10 thousands non-overlapping segments of 200 words from the reviews. The task is to predict corresponding film scores from 0 to 1. We use the provided train and test partitions. We use networks with one embedding layer of 128 units, one LSTM layer of 128 hidden units, and finally, a fully connected layer applied to the last output of the LSTM (resulting in a scalar output). All weights are initialized in the same way as in Gal & Ghahramani gal. We train our networks using batches of size 128 and a learning rate of 0.001 for 1000 epochs. We also clip the gradients with threshold 0.1. For all layers with VBD we use dropout rate 0.3 and weight decay $10^{-3}$ (these parameters are chosen using cross validation). As a baseline, we train the network without any dropout and with VBD on all layers. 
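The test-time sparsification described above reduces to thresholding the per-weight dropout rate; a small sketch (threshold $\log \alpha = 3$ as in the experiments, names illustrative):

```python
import numpy as np

def prune_weights(M, log_sigma, threshold=3.0):
    """Zero out weights whose log(alpha) = log(sigma^2 / m^2) exceeds the threshold."""
    log_alpha = 2.0 * log_sigma - np.log(M ** 2 + 1e-8)
    keep = log_alpha < threshold
    sparsity = 100.0 * (1.0 - keep.mean())        # percentage of removed weights
    return M * keep, sparsity
```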
In this experiment, our goal is to check the applicability of Sparse VD for recurrent networks, therefore we apply it only to LSTM layer. For embedding and dense layers we use VBD. We try both start training of the network with Sparse VD from random initialization and from two different pre-trained models. The first pre-trained model is obtained after 4 epochs of training of the network without any dropout. The second one is obtained after 200 epochs of training of the network with VBD on all layers. We choose number of pretraining epochs using models quality on cross-validation. The results are shown in Table 1 . In this task our approach achieves extremely high sparsity level both from random initialization and from pre-trained models. Sparse VD networks trained from pre-trained models achieve even better quality than baselines. Note that models already have this sparsity level after approximately 20 epochs. Character Level Language Modeling Following Mikolov et al. mikolov11 we use the Penn Treebank Corpus to train our Language Model (LM). The dataset contains approximately 6 million characters and a vocabulary of 50 characters. We use the provided train, validation and test partitions. We use networks with one LSTM layer of 1000 hidden units to solve the character level LM task. All weight matrices of the networks are initialized orthogonally and all biases are initialized with zeros. Initial values of hidden and cell elements are trainable and also initialized with zeros. We train our networks on non-overlapping sequences of 100 characters in batches of 64 using a learning rate of 0.002 for 50 epochs, and clip gradients with threshold 1. For all layers with VBD we use dropout rate 0.25 and do not use weight decay (these parameters are chosen using quality of VDB model on validation set). As a baseline, we train the network without any dropout and with VBD only on recurrent weights (hidden-to-hidden). Semeniuta et al. semeniuta16 showed that for this particular task applying dropout for feed-forward connections additionally to VBD on recurrent ones does not improve the network quality. We observe the same effect in our experiments. In this experiment we try to sparsify both LSTM and dense layers therefore we apply Sparse VD for all layers. We try both start training of the network with Sparse VD from random initialization and from two different pre-trained models. The first pre-trained model is obtained after 11 epochs of training of the network without any dropout. The second one is obtained after 50 epochs of training of the network with VBD on recurrent connections. We choose the number of pretraining epochs using models quality on validation set. The results are shown in Table 2 . Here we do not achieve such extreme sparsity level as in the previous experiment. This effect may be a consequence of the higher complexity of the task. Also in LM problem we have several outputs for one input sequence (one output for each time step) instead of one output in Sentiment regression. As a result the log-likelihood part of the loss function is much stronger for LM task and regularizer can not sparsify the network so effectively. Here we see that the balance between the likelihood and the regularizer varies a lot for different tasks with RNNs and should be explored futher. Fig. 1 and 2 show the progress of test quality and network sparsity level through the training process. 
Sparse VD network trained from random initialization underfits and therefore has a slight quality drop in comparison to baseline network without regularization. Sparse VD networks trained from pre-trained models achieve much higher quality but have lower sparsity levels than the one trained from random initialization. Better pretrained models are harder to sparsify. The quality of the model pretrained with VBD drops on the first epoches while the sparsity grows, and the model does not fully recover later. Regularization of RNNs Deep neural networks often suffer from overfitting, and different regularization techniques are used to improve their generalization ability. Dropout BIBREF11 is a popular method of neural networks regularization. The first successful implementations of this method for RNNs BIBREF17 , BIBREF18 applied dropout only for feed-forward connections and not recurrent ones. Introducing dropout in recurrent connections may lead to a better regularization technique but its straightforward implementation may results in underfitting and memory loss through time BIBREF19 . Several ways of dropout application for recurrent connections in LSTM were proposed recently BIBREF20 , BIBREF1 , BIBREF19 . These methods inject binary noise into different parts of LSTM units. Semeniuta et al. semeniuta16 shows that proper implementation of dropout for recurrent connections is important not only for effective regularization but also to avoid vanishing gradients. Bayer et al. bayer13 successfully applied fast dropout BIBREF13 , a deterministic approximation of dropout, to RNNs. Krueger et al. zoneout introduced zoneout which forces some hidden units to maintain their previous values, like in feedforward stochastic depth networks BIBREF21 . Compression of RNNs Reducing RNN size is an important and rapidly developing area of research. One possible concept is to represent large RNN weight matrix by some approximation of the smaller size. For example, Tjandra et. al. tjandra use Tensor Train decomposition of the weight matrices and Le et al. kroneker approximate this matrix with Kronecker product. Hubara et. al itay limit the weights and activations to binary values proposing a way how to compute gradients w. r. t. them. Another concept is to start with a large network and to reduce its size during or after training. The most popular approach here is pruning: the weights of the RNN are cut off on some threshold. Narang et al. pruning choose threshold using several hyperparameters that control the frequency, the rate and the duration of the weights eliminating. Discussion and future work When applying Sparse VD to RNNs we rely on the dropout for RNNs proposed by Gal & Ghahramani gal. The reason is that this dropout technique for RNNs is the closest one to Sparse VD approach. However, there are several other dropout methods for recurrent networks that outperform this baseline BIBREF19 , BIBREF22 . Comparison with them is our future work. Combining Sparse VD with these latest dropout recipes is also an interesting research direction. The challenge here is that the noise should be put on the neurons or gates instead of the weights as in our model. However, there are several recent papers BIBREF23 , BIBREF24 where group sparsity methods are proposed for fully connected and convolutional networks. These methods can be used to solve the underlined problem. The comparison of our approach with other RNN sparsification techniques is still a work-in-progress. 
It would be interesting to perform this comparison on larger networks, for example, on a speech recognition task. Another promising direction is to sparsify not only the recurrent layer but also the embedding layer, which may contain a large number of parameters in tasks with a large dictionary, such as word-based language modeling. Acknowledgements We would like to thank Dmitry Molchanov and Arsenii Ashukha for valuable feedback. Nadezhda Chirkova has been supported by Russian Academic Excellence Project `5-100', and Ekaterina Lobacheva has been supported by Russian Science Foundation grant 17-71-20072. We would also like to thank the Department of Algorithms and Theory of Programming, Faculty of Innovation and High Technology at the Moscow Institute of Physics and Technology for the provided computational resources.
the dropout technique of Gal & Ghahramani
a458c649a793588911cef4c421f95117d0b9c472
a458c649a793588911cef4c421f95117d0b9c472_0
Q: Which strategies show the most promise in deterring these attacks? Text: Introduction Nowadays, DNNs have solved masses of significant practical problems in various areas like computer vision BIBREF0 , BIBREF1 , audio BIBREF2 , BIBREF3 , natural language processing (NLP) BIBREF4 , BIBREF5 etc. Due to the great success, systems based on DNN are widely deployed in physical world, including some sensitive security tasks. However, Szegedy et al. BIBREF6 found an interesting fact that a crafted input with small perturbations could easily fool DNN models. This kind of inputs is called adversarial examples. Certainly, with the development of theory and practice, the definitions of adversarial examples BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 are varied. But these definitions have two cores in common. One is that the perturbations are small and the ability of fooling DNN models is the other. It naturally raises a question why adversarial examples exist in DNNs. The reason why they are vulnerable to adversarial examples is probably because of DNNs’ linear nature. Goodfellow et al. BIBREF7 then gave this explanation after adversarial examples arose. Researchers therefore treat adversarial examples as a security problem and pay much attention to works of adversarial attacks and defenses BIBREF10 , BIBREF11 . In recent years, category of adversarial examples becomes diverse, varying from image to audio and others. That means almost all deployed systems based on DNN are under the potential threat of adversarial attacks. For example, sign recognition system BIBREF12 , object recognition system BIBREF13 , audio recognition or control system BIBREF14 , BIBREF15 , BIBREF16 and malware detection system BIBREF17 , BIBREF18 are all hard to defend against this kind of attack. Of course, systems for NLP tasks are also under the threat of adversarial examples, like text classification, sentiment analysis, question answering system, recommendation system and so on. In real life, people are increasingly inclined to search for related comments before shopping, eating or watching film and the corresponding items with recommendation score will be given at the same time. The higher the score is, the more likely it is to be accepted by humans. These recommendation apps mainly take advantage of sentiment analysis with others’ previous comments BIBREF19 . Thus attackers could generate adversarial examples based on natural comments to smear competitors (see Fig.1 for instance) or do malicious recommendations for shoddy goods with the purpose of profit or other malicious intents. Apart from mentioned above, adversarial examples can also poison network environment and hinder detection of malicious information BIBREF20 , BIBREF21 , BIBREF22 . Hence, it is significant to know how adversarial attacks conduct and what measures can defend against them to make DNNs more robust. This paper presents a comprehensive survey on adversarial attacks and defenses in text domain to make interested readers have a better understanding of this concept. It presents the following contributions: The remainder of this paper is organized as follows: we first give some background about adversarial examples in section "Background" . In section "Adversarial Attacks in Text" , we review the adversarial attacks for text classification and other real-world NLP tasks. 
The researches with the central topic of defense are introduced in section "Defenses against Adversarial Attacks in text" and "Testing and verification as the important defenses against adversarial attacks" . One of them is on existing defense methods in text and the other is about how to improve the robustness of DNNs from another point of view. The discussion and conclusion of the article is in section "Discussion of Challenges and Future Direction" and "Conclusion" . Background In this section, we describe some research background on the textual adversarial examples, including representation of symbol and attack types and scenarios. Adversarial Example Formulation The function of a pre-trained text classification model $\textbf {\emph {F}}$ is mapping from input set to the label set. For a clean text example $\emph {x}$ , it is correctly classified by $\textbf {\emph {F}}$ to ground truth label $\emph {y} \in \textbf {\emph {Y}}$ , where $\textbf {\emph {Y}}$ including $\lbrace 1, 2, \ldots , k\rbrace $ is a label set of k classes. An attacker aims at adding small perturbations in $\emph {x}$ to generate adversarial example $\emph {x}^{^{\prime }}$ , so that $\textbf {\emph {F}}(\emph {x}^{’}) = \emph {y}^{’}(\emph {y} \ne \emph {y}^{’})$ . Generally speaking, a good $\emph {x}^{^{\prime }}$ should not only be misclassified by $\emph {x}$0 , but also imperceptible to humans, robust to transformations as well as resilient to existing defenses depending on the adversarial goals BIBREF24 . Hence, constraint conditions (e.g. semantic similarity, distance metric, etc.) are appended to make $\emph {x}$1 be indistinguishable from $\emph {x}$2 in some works and exploit it to cause classification errors like Fig. 1 . Types of Adversarial Attack Why adversarial examples pose greater concern may be due to the fact that adversarial attacks can be easily conducted on DNNs, even though attackers have no knowledge of target model. Accordingly, attacks can be categorized by the level of authorization about the model. Black-box. A more detailed division can be done in black-box attack, resulting in black-box attack with or without probing. In the former scenario, adversaries can probe target model by observing outputs, even if they do not know much about the model. This case can also be called a gray-box attack. In the latter scenario, adversaries have little or no knowledge on target model and they can not probe it. Under this condition, adversaries generally train their own models and utilize the transferability BIBREF7 , BIBREF25 of adversarial examples to carry out an attack. White-box. In white-box attack, adversaries have full access to target model and they can know all about architectures, parameters and weights of the model. Certainly, both white-box and black-box attacks can not change the model and training data. According to the purpose of the adversary, adversarial attacks can be categorized as targeted attack and non-targeted attack. Targeted attack. In this case, the generated adversarial example $\emph {x}^{^{\prime }}$ is purposeful classified as class t which is the target of an adversary. Non-targeted attack. In this case, the adversary only wants to fool the model and the result $\emph {y}^{^{\prime }}$ can be any class except for ground truth $\emph {y}$ . Metric There exists an important issue that the generated adversarial texts should not only be able to fool target models, but also need to keep the perturbations imperceptible. 
In other words, good adversarial examples should convey the same semantic meaning with the original ones so that metric measures are required to ensure this case. We describe different kinds of measures to evaluate the utility of adversarial examples in image and text. Then we analyze the reasons why metric measures in image are not suitable in text. In image, almost all recent studies on adversarial attacks adopt $L_{p}$ distance as a distance metric to quantify the imperceptibility and similarity of adversarial examples. The generalized term for $L_{p}$ distance is as follows: $$\Vert \triangle x \Vert _{p}=\@root p \of {\sum _{i=1}^{n} |x^{^{\prime }}-x|^{p}}$$ (Eq. 9) where $\triangle x$ represents the perturbations. This equation is a definition of a set of distances where p could be 0, 1, $\infty $ and so on. Specially, $L_{0}$ BIBREF26 , BIBREF27 , BIBREF28 , $L_{2}$ BIBREF29 , BIBREF28 , BIBREF30 , BIBREF31 and $L_{\infty }$ BIBREF6 , BIBREF7 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 are the three most frequently used norms in adversarial images. $L_{0}$ distance evaluates the number of changed pixels before and after modifications. It seems like edit distance, but it may not directly work in text. Because results of altered words in text are varied. Some of them are similar to original words and the others may be contrary, even though the $L_{0}$ distance of them is same. $L_{2}$ distance is the Euclidean distance. The original Euclidean distance is the beeline from one point to another in Euclidean space. As the mapping of image, text or others to it, Euclidean space becomes a metric space to calculate the similarity between two objects represented as the vector. $L_{\infty }$ distance measures the maximum change as follows: $$\Vert \triangle x \Vert _{\infty }=\max (|x_{1}^{^{\prime }}-x_{1}|,\ldots ,|x_{n}^{^{\prime }}-x_{n}|)$$ (Eq. 13) Although $L_{\infty }$ distance is thought to be the optimal distance metric to use in some work, but it may fail in text. The altered words may not exist in pre-trained dictionary so that they are considered to be unknown words and their word vectors are also unknown. As a result, $L_{\infty }$ distance is hard to calculate. There are also other metric measures(e.g. structural similarity BIBREF35 , perturbation sensitivity BIBREF36 ) which are typical methods for image. Some of them are considered to be more effective than $L_{p}$ distance, but they con not directly used too. In order to overcome the metric problem in adversarial texts, some measures are presented and we describe five of them which have been demonstrated in the pertinent literature. Euclidean Distance. In text, for two given word vectors $\vec{m}=(m_1, m_2, \ldots , m_k)$ and $\vec{n}=(n_1, n_2, \ldots , n_k)$ , the Euclidean distance is: $$D\left(\vec{m},\vec{n}\right)\!=\!\sqrt{(m_1\!-\!n_1)^2\!+\!\ldots \!+\!(m_k\!-\!n_k)^2}$$ (Eq. 15) Euclidean distance is more used for the metric of adversarial images BIBREF29 , BIBREF28 , BIBREF30 , BIBREF31 than texts with a generalized term called $L_{2}$ norm or $L_{2}$ distance. Cosine Similarity. Cosine similarity is also a computational method for semantic similarity based on word vector by the cosine value of the angle between two vectors. Compared with Euclidean distance, the cosine distance pays more attention to the difference in direction between two vectors. The more consistent the directions of two vectors are, the greater the similarity is. 
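As a quick illustration of the distance measures defined so far — the $L_p$ family over a perturbation (Eqs. 9 and 13) and the Euclidean distance between word vectors (Eq. 15) — a short sketch follows; the remaining text-oriented measures are covered next in the text:

```python
import numpy as np

def lp_distance(x, x_adv, p):
    """L_p distance between an example and its adversarial version (flattened)."""
    delta = np.ravel(np.asarray(x_adv, dtype=float) - np.asarray(x, dtype=float))
    if p == 0:
        return int(np.count_nonzero(delta))        # number of changed components (L_0)
    if np.isinf(p):
        return float(np.max(np.abs(delta)))        # maximum single change (Eq. 13)
    return float(np.sum(np.abs(delta) ** p) ** (1.0 / p))

def euclidean(m_vec, n_vec):
    """Euclidean (L_2) distance between two word vectors (Eq. 15)."""
    return float(np.linalg.norm(np.asarray(m_vec, dtype=float) - np.asarray(n_vec, dtype=float)))
```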
For two given word vectors $\vec{m}$ and $\vec{n}$ , the cosine similarity is: $$D\left(\vec{m}, \vec{n}\right) = \frac{\vec{m} \cdot \vec{n}}{\Vert m \Vert \cdot \Vert n \Vert } = \frac{\sum \limits _{i=1}^k m_i \times n_i}{\sqrt{\sum \limits _{i=1}^k (m_i)^2} \times \sqrt{\sum \limits _{i=1}^k (n_i)^2}}$$ (Eq. 16) But the limitation is that the dimensions of word vectors must be the same. Jaccard Similarity Coefficient. For two given sets A and B, their Jaccard similarity coefficient is: $$J\left(A, B\right) = |A \cap B| / |A \cup B|$$ (Eq. 17) where $0 \le J(A,B) \le 1$ . It means that the closer the value of $J(A,B)$ is to 1, the more similar they are. In the text, intersection $A \cap B$ refers to similar words in the examples and union $A \cup B$ is all words without duplication. Word Mover’s Distance (WMD). WMD BIBREF37 is a variation of Earth Mover's Distance (EMD) BIBREF38 . It can be used to measure the dissimilarity between two text documents, relying on the travelling distance from embedded words of one document to another. In other words, WMD can quantify the semantic similarity between texts. Meanwhile, Euclidean distance is also used in the calculation of WMD. Edit Distance. Edit distance is a way to measure the minimum modifications by turning a string to another. The higher it is, the more dissimilar the two strings are. It can be applied to computational biology and natural language processing. Levenshtein distance BIBREF39 is also referred to as edit distance with insertion, deletion, replacement operations used in work of BIBREF23 . Datasets in Text In order to make data more accessible to those who need it, we collect some datasets which have been applied to NLP tasks in recent literatures and a brief introductions are given at the same time. These data sets can be downloaded via the corresponding link in the footnote. AG's News $\footnote {http://www.di.unipi.it/~gulli/AG\underline{ }corpus\underline{ }of\underline{ }news\underline{ }articles.html}$ : This is a news set with more than one million articles gathered from over 2000 news sources by an academic news search engine named ComeToMyHead. The provided db version and xml version can be downloaded for any non-commercial use. DBPedia Ontology $\footnote {https://wiki.dbpedia.org/services-resources/ontology}$ : It is a dataset with structured content from the information created in various Wikimedia projects. It has over 68 classes with 2795 different properties and now there are more than 4 million instances included in this dataset. Amazon Review $\footnote {http://snap.stanford.edu/data/web-Amazon.html}$ : The Amazon review dataset has nearly 35 million reviews spanning Jun 1995 to March 2013, including product and user information, ratings, and a plaintext review. It is collected by over 6 million users in more than 2 million products and categorized into 33 classes with the size ranging from KB to GB. Yahoo! Answers $\footnote { https://sourceforge.net/projects/yahoodataset/}$ : The corpus contains 4 million questions and their answers, which can be easily used in the question answer system. Besides that, a topic classification dataset is also able to be constructed with some main classes. Yelp Reviews $\footnote {https://www.yelp.com/dataset/download}$ : The provided data is made available by Yelp to enable researchers or students to develop academic projects. It contains 4.7 million user reviews with the type of json files and sql files. 
Movie Review (MR) $\footnote {http://www.cs.cornell.edu/people/pabo/movie-review-data/}$ : This is a labeled dataset with respect to sentiment polarity, subjective rating and sentences with subjectivity status or polarity. Probably because it is labeled by humans, the size of this dataset is smaller than others, with a maximum of dozens of MB. MPQA Opinion Corpus $\footnote {http://mpqa.cs.pitt.edu/}$ : The Multi-Perspective Question Answering (MPQA) Opinion Corpus is collected from a wide variety of news sources and annotated for opinions or other private states. Three different versions are available to people by the MITRE Corporation. The higher the version is, the richer the contents are. Internet Movie Database (IMDB) $\footnote {http://ai.stanford.edu/~amaas/data/sentiment/}$ : IMDBs is crawled from Internet including 50000 positive and negative reviews and average length of the review is nearly 200 words. It is usually used for binary sentiment classification including richer data than other similar datasets. IMDB also contains the additional unlabeled data, raw text and already processed data. SNLI Corpus $\footnote {https://nlp.stanford.edu/projects/snli/}$ : The Stanford Natural Language Inference (SNLI) Corpus is a collection with manually labeled data mainly for natural language inference (NLI) task. There are nearly five hundred thousand sentence pairs written by humans in a grounded context. More details about this corpus can be seen in the research of Samuel et al. BIBREF40 . Adversarial Attacks in Text Because the purpose of adversarial attacks is to make DNNs misbehave, they can be seen as a classification problem in a broad sense. And majority of recent representative adversarial attacks in text is related to classification so that we categorize them with this feature. In this section, we introduce the majority of existing adversarial attacks in text. Technical details and corresponding comments of each attack method described below are given to make them more clearly to readers. Non-target attacks for classification Adversarial attacks can be subdivided in many cases which are described in section "Discussion of Challenges and Future Direction" . With the purpose of more granular division of classification tasks, we introduce these attack methods group by group based on the desire of attackers. In this part, studies below are all non-target attacks that attackers do not care the category of misclassified results. Papernot et al. BIBREF41 might be the first to study the problem of adversarial example in text and contributed to producing adversarial input sequences on Recurrent Neural Network (RNN). They leveraged computational graph unfolding BIBREF42 to evaluate the forward derivative BIBREF26 , i.e. Jacobian, with respect to embedding inputs of the word sequences. Then for each word of the input, fast gradient sign method (FGSM) BIBREF7 was used on Jacobian tensor evaluated above to find the perturbations. Meanwhile, in order to solving the mapping problem of modified word embedding, they set a special dictionary and chose words to replace the original ones. The constraint of substitution operation was that the sign of the difference between replaced and original words was closest to the result by FGSM. Although adversarial input sequences can make long-short term memory (LSTM) BIBREF43 model misbehave, words of the input sequences were randomly chosen and there might be grammatical error. This was also a FGSM-based method like adversarial input sequence BIBREF41 . 
The difference was that Samanta et al. BIBREF44 introduced three modification strategies, namely insertion, replacement and deletion, to generate adversarial examples while preserving the semantic meaning of the inputs as much as possible. The premise of these modifications was to identify the important or salient words which would strongly affect the classification result if they were removed. The authors utilized the concept of FGSM to evaluate the contribution of each word in a text and then targeted the words in decreasing order of contribution. Apart from deletion, both insertion and replacement of high-ranking words required candidate pools of synonyms, typos and genre-specific keywords to assist. Thus, the authors built a candidate pool for each word in their experiments. However, this consumed a great deal of time, and the most important words in an actual input text might not have candidate pools. Unlike the previous white-box methods BIBREF41 , BIBREF44 , little attention had been paid to generating adversarial examples for black-box attacks on text. Gao et al. BIBREF23 proposed a novel algorithm, DeepWordBug, in the black-box scenario to make DNNs misbehave. The two-stage process they presented consisted of determining which important tokens to change and then creating imperceptible perturbations which could evade detection. The scoring in the first stage was computed as follows: $$\begin{split} CS(x_i)=&[F(x_1,\ldots ,x_{i-1},x_i)-F(x_1,x_2,\ldots ,x_{i-1})]+\\&\lambda [F(x_i,x_{i+1},\ldots ,x_n)-F(x_{i+1},\ldots ,x_n)] \end{split}$$ (Eq. 23) where $\emph {x}_i$ was the i-th word in the input and F was a function evaluating the confidence score. Modifications such as swap, substitution, deletion and insertion were then applied to the important tokens to craft better adversarial examples. Meanwhile, in order to preserve the readability of these examples, the authors used edit distance. Different from other methods, Sato et al. BIBREF45 operated in the input embedding space of the text and reconstructed adversarial examples to misclassify the target model. The core idea of this method was to search for the weights of the direction vectors which maximized the loss function with overall parameters W as follows: $$ \alpha _{iAdvT} = \mathop {\arg \max }_{\alpha ,\Vert \alpha \Vert \le \epsilon } \lbrace \ell (\vec{w} + \sum _{k=1}^{|V|}a_kd_k, \hat{Y}, W)\rbrace $$ (Eq. 25) where $\sum _{k=1}^{|V|}a_kd_k$ was the perturbation generated for each input on its word embedding vector $\vec{w}$ and $\vec{d}$ was the direction vector from one word to another in embedding space. Because $\alpha _{iAdvT}$ in Eq. ( 25 ) was hard to calculate, the authors used Eq. ( 26 ) instead: $$ \alpha _{iAdvT} = \frac{\epsilon g}{\Vert g \Vert _2}, g = \nabla _{\alpha }\ell (\vec{w} + \sum _{k=1}^{|V|}a_kd_k, \hat{Y}, W)$$ (Eq. 26) The loss function of iAdvT was then defined based on $\alpha _{iAdvT}$ as an optimization problem which jointly minimizes the objective function over the entire training dataset D as follows: $$\begin{split} \hat{W} = &\frac{1}{|D|}\mathop {\arg \min }_{W}\lbrace \sum _{(\hat{X},\hat{Y})\in D}\ell (\hat{X},\hat{Y},W)+\\&\lambda \sum _{(\hat{X},\hat{Y})\in D}\ell (\hat{X}_{+\gamma (\alpha _{iAdvT})},\hat{Y},W)\rbrace \end{split}$$ (Eq. 27) Compared with Miyato et al. BIBREF46 , iAdv-Text restricted the direction of perturbations so as to find a substitute that was in the predefined vocabulary, rather than an unknown word, to replace the original one.
Thus, it improved the interpretability of adversarial examples by adversarial training. The authors also took advantage of cosine similarity to select a better perturbation at the same time. Similarly, Gong et al. BIBREF47 also searched for adversarial perturbations in embedding space, but their method was gradient-based. Even though WMD was used by the authors to measure the similarity of clean examples and adversarial examples, the readability of generated results seemed a little poor. Li et al. BIBREF48 proposed an attack framework TextBugger for generating adversarial examples to trigger the deep learning-based text understanding system in both black-box and white-box settings. They followed the general steps to capture important words which were significant to the classification and then crafted on them. In white-box setting, Jacobian matrix was used to calculate the importance of each word as follows: $$C_{x_i} = J_{F(i,y)} = \frac{\partial F_y(x)}{\partial x_i}$$ (Eq. 29) where $F_y(\cdot )$ represented the confidence value of class y. The slight changes of words were in character-level and word-level respectively by operations like insertion, deletion, swap and substitution. In black-box setting, the authors segmented documents into sequences and probed the target model to filter out sentences with different predicted labels from the original. The odd sequences were sorted in an inverse order by their confidence score. Then important words were calculated by removing method as follows: $$\begin{split} C_{x_i} = &F_y\left(x_1,\ldots ,x_{i-1},x_i,x_{i+1},\ldots ,x_n\right) \\& - F_y\left(x_1,\ldots ,x_{i-1},x_{i+1},\ldots ,x_n\right) \end{split}$$ (Eq. 30) The last modification process was same as that in white-box setting. Target attacks for classification For target attack, attackers purposefully control the category of output to be what they want and the generated examples have similar semantic information with clean ones. This kind of attacks are described one by one in the following part. Different from works in BIBREF41 , BIBREF44 , Liang et al. BIBREF49 first demonstrated that FGSM could not be directly applied in text. Because input space of text is discrete, while image data is continuous. Continuous image has tolerance of tiny perturbations, but text does not have this kind of feature. Instead, the authors utilized FGSM to determine what, where and how to insert, remove and modify on text input. They conducted two kinds of attacks in different scenarios and used the natural language watermarking BIBREF50 technique to make generated adversarial examples compromise their utilities. In white-box scenario, the authors defined the conceptions of hot training phrases and hot sample phrases which were both obtained by leveraging the backpropagation algorithm to compute the cost gradients of samples. The former one shed light on what to insert and the later implied where to insert, remove and modify. In black-box scenario, authors used the idea of fuzzing technique BIBREF51 for reference to obtain hot training phrases and hot sample phrases. One assumption was that the target model could be probed. Samples were fed to target model and then isometric whitespace was used to substitute origin word each time. The difference between two classification results was each word's deviation. The larger it was, the more significant the corresponding word was to its classification. 
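The probing idea just described, scoring a word by how much the model's output changes when that word is removed or blanked out, is common to several of these black-box attacks. The sketch below is a rough, hypothetical illustration of such occlusion-based scoring; predict_proba stands for an assumed black-box interface returning class probabilities and is not any of the cited authors' actual code.

def word_importance_by_probing(words, label, predict_proba):
    """Rank words by the drop in the predicted-class confidence when each
    word is replaced by isometric whitespace (an assumed probing scheme)."""
    base = predict_proba(words)[label]
    deviations = []
    for i in range(len(words)):
        probed = words[:i] + [" " * len(words[i])] + words[i + 1:]
        deviations.append(base - predict_proba(probed)[label])
    # larger deviation -> the word matters more for the current prediction
    return sorted(range(len(words)), key=lambda i: deviations[i], reverse=True)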
Hence, hot training phrases were the most frequent words in a set which consisted of the largest deviation word for each training sample. And hot sample phrases were the words with largest deviation for every test sample. Like one pixel attack BIBREF27 , a similar method named HotFlip was proposed by Ebrahimi et al. BIBREF52 . HotFlip was a white-box attack in text and it relied on an atomic flip operation to swap one token for another based on gradient computation. The authors represented samples as one-hot vectors in input space and a flip operation could be represented by: $$ \begin{split} \vec{v}_{ijb} = &(\vec{0},\ldots ;(\vec{0},\ldots (0,0,\ldots ,0,-1,0,\ldots ,1,0)_j,\\&\ldots ,\vec{0})_i;\vec{0},\ldots ) \end{split}$$ (Eq. 34) The eq. ( 34 ) means that the j-th character of i-th word in a sample was changed from a to b, which were both characters respectively at a-th and b-th places in the alphabet. The change from directional derivative along this vector was calculated to find the biggest increase in loss $\emph {J}(x, y)$ as follows: $$\max \nabla _{x}J(x, y)^T\cdot \vec{v}_{ijb} = \mathop {\max }_{ijb}\frac{\partial J^{(b)}}{\partial x_{ij}} - \frac{\partial J^{(a)}}{\partial x_{ij}}$$ (Eq. 35) where $x_{ij}^{(a)}=1$ . HotFlip could also be used on character-level insertion, deletion and word-level modification. Although HotFlip performed well on character-level models, only few successful adversarial examples could be generated with one or two flips under the strict constraints. Considering the limitation of gradient-based methods BIBREF41 , BIBREF44 , BIBREF22 , BIBREF52 in black-box case, Alzantot et al. BIBREF53 proposed a population-based optimization via genetic algorithm BIBREF54 , BIBREF55 to generated semantically similar adversarial examples. They randomly selected words in the input and computed their nearest neighbors by Euclidean Distance in GloVe embedding space BIBREF56 . These nearest neighbors which did not fit within the surrounding were filtered based on language model BIBREF57 scores and only high-ranking words with the highest scores were kept. The substitute which would maximize probability of the target label was picked from remaining words. At the same time, aforementioned operations were conducted several times to get a generation. If predicted label of modified samples in a generation were not the target label, the next generation was generated by randomly choosing two samples as parents each time and the same process was repeated on it. This optimization procedure was done to find successful attack by genetic algorithm. In this method, random selection words in the sequence to substitute were full of uncertainty and they might be meaningless for the target label when changed. These attacks above for classification are either popular or representative ones in recent studies. Some main attributes of them are summarized in table 1 and instances in these literatures are in appendix A. [10]https://iamtrask.github.io/2015/11/15/anyone-can-code-lstm/ [11]https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py [12]https://github.com/Smerity/keras_snli/blob/master/snli_rnn.py Adversarial examples on other tasks We have reviewed adversarial attacks for classification task in the previous subsections. But what other kinds of tasks or applications can be attacked by adversarial examples? How are they generated in these cases and whether the crafted examples can be applied in another way except for attack? 
These questions arise naturally, and the answers are described below. In order to know whether reading comprehension systems could really understand language, Jia et al. BIBREF61 inserted adversarial perturbations into paragraphs to test the systems without changing the true answers or misleading humans. They extracted nouns and adjectives in the question and replaced them with antonyms, while named entities and numbers were changed to the nearest word in GloVe embedding space BIBREF56 . The modified question was transformed into a declarative sentence, which served as the adversarial perturbation and was concatenated to the end of the original paragraph. This process was called ADDSENT by the authors. Another process, ADDANY, randomly chose an arbitrary sequence of words to craft the perturbation. Compared with ADDSENT, ADDANY did not consider grammaticality and needed to query the model several times. Both kinds of generated adversarial examples could fool reading comprehension systems into giving incorrect answers, mainly because they drew the model's attention to the generated sequences. Mudrakarta et al. BIBREF62 also studied adversarial examples for question answering systems, and part of their work could strengthen the attacks proposed by Jia et al. BIBREF61 . Besides reading comprehension systems BIBREF61 , Minervini et al. BIBREF63 cast the generation of adversarial examples which violate given First-Order Logic (FOL) constraints in NLI as an optimization problem. They maximized the proposed inconsistency loss to search for substitution sets S by using a language model as follows: $$\begin{split} \mathop {maximise}\limits _{S} J_{I}(S) = &\left[p(S;body)-p(S;head)\right]_{+}, \\&s.t. \log p_{L}(S)\le \tau \end{split}$$ (Eq. 42) where $[x]_{+}=\max (0,x)$ and $\tau $ was a threshold on the perplexity of generated sequences. $S=\lbrace X_{1}\rightarrow s_{1},\ldots ,X_{n}\rightarrow s_{n}\rbrace $ denoted a mapping from $\lbrace X_{1},\ldots ,X_{n}\rbrace $ , the set of universally quantified variables in a rule, to sequences. $p(S; body)$ and $p(S; head)$ denoted the probability of the body and head of the given rule after replacing each $X_{i}$ with the corresponding sequence $s_{i}$ . The generated sequences, which were the adversarial examples, helped the authors find weaknesses of NLI systems when faced with linguistic phenomena such as negation and antonymy. NMT was another kind of system attacked by adversaries, and Belinkov et al. BIBREF64 made this attempt. They devised black-box methods relying on natural and synthetic language errors to generate adversarial examples. The naturally occurring errors included typos, misspelled words and others, while the synthetic noise was produced by random character changes or keyboard-typo substitutions. These experiments were done on three different NMT systems BIBREF65 , BIBREF66 and the results showed that such examples could effectively fool the target systems. Similar work was done by Ebrahimi et al. BIBREF67 , who conducted an adversarial attack on character-level NMT by employing differentiable string-edit operations. The method of generating adversarial examples was the same as in their previous work BIBREF52 . Compared with Belinkov et al. BIBREF64 , the authors demonstrated that black-box adversarial examples were much weaker than white-box ones in most cases. Iyyer et al. BIBREF68 crafted adversarial examples with the syntactically controlled paraphrase networks (SCPNs) they proposed. They designed this model to generate syntactically adversarial examples without degrading the quality of the input semantics.
The general process mainly relied on the encoder-decoder architecture of SCPNs. Given a sequence and a corresponding target syntax structure, the authors encoded the input with a bidirectional LSTM and decoded with an LSTM augmented with soft attention over the encoded states BIBREF69 and the copy mechanism BIBREF70 . They then modified the inputs to the decoder, aiming to incorporate the target syntax structure into the generated adversarial examples. The syntactically adversarial sentences could not only fool pre-trained models but also improve their robustness to syntactic variation. The authors also used a crowdsourced experiment to demonstrate the validity of the generated examples. Apart from attacks, adversarial examples have been used as a way to measure the robustness of DNN models. Blohm et al. BIBREF71 generated adversarial examples to find out the limitations of a machine reading comprehension model they designed. The categories of adversarial examples included word-level and sentence-level attacks in different scenarios BIBREF72 . Comparison with human performance showed that additional capabilities, e.g. answering by elimination via ranking plausibility BIBREF73 , should be added to this model to improve its performance. Defenses against Adversarial Attacks in text The constant arms race between adversarial attacks and defenses invalidates conventional wisdom quickly BIBREF24 . In fact, defense is more difficult than attack, and little work has been done on this aspect. There are two reasons for this situation. One is that a good theoretical model does not exist for complicated optimization problems like adversarial examples. The other is that a tremendous number of possible inputs may produce the targeted output with very high probability. Hence, a truly adaptive defense is difficult to build. In this section, we describe some relatively effective methods of defense against adversarial attacks in text. Defenses by processing training or input data Adversarial examples are also a kind of data, crafted with a special purpose. The first thing to consider is therefore whether data processing or detection is useful against adversarial attacks. Researchers have made various attempts, such as adversarial training and spell checking, in text. Adversarial training BIBREF7 was a direct approach to defending against adversarial images in some studies BIBREF7 , BIBREF74 . They mixed the adversarial examples with the corresponding original examples in the training dataset to train the model. Adversarial examples could be resisted to a certain degree in this way, but the adversarial training method did not always work. In text, there were some positive effects against attacks after adversarial training BIBREF52 , BIBREF23 , BIBREF48 . However, it failed in the work of BIBREF53 , mainly because of the different ways of generating adversarial examples: the modifications of the former were insertion, substitution, deletion and replacement, while the latter used a genetic algorithm to search for adversarial examples. Overfitting may be another reason why the adversarial training method is not always useful and may only be effective against its corresponding attack. This has been confirmed by Tramèr et al. BIBREF75 in the image domain, but it remains to be demonstrated in text. Another defense strategy against adversarial attacks is to detect whether the input data has been modified. Researchers believe that there exist distinguishing features between an adversarial example and its clean counterpart.
For this view, a series of work BIBREF76 , BIBREF77 , BIBREF78 , BIBREF79 , BIBREF80 has been conducted to detect adversarial examples and performs relatively well in image. In text, the ways of modification strategy in some methods may produce misspelling words in generated adversarial examples. This is a distinct different feature which can be utilized. It naturally came up with an idea to detect adversarial examples by checking out the misspelling words. Gao et al. BIBREF23 used an autocorrector which was the Python autocorrect 0.3.0 package before the input. And Li et al. BIBREF48 took advantage of a context-aware spelling check service to do the same work. But experiment results showed that this approach was effective on character-level modifications and partly useful on word-level operations. Meanwhile, the availability of different modifications was also different no matter on character-level or word-level methods. Re-defining function to improve robustness Except for adversarial training and spelling checking, improving robustness of the model is another way to resist adversarial examples. With the purpose of improving the ranking robustness to small perturbations of documents in the adversarial Web retrieval setting, Goren et al. BIBREF81 formally analyzed, defined and quantified the notions of robustness of linear learning-to-rank-based relevance ranking function. They adapted the notions of classification robustness BIBREF6 , BIBREF82 to ranking function and defined related concepts of pointwise robustness, pairwise robustness and a variance conjecture. To quantify the robustness of ranking functions, Kendall's- $\tau $ distance BIBREF83 and “top change” were used as normalized measures. Finally, the empirical findings supported the validity of the authors' analyses in two families of ranking functions BIBREF84 , BIBREF85 . Testing and verification as the important defenses against adversarial attacks The current security situation in DNNs seems to fall into a loop that new adversarial attacks are identified and then followed by new countermeasures which will be subsequently broken BIBREF86 . Hence, the formal guarantees on DNNs behavior are badly needed. But it is a hard work and nobody can ensure that their methods or models are perfect. Recently, what we could do is to make the threat of adversarial attacks as little as possible. The technology of testing and verification helps us deal with the problems from another point of view. By the means of it, people can know well about the safety and reliability of systems based on DNNs and determine whether to take measures to address security issues or anything else. In this section, we introduce recent testing and verification methods for enhancing robustness of DNNs against adversarial attacks. Even though these methods reviewed below have not applied in text, we hope someone interested in this aspect can be inspired and comes up with a good defense method used in text or all areas. Testing methods against adversarial examples As increasingly used of DNNs in security-critical domains, it is very significant to have a high degree of trust in the models’ accuracy, especially in the presence of adversarial examples. And the confidence to the correct behavior of the model is derived from the rigorous testing in a variety of possible scenarios. More importantly, testing can be helpful for understanding the internal behaviors of the network, contributing to the implementation of defense methods. 
This applies the traditional testing methodology used in DNNs. Pei et al. BIBREF87 designed a white-box framework DeepXplore to test real-world DNNs with the metric of neuron coverage and leveraged differential testing to catch the differences of corresponding output between multiple DNNs. In this way, DeepXplore could trigger the majority logic of the model to find out incorrect behaviors without manual efforts. It performed well in the advanced deep learning systems and found thousands of corner cases which would make the systems crash. However, the limitation of DeepXplore was that if all the DNNs made incorrect judgement, it was hard to know where was wrong and how to solve it. Different from single neuron coverage BIBREF87 , Ma et al. BIBREF88 proposed a multi-granularity testing coverage criteria to measure accuracy and detect erroneous behaviors. They took advantage of four methods BIBREF7 , BIBREF26 , BIBREF28 , BIBREF32 to generate adversarial test data to explore the new internal states of the model. The increasing coverage showed that the larger the coverage was, the more possible the defects were to be checked out. Similar work was done by Budnik et al. BIBREF89 to explore the output space of the model under test via an adversarial case generation approach. In order to solve the limitation of neuron coverage, Kim et al. BIBREF90 proposed a Surprise Adequacy for Deep Learning Systems(SADL) to test DNNs and developed Surprise Coverage(SC) to measure the coverage of the range of Surprise Adequacy(SA) values, which measured the different behaviors between inputs and training data. Experimental results showed that the SA values could be a metric to judge whether an input was adversarial example or not. In other hand, it could also improve the accuracy of DNNs against adversarial examples by retraining. There also exists other kinds of testing method against adversarial examples. Wicker et al. BIBREF91 presented a feature-guided approach to test the resilience of DNNs in black-box scenario against adversarial examples. They treated the process of generating adversarial cases as a two-player turn-based stochastic game with the asymptotic optimal strategy based on Monte Carlo tree search (MCTS) algorithm. In this strategy, there was an idea of reward for accumulating adversarial examples found over the process of game play and evaluated the robustness against adversarial examples by the use of it. Besides the feature-guided testing BIBREF91 , Sun et al. BIBREF92 presented DeepConcolic to evaluate the robustness of well-known DNNs, which was the first attempt to apply traditional concolic testing method for these networks. DeepConcolic iteratively used concrete execution and symbolic analysis to generate test suit to reach a high coverage and discovered adversarial examples by a robustness oracle. The authors also compared with other testing methods BIBREF87 , BIBREF88 , BIBREF93 , BIBREF94 . In terms of input data, DeepConcolic could start with a single input to achieve a better coverage or used coverage requirements as inputs. In terms of performance, DeepConcolic could achieve higher coverage than DeepXplore, but run slower than it. Verification methods against adversarial examples Researchers think that testing is insufficient to guarantee the security of DNNs, especially with unusual inputs like adversarial examples. As Edsger W. Dijkstra once said, “testing shows the presence, not the absence of bugs”. 
Hence, verification techniques on DNNs are needed to study more effective defense methods in adversarial settings. Pulina et al. BIBREF95 might be the first to develop a small verification system for a neural network. Since then, related work appears one after another. But verification of machine learning models’ robustness to adversarial examples is still in its infancy BIBREF96 . There is only a few researches on related aspects. We will introduce these works in the following part. There are several researches to check security properties against adversarial attacks by diverse kinds of Satisfiability Modulo Theory (SMT) BIBREF97 solvers. Katz et al. BIBREF98 presented a novel system named Reluplex to verify DNNs by splitting the problem into the LP problems with Rectified Linear Unit (ReLU) BIBREF99 activation functions based on SMT solver. Reluplex could be used to find adversarial inputs with the local adversarial robustness feature on the ACAS Xu networks, but it failed on large networks on the global variant. Huang et al. BIBREF100 proposed a new verification framework which was also based on SMT to verify neural network structures. It relied on discretizing search space and analyzing output of each layer to search for adversarial perturbations, but the authors found that SMT theory could only suitable for small networks in practice. On the other hand, this framework was limited by many assumptions and some of functions in it were unclear. For ReLU networks, a part of researches regarded the verification as a Mixed Integer Linear Programming (MILP) problem such as Tjeng et al. BIBREF101 . They evaluated robustness to adversarial examples from two aspects of minimum adversarial distortion BIBREF102 and adversarial test accuracy BIBREF103 . Their work was faster than Reluplex with a high adversarial test accuracy, but the same limitation was that it remained a problem to scale it to large networks. Different from other works, Narodytska et al. BIBREF104 verify the secure properties on the binarized neural networks(BNNs) BIBREF105 . They were the first to utilize exact Boolean encoding on a network to study its robustness and equivalence. The inputs would be judged whether they were adversarial examples or not by two encoding structures Gen and Ver. It could easily find adversarial examples for up to 95 percent of considered images on the MNIST dataset and also worked on the middle-sized BNNs rather than large networks. There is a different point of view that the difficulty in proving properties about DNNs is caused by the presence of activation functions BIBREF98 . Some researchers pays more attention to them for exploring better verification methods. Gehr et al. BIBREF106 introduced abstract transformers which could get the outputs of layers in convolutional neural network with ReLU, including fully connected layer. The authors evaluated this approach on verifying robustness of DNNs such as pre-trained defense network BIBREF107 . Results showed that FGSM attack could be effectively prevented. They also did some comparisons with Reluplex on both small and large networks. The stare-of-the-art Reluplex performed worse than it in verification of properties and time consumption. Unlike existing solver-based methods (e.g. SMT), Wang et al. BIBREF108 presented ReluVal which leveraged interval arithmetic BIBREF109 to guarantee the correct operations of DNNs in the presence of adversarial examples. 
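To give a flavor of the interval idea behind ReluVal, the toy sketch below propagates an axis-aligned input box through a single affine layer followed by ReLU using plain interval arithmetic. It is only a generic illustration of interval bound propagation, with made-up weights, and not the actual ReluVal algorithm.

import numpy as np

def interval_affine_relu(lo, hi, W, b):
    # Sound (if loose) output bounds for y = relu(W x + b) when x lies in [lo, hi].
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(out_lo, 0), np.maximum(out_hi, 0)

# a toy 2-in/2-out layer and an L_inf ball of radius 0.1 around x0
W = np.array([[1.0, -2.0], [0.5, 3.0]])
b = np.array([0.1, -0.2])
x0 = np.array([0.3, -0.4])
eps = 0.1
print(interval_affine_relu(x0 - eps, x0 + eps, W, b))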
ReluVal repeatedly partitioned the input intervals to determine whether the corresponding output intervals violated the security property. By comparison, this method was more effective than Reluplex and performed well at finding adversarial inputs. Weng et al. BIBREF110 designed two kinds of algorithms to evaluate lower bounds of the minimum adversarial distortion via linear approximations and by bounding the local Lipschitz constant. Their methods can be applied to defended networks, especially adversarially trained ones, to evaluate the effectiveness of the defense. Discussion of Challenges and Future Direction In the previous sections, a detailed description of adversarial attacks and defenses was given to help readers gain a faster and better understanding of this area. Next, we present more general observations and discuss challenges in this direction based on the aforementioned content. Judgement on the performance of attack methods: Generally, authors evaluate their attacks on target models by accuracy rate or error rate. The lower the accuracy rate is, the more effective the adversarial examples are, and the use of the error rate is the opposite. Some researchers prefer to report the difference in accuracy before and after attacks, because it shows the effect of attacks more intuitively. These criteria can also be used when defending against adversarial examples. Reasons for using misspelled words in some methods: The motivation for using misspelled words is similar to that in images, namely fooling target models with barely discernible perturbations. Some methods perform character-level modification operations, which often result in misspelled words, and humans are extremely robust to such errors in written language BIBREF111 . Transferability in the black-box scenario: When adversaries have no access to the target models, not even probing, they train a substitute model and utilize the transferability of adversarial examples. Szegedy et al. BIBREF6 first found that adversarial examples generated from one neural network could also make another model, trained on a different dataset, misbehave. This reflects the transferability of adversarial examples. As a result, adversarial examples generated on the substitute model are used to attack the target models when both the models and the datasets are inaccessible. Apart from that, constructing adversarial examples with high transferability is a prerequisite for evaluating the effectiveness of black-box attacks and a key metric for evaluating generalized attacks BIBREF112 . The lack of a universal approach to generate adversarial examples: Because adversarial examples in text have only emerged as a research frontier in recent years, attack methods are still relatively few, let alone defenses. Another reason why such a universal method does not exist is the language problem: almost all recent methods use English datasets, and the generated adversarial examples may be useless to systems built on Chinese or other languages. Thus, there is no universal approach to generating adversarial examples. In our observation, however, many methods follow a two-step process: the first step is to find important words that have a significant impact on the classification result, and the second applies corresponding modifications to obtain adversarial examples; a minimal sketch of this generic two-step recipe is given below.
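The sketch below is one hedged way such a two-step recipe could be written down, assuming a black-box predict_proba interface returning a numpy probability vector and an external synonyms lookup; both helpers are hypothetical placeholders, and the greedy substitution loop is a simplification rather than any specific published attack.

def two_step_attack(words, true_label, predict_proba, synonyms, max_changes=5):
    # Step 1: rank words by how much deleting them lowers the true-class confidence.
    base = predict_proba(words)[true_label]
    drop = [base - predict_proba(words[:i] + words[i + 1:])[true_label]
            for i in range(len(words))]
    order = sorted(range(len(words)), key=lambda i: drop[i], reverse=True)
    # Step 2: greedily substitute the highest-ranked words with synonyms.
    adv = list(words)
    for i in order[:max_changes]:
        candidates = synonyms(adv[i]) or [adv[i]]
        adv[i] = min(candidates,
                     key=lambda w: predict_proba(adv[:i] + [w] + adv[i + 1:])[true_label])
        if int(predict_proba(adv).argmax()) != true_label:
            return adv   # misclassification achieved
    return None          # attack failed within the modification budget

Most of the word-level attacks surveyed above can be read as instances of this template, differing mainly in how the importance score and the candidate substitutes are obtained.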
Difficulties of adversarial attacks and defenses: There are many reasons for these difficulties, and one of the main ones is that there is no straightforward way to evaluate proposed works, whether attacks or defenses. Namely, convincing benchmarks do not exist in recent works. An attack method that performs well in one scenario may fail in another, and a new defense will soon be defeated in ways beyond the defenders' anticipation. Even though some works are provably sound, rigorous theoretical support is still needed to deal with the problem of adversarial examples. Appropriate future directions on adversarial attacks and defenses: For attackers, designing universal perturbations to obtain better adversarial examples can be taken into consideration, as has been done for images BIBREF29 . A universal adversarial perturbation is able to make a model misbehave on any text with high probability; even stronger universal perturbations could fool multiple models, or any model, on any text. On the other hand, work on enhancing the transferability of adversarial examples is meaningful for more practical black-box attacks. Defenders, by contrast, would prefer to completely remove this vulnerability from DNNs, but doing so is no less difficult than redesigning a network and will be a long and arduous task requiring the joint efforts of many people. At the moment defender can draw on methods from image area to text for improving the robustness of DNNs, e.g. adversarial training BIBREF107 , adding extra layer BIBREF113 , optimizing cross-entropy function BIBREF114 , BIBREF115 or weakening the transferability of adversarial examples. Conclusion This article presents a survey of adversarial attacks and defenses on DNNs in text. Even though DNNs achieve high performance on a wide variety of NLP tasks, they are inherently vulnerable to adversarial examples, which leads to a high degree of concern. This article brings together almost all existing adversarial attacks and some defenses, focusing on recent works in the literature. From these works, we can see that the threat of adversarial attacks is real and defense methods are few. Most existing works have their own limitations, such as application scenarios, constraint conditions and problems with the method itself. More attention should be paid to the problem of adversarial examples, which remains an open issue for designing models that are considerably robust against adversarial attacks. Acknowledgment This work was partly supported by NSFC under No. 61876134, the National Key R&D Program of China under No. 2016YFB0801100, NSFC under U1536204 and U183610015.
At the moment defender can draw on methods from image area to text for improving the robustness of DNNs, e.g. adversarial training BIBREF107 , adding extra layer BIBREF113 , optimizing cross-entropy function BIBREF114 , BIBREF115 or weakening the transferability of adversarial examples.
04cab3325e20c61f19846674bf9a2c46ea60c449
04cab3325e20c61f19846674bf9a2c46ea60c449_0
Q: What are baseline models on WSJ eval92 and LibriSpeech test-clean? Text: Introduction Current state-of-the-art models for speech recognition require vast amounts of transcribed audio data to attain good performance. In particular, end-to-end ASR models are more demanding in the amount of training data required when compared to traditional hybrid models. While obtaining a large amount of labeled data requires substantial effort and resources, it is much less costly to obtain abundant unlabeled data. For this reason, semi-supervised learning (SSL) is often used when training ASR systems. The most commonly-used SSL approach in ASR is self-training BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. In this approach, a smaller labeled set is used to train an initial seed model, which is applied to a larger amount of unlabeled data to generate hypotheses. The unlabeled data with the most reliable hypotheses are added to the training data for re-training. This process is repeated iteratively. However, self-training is sensitive to the quality of the hypotheses and requires careful calibration of the confidence measures. Other SSL approaches include: pre-training on a large amount of unlabeled data with restricted Boltzmann machines (RBMs) BIBREF5; entropy minimization BIBREF6, BIBREF7, BIBREF8, where the uncertainty of the unlabeled data is incorporated as part of the training objective; and graph-based approaches BIBREF9, where the manifold smoothness assumption is exploited. Recently, transfer learning from large-scale pre-trained language models (LMs) BIBREF10, BIBREF11, BIBREF12 has shown great success and achieved state-of-the-art performance in many NLP tasks. The core idea of these approaches is to learn efficient word representations by pre-training on massive amounts of unlabeled text via word completion. These representations can then be used for downstream tasks with labeled data. Inspired by this, we propose an SSL framework that learns efficient, context-aware acoustic representations using a large amount of unlabeled data, and then applies these representations to ASR tasks using a limited amount of labeled data. In our implementation, we perform acoustic representation learning using forward and backward LSTMs and a training objective that minimizes the reconstruction error of a temporal slice of filterbank features given previous and future context frames. After pre-training, we fix these parameters and add output layers with connectionist temporal classification (CTC) loss for the ASR task. The paper is organized as follows: in Section SECREF2, we give a brief overview of related work in acoustic representation learning and SSL. In Section SECREF3, we describe an implementation of our SSL framework with DeCoAR learning. We describe the experimental setup in Section SECREF4 and the results on WSJ and LibriSpeech in Section SECREF5, followed by our conclusions in Section SECREF6. Related work While semi-supervised learning has been exploited in a plethora of works in hybrid ASR system, there are very few work done in the end-to-end counterparts BIBREF3, BIBREF13, BIBREF14. In BIBREF3, an intermediate representation of speech and text is learned via a shared encoder network. To train these representation, the encoder network was trained to optimize a combination of ASR loss, text-to-text autoencoder loss and inter-domain loss. The latter two loss functions did not require paired speech and text data. 
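To make the self-training baseline mentioned in the introduction concrete before returning to representation learning, a minimal sketch of the pseudo-labeling loop is given below; train_model and transcribe are assumed callables passed in for illustration, and the confidence threshold is an arbitrary placeholder, not a value from any particular toolkit.

def self_training(labeled, unlabeled, train_model, transcribe, rounds=3, threshold=0.9):
    """Iterative pseudo-labeling: train a seed model on the labeled set,
    decode the unlabeled audio, and keep only confident hypotheses."""
    model = train_model(labeled)                  # seed model
    for _ in range(rounds):
        pseudo = []
        for utt in unlabeled:
            hyp, conf = transcribe(model, utt)    # 1-best hypothesis and its confidence
            if conf >= threshold:                 # confidence calibration is critical here
                pseudo.append((utt, hyp))
        model = train_model(labeled + pseudo)     # retrain on the augmented set
    return model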
Learning efficient acoustic representation can be traced back to restricted Boltzmann machine BIBREF15, BIBREF16, BIBREF17, which allows pre-training on large amounts of unlabeled data before training the deep neural network acoustic models. More recently, acoustic representation learning has drawn increasing attention BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23 in speech processing. For example, an autoregressive predictive coding model (APC) was proposed in BIBREF20 for unsupervised speech representation learning and was applied to phone classification and speaker verification. WaveNet auto-encoders BIBREF21 proposed contrastive predictive coding (CPC) to learn speech representations and was applied on unsupervised acoustic unit discovery task. Wav2vec BIBREF22 proposed a multi-layer convolutional neural network optimized via a noise contrastive binary classification and was applied to WSJ ASR tasks. Unlike the speech representations described in BIBREF22, BIBREF20, our representations are optimized to use bi-directional contexts to auto-regressively reconstruct unseen frames. Thus, they are deep contextualized representations that are functions of the entire input sentence. More importantly, our work is a general semi-supervised training framework that can be applied to different systems and requires no architecture change. DEep COntextualized Acoustic Representations ::: Representation learning from unlabeled data Our approach is largely inspired by ELMo BIBREF10. In ELMo, given a sequence of $T$ tokens $(w_1,w_2,...,w_T)$, a forward language model (implemented with an LSTM) computes its probability using the chain rule decomposition: Similarly, a backward language model computes the sequence probability by modeling the probability of token $w_t$ given its future context $w_{t+1},\cdots , w_T$ as follows: ELMo is trained by maximizing the joint log-likelihood of both forward and backward language model probabilities: where $\Theta _x$ is the parameter for the token representation layer, $\Theta _s$ is the parameter for the softmax layer, and $\overrightarrow{\Theta }_{\text{LSTM}}$, $\overleftarrow{\Theta }_{\text{LSTM}}$ are the parameters of forward and backward LSTM layers, respectively. As the word representations are learned with neural networks that use past and future information, they are referred to as deep contextualized word representations. For speech processing, predicting a single frame $\mathbf {x}_t$ may be a trivial task, as it could be solved by exploiting the temporal smoothness of the signal. In the APC model BIBREF20, the authors propose predicting a frame $K$ steps ahead of the current one. Namely, the model aims to minimize the $\ell _1$ loss between an acoustic feature vector $\mathbf {x}$ at time $t+K$ and a reconstruction $\mathbf {y}$ predicted at time $t$: $\sum _{t=1}^{T-K} |\mathbf {x}_{t+K} - \mathbf {y}_t|$. They conjectured this would induce the model to learn more global structure rather than simply leveraging local information within the signal. We propose combining the bidirectionality of ELMo and the reconstruction objective of APC to give deep contextualized acoustic representations (DeCoAR). We train the model to predict a slice of $K$ acoustic feature vectors, given past and future acoustic vectors. As depicted on the left side of Figure FIGREF1, a stack of forward and backward LSTMs are applied to the entire unlabeled input sequence $\mathbf {X} = (\mathbf {x}_1,\cdots ,\mathbf {x}_T)$. 
The network computes a hidden representation that encodes information from both previous and future frames (i.e. $\overrightarrow{\mathbf {z}}_t, \overleftarrow{\mathbf {z}}_t$) for each frame $\mathbf {x}_t$. Given a sequence of acoustic feature inputs $(\mathbf {x}_1, ..., \mathbf {x}_{T}) \in \mathbb {R}^d$, for each slice $(\mathbf {x}_t, \mathbf {x}_{t+1}, ..., \mathbf {x}_{t+K})$ starting at time step $t$, our objective is defined as follows: $$\mathcal {L}_t = \sum _{i=0}^{K} \left| \mathbf {x}_{t+i} - \text{FFN}_i([\overrightarrow{\mathbf {z}}_t; \overleftarrow{\mathbf {z}}_{t+K}]) \right|$$ where $[\overrightarrow{\mathbf {z}}_t; \overleftarrow{\mathbf {z}}_{t+K}] \in \mathbb {R}^{2h}$ are the concatenated forward and backward states from the last LSTM layer, and $\text{FFN}_i$ is a position-dependent feed-forward network with 512 hidden dimensions. The final loss $\mathcal {L}$ is summed over all possible slices in the entire sequence: $$\mathcal {L} = \sum _{t=1}^{T-K} \mathcal {L}_t$$ Note this can be implemented efficiently as a layer which predicts these $(K+1)$ frames at each position $t$, all at once. We compare with the use of unidirectional LSTMs and various slice sizes in Section SECREF5. DEep COntextualized Acoustic Representations ::: End-to-end ASR training with labeled data After we have pre-trained the DeCoAR on unlabeled data, we freeze the parameters in the architecture. To train an end-to-end ASR system using labeled data, we remove the reconstruction layer and add two BLSTM layers with CTC loss BIBREF24, as illustrated on the right side of Figure FIGREF1. The DeCoAR vectors induced by the labeled data in the forward and backward layers are concatenated. We fine-tune the parameters of this ASR-specific new layer on the labeled data. While we use LSTMs and CTC loss in our implementation, our SSL approach should work for other layer choices (e.g. TDNN, CNN, self-attention) and other downstream ASR models (e.g. hybrid, seq2seq, RNN transducers) as well. Experimental Setup ::: Data We conducted our experiments on the WSJ and LibriSpeech datasets, pre-training by using one of the two training sets as unlabeled data. To simulate the SSL setting in WSJ, we used 30%, 50% as well as 100% of the labeled data for ASR training, consisting of 25 hours, 40 hours, and 81 hours, respectively. We used dev93 for validation and eval92 for evaluation. For LibriSpeech, the amount of training data used varied from 100 hours to the entire 960 hours. We used dev-clean for validation and test-clean, test-other for evaluation. Experimental Setup ::: ASR systems Our experiments consisted of three different setups: 1) a fully-supervised system using all labeled data; 2) an SSL system using wav2vec features; 3) an SSL system using our proposed DeCoAR features. All models used were based on deep BLSTMs with the CTC loss criterion. In the supervised ASR setup, we used conventional log-mel filterbank features, which were extracted with a 25ms sliding window at a 10ms frame rate. The features were normalized via mean subtraction and variance normalization on a per-speaker basis. The model had 6 BLSTM layers, with 512 cells in each direction. We found that further increasing the number of cells did not improve performance, so we used this configuration as our best supervised ASR baseline. The output CTC labels were 71 phonemes plus one blank symbol. In the SSL ASR setup, we pre-trained a 4-layer BLSTM (1024 cells per sub-layer) to learn DeCoAR features according to the loss defined above, using a slice size of 18.
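As a concrete illustration of the slice-reconstruction objective above, the following PyTorch-style sketch computes the summed ell-1 loss over all slices with explicit loops. It is a simplified, assumed implementation (the real layer, as noted above, predicts all K+1 frames at each position at once), not the authors' code.

import torch

def decoar_slice_loss(x, z_fwd, z_bwd, ffn, K):
    """x: (T, d) filterbank frames; z_fwd, z_bwd: (T, h) last-layer LSTM states;
    ffn: list of K+1 position-dependent feed-forward networks mapping 2h -> d."""
    T = x.size(0)
    loss = x.new_zeros(())
    for t in range(T - K):
        context = torch.cat([z_fwd[t], z_bwd[t + K]], dim=-1)   # [z_fwd_t ; z_bwd_{t+K}]
        for i in range(K + 1):
            loss = loss + torch.abs(x[t + i] - ffn[i](context)).sum()
    return loss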
We optimized the network with SGD and use a Noam learning rate schedule, where we started with a learning rate of 0.001, gradually warm up for 500 updates, and then perform inverse square-root decay. We grouped the input sequences by length with a batch size of 64, and trained the models on 8 GPUs. After the representation network was trained, we froze the parameters, and added a projection layer, followed by 2-layer BLSTM with CTC loss on top it. We fed the labeled data to the network. For comparison, we obtained 512-dimensional wav2vec representations BIBREF22 from the wav2vec-large model. Their model was pre-trained on 960-hour LibriSpeech data with constrastive loss and had 12 convolutional layers with skip connections. For evaluation purposes, we applied WFST-based decoding using EESEN BIBREF25. We composed the CTC labels, lexicons and language models (unpruned trigram LM for WSJ, 4-gram for LibriSpeech) into a decoding graph. The acoustic model score was set to $0.8$ and $1.0$ for WSJ and LibriSpeech, respectively, and the blank symbol prior scale was set to $0.3$ for both tasks. We report the performance in word error rate (WER). Results ::: Semi-supervised WSJ results Table TABREF14 shows our results on semi-supervised WSJ. We demonstrate that DeCoAR feature outperforms filterbank and wav2vec features, with a relative improvement of 42% and 20%, respectively. The lower part of the table shows that with smaller amounts of labeled data, the DeCoAR features are significantly better than the filterbank features: Compared to the system trained on 100% labeled data with filterbank features, we achieve comparable results on eval92 using 30% of the labeled data and better performance on eval92 using 50% of the labeled data. Results ::: Semi-supervised LibriSpeech results Table TABREF7 shows the results on semi-supervised LibriSpeech. Both our representations and wav2vecBIBREF22 are trained on 960h LibriSpeech data. We conduct our semi-supervised experiments using 100h (train-clean-100), 360h (train-clean-360), 460h, and 960h of training data. Our approach outperforms both the baseline and wav2vec model in each SSL scenario. One notable observation is that using only 100 hours of transcribed data achieves very similar performance to the system trained on the full 960-hour data with filterbank features. On the more challenging test-other dataset, we also achieve performance on par with the filterbank baseline using a 360h subset. Furthermore, training with with our DeCoAR features approach improves the baseline even when using the exact same training data (960h). Note that while BIBREF26 introduced SpecAugment to significantly improve LibriSpeech performance via data augmentation, and BIBREF27 achieved state-of-the-art results using both hybrid and end-to-end models, our approach focuses on the SSL case with less labeled training data via our DeCoAR features. Results ::: Ablation Study and Analysis ::: Context window size We study the effect of the context window size during pre-training. Table TABREF20 shows that masking and predicting a larger slice of frames can actually degrade performance, while increasing training time. A similar effect was found in SpanBERT BIBREF28, another deep contextual word representation which found that masking a mean span of 3.8 consecutive words was ideal for their word reconstruction objective. 
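For completeness, here is a hedged PyTorch sketch of the fine-tuning head used in the SSL setup described earlier: frozen DeCoAR features (assumed to be 2048-dimensional from concatenating 1024-dimensional forward and backward states) feed a 2-layer BLSTM and a projection trained with CTC over 71 phonemes plus blank. The module layout, blank index and sizes are assumptions rather than the exact released configuration.

import torch.nn as nn

class CTCHead(nn.Module):
    """Small BLSTM-CTC head trained on top of frozen DeCoAR features."""
    def __init__(self, feat_dim=2048, hidden=512, num_labels=72):  # 71 phonemes + blank
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                             bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, num_labels)
        self.ctc = nn.CTCLoss(blank=0, zero_infinity=True)

    def forward(self, feats, feat_lens, labels, label_lens):
        out, _ = self.blstm(feats)                      # feats: frozen DeCoAR vectors
        logp = self.proj(out).log_softmax(dim=-1)       # (B, T, num_labels)
        return self.ctc(logp.transpose(0, 1), labels, feat_lens, label_lens)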
Results ::: Ablation Study and Analysis ::: Unidirectional versus bidirectional context Next, we study the importance of bidirectional context by training a unidirectional LSTM, which corresponds to only using $\overrightarrow{\mathbf {z}}_t$ to predict $\mathbf {x}_t, \cdots , \mathbf {x}_{t+K}$. Table TABREF22 shows that this unidirectional model achieves comparable performance to the wav2vec model BIBREF22, suggesting that bidirectionality is the largest contributor to DeCoAR's improved performance. Results ::: Ablation Study and Analysis ::: DeCoAR as denoiser Since our model is trained by predicting masked frames, DeCoAR has the side effect of learning decoder feed-forward networks $\text{FFN}_i$ which reconstruct the $(t+i)$-th filterbank frame from contexts $\overrightarrow{\mathbf {z}}_t$ and $\overleftarrow{\mathbf {z}}_{t+K}$. In this section, we consider the spectrogram reconstructed by taking the output of $\text{FFN}_i$ at all times $t$. The qualitative result is depicted in Figure FIGREF15 where the slice size is 18. We see that when $i=0$ (i.e., when reconstructing the $t$-th frame from $[\overrightarrow{\mathbf {z}}_t; \overleftarrow{\mathbf {z}}_{t+K}]$), the reconstruction is almost perfect. However, as soon as one predicts unseen frames $i=1, 4, 8$ (of 16), the reconstruction becomes more simplistic, but not by much. Background energy in the silent frames 510-550 is zeroed out. By $i=8$ artifacts begin to occur, such as an erroneous sharp band of energy being predicted around frame 555. This behavior is compatible with recent NLP works that interpret contextual word representations as denoising autoencoders BIBREF12. The surprising ability of DeCoAR to broadly reconstruct a frame $\overrightarrow{\mathbf {x}}_{t+{K/2}}$ in the middle of a missing 16-frame slice suggests that its representations $[\overrightarrow{\mathbf {z}}_t; \overleftarrow{\mathbf {z}}_{t+K}]$ capture longer-term phonetic structure during unsupervised pre-training, as with APC BIBREF20. This motivates its success in the semi-supervised ASR task with only two additional layers, as it suggests DeCoAR learns phonetic representations similar to those likely learned by the first 4 layers of a corresponding end-to-end ASR model. Conclusion In this paper, we introduce a novel semi-supervised learning approach for automatic speech recognition. We first propose a novel objective for a deep bidirectional LSTM network, where large amounts of unlabeled data are used to learn deep contextualized acoustic representations (DeCoAR). These DeCoAR features are used as the representations of labeled data to train a CTC-based end-to-end ASR model. In our experiments, we show a 42% relative improvement on WSJ compared to a baseline trained on log-mel filterbank features. On LibriSpeech, we achieve similar performance to training on 960 hours of labeled by pretraining then using only 100 hours of labeled data. While we use BLSTM-CTC as our ASR model, our approach can be applied to other end-to-end ASR models.
Wav2vec BIBREF22, a fully-supervised system using all labeled data
76c8aac84152fc4bbc0d5faa7b46e40438353e77
76c8aac84152fc4bbc0d5faa7b46e40438353e77_0
Q: Do they use the same architecture as LSTM-s and GRUs with just replacing with the LAU unit? Text: Introduction Neural Machine Translation (NMT) is an end-to-end learning approach to machine translation which has recently shown promising results on multiple language pairs BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . Unlike conventional Statistical Machine Translation (SMT) systems BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 which consist of multiple separately tuned components, NMT aims at building upon a single and large neural network to directly map input text to associated output text. Typical NMT models consists of two recurrent neural networks (RNNs), an encoder to read and encode the input text into a distributed representation and a decoder to generate translated text conditioned on the input representation BIBREF13 , BIBREF14 . Driven by the breakthrough achieved in computer vision BIBREF15 , BIBREF16 , research in NMT has recently turned towards studying Deep Neural Networks (DNNs). Wu et al. wu2016google and Zhou et al. zhou2016deep found that deep architectures in both the encoder and decoder are essential for capturing subtle irregularities in the source and target languages. However, training a deep neural network is not as simple as stacking layers. Optimization often becomes increasingly difficult with more layers. One reasonable explanation is the notorious problem of vanishing/exploding gradients which was first studied in the context of vanilla RNNs BIBREF17 . Most prevalent approaches to solve this problem rely on short-cut connections between adjacent layers such as residual or fast-forward connections BIBREF15 , BIBREF16 , BIBREF18 . Different from previous work, we choose to reduce the gradient path inside the recurrent units and propose a novel Linear Associative Unit (LAU) which creates a fusion of both linear and non-linear transformations of the input. Through this design, information can flow across several steps both in time and in space with little attenuation. The mechanism makes it easy to train deep stack RNNs which can efficiently capture the complex inherent structures of sentences for NMT. Based on LAUs, we also propose a NMT model , called DeepLAU, with deep architecture in both the encoder and decoder. Although DeepLAU is fairly simple, it gives remarkable empirical results. On the NIST Chinese-English task, DeepLAU with proper settings yields the best reported result and also a 4.9 BLEU improvement over a strong NMT baseline with most known techniques (e.g, dropout) incorporated. On WMT English-German and English-French tasks, it also achieves performance superior or comparable to the state-of-the-art. Neural machine translation A typical neural machine translation system is a single and large neural network which directly models the conditional probability INLINEFORM0 of translating a source sentence INLINEFORM1 to a target sentence INLINEFORM2 . Attention-based NMT, with RNNsearch as its most popular representative, generalizes the conventional notion of encoder-decoder in using an array of vectors to represent the source sentence and dynamically addressing the relevant segments of them during decoding. The process can be explicitly split into an encoding part, a decoding part and an attention mechanism. The model first encodes the source sentence INLINEFORM0 into a sequence of vectors INLINEFORM1 . 
In general, INLINEFORM2 is the annotation of INLINEFORM3 from a bi-directional RNN which contains information about the whole sentence with a strong focus on the parts of INLINEFORM4 . Then, the RNNsearch model decodes and generates the target translation INLINEFORM5 based on the context INLINEFORM6 <t INLINEFORM7 p(yi|y<i, INLINEFORM8 is dynamically obtained according to the contribution of the source annotation made to the word prediction. This is called automatic alignment BIBREF14 or attention mechanism BIBREF0 , but it is essentially reading with content-based addressing defined in BIBREF19 . With this addressing strategy the decoder can attend to the source representation that is most relevant to the stage of decoding. Deep neural models have recently achieved a great success in a wide range of problems. In computer vision, models with more than 100 convolutional layers have outperformed shallow ones by a big margin on a series of image tasks BIBREF15 , BIBREF16 . Following similar ideas of building deep CNNs, some promising improvements have also been achieved on building deep NMT systems. Zhou et al. zhou2016deep proposed a new type of linear connections between adjacent layers to simplify the training of deeply stacked RNNs. Similarly, Wu et al. wu2016google introduced residual connections to their deep neural machine translation system and achieve great improvements. However the optimization of deep RNNs is still an open problem due to the massive recurrent computation which makes the gradient propagation path extremely tortuous. Model Description In this section, we discuss Linear Associative Unit (LAU) to ease the training of deep stack of RNNs. Based on this idea, we further propose DeepLAU, a neural machine translation model with a deep encoder and decoder. Recurrent Layers A recurrent neural network BIBREF20 is a class of neural network that has recurrent connections and a state (or its more sophisticated memory-like extension). The past information is built up through the recurrent connections. This makes RNN applicable for sequential prediction tasks of arbitrary length. Given a sequence of vectors INLINEFORM0 as input, a standard RNN computes the sequence hidden states INLINEFORM1 by iterating the following equation from INLINEFORM2 to INLINEFORM3 : DISPLAYFORM0 INLINEFORM0 is usually a nonlinear function such as composition of a logistic sigmoid with an affine transformation. Gated Recurrent Unit It is difficult to train RNNs to capture long-term dependencies because the gradients tend to either vanish (most of the time) or explode. The effect of long-term dependencies is dropped exponentially with respect to the gradient propagation length. The problem was explored in depth by BIBREF21 , BIBREF17 . A successful approach is to design a more sophisticated activation function than a usual activation function consisting of gating functions to control the information flow and reduce the propagation path. There is a long thread of work aiming to solve this problem, with the long short-term memory units (LSTM) being the most salient examples and gated recurrent unit (GRU) being the most recent one BIBREF21 , BIBREF22 . RNNs employing either of these recurrent units have been shown to perform well in tasks that require capturing long-term dependencies. GRU can be viewed as a slightly more dramatic variation on LSTM with fewer parameters. 
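For reference, the sketch below shows one step of the standard GRU cell discussed here, with the update and reset gates described in the next paragraph; it is the baseline recurrent unit, not the LAU proposed in this work, biases are omitted, and the weight shapes are illustrative.

```python
# A standard GRU step for reference (the baseline unit, not the proposed LAU).
# Biases are omitted; weight shapes and initialization are illustrative.
import torch

def gru_step(x_t, h_prev, params):
    """One GRU update: update gate z, reset gate r, candidate state h_tilde."""
    W_z, U_z, W_r, U_r, W_h, U_h = params
    z = torch.sigmoid(x_t @ W_z + h_prev @ U_z)           # update gate
    r = torch.sigmoid(x_t @ W_r + h_prev @ U_r)           # reset gate
    h_tilde = torch.tanh(x_t @ W_h + (r * h_prev) @ U_h)  # candidate state
    return (1 - z) * h_prev + z * h_tilde                 # interpolated new state

# Toy usage with a 4-dimensional input and hidden state.
d = 4
params = [torch.randn(d, d) * 0.1 for _ in range(6)]
h = torch.zeros(1, d)
for x in torch.randn(5, 1, d):                            # a length-5 toy sequence
    h = gru_step(x, h, params)
```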
The activation function is armed with two specifically designed gates called update and reset gates to control the flow of information inside each hidden unit. Each hidden state at time-step INLINEFORM0 is computed as follows DISPLAYFORM0 For Chinese-English, our training data consists of INLINEFORM0 M sentence pairs extracted from LDC corpora, with INLINEFORM1 M Chinese words and INLINEFORM2 M English words respectively. We choose NIST 2002 (MT02) dataset as our development set, and the NIST 2003 (MT03), 2004 (MT04) 2005 (MT05) and 2006 (MT06) datasets as our test sets. For English-German, to compare with the results reported by previous work BIBREF0 , BIBREF18 , BIBREF6 , we used the same subset of the WMT 2014 training corpus that contains 4.5M sentence pairs with 91M English words and 87M German words. The concatenation of news-test 2012 and news-test 2013 is used as the validation set and news-test 2014 as the test set. To evaluate at scale, we also report the results of English-French. To compare with the results reported by previous work on end-to-end NMT BIBREF13 , BIBREF14 , BIBREF6 , BIBREF23 , BIBREF18 , we used the same subset of the WMT 2014 training corpus that contains 12M sentence pairs with 304M English words and 348M French words. The concatenation of news-test 2012 and news-test 2013 serves as the validation set and news-test 2014 as the test set. Training details Our training procedure and hyper parameter choices are similar to those used by BIBREF14 . In more details, we limit the source and target vocabularies to the most frequent INLINEFORM0 words in both Chinese-English and English-French. For English-German, we set the source and target vocabularies size to INLINEFORM1 and INLINEFORM2 , respectively. For all experiments, the dimensions of word embeddings and recurrent hidden states are both set to 512. The dimension of INLINEFORM0 is also of size 512. Note that our network is more narrow than most previous work where hidden states of dimmention 1024 is used. we initialize parameters by sampling each element from the Gaussian distribution with mean 0 and variance INLINEFORM1 . Parameter optimization is performed using stochastic gradient descent. Adadelta BIBREF24 is used to automatically adapt the learning rate of each parameter ( INLINEFORM0 and INLINEFORM1 ). To avoid gradient explosion, the gradients of the cost function which had INLINEFORM2 norm larger than a predefined threshold INLINEFORM3 were normalized to the threshold BIBREF25 . We set INLINEFORM4 to INLINEFORM5 at the beginning and halve the threshold until the BLEU score does not change much on the development set. Each SGD is a mini-batch of 128 examples. We train our NMT model with the sentences of length up to 80 words in the training data, while for the Moses system we use the full training data. Translations are generated by a beam search and log-likelihood scores are normalized by sentence length. We use a beam width of 10 in all the experiments. Dropout was also applied on the output layer to avoid over-fitting. The dropout rate is set to INLINEFORM6 . Except when otherwise mentioned, NMT systems are have 4 layers encoders and 4 layers decoders. Results on Chinese-English Translation Table TABREF7 shows BLEU scores on Chinese-English datasets. Clearly DeepLAU leads to a remarkable improvement over their competitors. Compared to DeepGRU, DeepLAU is INLINEFORM0 BLEU score higher on average four test sets, showing the modeling power gained from the liner associative connections. 
We suggest it is because LAUs apply adaptive gate function conditioned on the input which make it able to automatically decide how much linear information should be transferred to the next step. To show the power of DeepLAU, we also make a comparison with previous work. Our best single model outperforms both a phrased-based MT system (Moses) as well as an open source attention-based NMT system (Groundhog) by INLINEFORM0 and INLINEFORM1 BLEU points respectively on average. The result is also better than some other state-of-the-art variants of attention-based NMT mode with big margins. After PosUnk and ensemble, DeepLAU seizes another notable gain of INLINEFORM2 BLEU and outperform Moses by INLINEFORM3 BLEU. Results on English-German Translation The results on English-German translation are presented in Table TABREF10 . We compare our NMT systems with various other systems including the winning system in WMT’14 BIBREF26 , a phrase-based system whose language models were trained on a huge monolingual text, the Common Crawl corpus. For end-to-end NMT systems, to the best of our knowledge, Wu et al. wu2016google is currently the SOTA system and about 4 BLEU points on top of previously best reported results even though Zhou et al. zhou2016deep used a much deeper neural network. Following Wu et al. wu2016google, the BLEU score represents the averaged score of 8 models we trained. Our approach achieves comparable results with SOTA system. As can be seen from the Table TABREF10 , DeepLAU performs better than the word based model and even not much worse than the best wordpiece models achieved by Wu et al. wu2016google. Note that DeepLAU are simple and easy to implement, as opposed to previous models reported in Wu et al. wu2016google, which dependends on some external techniques to achieve their best performance, such as their introduction of length normalization, coverage penalty, fine-tuning and the RL-refined model. Results on English-French Translation To evaluate at scale, we also show the results on an English-French task with INLINEFORM0 sentence pairs and INLINEFORM1 vocabulary in Table TABREF13 . Luong et al. luong2014addressing achieves BLEU score of INLINEFORM2 with a six layers deep Encoder-Decoder model. The two attention models, RNNSearch and RNNsearch-LV achieve BLEU scores of INLINEFORM3 and INLINEFORM4 respectively. The previous best single NMT Deep-Att model with an 18 layers encoder and 7 layers decoder achieves BLEU score of INLINEFORM5 . For DeepLAU, we obtain the BLEU score of INLINEFORM6 with a 4 layers encoder and 4 layers decoder, which is on par with the SOTA system in terms of BLEU. Note that Zhou et al. zhou2016deep utilize a much larger depth as well as external alignment model and extensive regularization to achieve their best results. Analysis Then we will study the main factors that influence our results on NIST Chinese-English translation task. We also compare our approach with two SOTA topologies which were used in building deep NMT systems. Residual Networks (ResNet) are among the pioneering works BIBREF27 , BIBREF28 that utilize extra identity connections to enhance information flow such that very deep neural networks can be effectively optimized. Share the similar idea, Wu et al. wu2016google introduced to leverage residual connections to train deep RNNs. Fast Forward (F-F) connections were proposed to reduce the propagation path length which is the pioneer work to simplify the training of deep NMT model BIBREF18 . 
The work can be viewed as a parametric ResNet with short cut connections between adjacent layers. The procedure takes a linear sum between the input and the newly computed state. Table TABREF18 shows the effect of the novel LAU. By comparing row 3 to row 7, we see that when INLINEFORM0 and INLINEFORM1 are set to 2, the average BLEU scores achieved by DeepGRU and DeepLAU are INLINEFORM2 and INLINEFORM3 , respectively. LAU can bring an improvement of INLINEFORM4 in terms of BLEU. After increasing the model depth to 4 (row 4 and row 6), the improvement is enlarged to INLINEFORM5 . When DeepGRU is trained with larger depth (say, 4), the training becomes more difficult and the performance falls behind its shallow partner. While for DeepLAU, as can be see in row 9, with increasing the depth even to INLINEFORM6 and INLINEFORM7 we can still obtain growth by INLINEFORM8 BLEU score. Compared to previous short-cut connection methods (row 5 and row 6), The LAU still achieve meaningful improvements over F-F connections and Residual connections by INLINEFORM9 and INLINEFORM10 BLEU points respectively. DeepLAU introduces more parameters than DeepGRU. In order to figure out the effect of DeepLAU comparing models with the same parameter size, we increase the hidden size of DeepGRU model. Row 3 shows that, after using a twice larger GRU layer, the BLEU score is INLINEFORM0 , which is still worse than the corresponding DeepLAU model with fewer parameters. Next we will study the model size. In Table TABREF18 , starting from INLINEFORM0 and INLINEFORM1 and gradually increasing the model depth, we can achieve substantial improvements in terms of BLEU. With INLINEFORM2 and INLINEFORM3 , our DeepLAU model yields the best BLEU score. We tried to increase the model depth with the same hidden size but failed to see further improvements. We then tried to increase the hidden size. By comparing row 2 and row 3, we find the improvements is relative small with a wider hidden size. It is also worth mentioning that a deep and thin network with fewer parameters can still achieve comparable results with its shallow partner. This suggests that depth plays a more important role in increasing the complexity of neural networks than width and our deliberately designed LAU benefit from the optimizing of such a deep model. A more detailed comparison between DeepLAU (4 layers encoder and 4 layers decoder), DeepLAU(2 layer encoder and 2 layer decoder) and DeepGRU (4 layers encoder and 4 layers decoder), suggest that with deep architectures are essential to the superior performance of our system. In particular, we test the BLEU scores on sentences longer than INLINEFORM0 on the merged test set. Clearly, in all curves, performance degrades with increased sentence length. However, DeepLAU models yield consistently higher BLEU scores than the DeepGRU model on longer sentences. These observations are consistent with our intuition that very deep RNN model is especially good at modeling the nested latent structures on relatively complicated sentences and LAU plays an important role on optimizing such a complex deep model. Conclusion We propose a Linear Associative Unit (LAU) which makes a fusion of both linear and non-linear transformation inside the recurrent unit. On this way, gradients decay much slower compared to the standard deep networks which enable us to build a deep neural network for machine translation. Our empirical study shows that it can significantly improve the performance of NMT. 
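Since the LAU equations themselves are elided above (the DISPLAYFORM placeholders), the following is only a rough, hypothetical illustration of the idea the text describes: an input-conditioned gate that decides how much of a purely linear transformation of the input is fused with the usual non-linear recurrent update. It should not be read as the authors' exact formulation; all weights and names are made up for illustration.

```python
# Rough, hypothetical illustration of the linear/non-linear fusion idea behind
# the LAU, as characterized in the text above; NOT the paper's exact equations.
import torch

def lau_like_step(x_t, h_prev, params):
    W_h, U_h, W_l, W_g = params
    h_nonlin = torch.tanh(x_t @ W_h + h_prev @ U_h)   # usual non-linear update
    h_lin = x_t @ W_l                                 # linear transform of the input
    g = torch.sigmoid(x_t @ W_g)                      # gate conditioned on the input
    # The gate decides how much linear information flows to the next step.
    return g * h_lin + (1 - g) * h_nonlin

d = 4
params = [torch.randn(d, d) * 0.1 for _ in range(4)]
h = torch.zeros(1, d)
for x in torch.randn(5, 1, d):
    h = lau_like_step(x, h, params)
```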
Acknowledgments We sincerely thank the anonymous reviewers for their thorough reviews and valuable suggestions. Wang's work is partially supported by the National Science Foundation project for Deep Semantics Based Uighur to Chinese Machine Translation (ID 61662077). Qun Liu's work is partially supported by Science Foundation Ireland in the ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Dublin City University, funded under the SFI Research Centres Programme (Grant 13/RC/2106) and co-funded under the European Regional Development Fund.
Yes
6916596253d67f74dba9222f48b9e8799581bad9
6916596253d67f74dba9222f48b9e8799581bad9_0
Q: So this paper turns unstructured text inputs to parameters that GNNs can read? Text: Introduction Recent years, graph neural networks (GNNs) have been applied to various fields of machine learning, including node classification BIBREF0 , relation classification BIBREF1 , molecular property prediction BIBREF2 , few-shot learning BIBREF3 , and achieve promising results on these tasks. These works have demonstrated GNNs' strong power to process relational reasoning on graphs. Relational reasoning aims to abstractly reason about entities/objects and their relations, which is an important part of human intelligence. Besides graphs, relational reasoning is also of great importance in many natural language processing tasks such as question answering, relation extraction, summarization, etc. Consider the example shown in Fig. 1 , existing relation extraction models could easily extract the facts that Luc Besson directed a film Léon: The Professional and that the film is in English, but fail to infer the relationship between Luc Besson and English without multi-hop relational reasoning. By considering the reasoning patterns, one can discover that Luc Besson could speak English following a reasoning logic that Luc Besson directed Léon: The Professional and this film is in English indicates Luc Besson could speak English. However, most existing GNNs can only process multi-hop relational reasoning on pre-defined graphs and cannot be directly applied in natural language relational reasoning. Enabling multi-hop relational reasoning in natural languages remains an open problem. To address this issue, in this paper, we propose graph neural networks with generated parameters (GP-GNNs), to adapt graph neural networks to solve the natural language relational reasoning task. GP-GNNs first constructs a fully-connected graph with the entities in the sequence of text. After that, it employs three modules to process relational reasoning: (1) an encoding module which enables edges to encode rich information from natural languages, (2) a propagation module which propagates relational information among various nodes, and (3) a classification module which makes predictions with node representations. As compared to traditional GNNs, GP-GNNs could learn edges' parameters from natural languages, extending it from performing inferring on only non-relational graphs or graphs with a limited number of edge types to unstructured inputs such as texts. In the experiments, we apply GP-GNNs to a classic natural language relational reasoning task: relation extraction from text. We carry out experiments on Wikipedia corpus aligned with Wikidata knowledge base BIBREF4 and build a human annotated test set as well as two distantly labeled test sets with different levels of denseness.Experiment results show that our model outperforms other models on relation extraction task by considering multi-hop relational reasoning. We also perform a qualitative analysis which shows that our model could discover more relations by reasoning more robustly as compared to baseline models. Our main contributions are in two-fold: (1) We extend a novel graph neural network model with generated parameters, to enable relational message-passing with rich text information, which could be applied to process relational reasoning on unstructured inputs such as natural languages. 
(2) We verify our GP-GNNs in the task of relation extraction from text, which demonstrates its ability on multi-hop relational reasoning as compared to those models which extract relationships separately. Moreover, we also present three datasets, which could help future researchers compare their models in different settings. Graph Neural Networks (GNNs) GNNs were first proposed in BIBREF5 and are trained via the Almeida-Pineda algorithm BIBREF6 . Later the authors in BIBREF7 replace the Almeida-Pineda algorithm with the more generic backpropagation and demonstrate its effectiveness empirically. BIBREF2 propose to apply GNNs to molecular property prediction tasks. BIBREF3 shows how to use GNNs to learn classifiers on image datasets in a few-shot manner. BIBREF2 study the effectiveness of message-passing in quantum chemistry. BIBREF8 apply message-passing on a graph constructed by coreference links to answer relational questions. There are relatively fewer papers discussing how to adapt GNNs to natural language tasks. For example, BIBREF9 propose to apply GNNs to semantic role labeling and BIBREF1 apply GNNs to knowledge base completion tasks. BIBREF10 apply GNNs to relation extraction by encoding dependency trees, and BIBREF11 apply GNNs to multi-hop question answering by encoding co-occurence and co-reference relationships. Although they also consider applying GNNs to natural language processing tasks, they still perform message-passing on predefined graphs. BIBREF12 introduces a novel neural architecture to generate a graph based on the textual input and dynamically update the relationship during the learning process. In sharp contrast, this paper focuses on extracting relations from real-world relation datasets. Relational Reasoning Relational reasoning has been explored in various fields. For example, BIBREF13 propose a simple neural network to reason the relationship of objects in a picture, BIBREF14 build up a scene graph according to an image, and BIBREF15 model the interaction of physical objects. In this paper, we focus on the relational reasoning in natural language domain. Existing works BIBREF16 , BIBREF17 , BIBREF18 have demonstrated that neural networks are capable of capturing the pair-wise relationship between entities in certain situations. For example, BIBREF16 is one of the earliest works that applies a simple CNN to this task, and BIBREF17 further extends it with piece-wise max-pooling. BIBREF19 propose a multi-window version of CNN for relation extraction. BIBREF18 study an attention mechanism for relation extraction tasks. BIBREF20 predict n-ary relations of entities in different sentences with Graph LSTMs. BIBREF21 treat relations as latent variables which are capable of inducing the relations without any supervision signals. BIBREF22 show that the relation path has an important role in relation extraction. BIBREF23 show the effectiveness of LSTMs BIBREF24 in relation extraction. BIBREF25 proposed a walk-based model to do relation extraction. The most related work is BIBREF26 , where the proposed model incorporates contextual relations with attention mechanism when predicting the relation of a target entity pair. The drawback of existing approaches is that they could not make full use of the multi-hop inference patterns among multiple entity pairs and their relations within the sentence. Graph Neural Network with Generated Parameters (GP-GNNs) We first define the task of natural language relational reasoning. 
Given a sequence of text with $m$ entities, it aims to reason on both the text and entities and make a prediction of the labels of the entities or entity pairs. In this section, we will introduce the general framework of GP-GNNs. GP-GNNs first build a fully-connected graph $\mathcal {G} = (\mathcal {V}, \mathcal {E})$ , where $\mathcal {V}$ is the set of entities, and each edge $(v_i, v_j) \in \mathcal {E}, v_i, v_j \in \mathcal {V}$ corresponds to a sequence $s = x_0^{i,j}, x_1^{i,j}, \dots , x_{l-1}^{i,j}$ extracted from the text. After that, GP-GNNs employ three modules including (1) encoding module, (2) propagation module and (3) classification module to proceed relational reasoning, as shown in Fig. 2 . Encoding Module The encoding module converts sequences into transition matrices corresponding to edges, i.e. the parameters of the propagation module, by $$\mathcal {A}_{i,j}^{(n)} = f(E({x}_{0}^{i,j}), E({x}_{1}^{i,j}), \cdots , E({x}_{l-1}^{i,j}); \theta _e^n),$$ (Eq. 6) where $f(\cdot )$ could be any model that could encode sequential data, such as LSTMs, GRUs, CNNs, $E(\cdot )$ indicates an embedding function, and $\theta _e^n$ denotes the parameters of the encoding module of $n$ -th layer. To encode the context of entity pairs (or edges in the graph), we first concatenate the position embeddings with word embeddings in the sentence: $$E(x_t^{i, j}) = [{x}_t; {p}_t^{i, j}],$$ (Eq. 12) where ${x}_t$ denotes the word embedding of word $x_t$ and ${p}_t^{i,j}$ denotes the position embedding of word position $t$ relative to the entity pair's position $i, j$ (Details of these two embeddings are introduced in the next two paragraphs.) After that, we feed the representations of entity pairs into encoder $f(\cdot )$ which contains a bi-directional LSTM and a multi-layer perceptron: $$\mathcal {A}_{i,j}^{(n)} = [\mathtt {MLP}_n(\mathtt {BiLSTM}_n((E({x}_{0}^{i,j}), E({x}_{1}^{i,j}), \cdots , E({x}_{l-1}^{i,j}))],$$ (Eq. 13) where $n$ denotes the index of layer , $[\cdot ]$ means reshaping a vector as a matrix, $\mathtt {BiLSTM}$ encodes a sequence by concatenating tail hidden states of the forward LSTM and head hidden states of the backward LSTM together and $\mathtt {MLP}$ denotes a multi-layer perceptron with non-linear activation $\sigma $ . We first map each token $x_t$ of sentence $\lbrace x_0, x_1, \dots , x_{l-1}\rbrace $ to a $k$ -dimensional embedding vector ${x}_t$ using a word embedding matrix $W_e \in \mathbb {R}^{|V|\times d_w}$ , where $|V|$ is the size of the vocabulary. Throughout this paper, we stick to 50-dimensional GloVe embeddings pre-trained on a 6 billion corpus BIBREF27 . In this work, we consider a simple entity marking scheme: we mark each token in the sentence as either belonging to the first entity $v_i$ , the second entity $v_j$ or to neither of those. Each position marker is also mapped to a $d_p$ -dimensional vector by a position embedding matrix ${P}\in \mathbb {R}^{3\times d_p}$ . We use notation ${p}_t^{i, j}$ to represent the position embedding for $x_t$ corresponding to entity pair $(v_i, v_j)$ . Propagation Module The propagation module learns representations for nodes layer by layer. The initial embeddings of nodes, i.e. the representations of layer 0, are task-related, which could be embeddings that encode features of nodes or just one-hot embeddings. 
Given representations of layer $n$ , the representations of layer $n+1$ are calculated by $$\mathbf {h}_i^{(n+1)} = \sum _{v_j \in \mathcal {N}(v_i)} \sigma (\mathcal {A}_{i, j}^{(n)}\mathbf {h}_j^{(n)}),$$ (Eq. 8) where $\mathcal {N}(v_i)$ denotes the neighbours of node $v_i$ in graph $\mathcal {G}$ and $\sigma (\cdot )$ denotes non-linear activation function. Next, we use Eq. ( 8 ) to propagate information among nodes where the initial embeddings of nodes and number of layers are further specified as follows. Suppose we are focusing on extracting the relationship between entity $v_i$ and entity $v_j$ , the initial embeddings of them are annotated as $\mathbf {h}_{v_i}^{(0)} = {a}_{\text{subject}}$ , and ${h}_{v_j}^{(0)} = {a}_{\text{object}}$ , while the initial embeddings of other entities are set to all zeros. We set special values for the head and tail entity's initial embeddings as a kind of “flag” messages which we expect to be passed through propagation. Annotators ${a}_{\text{subject}}$ and ${a}_{\text{object}}$ could also carry the prior knowledge about subject entity and object entity. In our experiments, we generalize the idea of Gated Graph Neural Networks BIBREF7 by setting ${a}_{\text{subject}} = [{1}; {0}]^{\top }$ and ${a}_{\text{object}} = [{0}; {1}]^{\top }$ . In general graphs, the number of layers $K$ is chosen to be of the order of the graph diameter so that all nodes obtain information from the entire graph. In our context, however, since the graph is densely connected, the depth is interpreted simply as giving the model more expressive power. We treat $K$ as a hyper-parameter, the effectiveness of which will be discussed in detail (Sect. "The Effectiveness of the Number of Layers" ). Classification Module Generally, the classification module takes node representations as inputs and outputs predictions. Therefore, the loss of GP-GNNs could be calculated as $$\mathcal {L} = g(\mathbf {h}_{0:|\mathcal {V}|-1}^{0}, \mathbf {h}_{0:|\mathcal {V}|-1}^{1}, \dots , \mathbf {h}_{0:|\mathcal {V}|-1}^{K}, Y; \theta _c),$$ (Eq. 10) where $\theta _c$ denotes the parameters of the classification module, $K$ is the number of layers in propagation module and $Y$ denotes the ground truth label. The parameters in GP-GNNs are trained by gradient descent methods. The output module takes the embeddings of the target entity pair $(v_i, v_j)$ as input, which are first converted by: $$\small {r}_{v_i,v_j} = [ [{h}_{v_i}^{(1)}\odot {h}_{v_j}^{(1)}]^{\top }; [{h}_{v_i}^{(2)} \odot {h}_{v_j}^{(2)}]^{\top }; \dots ; [{h}_{v_i}^{(K)} \odot {h}_{v_j}^{(K)}]^{\top }],$$ (Eq. 23) where $\odot $ represents element-wise multiplication. This could be used for classification: $$\small \mathbb {P} (r_{v_i, v_j}|h, t, s) = \mathtt {softmax}(\mathtt {MLP}({r}_{v_i,v_j})),$$ (Eq. 24) where $r_{v_i, v_j}\in \mathcal {R}$ , and $\mathtt {MLP}$ denotes a multi-layer perceptron module. We use cross entropy here as the classification loss $$\small \mathcal {L} = \sum _{s\in S} \sum _{i\ne j} \log \mathbb {P} (r_{v_i, v_j} | i, j, s),$$ (Eq. 25) where $r_{v_i, v_j}$ denotes the relation label for entity pair $(v_i, v_j)$ and $S$ denotes the whole corpus. In practice, we stack the embeddings for every target entity pairs together to infer the underlying relationship between each pair of entities. We use PyTorch BIBREF28 to implement our models. To make it more efficient, we avoid using loop-based, scalar-oriented code by matrix and vector operations. 
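In line with the note above about a vectorized PyTorch implementation, the following is a minimal (deliberately loop-based, for readability) sketch of one GP-GNN layer: a BiLSTM plus MLP over the word-and-position embeddings of each entity-pair context generates the transition matrix $\mathcal {A}_{i,j}^{(n)}$, which then drives the propagation $\mathbf {h}_i^{(n+1)} = \sum _{j} \sigma (\mathcal {A}_{i, j}^{(n)}\mathbf {h}_j^{(n)})$. All dimensions, the sequence summary, and the flag initialization are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of one GP-GNN layer (edge-parameter generation + propagation),
# loosely following the equations above; all sizes and names are assumptions.
import torch
import torch.nn as nn

class GPGNNLayerSketch(nn.Module):
    def __init__(self, emb_dim=56, node_dim=8, lstm_dim=64):
        super().__init__()
        self.node_dim = node_dim
        # Encoding module: BiLSTM + MLP generating a node_dim x node_dim matrix
        # from the word+position embeddings of an entity-pair context.
        self.bilstm = nn.LSTM(emb_dim, lstm_dim, batch_first=True,
                              bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(2 * lstm_dim, 128), nn.ReLU(),
                                 nn.Linear(128, node_dim * node_dim))

    def edge_matrix(self, pair_context):
        """pair_context: (seq_len, emb_dim) embeddings for one entity pair."""
        out, _ = self.bilstm(pair_context.unsqueeze(0))
        half = out.size(-1) // 2
        # Forward tail state and backward head state, as described for the encoder.
        summary = torch.cat([out[0, -1, :half], out[0, 0, half:]])
        return self.mlp(summary).view(self.node_dim, self.node_dim)

    def propagate(self, h, contexts):
        """h: (n_nodes, node_dim); contexts[i][j]: context tensor for pair (i, j)."""
        n = h.size(0)
        rows = []
        for i in range(n):                               # fully-connected graph
            msg = torch.zeros(self.node_dim)
            for j in range(n):
                if i == j:
                    continue
                A_ij = self.edge_matrix(contexts[i][j])
                msg = msg + torch.sigmoid(A_ij @ h[j])   # sigma(A_ij h_j)
            rows.append(msg)
        return torch.stack(rows)

# Toy usage: 3 entities, "flag" initializations for subject/object, random contexts.
layer = GPGNNLayerSketch()
h0 = torch.zeros(3, 8)
h0[0, :4] = 1.0        # subject flag (illustrative generalization of [1; 0])
h0[1, 4:] = 1.0        # object flag (illustrative generalization of [0; 1])
contexts = [[torch.randn(12, 56) for _ in range(3)] for _ in range(3)]
h1 = layer.propagate(h0, contexts)
```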
Relation Extraction with GP-GNNs Relation extraction from text is a classic natural language relational reasoning task. Given a sentence $s = (x_0, x_1, \dots , x_{l-1})$ , a set of relations $\mathcal {R}$ and a set of entities in this sentence $\mathcal {V}_s = \lbrace v_1, v_2, \dots , v_{|\mathcal {V}_s|}\rbrace $ , where each $v_i$ consists of one or a sequence of tokens, relation extraction from text is to identify the pairwise relationship $r_{v_i, v_j}\in \mathcal {R}$ between each entity pair $(v_i, v_j)$ . In this section, we will introduce how to apply GP-GNNs to relation extraction. Experiments Our experiments mainly aim to: (1) showing that our best models could improve the performance of relation extraction under a variety of settings; (2) illustrating that how the number of layers affect the performance of our model; and (3) performing a qualitative investigation to highlight the difference between our models and baseline models. In both part (1) and part (2), we do three subparts of experiments: (i) we will first show that our models could improve instance-level relation extraction on a human annotated test set, and (ii) then we will show that our models could also help enhance the performance of bag-level relation extraction on a distantly labeled test set , and (iii) we also split a subset of distantly labeled test set, where the number of entities and edges is large. Experiment Settings BIBREF26 have proposed a dataset with Wikipedia corpora. There is a small difference between our task and theirs: our task is to extract the relationship between every pair of entities in the sentence, whereas their task is to extract the relationship between the given entity pair and the context entity pairs. Therefore, we need to modify their dataset: (1) We added reversed edges if they are missing from a given triple, e.g. if triple (Earth, part of, Solar System) exists in the sentence, we add a reversed label, (Solar System, has a member, Earth), to it; (2) For all of the entity pairs with no relations, we added “NA” labels to them. We use the same training set for all of the experiments. Based on the test set provided by BIBREF26 , 5 annotators are asked to label the dataset. They are asked to decide whether or not the distant supervision is right for every pair of entities. Only the instances accepted by all 5 annotators are incorporated into the human annotated test set. There are 350 sentences and 1,230 triples in this test set. We further split a dense test set from the distantly labeled test set. Our criteria are: (1) the number of entities should be strictly larger than 2; and (2) there must be at least one circle (with at least three entities) in the ground-truth label of the sentence . This test set could be used to test our methods' performance on sentences with the complex interaction between entities. There are 1,350 sentences and more than 17,915 triples and 7,906 relational facts in this test set. We select the following models for comparison, the first four of which are our baseline models. Context-Aware RE, proposed by BIBREF26 . This model utilizes attention mechanism to encode the context relations for predicting target relations. It was the state-of-the-art models on Wikipedia dataset. This baseline is implemented by ourselves based on authors' public repo. Multi-Window CNN. BIBREF16 utilize convolutional neural networks to classify relations. 
Different from the original version of CNN proposed in BIBREF16 , our implementation, follows BIBREF19 , concatenates features extracted by three different window sizes: 3, 5, 7. PCNN, proposed by BIBREF17 . This model divides the whole sentence into three pieces and applies max-pooling after convolution layer piece-wisely. For CNN and following PCNN, the entity markers are the same as originally proposed in BIBREF16 , BIBREF17 . LSTM or GP-GNN with $K=1$ layer. Bi-directional LSTM BIBREF29 could be seen as an 1-layer variant of our model. GP-GNN with $K=2$ or $K=3$ layerss. These models are capable of performing 2-hop reasoning and 3-hop reasoning, respectively. We select the best parameters for the validation set. We select non-linear activation functions between relu and tanh, and select $d_n$ among $\lbrace 2, 4, 8, 12, 16\rbrace $ . We have also tried two forms of adjacent matrices: tied-weights (set $\mathcal {A}^{(n)} = \mathcal {A}^{(n+1)}$ ) and untied-weights. Table 1 shows our best hyper-parameter settings, which are used in all of our experiments. Evaluation Details So far, we have only talked about the way to implement sentence-level relation extraction. To evaluate our models and baseline models in bag-level, we utilize a bag of sentences with given entity pair to score the relations between them. BIBREF17 formalize the bag-level relation extraction as multi-instance learning. Here, we follow their idea and define the score function of entity pair and its corresponding relation $r$ as a max-one setting: $$\small E(r| v_i, v_j, S) = \max _{s\in S} \mathbb {P} (r_{v_i, v_j} | i, j, s).$$ (Eq. 41) Effectiveness of Reasoning Mechanism From Table 2 and 3 , we can see that our best models outperform all the baseline models significantly on all three test sets. These results indicate our model could successfully conduct reasoning on the fully-connected graph with generated parameters from natural language. These results also indicate that our model not only performs well on sentence-level relation extraction but also improves on bag-level relation extraction. Note that Context-Aware RE also incorporates context information to predict the relation of the target entity pair, however, we argue that Context-Aware RE only models the co-occurrence of various relations, ignoring whether the context relation participates in the reasoning process of relation extraction of the target entity pair. Context-Aware RE may introduce more noise, for it may mistakenly increase the probability of a relation with the similar topic with the context relations. We will give samples to illustrate this issue in Sect. "Qualitative Results: Case Study" . Another interesting observation is that our #layers=1 version outperforms CNN and PCNN in these three datasets. One probable reason is that sentences from Wikipedia corpus are always complex, which may be hard to model for CNN and PCNN. Similar conclusions are also reached by BIBREF30 . The Effectiveness of the Number of Layers The number of layers represents the reasoning ability of our models. A $K$ -layer version has the ability to infer $K$ -hop relations. To demonstrate the effects of the number of layers, we also compare our models with different numbers of layers. From Table 2 and Table 3 , we could see that on all three datasets, 3-layer version achieves the best. We could also see from Fig. 3 that as the number of layers grows, the curves get higher and higher precision, indicating considering more hops in reasoning leads to better performance. 
However, the improvement of the third layer is much smaller on the overall distantly supervised test set than the one on the dense subset. This observation reveals that the reasoning mechanism could help us identify relations especially on sentences where there are more entities. We could also see that on the human annotated test set 3-layer version to have a greater improvement over 2-layer version as compared with 2-layer version over 1-layer version. It is probably due to the reason that bag-level relation extraction is much easier. In real applications, different variants could be selected for different kind of sentences or we can also ensemble the prediction from different models. We leave these explorations for future work. Qualitative Results: Case Study Tab. 4 shows qualitative results that compare our GP-GNN model and the baseline models. The results show that GP-GNN has the ability to infer the relationship between two entities with reasoning. In the first case, GP-GNN implicitly learns a logic rule $\exists y, x\xrightarrow{}y\xrightarrow{}z\Rightarrow x \xrightarrow{}z$ to derive (Oozham, language spoken, Malayalam) and in the second case our model implicitly learns another logic rule $\exists y, x\xrightarrow{}y\xrightarrow{}z\Rightarrow x \xrightarrow{}z$ to find the fact (BankUnited Center, located in, English). Note that (BankUnited Center, located in, English) is even not in Wikidata, but our model could identify this fact through reasoning. We also find that Context-Aware RE tends to predict relations with similar topics. For example, in the third case, share boarder with and located in are both relations about territory issues. Consequently, Context-Aware RE makes a mistake by predicting (Kentucky, share boarder with, Ohio). As we have discussed before, this is due to its mechanism to model co-occurrence of multiple relations. However, in our model, since Ohio and Johnson County have no relationship, this wrong relation is not predicted. Conclusion and Future Work We addressed the problem of utilizing GNNs to perform relational reasoning with natural languages. Our proposed models, GP-GNNs, solves the relational message-passing task by encoding natural language as parameters and performing propagation from layer to layer. Our model can also be considered as a more generic framework for graph generation problem with unstructured input other than text, e.g. images, videos, audios. In this work, we demonstrate its effectiveness in predicting the relationship between entities in natural language and bag-level and show that by considering more hops in reasoning the performance of relation extraction could be significantly improved.
Yes
cf63a4f9fe0f71779cf5a014807ae4528279c25a
cf63a4f9fe0f71779cf5a014807ae4528279c25a_0
Q: How does the semi-automatic construction process work? Text: Introduction Arabish is the romanization of Arabic Dialects (ADs) used for informal messaging, especially in social networks. This writing system provides an interesting ground for linguistic research, computational as well as sociolinguistic, mainly due to the fact that it is a spontaneous representation of the ADs, and because it is a linguistic phenomenon in constant expansion on the web. Despite such potential, little research has been dedicated to Tunisian Arabish (TA). In this paper we describe the work we carried to develop a flexible and multi-purpose TA resource. This will include a TA corpus, together with some tools that could be useful for analyzing the corpus and for its extension with new data. First of all, the resource will be useful to give an overview of the TA. At the same time, it will be a reliable representation of the Tunisian dialect (TUN) evolution over the last ten years: the collected texts date from 2009 to present. This selection was done with the purpose to observe to what extent the TA orthographic system has evolved toward a writing convention. Therefore, the TArC will be suitable for phonological, morphological, syntactic and semantic studies, both in the linguistic and the Natural Language Processing (NLP) domains. For these reasons, we decided to build a corpus which could highlight the structural characteristics of TA through different annotation levels, including Part of Speech (POS) tags and lemmatization. In particular, to facilitate the match with the already existing tools and studies for the Arabic language processing, we provide a transcription in Arabic characters at token level, following the Conventional Orthography for Dialectal Arabic guidelines CODA* (CODA star) BIBREF0 and taking into account the specific guidelines for TUN (CODA TUN) BIBREF1. Furthermore, even if the translation is not the main goal of this research, we have decided to provide an Italian translation of the TArC’s texts. Even though in the last few years ADs have received an increasing attention by the NLP community, many aspects have not been studied yet and one of these is the Arabish code-system. The first reason for this lack of research is the relatively recent widespread of its use: before the advent of the social media, Arabish usage was basically confined to text messaging. However, the landscape has changed considerably, and particularly thanks to the massive registration of users on Facebook since 2008. At that time, in Tunisia there were still no Arabic keyboards, neither for Personal Computers, nor for phones, so Arabic-speaking users designed TA for writing in social media (Table TABREF14). A second issue that has held back the study of Arabish is its lack of a standard orthography, and the informal context of use. It is important to note that also the ADs lack a standard code-system, mainly because of their oral nature. In recent years the scientific community has been active in producing various sets of guidelines for dialectal Arabic writing in Arabic characters: CODA (Conventional Orthography for Dialectal Arabic) BIBREF2. 
The remainder of the paper is organized as follows: section SECREF2 is an overview of NLP studies on TUN and TA; section SECREF3 describes TUN and TA; section SECREF4 presents the TArC corpus building process; section SECREF5 explains preliminary experiments with a semi-automatic transcription and annotation procedure, adopted for a faster and simpler construction of the TArC corpus; conclusions are drawn in section SECREF6 Related Work In this section, we provide an overview of work done on automatic processing of TUN and TA. As briefly outlined above, many studies on TUN and TA aim at solving the lack of standard orthography. The first Conventional Orthography for Dialectal Arabic (CODA) was for Egyptian Arabic BIBREF2 and it was used by bies2014transliteration for Egyptian Arabish transliteration into Arabic script. The CODA version for TUN (CODA TUN) was developed by DBLP:conf/lrec/ZribiBMEBH14, and was used in many studies, like boujelbane2015traitements. Such work presents a research on automatic word recognition in TUN. Narrowing down to the specific field of TA, CODA TUN was used in masmoudi2015arabic to realize a TA-Arabic script conversion tool, implemented with a rule-based approach. The most extensive CODA is CODA*, a unified set of guidelines for 28 Arab city dialects BIBREF0. For the present research, CODA* is considered the most convenient guideline to follow due to its extensive applicability, which will support comparative studies of corpora in different ADs. As we already mentioned, there are few NLP tools available for Arabish processing in comparison to the amount of NLP tools realized for Arabic. Considering the lack of spelling conventions for Arabish, previous effort has focused on automatic transliteration from Arabish to Arabic script, e.g. chalabi2012romanized, darwish2013arabizi, and al2014automatic. These three work are based on a character-to-character mapping model that aims at generating a range of alternative words that must then be selected through a linguistic model. A different method is presented in younes2018sequence, in which the authors present a sequence-to-sequence-based approach for TA-Arabic characters transliteration in both directions BIBREF3, BIBREF4. Regardless of the great number of work done on TUN automatic processing, there are not a lot of TUN corpora available for free BIBREF5. To the best of our knowledge there are only five TUN corpora freely downloadable: one of these is the PADIC PADIC, composed of 6,400 sentences in six Arabic dialects, translated in Modern Standard Arabic (MSA), and annotated at sentence level. Two other corpora are the Tunisian Dialect Corpus Interlocutor (TuDiCoI) Tudicoi and the Spoken Tunisian Arabic Corpus (STAC) stac, which are both morpho-syntactically annotated. The first one is a spoken task-oriented dialogue corpus, which gathers a set of conversations between staff and clients recorded in a railway station. TuDiCoI consists of 21,682 words in client turns BIBREF7. The STAC is composed of 42,388 words collected from audio files downloaded from the web (as TV channels and radio stations files) BIBREF8. A different corpus is the TARIC Taric, which contains 20 hours of TUN speech, transcribed in Arabic characters BIBREF9. The last one is the TSAC Tsac, containing 17k comments from Facebook, manually annotated to positive and negative polarities BIBREF10. This corpus is the only one that contains TA texts as well as texts in Arabic characters. 
As far as we know there are no available corpora of TA transcribed in Arabic characters which are also morpho-syntactically annotated. In order to provide an answer to the lack of resources for TA, we decided to create TArC, a corpus entirely dedicated to the TA writing system, transcribed in CODA TUN and provided with a lemmatization level and POS tag annotation. Characteristics of Tunisian Arabic and Tunisian Arabish The Tunisian dialect (TUN) is the spoken language of Tunisian everyday life, commonly referred to as الدَّارِجَة, ad-dārija, العَامِّيَّة, al-‘āmmiyya, or التُّونْسِي, . According to the traditional diatopic classification, TUN belongs to the area of Maghrebi Arabic, of which the other main varieties are Libyan, Algerian, Moroccan and the Ḥassānīya variety of Mauritania BIBREF11. Arabish is the transposition of ADs, which are mainly spoken systems, into written form, thus turning into a quasi-oral system (this topic will be discussed in section SECREF12). In addition, Arabish is not realized through Arabic script and consequently it is not subject to the Standard Arabic orthographic rules. As a result, it is possible to consider TA as a faithful written representation of the spoken TUN BIBREF12. Characteristics of Tunisian Arabic and Tunisian Arabish ::: Tunisian Arabic The following list provides an excerpt of the principal features of TUN, which, through the TArC, would be researched in depth among many others. At the phonetic level, some of the main characteristics of TUN, and Maghrebi Arabic in general, are the following: 1em0pt * Strong influence of the Berber substratum, to which it is possible to attribute the conservative phonology of TUN consonants. 1em0pt * Presence of new emphatic phonemes, above all [ṛ], [ḷ], [ḅ]. * Realization of the voiced post-alveolar affricate [ʤ] as fricative . * Overlapping of the pharyngealized voiced alveolar stop , <ض>, with the fricative , <ظ>. * Preservation of a full glottal stop mainly in cases of loans from Classical Arabic (CA) or exclamations and interjections of frequent use. * Loss of short vowels in open syllables. * Monophthongization. In TUN <بَيت>, , house, becomes meaning room. * Palatalization of ā: Imāla, <إمالة>, literally inclination. (In TUN the phenomenon is of medium intensity.) Thereby the word <باب>, , door, becomes . * Metathesis. (Transposition of the first vowel of the word. It occurs when non-conjugated verbs or names without suffix begin with the sequence CCvC, where C stands for ungeminated consonant, and v for short vowel. When a suffix is added to this type of name, or a verb of this type is conjugated, the first vowel changes position giving rise to the CvCC sequence.) In TUN it results in: (he) has understood: <فْهِم>, , (she) has understood: <فِهْمِت>, or leg: <رْجِل>, , my leg: <رِجْلِي>, . Regarding the morpho-syntactic level, TUN presents: 1em0pt * Addition of the prefix /-n/ to first person verbal morphology in muḍāri' (imperfective). * Realization of passive-reflexive verbs through the morpheme /-t/ prefixed to the verb as in the example: <سوريّة مالحَفْصيّة تْتِلْبِس>, , the shirts of Ḥafṣiya are not bad, (lit: they dress). * Loss of gender distinction at the 2nd and 3rd persons, at verbal and pronominal level. * Disappearance of the dual form from verbal and pronominal inflexion. There is a residual of pseudo-dual in some words fixed in time in their dual form. * Loss of relative pronouns flexion and replacement with the invariable form <اِلّي>, . 
* Use of presentatives /ṛā-/ and /hā-/ with the meaning of here, look, as in the example in TUN: <راني مَخْنوق>, ṛ, here I am asphyxiated (by problems), or in <هاك دَبَّرْتْها>, , here you are, finding it (the solution) hence: you were lucky. * Presence of circumfix negation marks, such as < <ما>, + verb + <ش>, >. The last element of this structure must be omitted if there is another negation, such as the Tunisian adverb <عُمْر>, , never, as in the structure: < + personal pronoun suffix + + perfect verb>. This construction is used to express the concept of never having done the action in question, as in the example: <عُمري ما كُنْت نِتْصَوُّر...>, , I never imagined that.... Instead, to deny an action pointing out that it will never repeat itself again, a structure widely used is <[ma] + + + imperfective verb>, where the element within the circumfix marks is a grammaticalized element of verbal origin from CA: <عاد>, , meaning to go back, to reoccur, which gives the structure a sense of denied repetitiveness, as in the sentence: <هو ما عادِش يَرْجَع>, , he will not come back. Finally, to deny the nominal phrase, in TUN both the <موش>, , and the circumfix marks are frequently used. For the negative form of the verb to be in the present, circumfix marks can be combined with the personal suffix pronoun, placed between the marks, as in <مَانِيش>, , I am not. Within the negation marks we can also find other types of nominal structures, such as: < + (mind) + personal pronoun suffix>, which has a value equivalent to the verb be aware of, as in the example: <ما في باليش>, , I did not know. Characteristics of Tunisian Arabic and Tunisian Arabish ::: Tunisian Arabish As previously mentioned, we consider Arabish a quasi-oral system. With quasi-orality it is intended the form of communication typical of Computer-Mediated Communication (CMC), characterized by informal tones, dependence on context, lack of attention to spelling and especially the ability to create a sense of collectivity BIBREF15. TA and TUN have not a standard orthography, with the exception of the CODA TUN. Nevertheless, TA is a spontaneous code-system used since more than ten years, and is being conventionalized by its daily usage. From the table TABREF14, where the coding scheme of TA is illustrated, it is possible to observe that there is no one-to-one correspondence between TA and TUN characters and that often Arabish presents overlaps in the encoding possibilities. The main issue is represented by the not proper representation by TA of the emphatic phones: , and . On the other hand, being TA not codified through the Arabic alphabet, it can well represent the phonetic realization of TUN, as shown by the following examples: * The Arabic alphabet is generally used for formal conversations in Modern Standard Arabic (MSA), the Arabic of formal situations, or in that of Classical Arabic (CA), the Arabic of the Holy Qur’ān, also known as ‘The Beautiful Language’. Like MSA and CA, also Arabic Dialects (ADs) can be written in the Arabic alphabet, but in this case it is possible to observe a kind of hypercorrection operated by the speakers in order to respect the writing rules of MSA. For example, in TUN texts written in Arabic script, it is possible to find a ‘silent vowel’ (namely an epenthetic alif <ا>) written at the beginning of those words starting with the sequence ‘#CCv’, which is not allowed in MSA. * Writing TUN in Arabic script, the Code-Mixing or Switching in foreign language will be unnaturally reduced. 
* As described in table TABREF14, the Arabic alphabet is provided with three short vowels, which correspond to the three long ones: , , , but TUN presents a wider range of vowels. Indeed, regarding the early presented characteristics of TUN, the TA range of vowels offers better possibility to represent most of the TUN characteristics outlined in the previous subsection, in particular: [nosep] Palatalization. Vowel metathesis. Monophthongization. Tunisian Arabish Corpus In order to analyze the TA system, we have built a TA Corpus based on social media data, considering this as the best choice to observe the quasi-oral nature of the TA system. Tunisian Arabish Corpus ::: Text collection The corpus collection procedure is composed of the following steps: Thematic categories detection. Match of categories with sets of semantically related TA keywords. Texts and metadata extraction. Step UNKREF20. In order to build a Corpus that was as representative as possible of the linguistic system, it was considered useful to identify wide thematic categories that could represent the most common topics of daily conversations on CMC. In this regard, two instruments with a similar thematic organization have been employed: [nosep] ‘A Frequency Dictionary of Arabic’ BIBREF16 In particular its ‘Thematic Vocabulary List’ (TVL). ‘Loanword Typology Meaning List’ A list of 1460 meanings (LTML) BIBREF17. The TVL consists of 30 groups of frequent words, each one represented by a thematic word. The second consists of 23 groups of basic meanings sorted by representative word heading. Considering that the boundaries between some categories are very blurred, some categories have been merged, such as Body and Health, (see table TABREF26). Some others have been eliminated, being not relevant for the purposes of our research, e.g. Colors, Opposites, Male names. In the end, we obtained 15 macro-categories listed in table TABREF26. Step UNKREF21. Aiming at easily detect texts and the respective seed URLs, without introducing relevant query biases, we decided to avoid using the category names as query keywords BIBREF18. Therefore, we associated to each category a set of TA keywords belonging to the basic Tunisian vocabulary. We found that a semantic category with three meanings was enough to obtain a sufficient number of keywords and URLs for each category. For example, to the category Family the meanings: son, wedding, divorce have been associated in all their TA variants, obtaining a set of 11 keywords (table TABREF26). Step UNKREF22. We collected about 25,000 words and the related metadata as first part of our corpus, which are being semi-automatically transcribed into Arabic characters (see next sections). We planned to increase the size of the corpus at a later time. Regarding the metadata, we have extracted the information published by users, focusing on the three types of information generally used in ethnographic studies: Gender: Male (M) and Female (F). Age range: [10-25], [25-35], [35-50], [50-90]. City of origin. Tunisian Arabish Corpus ::: Corpus Creation In order to create our corpus, we applied a word-level annotation. This phase was preceded by some data pre-processing steps, in particular tokenization. Each token has been associated with its annotations and metadata (table TABREF32). In order to obtain the correspondence between Arabish and Arabic morpheme transcriptions, tokens were segmented into morphemes. This segmentation was carried out completely manually for a first group of tokens. 
In its final version, each token is associated with a total of 11 different annotations, corresponding to the number of the annotation levels we chose. An excerpt of the corpus after tokens annotation is depicted in table TABREF32. For the sake of clarity, in table TABREF32 we show: * The A column, Cor, indicates the tokens source code. For example, the code 3fE, which stands for 3rab fi Europe, is the forum from which the text was extracted. * The B column, Textco, is the publication date of the text. * The C column, Par, is the row index of the token in the paragraph. * The D column, W, is the index of the token in the sentence. When W corresponds to a range of numbers, it means that the token has been segmented in to its components, specified in the rows below. * The E column, Arabi, corresponds to the token transcription in Arabish. * The F column, Tra, is the transcription into Arabic characters. * The G column, Ita, is the translation to Italian. * The H column, Lem, corresponds to the lemma. * The I column, POS, is the Part-Of-Speech tag of the token. The tags that have been used for the POS tagging are conform to the annotation system of Universal Dependencies. * The last three columns (J, K, L) contain the metadata: Var, Age, Gen. Since TA is a spontaneous orthography of TUN, we considered important to adopt the CODA* guidelines as a model to produce a unified lemmatization for each token (column Lem in table TABREF32). In order to guarantee accurate transcription and lemmatization, we annotated manually the first 6,000 tokens with all the annotation levels. Some annotation decisions were taken before this step, with regard to specific TUN features: * Foreign words. We transcribed the Arabish words into Arabic characters, except for Code-Switching terms. In order to not interrupt the sentences continuity we decide to transcribe Code-Mixing terms into Arabic script. However, at the end of the corpus creation process, these words will be analyzed, making the distinction between acclimatized loans and Code-Mixing. The first ones will be transcribed into Arabic characters also in Lem, as shown in table TABREF33. The second ones will be lemmatized in the foreign language, mostly French, as shown in table TABREF34. * Typographical errors. Concerning typos and typical problems related to the informal writing habits in the web, such as repeated characters to simulate prosodic features of the language, we have not maintained all these characteristics in the transcription (column Tra). Logically, these were neither included in Lem, according to the CODA* conventions, as shown in table TABREF34. * Phono-Lexical exceptions. We used the grapheme <ڨ>, , only in loanword transcription and lemmatization. As can be seen in table TABREF35, the Hilalian phoneme [g] of the Turkish loanword gawriyya, has been transcribed and lemmatized with the grapheme <ق>, . * Glottal stop. As explained in CODA TUN, real initial and final glottal stops have almost disappeared in TUN. They remain in some words that are treated as exceptions, e.g. <أسئلة>, , question BIBREF1. Indeed, we transcribe the glottal stops only when it is usually pronounced, and if it does not, we do not write the glottal stops at the beginning of the word or at the end, neither in the transcription, nor in the lemmas. * Negation Marks. CODA TUN proposes to keep the MSA rule of maintaining a space between the first negation mark and the verb, in order to uniform CODA TUN to the first CODA BIBREF2. 
However, as DBLP:conf/lrec/ZribiBMEBH14 explains, in TUN this rule does not really make sense, but it should be applied to preserve consistency among the various CODA guidelines. Indeed, in our transcriptions we report what has been produced in Arabish following the CODA TUN rules, while in lemmatization we report the verb lemma. At the same time, we segment the negated verb into its minor parts: the circumfix negation marks and the conjugated verb. For the first, we describe the negative morphological structure in the Tra and Lem columns, as in table TABREF36. For the second, as for all other verbs, we provide transcription and lemmatization. Incremental and Semi-Automatic Transcription In order to make the corpus collection easier and faster, we adopted a semi-automatic procedure based on sequential neural models BIBREF19, BIBREF20. Since the transcription from Arabish into Arabic script is by far the most important information for studying the Arabish code-system, the semi-automatic procedure concerns only transcription from Arabish to Arabic script. To proceed, we used the first group of (roughly) 6,000 manually transcribed tokens as training and test data in a 10-fold cross-validation setting, with 9-1 proportions for training and test, respectively. As explained in the previous section, French tokens were removed from the data. More precisely, whole sentences containing non-transcribable French tokens (code-switching) were removed from the data. Since at this level there is no way of predicting when a French word can be transcribed into Arabic and when it has to be left unchanged, French tokens create noise for an automatic, probabilistic model. After removing sentences with French tokens, the data was reduced to roughly 5,000 tokens. We chose this amount of tokens for the annotation blocks in our incremental annotation procedure. We note that by combining sentence, paragraph and token indices in the corpus, whole sentences can be reconstructed. However, from 5,000 tokens only roughly 300 sentences could be reconstructed, which is far too few to train a neural model. Instead, since tokens are transcribed at the morpheme level, we split Arabish tokens into characters and Arabic tokens into morphemes, and we treated each token itself as a sequence. Our model thus learns to map Arabish characters into Arabic morphemes. The 10-fold cross-validation with this setting gave a token-level accuracy of roughly 71%. This result is not satisfactory on an absolute scale, but it is more than encouraging given the small size of our data. It means that fewer than 3 tokens out of 10, on average, must be corrected to increase the size of our corpus. With this model we automatically transcribed into Arabic morphemes roughly 5,000 additional tokens, corresponding to the second annotation block. This block would take at least 7.5 days to annotate entirely by hand, but thanks to the accuracy of the automatic annotation it was manually corrected in 3 days. The accuracy of the model on the annotation of the second block was roughly 70%, which corresponds to the accuracy on the test set. The manually corrected additional tokens were added to the training data of our neural model, and a new block was automatically annotated and manually corrected. Both the accuracy on the test set and on the annotation block remained at around 70%. This is because the block added to the training data was significantly different both from the previous block and from the third.
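Schematically, the incremental procedure can be summarized as in the following Python sketch. Every component here is a placeholder (the real transcriber is the character-to-morpheme sequence model just described); only the control flow is meant to be informative.

```python
# Schematic sketch of the incremental, semi-automatic transcription loop.
# All components are simple placeholders; only the control flow reflects
# the procedure described in the text.

from typing import List, Tuple

Pair = Tuple[str, str]  # (Arabish token, gold Arabic transcription)

def train_transcriber(examples: List[Pair]) -> dict:
    """Placeholder for training the character-to-morpheme sequence model."""
    return dict(examples)  # stand-in 'model' that memorizes seen tokens

def auto_transcribe(model: dict, token: str) -> str:
    """Placeholder for the model's best-guess transcription."""
    return model.get(token, "")

def correct(guess: str, gold: str) -> str:
    """Placeholder for the human pass: keep correct guesses, fix the rest."""
    return guess if guess == gold else gold

def incremental_annotation(seed: List[Pair], blocks: List[List[Pair]]) -> List[Pair]:
    annotated = list(seed)                       # first block: fully manual
    for block in blocks:                         # each block: ~5,000 tokens
        model = train_transcriber(annotated)
        annotated += [(tok, correct(auto_transcribe(model, tok), gold))
                      for tok, gold in block]    # corrections feed the next round
    return annotated
```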
Adding the third block to the training data and annotating a fourth block with the newly trained model gave, in contrast, an accuracy of roughly 80%. This incremental, semi-automatic transcription procedure is still in progress for the remaining blocks, but it is clear that it will make corpus annotation increasingly easier and faster as the amount of training data grows. Our goal concerning transcription is to have the 25,000 tokens mentioned in section SECREF19 annotated automatically and manually corrected. These data will constitute our gold annotated data, and they will be used to automatically transcribe further data. Conclusions In this paper we presented TArC, the first Tunisian Arabish Corpus annotated with morpho-syntactic information. We discussed the decisions taken in order to highlight the phonological and morphological features of TUN through the TA corpus structure. Concerning the building process, we have shown the steps undertaken and our effort to make the corpus as representative as possible of TA. We therefore described the text collection stage, as well as the corpus building and the semi-automatic procedure adopted for transcribing TA into Arabic script, taking into account the CODA* and CODA TUN guidelines. At the present stage of research, TArC consists of 25,000 tokens; however, our work is in progress, and for future research we plan to extend the semi-automatic transcription, which has already shown encouraging results (accuracy = 70%). We also intend to realize a semi-automatic TA Part-Of-Speech tagger. Thus, we aim to develop tools for TA processing and, in so doing, we strive to complete the annotation levels (transcription, POS tag, lemmatization) semi-automatically in order to increase the size of the corpus, making it available for linguistic analyses of TA and TUN.
Automatic transcription of 5000 tokens through sequential neural models trained on the annotated part of the corpus
8829f738bcdf05b615072724223dbd82463e5de6
8829f738bcdf05b615072724223dbd82463e5de6_0
Q: Does the paper report translation accuracy for an automatic translation model for Tunisian to Arabish words? Text: Introduction Arabish is the romanization of Arabic Dialects (ADs) used for informal messaging, especially in social networks. This writing system provides an interesting ground for linguistic research, computational as well as sociolinguistic, mainly due to the fact that it is a spontaneous representation of the ADs, and because it is a linguistic phenomenon in constant expansion on the web. Despite such potential, little research has been dedicated to Tunisian Arabish (TA). In this paper we describe the work we carried to develop a flexible and multi-purpose TA resource. This will include a TA corpus, together with some tools that could be useful for analyzing the corpus and for its extension with new data. First of all, the resource will be useful to give an overview of the TA. At the same time, it will be a reliable representation of the Tunisian dialect (TUN) evolution over the last ten years: the collected texts date from 2009 to present. This selection was done with the purpose to observe to what extent the TA orthographic system has evolved toward a writing convention. Therefore, the TArC will be suitable for phonological, morphological, syntactic and semantic studies, both in the linguistic and the Natural Language Processing (NLP) domains. For these reasons, we decided to build a corpus which could highlight the structural characteristics of TA through different annotation levels, including Part of Speech (POS) tags and lemmatization. In particular, to facilitate the match with the already existing tools and studies for the Arabic language processing, we provide a transcription in Arabic characters at token level, following the Conventional Orthography for Dialectal Arabic guidelines CODA* (CODA star) BIBREF0 and taking into account the specific guidelines for TUN (CODA TUN) BIBREF1. Furthermore, even if the translation is not the main goal of this research, we have decided to provide an Italian translation of the TArC’s texts. Even though in the last few years ADs have received an increasing attention by the NLP community, many aspects have not been studied yet and one of these is the Arabish code-system. The first reason for this lack of research is the relatively recent widespread of its use: before the advent of the social media, Arabish usage was basically confined to text messaging. However, the landscape has changed considerably, and particularly thanks to the massive registration of users on Facebook since 2008. At that time, in Tunisia there were still no Arabic keyboards, neither for Personal Computers, nor for phones, so Arabic-speaking users designed TA for writing in social media (Table TABREF14). A second issue that has held back the study of Arabish is its lack of a standard orthography, and the informal context of use. It is important to note that also the ADs lack a standard code-system, mainly because of their oral nature. In recent years the scientific community has been active in producing various sets of guidelines for dialectal Arabic writing in Arabic characters: CODA (Conventional Orthography for Dialectal Arabic) BIBREF2. 
The remainder of the paper is organized as follows: section SECREF2 is an overview of NLP studies on TUN and TA; section SECREF3 describes TUN and TA; section SECREF4 presents the TArC corpus building process; section SECREF5 explains preliminary experiments with a semi-automatic transcription and annotation procedure, adopted for a faster and simpler construction of the TArC corpus; conclusions are drawn in section SECREF6 Related Work In this section, we provide an overview of work done on automatic processing of TUN and TA. As briefly outlined above, many studies on TUN and TA aim at solving the lack of standard orthography. The first Conventional Orthography for Dialectal Arabic (CODA) was for Egyptian Arabic BIBREF2 and it was used by bies2014transliteration for Egyptian Arabish transliteration into Arabic script. The CODA version for TUN (CODA TUN) was developed by DBLP:conf/lrec/ZribiBMEBH14, and was used in many studies, like boujelbane2015traitements. Such work presents a research on automatic word recognition in TUN. Narrowing down to the specific field of TA, CODA TUN was used in masmoudi2015arabic to realize a TA-Arabic script conversion tool, implemented with a rule-based approach. The most extensive CODA is CODA*, a unified set of guidelines for 28 Arab city dialects BIBREF0. For the present research, CODA* is considered the most convenient guideline to follow due to its extensive applicability, which will support comparative studies of corpora in different ADs. As we already mentioned, there are few NLP tools available for Arabish processing in comparison to the amount of NLP tools realized for Arabic. Considering the lack of spelling conventions for Arabish, previous effort has focused on automatic transliteration from Arabish to Arabic script, e.g. chalabi2012romanized, darwish2013arabizi, and al2014automatic. These three work are based on a character-to-character mapping model that aims at generating a range of alternative words that must then be selected through a linguistic model. A different method is presented in younes2018sequence, in which the authors present a sequence-to-sequence-based approach for TA-Arabic characters transliteration in both directions BIBREF3, BIBREF4. Regardless of the great number of work done on TUN automatic processing, there are not a lot of TUN corpora available for free BIBREF5. To the best of our knowledge there are only five TUN corpora freely downloadable: one of these is the PADIC PADIC, composed of 6,400 sentences in six Arabic dialects, translated in Modern Standard Arabic (MSA), and annotated at sentence level. Two other corpora are the Tunisian Dialect Corpus Interlocutor (TuDiCoI) Tudicoi and the Spoken Tunisian Arabic Corpus (STAC) stac, which are both morpho-syntactically annotated. The first one is a spoken task-oriented dialogue corpus, which gathers a set of conversations between staff and clients recorded in a railway station. TuDiCoI consists of 21,682 words in client turns BIBREF7. The STAC is composed of 42,388 words collected from audio files downloaded from the web (as TV channels and radio stations files) BIBREF8. A different corpus is the TARIC Taric, which contains 20 hours of TUN speech, transcribed in Arabic characters BIBREF9. The last one is the TSAC Tsac, containing 17k comments from Facebook, manually annotated to positive and negative polarities BIBREF10. This corpus is the only one that contains TA texts as well as texts in Arabic characters. 
As far as we know, there are no available corpora of TA transcribed into Arabic characters which are also morpho-syntactically annotated. In order to address the lack of resources for TA, we decided to create TArC, a corpus entirely dedicated to the TA writing system, transcribed in CODA TUN and provided with a lemmatization level and POS tag annotation. Characteristics of Tunisian Arabic and Tunisian Arabish The Tunisian dialect (TUN) is the spoken language of Tunisian everyday life, commonly referred to as الدَّارِجَة, ad-dārija, العَامِّيَّة, al-‘āmmiyya, or التُّونْسِي, . According to the traditional diatopic classification, TUN belongs to the area of Maghrebi Arabic, of which the other main varieties are Libyan, Algerian, Moroccan and the Ḥassānīya variety of Mauritania BIBREF11. Arabish is the transposition of ADs, which are mainly spoken systems, into written form, thus turning them into a quasi-oral system (this topic will be discussed in section SECREF12). In addition, Arabish is not realized through Arabic script and consequently it is not subject to the Standard Arabic orthographic rules. As a result, it is possible to consider TA a faithful written representation of spoken TUN BIBREF12. Characteristics of Tunisian Arabic and Tunisian Arabish ::: Tunisian Arabic The following list provides an excerpt of the principal features of TUN which, among many others, can be researched in depth through the TArC. At the phonetic level, some of the main characteristics of TUN, and of Maghrebi Arabic in general, are the following: * Strong influence of the Berber substratum, to which it is possible to attribute the conservative phonology of TUN consonants. * Presence of new emphatic phonemes, above all [ṛ], [ḷ], [ḅ]. * Realization of the voiced post-alveolar affricate [ʤ] as fricative . * Overlapping of the pharyngealized voiced alveolar stop , <ض>, with the fricative , <ظ>. * Preservation of a full glottal stop mainly in loans from Classical Arabic (CA) or in exclamations and interjections of frequent use. * Loss of short vowels in open syllables. * Monophthongization. In TUN <بَيت>, , house, becomes meaning room. * Palatalization of ā: Imāla, <إمالة>, literally inclination. (In TUN the phenomenon is of medium intensity.) Thereby the word <باب>, , door, becomes . * Metathesis. (Transposition of the first vowel of the word. It occurs when non-conjugated verbs or nouns without a suffix begin with the sequence CCvC, where C stands for an ungeminated consonant and v for a short vowel. When a suffix is added to this type of noun, or a verb of this type is conjugated, the first vowel changes position, giving rise to the CvCC sequence.) In TUN this results in: (he) has understood: <فْهِم>, , (she) has understood: <فِهْمِت>, or leg: <رْجِل>, , my leg: <رِجْلِي>, . Regarding the morpho-syntactic level, TUN presents: * Addition of the prefix /-n/ to first person verbal morphology in muḍāri' (imperfective). * Realization of passive-reflexive verbs through the morpheme /-t/ prefixed to the verb, as in the example: <سوريّة مالحَفْصيّة تْتِلْبِس>, , the shirts of Ḥafṣiya are not bad (lit.: they dress). * Loss of gender distinction in the 2nd and 3rd persons, at both the verbal and pronominal level. * Disappearance of the dual form from verbal and pronominal inflexion. A residue of the pseudo-dual survives in some words fixed in their dual form. * Loss of relative pronoun inflexion and replacement with the invariable form <اِلّي>, .
* Use of presentatives /ṛā-/ and /hā-/ with the meaning of here, look, as in the example in TUN: <راني مَخْنوق>, ṛ, here I am asphyxiated (by problems), or in <هاك دَبَّرْتْها>, , here you are, finding it (the solution) hence: you were lucky. * Presence of circumfix negation marks, such as < <ما>, + verb + <ش>, >. The last element of this structure must be omitted if there is another negation, such as the Tunisian adverb <عُمْر>, , never, as in the structure: < + personal pronoun suffix + + perfect verb>. This construction is used to express the concept of never having done the action in question, as in the example: <عُمري ما كُنْت نِتْصَوُّر...>, , I never imagined that.... Instead, to deny an action pointing out that it will never repeat itself again, a structure widely used is <[ma] + + + imperfective verb>, where the element within the circumfix marks is a grammaticalized element of verbal origin from CA: <عاد>, , meaning to go back, to reoccur, which gives the structure a sense of denied repetitiveness, as in the sentence: <هو ما عادِش يَرْجَع>, , he will not come back. Finally, to deny the nominal phrase, in TUN both the <موش>, , and the circumfix marks are frequently used. For the negative form of the verb to be in the present, circumfix marks can be combined with the personal suffix pronoun, placed between the marks, as in <مَانِيش>, , I am not. Within the negation marks we can also find other types of nominal structures, such as: < + (mind) + personal pronoun suffix>, which has a value equivalent to the verb be aware of, as in the example: <ما في باليش>, , I did not know. Characteristics of Tunisian Arabic and Tunisian Arabish ::: Tunisian Arabish As previously mentioned, we consider Arabish a quasi-oral system. With quasi-orality it is intended the form of communication typical of Computer-Mediated Communication (CMC), characterized by informal tones, dependence on context, lack of attention to spelling and especially the ability to create a sense of collectivity BIBREF15. TA and TUN have not a standard orthography, with the exception of the CODA TUN. Nevertheless, TA is a spontaneous code-system used since more than ten years, and is being conventionalized by its daily usage. From the table TABREF14, where the coding scheme of TA is illustrated, it is possible to observe that there is no one-to-one correspondence between TA and TUN characters and that often Arabish presents overlaps in the encoding possibilities. The main issue is represented by the not proper representation by TA of the emphatic phones: , and . On the other hand, being TA not codified through the Arabic alphabet, it can well represent the phonetic realization of TUN, as shown by the following examples: * The Arabic alphabet is generally used for formal conversations in Modern Standard Arabic (MSA), the Arabic of formal situations, or in that of Classical Arabic (CA), the Arabic of the Holy Qur’ān, also known as ‘The Beautiful Language’. Like MSA and CA, also Arabic Dialects (ADs) can be written in the Arabic alphabet, but in this case it is possible to observe a kind of hypercorrection operated by the speakers in order to respect the writing rules of MSA. For example, in TUN texts written in Arabic script, it is possible to find a ‘silent vowel’ (namely an epenthetic alif <ا>) written at the beginning of those words starting with the sequence ‘#CCv’, which is not allowed in MSA. * Writing TUN in Arabic script, the Code-Mixing or Switching in foreign language will be unnaturally reduced. 
* As described in table TABREF14, the Arabic alphabet provides three short vowels, which correspond to the three long ones: , , , but TUN presents a wider range of vowels. Indeed, with regard to the previously presented characteristics of TUN, the TA range of vowels offers a better possibility of representing most of the TUN characteristics outlined in the previous subsection, in particular: Palatalization. Vowel metathesis. Monophthongization. Tunisian Arabish Corpus In order to analyze the TA system, we have built a TA Corpus based on social media data, considering this the best choice for observing the quasi-oral nature of the TA system. Tunisian Arabish Corpus ::: Text collection The corpus collection procedure is composed of the following steps: Thematic category detection. Matching of categories with sets of semantically related TA keywords. Text and metadata extraction. Step 1. In order to build a corpus that was as representative as possible of the linguistic system, it was considered useful to identify wide thematic categories that could represent the most common topics of daily conversation in CMC. In this regard, two instruments with a similar thematic organization have been employed: 'A Frequency Dictionary of Arabic' BIBREF16, in particular its 'Thematic Vocabulary List' (TVL), and the 'Loanword Typology Meaning List', a list of 1,460 meanings (LTML) BIBREF17. The TVL consists of 30 groups of frequent words, each one represented by a thematic word. The second consists of 23 groups of basic meanings sorted by a representative word heading. Considering that the boundaries between some categories are very blurred, some categories have been merged, such as Body and Health (see table TABREF26). Some others have been eliminated, as they were not relevant for the purposes of our research, e.g. Colors, Opposites, Male names. In the end, we obtained the 15 macro-categories listed in table TABREF26. Step 2. Aiming to easily detect texts and the respective seed URLs without introducing relevant query biases, we decided to avoid using the category names as query keywords BIBREF18. Therefore, we associated with each category a set of TA keywords belonging to the basic Tunisian vocabulary. We found that a semantic category with three meanings was enough to obtain a sufficient number of keywords and URLs for each category. For example, the category Family was associated with the meanings son, wedding, and divorce in all their TA variants, yielding a set of 11 keywords (table TABREF26). Step 3. We collected about 25,000 words and the related metadata as the first part of our corpus, which are being semi-automatically transcribed into Arabic characters (see next sections). We planned to increase the size of the corpus at a later time. Regarding the metadata, we have extracted the information published by users, focusing on the three types of information generally used in ethnographic studies: Gender: Male (M) and Female (F). Age range: [10-25], [25-35], [35-50], [50-90]. City of origin. Tunisian Arabish Corpus ::: Corpus Creation In order to create our corpus, we applied word-level annotation. This phase was preceded by some data pre-processing steps, in particular tokenization. Each token has been associated with its annotations and metadata (table TABREF32). In order to obtain the correspondence between Arabish and Arabic morpheme transcriptions, tokens were segmented into morphemes. This segmentation was carried out completely manually for a first group of tokens.
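Purely as an illustration of the collection step (Steps 1 to 3 above), the following Python sketch pairs each thematic category with a set of Arabish keyword variants and stores the retrieved texts together with the three metadata types. The keyword variants and the search interface are hypothetical, not the paper's actual keyword lists or crawler.

```python
# A minimal sketch of the keyword-based collection step. Keyword variants
# and the search function are illustrative placeholders; the paper's actual
# keyword sets come from its thematic categories (table TABREF26).

CATEGORY_KEYWORDS = {
    # category -> hypothetical Arabish spelling variants of a few seed meanings
    "Family": ["wled", "weld", "3ers", "3ors", "tla9", "tlaq"],
    "Food":   ["makla", "mekla", "khobz", "5obz"],
}

def collect_texts(search, categories=CATEGORY_KEYWORDS, per_keyword=50):
    """Gather candidate texts plus user metadata for each thematic category.

    `search(keyword, limit)` is a placeholder for whatever forum or
    social-media query interface is used; it is assumed to return dicts
    with a `text` field and optional `gender`, `age`, `city` fields.
    """
    corpus = []
    for category, keywords in categories.items():
        for kw in keywords:
            for hit in search(kw, limit=per_keyword):
                corpus.append({
                    "category": category,
                    "keyword": kw,
                    "text": hit["text"],
                    # the three metadata types retained in the corpus
                    "gender": hit.get("gender"),
                    "age_range": hit.get("age"),
                    "city": hit.get("city"),
                })
    return corpus
```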
In its final version, each token is associated with a total of 11 different annotations, corresponding to the number of the annotation levels we chose. An excerpt of the corpus after tokens annotation is depicted in table TABREF32. For the sake of clarity, in table TABREF32 we show: * The A column, Cor, indicates the tokens source code. For example, the code 3fE, which stands for 3rab fi Europe, is the forum from which the text was extracted. * The B column, Textco, is the publication date of the text. * The C column, Par, is the row index of the token in the paragraph. * The D column, W, is the index of the token in the sentence. When W corresponds to a range of numbers, it means that the token has been segmented in to its components, specified in the rows below. * The E column, Arabi, corresponds to the token transcription in Arabish. * The F column, Tra, is the transcription into Arabic characters. * The G column, Ita, is the translation to Italian. * The H column, Lem, corresponds to the lemma. * The I column, POS, is the Part-Of-Speech tag of the token. The tags that have been used for the POS tagging are conform to the annotation system of Universal Dependencies. * The last three columns (J, K, L) contain the metadata: Var, Age, Gen. Since TA is a spontaneous orthography of TUN, we considered important to adopt the CODA* guidelines as a model to produce a unified lemmatization for each token (column Lem in table TABREF32). In order to guarantee accurate transcription and lemmatization, we annotated manually the first 6,000 tokens with all the annotation levels. Some annotation decisions were taken before this step, with regard to specific TUN features: * Foreign words. We transcribed the Arabish words into Arabic characters, except for Code-Switching terms. In order to not interrupt the sentences continuity we decide to transcribe Code-Mixing terms into Arabic script. However, at the end of the corpus creation process, these words will be analyzed, making the distinction between acclimatized loans and Code-Mixing. The first ones will be transcribed into Arabic characters also in Lem, as shown in table TABREF33. The second ones will be lemmatized in the foreign language, mostly French, as shown in table TABREF34. * Typographical errors. Concerning typos and typical problems related to the informal writing habits in the web, such as repeated characters to simulate prosodic features of the language, we have not maintained all these characteristics in the transcription (column Tra). Logically, these were neither included in Lem, according to the CODA* conventions, as shown in table TABREF34. * Phono-Lexical exceptions. We used the grapheme <ڨ>, , only in loanword transcription and lemmatization. As can be seen in table TABREF35, the Hilalian phoneme [g] of the Turkish loanword gawriyya, has been transcribed and lemmatized with the grapheme <ق>, . * Glottal stop. As explained in CODA TUN, real initial and final glottal stops have almost disappeared in TUN. They remain in some words that are treated as exceptions, e.g. <أسئلة>, , question BIBREF1. Indeed, we transcribe the glottal stops only when it is usually pronounced, and if it does not, we do not write the glottal stops at the beginning of the word or at the end, neither in the transcription, nor in the lemmas. * Negation Marks. CODA TUN proposes to keep the MSA rule of maintaining a space between the first negation mark and the verb, in order to uniform CODA TUN to the first CODA BIBREF2. 
However, as DBLP:conf/lrec/ZribiBMEBH14 explains, in TUN this rule does not really make sense, but it should be applied to preserve consistency among the various CODA guidelines. Indeed, in our transcriptions we report what has been produced in Arabish following the CODA TUN rules, while in lemmatization we report the verb lemma. At the same time, we segment the negated verb into its minor parts: the circumfix negation marks and the conjugated verb. For the first, we describe the negative morphological structure in the Tra and Lem columns, as in table TABREF36. For the second, as for all other verbs, we provide transcription and lemmatization. Incremental and Semi-Automatic Transcription In order to make the corpus collection easier and faster, we adopted a semi-automatic procedure based on sequential neural models BIBREF19, BIBREF20. Since the transcription from Arabish into Arabic script is by far the most important information for studying the Arabish code-system, the semi-automatic procedure concerns only transcription from Arabish to Arabic script. To proceed, we used the first group of (roughly) 6,000 manually transcribed tokens as training and test data in a 10-fold cross-validation setting, with 9-1 proportions for training and test, respectively. As explained in the previous section, French tokens were removed from the data. More precisely, whole sentences containing non-transcribable French tokens (code-switching) were removed from the data. Since at this level there is no way of predicting when a French word can be transcribed into Arabic and when it has to be left unchanged, French tokens create noise for an automatic, probabilistic model. After removing sentences with French tokens, the data was reduced to roughly 5,000 tokens. We chose this amount of tokens for the annotation blocks in our incremental annotation procedure. We note that by combining sentence, paragraph and token indices in the corpus, whole sentences can be reconstructed. However, from 5,000 tokens only roughly 300 sentences could be reconstructed, which is far too few to train a neural model. Instead, since tokens are transcribed at the morpheme level, we split Arabish tokens into characters and Arabic tokens into morphemes, and we treated each token itself as a sequence. Our model thus learns to map Arabish characters into Arabic morphemes. The 10-fold cross-validation with this setting gave a token-level accuracy of roughly 71%. This result is not satisfactory on an absolute scale, but it is more than encouraging given the small size of our data. It means that fewer than 3 tokens out of 10, on average, must be corrected to increase the size of our corpus. With this model we automatically transcribed into Arabic morphemes roughly 5,000 additional tokens, corresponding to the second annotation block. This block would take at least 7.5 days to annotate entirely by hand, but thanks to the accuracy of the automatic annotation it was manually corrected in 3 days. The accuracy of the model on the annotation of the second block was roughly 70%, which corresponds to the accuracy on the test set. The manually corrected additional tokens were added to the training data of our neural model, and a new block was automatically annotated and manually corrected. Both the accuracy on the test set and on the annotation block remained at around 70%. This is because the block added to the training data was significantly different both from the previous block and from the third.
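For concreteness, the following is a minimal sketch of a character-to-morpheme transcription model in the spirit of the sequential neural models described above. It is a generic GRU encoder-decoder in PyTorch, not the authors' architecture; vocabulary sizes and hyperparameters are placeholders, padding handling is omitted, and evaluation is teacher-forced (a deployed system would decode with start/end symbols).

```python
# A generic encoder-decoder stand-in for the paper's sequential models:
# Arabish character ids in, Arabic morpheme ids out.

import torch
import torch.nn as nn

class CharToMorphemeTranscriber(nn.Module):
    def __init__(self, n_arabish_chars, n_arabic_morphemes,
                 emb_dim=64, hidden_dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(n_arabish_chars, emb_dim)
        self.tgt_emb = nn.Embedding(n_arabic_morphemes, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_arabic_morphemes)

    def forward(self, src_chars, tgt_morphemes):
        # src_chars: (batch, src_len) Arabish character ids
        # tgt_morphemes: (batch, tgt_len) Arabic morpheme ids (teacher forcing)
        _, h = self.encoder(self.src_emb(src_chars))
        dec_out, _ = self.decoder(self.tgt_emb(tgt_morphemes), h)
        return self.out(dec_out)          # (batch, tgt_len, n_arabic_morphemes)

def token_accuracy(model, batches):
    """Token-level accuracy in the paper's sense: a token counts as correct
    only if every predicted morpheme matches the gold transcription.
    Assumes fixed-length (padded) batches and teacher-forced evaluation."""
    correct = total = 0
    model.eval()
    with torch.no_grad():
        for src, tgt_in, tgt_gold in batches:
            pred = model(src, tgt_in).argmax(-1)
            correct += (pred == tgt_gold).all(dim=1).sum().item()
            total += src.size(0)
    return correct / total
```

A 10-fold split of the roughly 5,000 usable tokens, as in the paper, would simply wrap training and `token_accuracy` in a standard cross-validation loop.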
Adding the third block to the training data and annotating a fourth block with the newly trained model gave, in contrast, an accuracy of roughly 80%. This incremental, semi-automatic transcription procedure is still in progress for the remaining blocks, but it is clear that it will make corpus annotation increasingly easier and faster as the amount of training data grows. Our goal concerning transcription is to have the 25,000 tokens mentioned in section SECREF19 annotated automatically and manually corrected. These data will constitute our gold annotated data, and they will be used to automatically transcribe further data. Conclusions In this paper we presented TArC, the first Tunisian Arabish Corpus annotated with morpho-syntactic information. We discussed the decisions taken in order to highlight the phonological and morphological features of TUN through the TA corpus structure. Concerning the building process, we have shown the steps undertaken and our effort to make the corpus as representative as possible of TA. We therefore described the text collection stage, as well as the corpus building and the semi-automatic procedure adopted for transcribing TA into Arabic script, taking into account the CODA* and CODA TUN guidelines. At the present stage of research, TArC consists of 25,000 tokens; however, our work is in progress, and for future research we plan to extend the semi-automatic transcription, which has already shown encouraging results (accuracy = 70%). We also intend to realize a semi-automatic TA Part-Of-Speech tagger. Thus, we aim to develop tools for TA processing and, in so doing, we strive to complete the annotation levels (transcription, POS tag, lemmatization) semi-automatically in order to increase the size of the corpus, making it available for linguistic analyses of TA and TUN.
Yes
4b624064332072102ea674254d7098038edad572
4b624064332072102ea674254d7098038edad572_0
Q: Did participants behave unexpectedly? Text: Introduction Our success as a social species depends on our ability to understand, and be understood by, different communicative partners across different contexts. Theory of mind—the ability to represent and reason about others' mental states—is considered to be the key mechanism that supports such context-sensitivity in our everyday social interactions. Being able to reason about what others see, want, and think allows us to make more accurate predictions about their future behavior in different contexts and adjust our own behaviors accordingly BIBREF0 . Over the past two decades, however, there has been sustained debate over the extent to which adults actually make of use theory of mind in communication. On one hand, accounts of language use in the tradition of BIBREF1 and BIBREF2 , BIBREF3 implicitly assume a fundamental and pervasive role for theory of mind mechanisms. The meaning of an utterance is established against a backdrop of inference, intention, and common ground: knowledge that is taken to be shared by both parties BIBREF4 , BIBREF5 . This view of adults as natural mind-readers is consistent with extensive evidence from the psycholinguistics literature: for instance, we spontaneously calibrate our referential expressions to our intended audiences BIBREF6 and make use of partner-specific history BIBREF7 , BIBREF8 . Yet in other cases the evidence appears to be more consistent with a more egocentric or “reflexively mind-blind” view of language processing BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Under this view, although adults have the ability to deploy theory of mind, it is effortful and costly to do so. Thus people may initially anchor on their own perspective and only adjust to account for other perspectives when a problem arises and when sufficient cognitive resources are available. Much of this debate has centered around the influential director-matcher paradigm, a variant of classic reference games BIBREF13 where a confederate speaker gives participants instructions about how to move objects around a grid. By introducing an asymmetry in visual access—certain cells of the grid are covered such that participants can see objects that the speaker cannot (e.g. Fig. 1 )— BIBREF14 designed a task to expose cases where participants (listeners) either succeed or fail to take into account what the speaker sees. In particular, BIBREF14 argued that if listeners were reliably using theory of mind, they would only consider mutually visible objects as possible referents. For instance, on one trial a roll of Scotch tape was mutually visible and a cassette tape was hidden from the speaker's view. When the confederate speaker produced an ambiguous utterance, “tape,” participants should still interpret it as a reference to the mutually visible object even if it fits the hidden object better; the idea is that a speaker who cannot see an object wouldn't possibly be referring to it. While the visual asymmetries constructed by BIBREF14 may provide the starkest test of this hypothesis, variations on this basic paradigm have manipulated other dimensions of non-visual knowledge asymmetry, including those based on spoken information BIBREF15 , BIBREF16 , spatial cues BIBREF17 , BIBREF18 , private pre-training on object labels BIBREF19 , cultural background BIBREF20 , and other task-relevant information BIBREF21 , BIBREF22 . 
Questions about speaker perspective-taking during production have similarly been explored by reversing the direction of the asymmetry so the speaker has private knowledge that the listener does not and examining whether this private information leaks into their utterances BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 . Numerous rounds of reinterpretation and methodological criticism have puzzled over seemingly contradictory findings in this sprawling body of work: some studies find strong evidence consistent with an egocentric view—listeners initially consider and even attempt to move such objects—while others find that information from the speaker's perspective is integrated from the very earliest stages of processing BIBREF30 , BIBREF31 . Recent computational models have begun to unify this literature under a probabilistic framework. For instance, some models assume that listeners BIBREF32 and speakers BIBREF33 simultaneously integrate their own perspective with that of their partner, leading to behavior that lies between purely egocentric and purely guided by common ground. These constraint-based models BIBREF34 , BIBREF35 introduce a probabilistic weighting parameter between the two domains of reference and show that an intermediate weighting explains the gradient of communicative behavior better than a purely egocentric or purely perspective-adopting model. Yet these constraint-based models leave open a key puzzle for rational models of language use: why do people use the proportion they do in a given context? In other words, while different factors influencing the weighting have been proposed, no formal mechanism yet explains why incorporating egocentric knowledge would be adaptive when full common ground is available. We argue in this paper for a resource rational account of perspective-taking in communication BIBREF36 , BIBREF37 . In a communicative interaction with another agent, the participants share the goal of successfully being understood while minimizing joint effort BIBREF38 , BIBREF4 . If theory of mind use is indeed effortful and cognitively demanding to some degree BIBREF39 , BIBREF40 , BIBREF41 , then the question for a rational agent is when and how to best allocate its cognitive resources to achieve its goals. This sets up a natural division of labor between the speaker and listener in how the effort should be shared, which in principle admits many solutions. Rather than being guided by rigid heuristics, individuals may rationally and adaptively calibrate their perspective-taking based on expectations about their partner's likely behavior. Critically, these expectations may themselves be derived from a targeted use of theory of mind. Here, we explore one particular source of expectations derived from Gricean expectations of informativity, which have been largely neglected by prior work in the perspective-taking literature BIBREF42 . Just as making sense of an agent's physical behaviors requires a broad, accurate mental model of how the agent's visual access, beliefs, and intentions translate into motor plans BIBREF43 , BIBREF44 , making sense of an agent's linguistic behaviors depends on an accurate model of what a speaker would say, or what a listener would understand, in different situations BIBREF45 , BIBREF46 , BIBREF47 , BIBREF48 , BIBREF49 . 
From this perspective, theory of mind use not only incorporates people’s mental models of a partner’s knowledge or visual access but also their inferences about how their partner would behave in a communicative context. To instantiate this account, we elaborate the family of probabilistic weighting models by proposing that theory of mind use under knowledge asymmetries not only involves integrating a partner's knowledge but also recursive reasoning about how they will likely produce or interpret utterances in particular communicative contexts BIBREF50 . The Gricean notion of cooperativity BIBREF3 , BIBREF4 refers to the idea that speakers try to avoid saying things that are confusing or unnecessarily complicated given the current context, and that listeners expect this. For instance, imagine trying to help someone spot your dog at a busy dog park. It may be literally correct to call it a “dog,” but as a cooperative speaker you would understand that the listener would have trouble disambiguating the referent from many other dogs. Likewise, the listener would reasonably expect you to say something more informative than “dog” in this context. You may therefore prefer to use a more specific or informative expressions, like “the little terrier with the blue collar.” BIBREF7 , BIBREF51 . Critically, you might do so even when you happen to see only one dog at the moment, but know there are likely to be other dogs from the listener's point of view. In the presence of uncertainty about their partner's visual context, a cooperative speaker may tend toward additional specificity. Now, what level of specificity is pragmatically appropriate in the particular director-matcher task used by BIBREF52 ? This task requires the speaker to generate a description such that a listener can identify the correct object among distractors, even though several cells are hidden from the speaker's view (e.g. Fig. 2 , bottom). It is thus highly salient to the speaker that there are hidden objects she cannot see but her partner can. Gricean reasoning, as realized by recent formal models BIBREF46 , BIBREF47 , BIBREF49 , predicts that a speaker in this context will compensate for her uncertainty about the listener's visual context by increasing the informativity of her utterance beyond what she would produce in a completely shared context. (See Appendix A for a formal model of pragmatic reasoning in this situation and a mathematical derivation of the informativity prediction.). The director-matcher task used by BIBREF52 is therefore not only challenging for the listener; it also requires a sophisticated use of theory of mind, vis a vis pragmatic reasoning, on the part of the speaker, to understand that the listener may expect her to increase the informativity of her utterance. While extensive prior work has examined how speakers adjust their utterances, or not, depending on their own private information, it remains untested how they pragmatically compensate for their lack of access to the listener's private information by flexibly modifying their informativity. In the following experiments, we ask whether people, as speakers, show such sensitivity to their own uncertainty about their partner's visual access. Furthermore, we suggest that such sensitivity (and the listener's expectations about this sensitivity) can help us understand why listeners in prior work (e.g., in the Director-Matcher task) made frequent errors. 
A listener's rational reliance on the speaker's informativity, which allows them to efficiently neglect the speaker's visual access under cognitive load, may backfire and lead to errors when paired with a confederate speaker who violates Gricean expectations. First, we directly test our model's prediction by manipulating the presence and absence of occlusions in a simple, interactive, natural-language reference game. Second, we conduct a replication of BIBREF52 with an additional unscripted condition to evaluate whether the scripted referring expressions used by confederate speakers in prior work accord with what a real speaker would say in the same interactive context BIBREF54 , BIBREF55 , BIBREF56 . If confederate speakers were using scripts that were uncooperative and underinformative compared to what speakers naturally say, this previously unrecognized violation of Gricean expectations may have implications for the rational basis of listener errors. Our main goal here is to directly establish the adaptive pragmatic behavior of speakers. It is important to note that our broader claim about the source of listener errors emerges from establishing the plausibility of a resource-rational basis for perspective-neglect, showing that speakers are adaptive (Exp.1) and listeners indeed make more errors when speakers violate their expectations (Exp.2); causally manipulating listener expectations is beyond the scope of the current work. We return to the broader implications and predictions of this account in the discussion. Experiment 1: Speaker behavior under uncertainty How does an unscripted speaker change her communicative behavior when there is uncertainty about exactly what her partner can see? To address this question empirically, we randomly assigned participants to the roles of speaker and listener and paired them over the web to play an interactive communication task BIBREF57 . Methods We recruited 102 pairs of participants from Amazon Mechanical Turk and randomly assigned speaker and listener roles. After we removed 7 games that disconnected part-way through and 12 additional games according to our pre-registered exclusion criteria (due to being non-native English speakers, reporting confusion about the instructions, or clearly violating the instructions), we were left with a sample of 83 full games. On each trial, both players were presented with a $3\times 3$ grid containing objects. One target object was privately highlighted for the speaker, who freely typed a message into a chat box in order to get the listener to click the intended referent. The objects varied along three discrete features (shape, texture, and color), each of which took four discrete values (64 possible objects). See Appendix Fig. 7 for a screenshot of the interface. There were four types of trials, forming a within-pair $2 \times 2$ factorial design. We manipulated the presence or absence of occlusions and the closeness of shared distractors to the target (see Fig. 2 ). On `shared' trials, all objects were seen by both participants, but on `hidden' trials, two cells of the grid were covered with occluders (curtains) such that only the listener could see the contents of the cell. On `far' trials, the target is the only object with a particular shape; on `close' trials, there is also a shared distractor with the target's shape, differing only in color or texture. 
In order to make it clear to the speaker that there could really be objects behind the occluders without providing a statistical cue to their identity or quantity on any particular trial, we randomized the total number of distractors in the grid on each trial (between 2 and 4) as well as the number of those distractors covered by curtains (1 or 2). If there were only two distractors, we did not allow both of them to be covered: there was always at least one visible distractor. Each trial type appeared 6 times for a total of 24 trials, and the sequence of trials was pseudo-randomized such that no trial type appeared more than twice in each block of eight trials. Participants were instructed to use visual properties of the objects rather than spatial locations in the grid. Finally, we collected mouse-tracking data analogous to the eye-tracking common in referential paradigms. We asked the matcher to wait until the director sent a message; when the message was received, the matcher clicked a small circle in the center of the grid to show the objects and proceed with the trial. We recorded at 100Hz from the matcher's mouse in the decision window after this click, until the point where they clicked and started to drag one of the objects. While we did not intend to analyze these data for Exp. 1, we anticipated using it in our second experiment below and wanted to use the same procedure across experiments for consistency. We recruited 200 pairs of participants from Amazon Mechanical Turk. 58 pairs were unable to complete the game due to a server outage. Following our preregistered exclusion criteria, we removed 24 games who reported confusion, violated our instructions, or made multiple errors on filler items, as well as 2 additional games containing non-native English speakers. This left 116 pairs in our final sample. The materials and procedure were chosen to be as faithful as possible to those reported in BIBREF52 while allowing for interaction over the web. Directors used a chat box to communicate where to move a privately cued target object in a $4 \times 4$ grid (see Fig. 1 ). The listener then attempted to click and drag the intended object. In each of 8 objects sets, mostly containing filler objects, one target belonged to a `critical pair' of objects, such as a visible cassette tape and a hidden roll of tape that could both plausibly be called `the tape.' We displayed instructions to the director as a series of arrows pointing from some object to a neighboring unoccupied cell. Trials were blocked into eight sets of objects, with four instructions each. As in BIBREF52 , we collected baseline performance by replacing the hidden alternative (e.g. a roll of tape) with a filler object that did not fit the critical instruction (e.g. a battery) in half of the critical pairs. The assignment of items to conditions was randomized across participants, and the order of conditions was randomized under the constraint that the same condition would not be used on more than two consecutive items. All object sets, object placements, and corresponding instruction sets were fixed across participants. In case of a listener error, the object was placed back in its original position; both participants were given feedback and asked to try again. We used a between-subject design to compare the scripted labels used by confederate directors in prior work against what participants naturally say in the same role. 
For participants assigned to the director role in the `scripted' condition, a pre-scripted message using the precise wording from BIBREF52 automatically appeared in their chat box on half of trials (the 8 critical trials as well as nearly half of the fillers). Hence, the scripted condition served as a direct replication. To maintain an interactive environment, the director could freely produce referring expressions on the remainder of filler trials. In the `unscripted' condition, directors were unrestricted and free to send whatever messages they deemed appropriate on all trials. In addition to analyzing messages sent through the chat box and errors made by matchers (listeners), we collected mouse-tracking data in analogy to the eye-tracking common in these paradigms. Behavioral results Our primary measure of speaker behavior is the length (in words) of naturally produced referring expressions sent through the chat box. We tested differences in speaker behavior across conditions using a mixed-effect regression of context and occlusion on the number of words produced, with maximal random effect structure containing intercept, slopes, and interaction. First, as a baseline, we examined the simple effect of close vs. far contexts in trials with no occlusions. We found that speakers used significantly more words on average when there was a distractor in context that shared the same shape as the target ( $b = 0.56, t = 5.1, p < 0.001$ ; see Fig. 3 A). This replicates the findings of prior studies in experimental pragmatics BIBREF7 , BIBREF58 . Next, we turn to the simple effect of occlusion in far contexts (which are most similar to the displays used in the director-matcher task which we adopt in Exp. 2 BIBREF52 ). Speakers used 1.25 additional words on average when they knew their partner could potentially see additional objects ( $t = 7.5, p < 0.001$ ). Finally, we found a significant interaction ( $b = -0.49, t = 3.8, p <0.001$ ) where the effect of occlusion was larger in far contexts, likely indicating a ceiling on the level of informativity required to individuate objects in our simple stimulus space. What are these additional words used for? As a secondary analysis, we annotated each utterance based on which of the three object features were mentioned (shape, texture, color). Because speakers nearly always mentioned shape (e.g. `star', `triangle') as the head noun of their referring expression regardless of context ( $\sim 99\%$ of trials), differences in utterance length across conditions must be due to differentially mentioning the other two features (color and texture). To test this observation, we ran separate mixed-effect logistic regressions for color and texture predicting mention from context; due to convergence issues, the maximum random effect structure supported by our data contains only speaker-level intercepts and slopes for the occlusion effect. We found simple effects of occlusion in far contexts for both features ( $b = 1.33, z = 2.9, p = 0.004$ for color; $b = 4.8, z = 6.4, p < 0.001$ for texture, see Fig. 3 B). In other words, in displays like the left column of Fig. 2 where the target was the only `star', speakers were somewhat more likely to produce the star's color—and much more likely to produce its texture—when there were occlusions present, even though shape alone is sufficient to disambiguate the target from visible distractors in both cases. 
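The paper does not state which statistical software was used for these analyses; as a rough approximation, the word-count model could be specified in Python with statsmodels as follows. The column names (words, context, occlusion, pair) and the input file are assumptions, and statsmodels handles random-effect structures somewhat differently from lme4-style maximal models.

```python
# Approximate re-specification of the Experiment 1 word-count analysis.
# Column names and the data file are assumed, not taken from the paper.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("exp1_trials.csv")   # hypothetical file: one row per trial

# Fixed effects of context (close/far), occlusion (shared/hidden) and their
# interaction; by-pair random intercepts and slopes.
model = smf.mixedlm(
    "words ~ context * occlusion",
    data=df,
    groups="pair",
    re_formula="~context * occlusion",
)
result = model.fit()
print(result.summary())
```

The feature-mention analyses would be specified analogously as mixed logistic regressions, with speaker-level intercepts and slopes for the occlusion effect.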
Finally, we note that listener errors were rare: 88% of listeners made only one or fewer errors (out of 24 trials), and there was no significant difference in error rates across the four conditions ( $\chi ^2(3) = 1.23, p = 0.74$ ). We test the connections between context-sensitive speaker behavior and listener error rates more explicitly in Exp. 2. Model comparison While our behavioral results provide qualitative support for a Gricean account over an egocentric account, formalizing these two accounts in computational models allows a stronger test of our hypothesis by generating graded quantitative predictions. We formalized both accounts in the probabilistic Rational Speech Act (RSA) framework BIBREF47 , BIBREF46 , BIBREF49 , BIBREF59 , BIBREF48 , which has successfully captured a variety of other pragmatic phenomena. In this framework, speakers are decision-theoretic agents attempting to (soft-)maximize a utility function balancing parsimony (i.e., a preference for shorter, simpler utterances) with informativeness (i.e., the likelihood of an imagined listener agent having the intended interpretation). The only difference between the two accounts in the RSA framework is how the asymmetry in visual access is handled: the `occlusion-blind' speaker simply assumes that the listener sees the same objects as she herself sees, while the `occlusion-sensitive' speaker represents uncertainty over her partner's visual context. In particular, she assumes a probability distribution over the possible objects that might be hidden behind the occlusions and attempts to be informative on average. The two models have the same four free parameters: a speaker optimality parameter controlling the soft-max temperature, and three parameters controlling the costs of producing the features of shape, color, and texture (see Appendix B for details). We conducted a Bayesian data analysis to infer these parameters conditioning on our empirical data, and computed a Bayes Factor to compare the models. We found extremely strong support for the occlusion-sensitive model relative to the occlusion-blind model ( $BF = 2.2 \times 10^{209}$ ; see Appendix Fig. 8 for likelihoods). To examine the pattern of behavior of each model, we computed the posterior predictive on the expected number of features mentioned in each trial type of our design. While the occlusion-blind speaker model successfully captured the simple effect of close vs. far contexts, it failed to account for behavior in the presence of occlusions. The occlusion-sensitive model, on the other hand, accurately accounted for the full pattern of results (see Fig 4 ). Finally, we examined parameter posteriors for the occlusion-sensitive model (see Appendix Fig. 9 ): the inferred production cost for texture was significantly higher than that for the other features, reflecting the asymmetry in production of texture relative to color. Experiment 2: Comparing confederates to natural speakers Experiment 1 directly tested the hypothesis that speakers increase their specificity in contexts with asymmetry in visual access. We found that speakers are not only context-sensitive in choosing referring expressions that distinguish target from distractors in a shared context, but are occlusion-sensitive, adaptively compensating for uncertainty. 
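As a concrete illustration of the model comparison just described, the following is a minimal Python sketch of an RSA-style speaker. It is not the authors' implementation: the utterance space, the cost and optimality values, and the treatment of uncertainty over hidden cells (averaging the speaker distribution over imagined contexts) are simplifying assumptions; the exact formulation and inferred parameters are given in the paper's appendices.

```python
# Minimal RSA-style speaker sketch. Objects are feature dicts, utterances
# are tuples of (feature, value) pairs, and all numeric values are
# placeholders rather than the paper's inferred parameters.

import math

ALPHA = 5.0                                            # speaker optimality
COST = {"shape": 0.0, "color": 0.2, "texture": 0.4}    # placeholder costs

def true_of(utt, obj):
    return all(obj[f] == v for f, v in utt)

def utt_cost(utt):
    return sum(COST[f] for f, _ in utt)

def literal_listener(utt, context):
    """Uniform over context objects consistent with utt (keyed by identity)."""
    consistent = [o for o in context if true_of(utt, o)]
    return {id(o): 1.0 / len(consistent) for o in consistent}

def speaker(target, context, utterances, alpha=ALPHA):
    """Soft-max speaker: informativeness about the target minus cost.
    With a fully shared context this is the 'occlusion-blind' speaker."""
    scores = []
    for utt in utterances:
        l0 = literal_listener(utt, context).get(id(target), 0.0)
        scores.append(alpha * (math.log(l0) - utt_cost(utt)) if l0 > 0 else -math.inf)
    z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / z for s in scores]

def occlusion_sensitive_speaker(target, visible, hidden_hypotheses, utterances):
    """Average the speaker distribution over imagined occluded contents."""
    probs = [0.0] * len(utterances)
    for hidden in hidden_hypotheses:               # each: a list of imagined objects
        for i, p in enumerate(speaker(target, visible + hidden, utterances)):
            probs[i] += p / len(hidden_hypotheses)
    return probs

# Example: with only a blue circle as visible distractor, "star" suffices,
# so the blind speaker prefers the cheaper utterance.
star = {"shape": "star", "color": "blue", "texture": "striped"}
circle = {"shape": "circle", "color": "blue", "texture": "solid"}
utts = [(("shape", "star"),), (("shape", "star"), ("color", "blue"))]
print(speaker(star, [star, circle], utts))
```

Qualitatively, the occlusion-sensitive variant shifts probability toward the longer, more informative utterance whenever plausible hidden competitors are imagined, which is the pattern the model comparison favored.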
Critically, this resulted in systematic differences in behavior across the occlusion conditions that are difficult to explain under an egocentric theory: in the presence of occlusions, speakers were spontaneously willing to spend additional time and keystrokes to give further information beyond what they produce in the corresponding unoccluded contexts, even though that information is equally redundant given the visible objects in their display. These results validate our prediction that speakers appropriately increase their level of specificity in contexts containing occlusions. In Experiment 2, we recruited pairs of participants for an online, interactive version of the original director-matcher task BIBREF52 which used occluded contexts to demonstrate limits on visual perspective-taking for the listener. Given the results of Exp. 1, we predicted that participants in the director role (i.e. speakers) would naturally provide more informative referring expressions than the confederate directors used in prior work. This would suggest that the confederate directors in prior work were pragmatically infelicitous, violating listeners' expectations. This violation of listeners' cooperative expectations may have led to detrimental consequences for listener performance. Results Our scripted condition successfully replicated the results of BIBREF52 with even stronger effects: listeners incorrectly moved the hidden object on approximately 50% of critical trials. However, on unscripted trials, the listener error rate dropped by more than half, $p_1 = 0.51, p_2 = 0.20, \chi ^2(1) = 43, p < 0.001$ (Fig. 5 A). While we found substantial heterogeneity in error rates across object sets (just 3 of the 8 object sets accounted for the vast majority of remaining unscripted errors; see Appendix Fig. 10 ), listeners in the unscripted condition made fewer errors for nearly every critical item. In a maximal logistic model with fixed effect of condition, random intercepts for each dyad, and random slopes and intercepts for each object set, we found a significant difference in error rates across conditions ( $z = 2.6, p = 0.008$ ). Even if participants in the unscripted condition make fewer actual errors, they may still be considering the hidden object just as often on trials where they go on to make correct responses. As a proxy for the eye-tracking analyses reported by BIBREF52 , we conducted a mouse-tracking analysis. We computed the mean (logged) amount of time spent hovering over the hidden distractor and found a significant interaction between condition and the contents of the hidden cell ( $t = 3.59, p <0.001$ ; Fig. 5 B) in a mixed-effects regression using dyad-level and object-level random intercepts and slopes for the difference from baseline. Listeners in the scripted condition spent more time hovering over the hidden cell when it contained a confusable distractor relative to baseline, again replicating BIBREF52 . In the unscripted condition there was no difference from baseline. Next, we test whether these improvements in listener performance in the unscripted condition are accompanied by more informative speaker behavior than the scripted utterances allowed. The simplest measure of speaker informativity is the raw number of words used in referring expressions. 
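Before turning to speaker informativity, here is a small sketch of the aggregate error-rate comparison above as a chi-square test on the 2x2 table of error vs. correct critical trials per condition. The counts are placeholders chosen to be roughly consistent with the reported rates, not the actual data:

```python
from scipy.stats import chi2_contingency

# Placeholder counts of [errors, correct] on critical trials (illustrative only).
table = [[237, 227],   # scripted: roughly half of critical trials end in an error
         [ 93, 371]]   # unscripted: error rate drops by more than half
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.3g}")
```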
Compared to the scripted referring expressions, speakers in the unscripted condition used significantly more words to refer to critical objects ( $b = 0.54, t = 2.6, p=0.019$ in a mixed-effects regression on difference scores using a fixed intercept and random intercepts for object and dyads). However, this is a coarse measure: for example, the shorter “Pyrex glass” may be more specific than “large measuring glass” despite using fewer words. For a more direct measure, we extracted the referring expressions generated by speakers in all critical trials and standardized spelling and grammar, yielding 122 unique labels after including scripted utterances. We then recruited an independent sample of 20 judges on Amazon Mechanical Turk to rate how well each label fit the target and hidden distractor objects on a slider from “strongly disagree” (meaning the label “doesn't match the object at all”) to “strongly agree” (meaning the label “matches the object perfectly”). They were shown objects in the context of the full grid (with no occlusions) such that they could feasibly judge spatial or relative references like “bottom block.” We excluded 4 judges for guessing with response times $< 1s$ . Inter-rater reliability was relatively high, with intra-class correlation coefficient of $0.54\, (95\% CI = [0.47, 0.61])$ . We computed the informativity of an utterance (the tape) as the difference in how well it was judged to apply to the target (the cassette tape) relative to the distractor object (the roll of tape). Our primary measure of interest is the difference in informativity across scripted and unscripted utterances. We found that speakers in the unscripted condition systematically produced more informative utterances than the scripted utterances ( $d = 0.5$ , 95% bootstrapped CI = $[0.27, 0.77], p < .001$ ; see Appendix C for details). Scripted labels fit the hidden distractor just as well or better than the target, but unscripted labels fit the target better and the hidden distractor much worse (see Fig. 6 A). In other words, the scripted labels used in BIBREF52 were less informative than expressions speakers would normally produce to refer to the same object in this context. These results strongly suggest that the speaker's informativity influences listener accuracy. In support of this hypothesis, we found a strong negative correlation between informativity and error rates across items and conditions: listeners make fewer errors when utterances are a better fit for the target relative to the distractor ( $\rho = -0.81$ , bootstrapped 95% CI $= [-0.9, -0.7]$ ; Fig. 6 B). This result suggests that listener behavior is driven by an expectation of speaker informativity: listeners interpret utterances proportionally to how well they fit objects in context. General Discussion Are human adults expert mind-readers, or fundamentally egocentric? The longstanding debate over the role of theory of mind in communication has largely centered around whether listeners (or speakers) with private information consider their partner's perspective BIBREF30 , BIBREF16 . Our work presents a more nuanced picture of how a speaker and a listener use theory of mind to modulate their pragmatic expectations. The Gricean cooperative principle emphasizes a natural division of labor in how the joint effort of being cooperative is shared BIBREF4 , BIBREF60 . 
It can be asymmetric when one partner is expected to, and able to, take on more complex reasoning than the other, in the form of visual perspective-taking, pragmatic inference, or avoiding further exchanges of clarification and repair. One such case is when the speaker has uncertainty over what the listener can see, as in the director-matcher task. Our Rational Speech Act (RSA) formalization of cooperative reasoning in this context predicts that speakers (directors) naturally increase the informativity of their referring expressions to hedge against the increased risk of misunderstanding; Exp. 1 presents direct evidence in support of this hypothesis. Importantly, when the director (speaker) is expected to be appropriately informative, communication can be successful even when the matcher (listener) does not reciprocate the effort. If visual perspective-taking is effortful and cognitively demanding BIBREF39 , the matcher will actually minimize joint effort by not taking the director's visual perspective. This suggests a less egocentric explanation of when and why listeners neglect the speaker's visual perspective; they do so when they expect the speaker to disambiguate referents sufficiently. While adaptive in most natural communicative contexts, such neglect might backfire and lead to errors when the speaker (inexplicably) violates this expectation. From this point of view, the “failure” of listener theory of mind in these tasks is not really a failure; instead, it suggests that both speakers and listeners may use theory of mind to know when (and how much) they should expect others to be cooperative and informative, and subsequently allocate their resources accordingly BIBREF36 . Exp. 2 is consistent with this hypothesis; when directors used underinformative scripted instructions (taken from prior work), listeners made significantly more errors than when speakers were allowed to provide referring expressions at their natural level of informativity, and speaker informativeness strongly modulated listener error rates. Our work adds to the growing literature on the debate over the role of pragmatics in the director-matcher task. A recent study questions the communicative nature of the task itself by showing that selective attention alone is sufficient for successful performance on this task, and that listeners become suspicious of the director's visual access when the director shows unexpectedly high levels of specificity in their referring expressions BIBREF61 . Our results further bolster the argument that pragmatic reasoning about appropriate levels of informativity is an integral aspect of theory of mind use in the director-matcher task (and communication more generally). Note however that in BIBREF61 , participants became suspicious, while in our study participants overtrusted the speaker to be informative; a more detailed look at differences between experimental paradigms, as well as further experimental work, is necessary to better understand why participants had different expectations about the speaker. Prior work also suggests that although speakers tend to be over-informative in their referring expressions BIBREF62 , a number of situational factors (e.g., perceptual saliency of referents) can modulate this tendency. Our work hints at an additional principle that guides speaker informativity: speakers maintain uncertainty about the listener's visual context and their ability to disambiguate the referent in that context.
Additionally, while our model builds on probabilistic models weighting different perspectives BIBREF32 , BIBREF33 , we leave the formal integration of resource-rational recursive reasoning mechanisms with perspective-weighting mechanisms for future work. While BIBREF33 focused on cases where the speaker has private information unknown to the listener, our model focuses on the reverse case: how speakers behave when they know that the listener has additional private information BIBREF52 . Furthermore, whether the allocation of resources, and ensuing perspective neglect, is a fixed strategy or one that adjusts dynamically remains an open question: given sufficient evidence of an unusually underinformative partner, listeners may realize that vigilance about which objects are occluded yields a more effective strategy for the immediate interaction. An important direction for future work is to directly explore listener adaptability in adjusting their use of visual perspective-taking as a function of Gricean expectations for a given partner BIBREF63 , BIBREF64 . In sum, our findings suggest that language use is well-adapted to contexts of uncertainty and knowledge asymmetry. The pragmatic use of theory of mind to establish division of labor is also critical for other forms of social cooperation, including pedagogy BIBREF65 and team-based problem solving BIBREF66 , BIBREF67 . Enriching our notion of theory of mind use to encompass these pragmatic expectations, not only expectations about what our partner knows or desires, may shed new light on the flexibility of social interaction more broadly. Acknowledgements This manuscript is based in part on work presented at the 38th Annual Conference of the Cognitive Science Society. The first author is supported by an NSF Graduate Research Fellowship and a Stanford Graduate Fellowship. A pilot of Exp. 2 was originally conducted under the supervision of Michael Frank, with early input from Desmond Ong. We’re grateful to Boaz Keysar for providing select materials for our replication. This work was supported by ONR grants N00014-13-1-0788 and N00014-13-1-0287, and a James S. McDonnell Foundation Scholar Award to NDG. Author contributions R.X.D.H. and N.D.G. initially formulated the project. R.X.D.H. performed experiments, analyzed data, and performed computational modeling. All authors planned experiments, interpreted results, and wrote the paper. Unless otherwise mentioned, all analyses and materials were preregistered at https://osf.io/qwkmp/. Code and materials for reproducing the experiment as well as all data and analysis scripts are open and available at https://github.com/hawkrobe/pragmatics_of_perspective_taking. Appendix A: Derivation of qualitative model predictions Our experiments are motivated by the Gricean observation that speakers should attempt to be more informative when there is an asymmetry in visual access, such that their partner sees something they do not. In this appendix, we formalize this scenario in a computational model of communication as recursive social reasoning and prove that the predicted increase in informativity qualitatively holds under fairly unrestrictive conditions.
Following recent advances in the Rational Speech Act (RSA) framework, we define a speaker as a decision-theoretic agent who must choose a referring expression $u$ to refer to a target object $o$ in a context $C$ by (soft)-maximizing a utility function $U$ : $S(u | o, C) \propto \exp \lbrace \alpha U(u; o, C)\rbrace $ Definition The basic utility used in RSA models captures the informativeness of each utterance to an imagined literal listener agent $L$ who is attempting to select the target object from alternatives in context: $U_{basic}(u; o, C) = \log L(o | u, C)$ This information-theoretic expression measures how certain the listener becomes about the intended object after hearing the utterance. The literal listener is assumed to update their beliefs about the target object according to Bayesian inference, conditioning on the literal meaning of the utterance being true of it: $L(o | u, C) \propto \mathcal {L}(o,u) P(o)$ where normalization takes place over objects $o \in C$ and $\mathcal {L}$ represents the lexical semantics of $u$ . If $u$ is true of $o$ then $\mathcal {L}(o,u) = 1$ ; otherwise, $\mathcal {L}(o,u) = 0$ . This basic setup assumes that the speaker reasons about a listener sharing the same context $C$ in common ground. How should it be extended to handle asymmetries in visual access between the speaker and listener, where the speaker has uncertainty over the possible distractors behind the occlusions? In the RSA framework, speaker uncertainty is represented straightforwardly by a prior over the state of the world: for example, BIBREF48 examined a case where the speaker has limited perceptual access to the objects they are describing. For the director-matcher task, we construct this prior by positing a space of alternative objects $\mathcal {O}$ , introducing uncertainty $P(o_h)$ over which object $o_h \in \mathcal {O}$ , if any, is hidden behind an occlusion, and marginalizing over these alternatives when reasoning about the listener. Definition This gives us a utility for conditions of asymmetries in visual access: $U_{asym}(u; o, C) =\sum _{o_h \in \mathcal {O}} P(o_h) \log L(o | u, C \cup o_h)$ where $C$ denotes the set of objects in context that the speaker perceives. We define “specificity” extensionally, in the sense that if $u_0$ is more specific than $u_1$ , then the objects for which $u_0$ is true is a subset of the objects for which $u_1$ is true: Definition Utterance $u_0$ is said to be more specific than $u_1$ iff $\mathcal {L}(u_0, o_h) \le \mathcal {L}(u_1, o_h)\ \forall o_h \in \mathcal {O}$ and there exists a subset of objects $\mathcal {O}^* \subset \mathcal {O}$ such that $\sum _{o^* \in \mathcal {O}^*} P(o^*) > 0$ and $\mathcal {L}(u_0, o^*) < \mathcal {L}(u_1, o^*)$ for $o* \in \mathcal {O}^*$ . We now show that the recursive reasoning model predicts that speakers should prefer more informative utterances in contexts with occlusions. In other words, that the asymmetry utility leads to a preference for more specific referring expressions than the basic utility. 
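Before the formal statement, a minimal Python sketch of these definitions may help (the authors' actual implementation is in WebPPL and linked in the appendices; the feature-based object and utterance representations, the uniform prior over hidden objects, and all names below are illustrative assumptions). On a toy far context with one occluded cell, the speaker using $U_{asym}$ shifts probability toward the more specific description, previewing the theorem below:

```python
import numpy as np

def literal_listener(utt, context, lexicon):
    """P(object | utterance): Bayesian update over the objects in `context`, uniform prior."""
    truth = np.array([lexicon(obj, utt) for obj in context], dtype=float)
    return truth / truth.sum()

def speaker(target, context, utterances, lexicon, alpha=1.0,
            cost=lambda u: 0.0, hidden_alternatives=None):
    """Soft-max RSA speaker. With hidden_alternatives=None this uses U_basic
    (occlusion-blind); otherwise it averages log-informativity over the objects that
    might sit behind the occluder under a uniform P(o_h), i.e. U_asym (occlusion-sensitive)."""
    hidden = hidden_alternatives if hidden_alternatives else [None]
    utilities = []
    for u in utterances:
        info = 0.0
        for o_h in hidden:
            ctx = context if o_h is None else context + [o_h]
            p_target = literal_listener(u, ctx, lexicon)[ctx.index(target)]
            info += np.log(p_target + 1e-10) / len(hidden)  # small constant guards log(0)
        utilities.append(info - cost(u))
    scores = np.exp(alpha * np.array(utilities))
    return scores / scores.sum()

# Toy far context: one visible distractor of a different shape, one occluded cell
# that might hide another star (illustrative objects, not the experiment's stimuli).
lexicon = lambda obj, utt: float(all(obj[f] == v for f, v in utt.items()))
target     = {"shape": "star",   "color": "blue"}
distractor = {"shape": "circle", "color": "red"}
maybe_hidden = [{"shape": "star", "color": "red"}]
utterances = [{"shape": "star"}, {"shape": "star", "color": "blue"}]

print(speaker(target, [target, distractor], utterances, lexicon))
# -> [0.5 0.5]: both utterances individuate the target among the visible objects
print(speaker(target, [target, distractor], utterances, lexicon,
              hidden_alternatives=maybe_hidden))
# -> ~[0.33 0.67]: the occlusion-sensitive speaker prefers the more specific description
```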
Theorem If $u_0$ is more specific than $u_1$ , then the following holds for any target $o^t$ and shared context $C$ : $ \frac{S_{asym}(u_0 | o^t, C)}{S_{asym}(u_1| o^t, C)} > \frac{S_{basic}(u_0 | o^t, C)}{S_{basic}(u_1 | o^t, C)} $ Since $S(u_0|o^t, C)/S(u_1|o^t, C) = \exp (\alpha \cdot (U(u_0; o^t, C) - U(u_1;o^t,C)))$ , it is sufficient to show $ U_{asym}(u_0 ; o^t, C) - U_{asym}(u_1; o^t, C) > U_{basic}(u_0 ; o^t, C) - U_{basic}(u_1 ; o^t, C) $ We first break apart the sum on the left-hand side: $$U_{asym}(u_0 | o^t, C) - U_{asym}(u_1 | o^t, C) &=& \displaystyle \sum _{o_h \in \mathcal {O}} p(o_h)\left[\log L(o^t | u_0, C\cup o_h) - \log L(o^t|u_1, C \cup o_h)\right] \\ & = & \displaystyle \sum _{o^*\in \mathcal {O}^*} p(o^*) \log \frac{L(o^t|u_0, C\cup o^*)}{L(o^t|u_1, C\cup o^*)} \\ & & + \displaystyle \sum _{o_h\in \mathcal {O}\setminus \mathcal {O}^*} p(o_h) \log \frac{L(o^t|u_0, C\cup o_h)}{L(o^t|u_1, C\cup o_h)} $$ (Eq. 9) By the definition of “more specific” and because we defined $o^*\in \mathcal {O}^*$ to be precisely the subset of objects for which $\mathcal {L}(u_0, o^*) < \mathcal {L}(u_1, o^*)$ , for objects $o_h$ in the complementary set $\mathcal {O} \setminus \mathcal {O}^*$ we have $\mathcal {L}(u_0, o_h) = \mathcal {L}(u_1, o_h)$ . Therefore, for $o_h \in \mathcal {O} \setminus \mathcal {O}^*$ , $L(o^t | u_i, C \cup o_h) = L(o^t | u_i, C)$ , so the second sum reduces to $\log \frac{L(o^t | u_0, C)}{L(o^t|u_1, C)}\sum _{o_h\in \mathcal {O}\setminus \mathcal {O}^*}p(o_h)$ . For the ratio inside the first sum of Eq. 9, we can substitute the definition of the listener $L$ and simplify: $ \begin{array}{rcl} \displaystyle \frac{L(o^t|u_0, C\cup o^*)}{L(o^t|u_1, C\cup o^*)} & = & \displaystyle \frac{\mathcal {L}(o^t, u_0) [\sum _{o\in C \cup o^*}\mathcal {L}(o,u_1)]}{\mathcal {L}(o^t, u_1) [\sum _{o\in C \cup o^*}\mathcal {L}(o,u_0)]} \\[.5cm] & = & \displaystyle \frac{\mathcal {L}(o^t, u_0) [\sum _{o\in C}\mathcal {L}(o,u_1) + \mathcal {L}(o^*, u_1)]}{\mathcal {L}(o^t, u_1) [\sum _{o\in C}\mathcal {L}(o,u_0) + \mathcal {L}(o^*, u_0)]} \\[.5cm] & > & \displaystyle \frac{\mathcal {L}(o^t, u_0) [\sum _{o\in C}\mathcal {L}(o,u_1)]}{\mathcal {L}(o^t, u_1) [\sum _{o\in C}\mathcal {L}(o,u_0)]} \\[.5cm] & = & \displaystyle \frac{L(o^t|u_0, C)}{L(o^t|u_1, C)} \end{array} $ where the inequality follows because $\mathcal {L}(o^*, u_1) = 1$ while $\mathcal {L}(o^*, u_0) = 0$ for every $o^* \in \mathcal {O}^*$ . Thus, $ \begin{array}{rcl} U_{asym}(u_0 | o^t, C) - U_{asym}(u_1 | o^t, C) & > & \log \frac{L(o^t | u_0, C)}{L(o^t|u_1, C)}\left(\displaystyle \sum _{o^*\in \mathcal {O}^*}p(o^*) + \displaystyle \sum _{o_h\in \mathcal {O}\setminus \mathcal {O}^*}p(o_h)\right) \\ &=& \log L(o^t | u_0, C) - \log L(o^t | u_1, C) \\ &=& U_{basic}(u_0 | o^t, C) - U_{basic}(u_1 | o^t, C) \end{array} $ Note that this proof also holds when an utterance-level cost term $\textrm {cost}(u)$ penalizing longer or more effortful utterances is incorporated into the utilities $ \begin{array}{lcl} U_{asym}(u; o, C) & = & \sum _{o_h \in \mathcal {O}} P(o_h) \log L(o | u, C \cup o_h) - \textrm {cost}(u) \\ U_{basic}(u; o, C) & = & \log L(o | u, C) - \textrm {cost}(u) \end{array} $ since the same constant appears on both sides of the inequality. In principle, it can also be extended to real-valued meanings $\mathcal {L}$ , though additional assumptions must be made. Appendix B: Quantitative model fit for Exp. 1 In addition to the qualitative predictions derived in the previous section, our speaker model makes direct quantitative predictions about Exp. 1 data.
Here, we describe the details of a Bayesian Data Analysis evaluating this model on the empirical data, and comparing it to an occlusion-blind model which does not reason about possible hidden objects. Because there were no differences observed in production based on the particular levels of target features (e.g. whether the target was blue or red), we collapse across these details, providing the model only with which features of each distractor differed from the target on each trial. After this simplification, there were only 4 possible contexts: far contexts, where the distractors differed in every dimension, and three varieties of close contexts, where the critical distractor differed in only shape, shape and color, or shape and texture. In addition, we included in the model information about whether each trial had cells occluded or not. The space of utterances used in our speaker model is derived from our feature annotations: for each trial, the speaker model selected among 7 utterances, one for each non-empty combination of features: only mentioning the target's shape, only mentioning the target's color, mentioning the shape and the color, and so on. For the set of alternative objects $\mathcal {O}$ , we used the full 64-object stimulus space used in our experiment design, and we placed a uniform prior over these objects such that the occlusion-sensitive speaker assumed they were equally likely to be hidden. Our model has four free parameters which we infer from the data using Bayesian inference. The speaker optimality parameter, $\alpha $ , is a soft-max temperature such that at $\alpha = 1$ , the speaker produces utterances with probability proportional to their exponentiated utility, and as $\alpha \rightarrow \infty $ the speaker maximizes. In addition, to account for the differential production of the three features (see Fig. 2B), we assume separate production costs for each feature: a texture cost $c_t$ , a color cost $c_c$ , and a shape cost $c_s$ . We use (uninformative) uniform priors for all parameters: $ \begin{array}{rcl} \alpha & \sim & \textrm {Unif}(0,50) \\ c_t, c_c, c_s & \sim & \textrm {Unif}(0,10) \end{array} $ We compute speaker predictions for a particular parameter setting using (nested) enumeration and infer the posterior over parameters using MCMC. We discard 5000 burn-in samples and then take 5000 samples from the posterior with a lag of 2. Our posterior predictives are computed from these posteriors by taking the expected number of features produced by the speaker marginalizing over parameters and possible non-critical distractors in context (this captures the statistics of our experimental contexts, where there was always a distractor sharing the same color or texture but a different shape from the target). Finally, to precisely compute the Bayes Factor, we enumerated over a discrete grid of parameter values in the prior. We implemented our models and conducted inference in the probabilistic programming language WebPPL (Goodman & Stuhlmuller, 2014). All code necessary to reproduce our model results is available at the project github: https://github.com/hawkrobe/pragmatics_of_perspective_taking.
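The grid-based Bayes Factor computation can be sketched as follows; this is a schematic in Python rather than the WebPPL implementation, and the grid resolution and likelihood functions are stand-ins for the real model evaluations:

```python
import numpy as np
from itertools import product

def log_marginal_likelihood(log_lik_fn, grid):
    """Grid approximation to log p(data | model), treating the grid as a uniform prior."""
    log_liks = np.array([log_lik_fn(alpha, c_t, c_c, c_s) for alpha, c_t, c_c, c_s in grid])
    return np.logaddexp.reduce(log_liks) - np.log(len(log_liks))

# Discrete grid over the four free parameters (resolution chosen here for illustration).
alphas = np.linspace(0.5, 50, 20)
costs  = np.linspace(0, 10, 11)
grid   = list(product(alphas, costs, costs, costs))

# `blind_log_lik` and `sensitive_log_lik` are hypothetical functions that score the
# Exp. 1 production data under each speaker model at a given parameter setting.
# log_bf = (log_marginal_likelihood(sensitive_log_lik, grid)
#           - log_marginal_likelihood(blind_log_lik, grid))   # log Bayes Factor
```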
Appendix C: Multi-stage bootstrap procedure for Exp. 2 The statistical dependency structure of our ratings was more complex than standard mixed-effects model packages are designed to handle, and the summary statistic we needed for our test was a simple difference score across conditions, so we instead implemented a simple multi-stage, non-parametric bootstrap scheme to appropriately account for different sources of variance. In particular, we needed to control for effects of judge, item, and speaker. First, to control for the repeated measurements of each judge rating the informativity of all labels, we resampled our set of sixteen judge ids with replacement. For each label, we then computed informativity as the difference between the target and distractor fits within every judge's ratings, and took the mean across our bootstrapped sample of judges. Next, we controlled for item effects by resampling our eight item ids with replacement. Finally, we resampled speakers from pairs within each condition (scripted vs. unscripted), and looked up the mean informativity of each utterance they produced for each of the resampled items. Now, we can take the mean within each condition and compute the difference across conditions, which is our desired test statistic. We repeated this multi-stage resampling procedure 1000 times to get the bootstrapped distribution of our test statistic that we reported in the main text. Individual error bars in Fig. 4 are derived from the same procedure but without taking difference scores.
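A compact version of this multi-stage bootstrap might look like the sketch below; the column names (judge_id, label, object_role, slider, condition, pair_id, item_id) are assumed for illustration and do not necessarily match the released data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
resample = lambda values: rng.choice(values, size=len(values), replace=True)

def bootstrap_informativity_difference(ratings, productions, n_boot=1000):
    """ratings: one row per (judge_id, label, object_role, slider);
    productions: one row per critical trial (condition, pair_id, item_id, label)."""
    diffs = []
    for _ in range(n_boot):
        # Stage 1: resample judges, recompute per-label informativity (target minus distractor fit).
        r = pd.concat([ratings[ratings.judge_id == j] for j in resample(ratings.judge_id.unique())])
        fits = r.pivot_table(index="label", columns="object_role", values="slider")
        info = fits["target"] - fits["distractor"]
        # Stage 2: resample items; Stage 3: resample speakers within each condition.
        items = resample(productions.item_id.unique())
        means = {}
        for cond, grp in productions.groupby("condition"):
            rows = [grp[(grp.pair_id == s) & (grp.item_id == i)]
                    for s in resample(grp.pair_id.unique()) for i in items]
            means[cond] = info.reindex(pd.concat(rows)["label"]).mean()
        diffs.append(means["unscripted"] - means["scripted"])
    return np.asarray(diffs)

# e.g., np.percentile(bootstrap_informativity_difference(ratings, productions), [2.5, 97.5])
# gives the bootstrapped confidence interval for the condition difference.
```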
No
For participants assigned to the director role in the `scripted' condition, a pre-scripted message using the precise wording from BIBREF52 automatically appeared in their chat box on half of trials (the 8 critical trials as well as nearly half of the fillers). Hence, the scripted condition served as a direct replication. To maintain an interactive environment, the director could freely produce referring expressions on the remainder of filler trials. In the `unscripted' condition, directors were unrestricted and free to send whatever messages they deemed appropriate on all trials. In addition to analyzing messages sent through the chat box and errors made by matchers (listeners), we collected mouse-tracking data in analogy to the eye-tracking common in these paradigms. Behavioral results Our primary measure of speaker behavior is the length (in words) of naturally produced referring expressions sent through the chat box. We tested differences in speaker behavior across conditions using a mixed-effect regression of context and occlusion on the number of words produced, with maximal random effect structure containing intercept, slopes, and interaction. First, as a baseline, we examined the simple effect of close vs. far contexts in trials with no occlusions. We found that speakers used significantly more words on average when there was a distractor in context that shared the same shape as the target ( $b = 0.56, t = 5.1, p < 0.001$ ; see Fig. 3 A). This replicates the findings of prior studies in experimental pragmatics BIBREF7 , BIBREF58 . Next, we turn to the simple effect of occlusion in far contexts (which are most similar to the displays used in the director-matcher task which we adopt in Exp. 2 BIBREF52 ). Speakers used 1.25 additional words on average when they knew their partner could potentially see additional objects ( $t = 7.5, p < 0.001$ ). Finally, we found a significant interaction ( $b = -0.49, t = 3.8, p <0.001$ ) where the effect of occlusion was larger in far contexts, likely indicating a ceiling on the level of informativity required to individuate objects in our simple stimulus space. What are these additional words used for? As a secondary analysis, we annotated each utterance based on which of the three object features were mentioned (shape, texture, color). Because speakers nearly always mentioned shape (e.g. `star', `triangle') as the head noun of their referring expression regardless of context ( $\sim 99\%$ of trials), differences in utterance length across conditions must be due to differentially mentioning the other two features (color and texture). To test this observation, we ran separate mixed-effect logistic regressions for color and texture predicting mention from context; due to convergence issues, the maximum random effect structure supported by our data contains only speaker-level intercepts and slopes for the occlusion effect. We found simple effects of occlusion in far contexts for both features ( $b = 1.33, z = 2.9, p = 0.004$ for color; $b = 4.8, z = 6.4, p < 0.001$ for texture, see Fig. 3 B). In other words, in displays like the left column of Fig. 2 where the target was the only `star', speakers were somewhat more likely to produce the star's color—and much more likely to produce its texture—when there were occlusions present, even though shape alone is sufficient to disambiguate the target from visible distractors in both cases. 
Finally, we note that listener errors were rare: 88% of listeners made only one or fewer errors (out of 24 trials), and there was no significant difference in error rates across the four conditions ( $\chi ^2(3) = 1.23, p = 0.74$ ). We test the connections between context-sensitive speaker behavior and listener error rates more explicitly in Exp. 2. Model comparison While our behavioral results provide qualitative support for a Gricean account over an egocentric account, formalizing these two accounts in computational models allows a stronger test of our hypothesis by generating graded quantitative predictions. We formalized both accounts in the probabilistic Rational Speech Act (RSA) framework BIBREF47 , BIBREF46 , BIBREF49 , BIBREF59 , BIBREF48 , which has successfully captured a variety of other pragmatic phenomena. In this framework, speakers are decision-theoretic agents attempting to (soft-)maximize a utility function balancing parsimony (i.e., a preference for shorter, simpler utterances) with informativeness (i.e., the likelihood of an imagined listener agent having the intended interpretation). The only difference between the two accounts in the RSA framework is how the asymmetry in visual access is handled: the `occlusion-blind' speaker simply assumes that the listener sees the same objects as she herself sees, while the `occlusion-sensitive' speaker represents uncertainty over her partner's visual context. In particular, she assumes a probability distribution over the possible objects that might be hidden behind the occlusions and attempts to be informative on average. The two models have the same four free parameters: a speaker optimality parameter controlling the soft-max temperature, and three parameters controlling the costs of producing the features of shape, color, and texture (see Appendix B for details). We conducted a Bayesian data analysis to infer these parameters conditioning on our empirical data, and computed a Bayes Factor to compare the models. We found extremely strong support for the occlusion-sensitive model relative to the occlusion-blind model ( $BF = 2.2 \times 10^{209}$ ; see Appendix Fig. 8 for likelihoods). To examine the pattern of behavior of each model, we computed the posterior predictive on the expected number of features mentioned in each trial type of our design. While the occlusion-blind speaker model successfully captured the simple effect of close vs. far contexts, it failed to account for behavior in the presence of occlusions. The occlusion-sensitive model, on the other hand, accurately accounted for the full pattern of results (see Fig 4 ). Finally, we examined parameter posteriors for the occlusion-sensitive model (see Appendix Fig. 9 ): the inferred production cost for texture was significantly higher than that for the other features, reflecting the asymmetry in production of texture relative to color. Experiment 2: Comparing confederates to natural speakers Experiment 1 directly tested the hypothesis that speakers increase their specificity in contexts with asymmetry in visual access. We found that speakers are not only context-sensitive in choosing referring expressions that distinguish target from distractors in a shared context, but are occlusion-sensitive, adaptively compensating for uncertainty. 
Critically, this resulted in systematic differences in behavior across the occlusion conditions that are difficult to explain under an egocentric theory: in the presence of occlusions, speakers were spontaneously willing to spend additional time and keystrokes to give further information beyond what they produce in the corresponding unoccluded contexts, even though that information is equally redundant given the visible objects in their display. These results validate our prediction that speakers appropriately increase their level of specificity in contexts containing occlusions. In Experiment 2, we recruited pairs of participants for an online, interactive version of the original director-matcher task BIBREF52 which used occluded contexts to demonstrate limits on visual perspective-taking for the listener. Given the results of Exp. 1, we predicted that participants in the director role (i.e. speakers) would naturally provide more informative referring expressions than the confederate directors used in prior work. This would suggest that the confederate directors in prior work were pragmatically infelicitous, violating listeners' expectations. This violation of listeners' cooperative expectations may have led to detrimental consequences for listener performance. Results Our scripted condition successfully replicated the results of BIBREF52 with even stronger effects: listeners incorrectly moved the hidden object on approximately 50% of critical trials. However, on unscripted trials, the listener error rate dropped by more than half, $p_1 = 0.51, p_2 = 0.20, \chi ^2(1) = 43, p < 0.001$ (Fig. 5 A). While we found substantial heterogeneity in error rates across object sets (just 3 of the 8 object sets accounted for the vast majority of remaining unscripted errors; see Appendix Fig. 10 ), listeners in the unscripted condition made fewer errors for nearly every critical item. In a maximal logistic model with fixed effect of condition, random intercepts for each dyad, and random slopes and intercepts for each object set, we found a significant difference in error rates across conditions ( $z = 2.6, p = 0.008$ ). Even if participants in the unscripted condition make fewer actual errors, they may still be considering the hidden object just as often on trials where they go on to make correct responses. As a proxy for the eye-tracking analyses reported by BIBREF52 , we conducted a mouse-tracking analysis. We computed the mean (logged) amount of time spent hovering over the hidden distractor and found a significant interaction between condition and the contents of the hidden cell ( $t = 3.59, p <0.001$ ; Fig. 5 B) in a mixed-effects regression using dyad-level and object-level random intercepts and slopes for the difference from baseline. Listeners in the scripted condition spent more time hovering over the hidden cell when it contained a confusable distractor relative to baseline, again replicating BIBREF52 . In the unscripted condition there was no difference from baseline. Next, we test whether these improvements in listener performance in the unscripted condition are accompanied by more informative speaker behavior than the scripted utterances allowed. The simplest measure of speaker informativity is the raw number of words used in referring expressions. 
Compared to the scripted referring expressions, speakers in the unscripted condition used significantly more words to refer to critical objects ( $b = 0.54, t = 2.6, p=0.019$ in a mixed-effects regression on difference scores using a fixed intercept and random intercepts for object and dyads). However, this is a coarse measure: for example, the shorter “Pyrex glass” may be more specific than “large measuring glass” despite using fewer words. For a more direct measure, we extracted the referring expressions generated by speakers in all critical trials and standardized spelling and grammar, yielding 122 unique labels after including scripted utterances. We then recruited an independent sample of 20 judges on Amazon Mechanical Turk to rate how well each label fit the target and hidden distractor objects on a slider from “strongly disagree” (meaning the label “doesn't match the object at all”) to “strongly agree” (meaning the label “matches the object perfectly”). They were shown objects in the context of the full grid (with no occlusions) such that they could feasibly judge spatial or relative references like “bottom block.” We excluded 4 judges for guessing with response times $< 1s$ . Inter-rater reliability was relatively high, with intra-class correlation coefficient of $0.54\, (95\% CI = [0.47, 0.61])$ . We computed the informativity of an utterance (the tape) as the difference in how well it was judged to apply to the target (the cassette tape) relative to the distractor object (the roll of tape). Our primary measure of interest is the difference in informativity across scripted and unscripted utterances. We found that speakers in the unscripted condition systematically produced more informative utterances than the scripted utterances ( $d = 0.5$ , 95% bootstrapped CI = $[0.27, 0.77], p < .001$ ; see Appendix C for details). Scripted labels fit the hidden distractor just as well or better than the target, but unscripted labels fit the target better and the hidden distractor much worse (see Fig. 6 A). In other words, the scripted labels used in BIBREF52 were less informative than expressions speakers would normally produce to refer to the same object in this context. These results strongly suggest that the speaker's informativity influences listener accuracy. In support of this hypothesis, we found a strong negative correlation between informativity and error rates across items and conditions: listeners make fewer errors when utterances are a better fit for the target relative to the distractor ( $\rho = -0.81$ , bootstrapped 95% CI $= [-0.9, -0.7]$ ; Fig. 6 B). This result suggests that listener behavior is driven by an expectation of speaker informativity: listeners interpret utterances proportionally to how well they fit objects in context. General Discussion Are human adults expert mind-readers, or fundamentally egocentric? The longstanding debate over the role of theory of mind in communication has largely centered around whether listeners (or speakers) with private information consider their partner's perspective BIBREF30 , BIBREF16 . Our work presents a more nuanced picture of how a speaker and a listener use theory of mind to modulate their pragmatic expectations. The Gricean cooperative principle emphasizes a natural division of labor in how the joint effort of being cooperative is shared BIBREF4 , BIBREF60 . 
It can be asymmetric when one partner is expected to, and able to, take on more complex reasoning than the other, in the form of visual perspective-taking, pragmatic inference, or avoiding further exchanges of clarification and repair. One such case is when the speaker has uncertainty over what the listener can see, as in the director-matcher task. Our Rational Speech Act (RSA) formalization of cooperative reasoning in this context predicts that speakers (directors) naturally increase the informativity of their referring expressions to hedge against the increased risk of misunderstanding; Exp. 1 presents direct evidence in support of this hypothesis. Importantly, when the director (speaker) is expected to be appropriately informative, communication can be successful even when the matcher (listener) does not reciprocate the effort. If visual perspective-taking is effortful and cognitively demanding BIBREF39 , the matcher will actually minimize joint effort by not taking the director's visual perspective. This suggests a less egocentric explanation of when and why listeners neglect the speaker's visual perspective; they do so when they expect the speaker to disambiguate referents sufficiently. While adaptive in most natural communicative contexts, such neglect might backfire and lead to errors when the speaker (inexplicably) violates this expectation. From this point of view, the “failure” of listener theory of mind in these tasks is not really a failure; instead, it suggests that both speakers and listeners may use theory of mind to know when (and how much) they should expect others to be cooperative and informative, and subsequently allocate their resources accordingly BIBREF36 . Exp. 2 is consistent with this hypothesis; when directors used underinformative scripted instructions (taken from prior work), listeners made significantly more errors than when speakers were allowed to provide referring expressions at their natural level of informativity, and speaker informativeness strongly modulated listener error rates. Our work adds to the growing literature on the debate over the role of pragmatics in the director-matcher task. A recent study questions the communicative nature of the task itself by showing that selective attention alone is sufficient for successful performance on this task, and that listeners become suspicious of the director's visual access when the director shows unexpectedly high levels of specificity in their referring expressions BIBREF61 . Our results further sbolster the argument that pragmatic reasoning about appropriate levels of informativity is an integral aspect of theory of mind use in the director-matcher task (and communication more generally). Note however that in BIBREF61 , participants became suspicious, while in our study participants overtrusted the speaker to be informative; a more detailed look at differences between experimental paradigms, as well as further experimental work, is necessary to better understand why participants had different expectations about the speaker. Prior work also suggests that although speakers tend to be over-informative in their referring expressions BIBREF62 a number of situational factors (e.g., perceptual saliency of referents) can modulate this tendency. Our work hints at an additional principle that guides speaker informativity: speakers maintain uncertainty about the listener's visual context and their ability to disambiguate the referent in that context. 
Additionally, while our model builds on probabilistic models weighting different perspectives BIBREF32 , BIBREF33 , we leave the formal integration of resource-rational recursive reasoning mechanisms with perspective-weighting mechanisms for future work. While BIBREF33 focused on cases where the speaker has private information unknown to the listener, our model focuses on the reverse case: how speakers behave when they know that the listener has additional private information BIBREF52 . Furthermore, whether the allocation of resources, and ensuing perspective neglect, is a fixed strategy or one that adjusts dynamically remains an open question: given sufficient evidence of an unusually underinformative partner, listeners may realize that vigilance about which objects are occluded yields a more effective strategy for the immediate interaction. An important direction for future work is to directly explore listener adaptability in adjusting their use of visual perspective-taking as a function of Gricean expectations for a given partner BIBREF63 , BIBREF64 . In sum, our findings suggest that language use is well-adapted to contexts of uncertainty and knowledge asymmetry. The pragmatic use of theory of mind to establish division of labor is also critical for other forms of social cooperation, including pedagogy BIBREF65 and team-based problem solving BIBREF66 , BIBREF67 . Enriching our notion of theory of mind use to encompass these pragmatic expectations, not only expectations about what our partner knows or desires, may shed new light on the flexibility of social interaction more broadly. Acknowledgements This manuscript is based in part on work presented at the 38th Annual Conference of the Cognitive Science Society. The first author is supported by an NSF Graduate Research Fellowship and a Stanford Graduate Fellowship. A pilot of Expt. 2 was originally conducted under the supervision of Michael Frank, with early input from Desmond Ong. We’re grateful to Boaz Keysar for providing select materials for our replication. This work was supported by ONR grants N00014-13-1-0788 and N00014-13-1-0287, and a James S. McDonnell Foundation Scholar Award to NDG. Author contributions R.X.D.H. and N.D.G. initially formulated the project. R.X.D.H. performed experiments, analyzed data, and performed computational modeling. All authors planned experiments, interpreted results, and wrote the paper. Unless otherwise mentioned, all analyses and materials were preregistered at https://osf.io/qwkmp/. Code and materials for reproducing the experiment, as well as all data and analysis scripts, are open and available at https://github.com/hawkrobe/pragmatics_of_perspective_taking. Appendix A: Derivation of qualitative model predictions Our experiments are motivated by the Gricean observation that speakers should attempt to be more informative when there is an asymmetry in visual access, such that their partner sees something they do not. In this appendix, we formalize this scenario in a computational model of communication as recursive social reasoning and prove that the predicted increase in informativity qualitatively holds under fairly unrestrictive conditions.
Following recent advances in the Rational Speech Act (RSA) framework, we define a speaker as a decision-theoretic agent who must choose a referring expression $u$ to refer to a target object $o$ in a context $C$ by (soft)-maximizing a utility function $U$ : $S(u | o, C) \propto \exp \lbrace \alpha U(u; o, C)\rbrace $ Definition The basic utility used in RSA models captures the informativeness of each utterance to an imagined literal listener agent $L$ who is attempting to select the target object from alternatives in context: $U_{basic}(u; o, C) = \log L(o | u, C)$ This information-theoretic expression measures how certain the listener becomes about the intended object after hearing the utterance. The literal listener is assumed to update their beliefs about the target object according to Bayesian inference, conditioning on the literal meaning of the utterance being true of it: $L(o | u, C) \propto \mathcal {L}(o,u) P(o)$ where normalization takes place over objects $o \in C$ and $\mathcal {L}$ represents the lexical semantics of $u$ . If $u$ is true of $o$ then $\mathcal {L}(o,u) = 1$ ; otherwise, $\mathcal {L}(o,u) = 0$ . This basic setup assumes that the speaker reasons about a listener sharing the same context $C$ in common ground. How should it be extended to handle asymmetries in visual access between the speaker and listener, where the speaker has uncertainty over the possible distractors behind the occlusions? In the RSA framework, speaker uncertainty is represented straightforwardly by a prior over the state of the world: for example, BIBREF48 examined a case where the speaker has limited perceptual access to the objects they are describing. For the director-matcher task, we construct this prior by positing a space of alternative objects $\mathcal {O}$ , introducing uncertainty $P(o_h)$ over which object $o_h \in \mathcal {O}$ , if any, is hidden behind an occlusion, and marginalizing over these alternatives when reasoning about the listener. Definition This gives us a utility for conditions of asymmetries in visual access: $U_{asym}(u; o, C) =\sum _{o_h \in \mathcal {O}} P(o_h) \log L(o | u, C \cup o_h)$ where $C$ denotes the set of objects in context that the speaker perceives. We define “specificity” extensionally, in the sense that if $u_0$ is more specific than $u_1$ , then the objects for which $u_0$ is true is a subset of the objects for which $u_1$ is true: Definition Utterance $u_0$ is said to be more specific than $u_1$ iff $\mathcal {L}(u_0, o_h) \le \mathcal {L}(u_1, o_h)\ \forall o_h \in \mathcal {O}$ and there exists a subset of objects $\mathcal {O}^* \subset \mathcal {O}$ such that $\sum _{o^* \in \mathcal {O}^*} P(o^*) > 0$ and $\mathcal {L}(u_0, o^*) < \mathcal {L}(u_1, o^*)$ for $o* \in \mathcal {O}^*$ . We now show that the recursive reasoning model predicts that speakers should prefer more informative utterances in contexts with occlusions. In other words, that the asymmetry utility leads to a preference for more specific referring expressions than the basic utility. 
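To make these definitions concrete, a minimal numerical sketch is given below. It is ours, written purely for illustration: the function names (`literal_listener`, `speaker_basic`, `speaker_asym`), the representation of meanings as a 0/1 matrix, and the uniform priors are our assumptions, not the original implementation (which, as noted in Appendix B, was written in WebPPL).

```python
import numpy as np

def literal_listener(meanings, prior=None):
    """L(o | u, C): condition on the utterance being true and renormalize over objects.
    meanings: (num_utterances, num_objects) 0/1 truth values for the objects in context.
    Assumes every utterance is true of at least one object in the context."""
    prior = np.ones(meanings.shape[1]) if prior is None else prior
    scores = meanings * prior
    return scores / scores.sum(axis=1, keepdims=True)

def speaker_basic(meanings, target, alpha=1.0):
    """S(u | o, C) proportional to exp(alpha * U_basic), with U_basic = log L(o | u, C)."""
    util = np.log(literal_listener(meanings)[:, target] + 1e-10)
    p = np.exp(alpha * util)
    return p / p.sum()

def speaker_asym(meanings_ctx, meanings_hidden, p_hidden, target, alpha=1.0):
    """S(u | o, C) under U_asym: average log L over contexts augmented with each
    candidate hidden object o_h, weighted by the prior P(o_h)."""
    util = np.zeros(meanings_ctx.shape[0])
    for h, p_h in enumerate(p_hidden):
        augmented = np.hstack([meanings_ctx, meanings_hidden[:, [h]]])
        util += p_h * np.log(literal_listener(augmented)[:, target] + 1e-10)
    p = np.exp(alpha * util)
    return p / p.sum()
```

The marginalization in `speaker_asym` mirrors the sum over $o_h \in \mathcal {O}$ in the asymmetric utility above.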
Theorem If $u_0$ is more specific than $u_1$ , then the following holds for any target $o^t$ and shared context $C$ : $ \frac{S_{asym}(u_0 | o^t, C)}{S_{asym}(u_1| o^t, C)} > \frac{S_{basic}(u_0 | o^t, C)}{S_{basic}(u_1 | o^t, C)} $ Since $S(u_0|o^t, C)/S(u_1|o^t, C) = \exp (\alpha \cdot (U(u_0; o^t, C) - U(u_1;o^t,C)))$ it is sufficient to show $ U_{asym}(u_0 ; o, C) - U_{asym}(u_1; o, C) > U_{basic}(u_0 ; o, C) - U_{basic}(u_1 ; o, C) $ We first break apart the sum on the left-hand side: $$U_{asym}(u_0 | o^t, C) - U_{asym}(u_1 | o^t, C) &=& \displaystyle \sum _{o_h \in \mathcal {O}} p(o_h)\left[\log L(o | u_0, C\cup o_h) - \log L(o|u_1, C \cup o_h)\right] \\ & = & \displaystyle \sum _{o^*\in \mathcal {O}^*} p(o^*) \log \frac{L(o^t|u_0, C\cup o^*)}{L(o^t|u_1, C\cup o^*)} \\ & & + \displaystyle \sum _{o_h\in \mathcal {O}\setminus \mathcal {O}^*} p(o_h) \log \frac{L(o^t|u_0, C\cup o_h)}{L(o^t|u_1, C\cup o_h)} $$ (Eq. 9) By the definition of “more specific” and because we defined $o^*\in \mathcal {O^*}$ to be precisely the subset of objects for which $\mathcal {L}(u_0, o^*) < \mathcal {L}(u_1, o^*)$ , for objects $o_h$ in the complementary set $\mathcal {O} \setminus \mathcal {O^*}$ we have $\mathcal {L}(u_0, o_h) = \mathcal {L}(u_1, o_h)$ . Therefore, for , $L(o^t | u_i, C \cup o_h) = L(o^t | u_i, C)$ , giving us $\log \frac{L(o^t | u_0, C)}{L(o^t|u_1, C)}\sum _{o_h\in \mathcal {O}\setminus \mathcal {O}^*}p(o_h)$ For the ratio in 9 , we can substitute the definition of the listener $L$ and simplify: $ \begin{array}{rcl} \displaystyle \frac{L(o^t|u_0, C\cup o^*)}{L(o^t|u_1, C\cup o^*)} & = & \displaystyle \frac{\mathcal {L}(o^t, u_0) [\sum _{o\in C \cup o^*}\mathcal {L}(o,u_1)]}{\mathcal {L}(o^t, u_1) [\sum _{o\in C \cup o^*}\mathcal {L}(o,u_0)]} \\[.5cm] & = & \displaystyle \frac{\mathcal {L}(o^t, u_0) [\sum _{o\in C}\mathcal {L}(o,u_1) + \mathcal {L}(o^*, u_1)]}{\mathcal {L}(o^t, u_1) [\sum _{o\in C}\mathcal {L}(o,u_0) + \mathcal {L}(o^*, u_0)]} \\[.5cm] & < & \displaystyle \frac{\mathcal {L}(o^t, u_0) [\sum _{o\in C}\mathcal {L}(o,u_1)]}{\mathcal {L}(o^t, u_1) [\sum _{o\in C}\mathcal {L}(o,u_0)]} \\[.5cm] & = & \displaystyle \frac{L(o^t|u_0, C)}{L(o^t|u_1, C)} \end{array} $ Thus, $ \begin{array}{rcl} U_{asym}(u_0 | o^t, C) - U_{asym}(u_1 | o^t, C) & < & \log \frac{L(o^t | u_0, C)}{L(o^t|u_1, C)}\left(\displaystyle \sum _{o^*\in \mathcal {O}^*}p(o^*) + \displaystyle \sum _{o_h\in \mathcal {O}\setminus \mathcal {O}^*}p(o_h)\right) \\ &=& \log L(o^t | u_0, C) - \log L(o^t | u_1, C) \\ &=& U_{basic}(u_0 | o^t, C) - U_{basic}(u_1 | o^t, C) \end{array} $ Note that this proof also holds when an utterance-level cost term $\textrm {cost}(u)$ penalizing longer or more effortful utterances is incorporated into the utilities $ \begin{array}{lcl} U_{asym}(u; o, C_s) & = & \sum _{o_h \in \mathcal {O}} \log L_0(o | u, C_s \cup o_h)P(o_h) - \textrm {cost}(u) \\ U_{basic}(u; o, C) & = & \log L(o | u, C) - \textrm {cost}(u) \end{array} $ since the same constant appears on both sides of inequality. In principle, it can also be extended to real-valued meanings $\mathcal {L}$ , though additional assumptions must be made. Appendix B: Quantitative model fit for Exp. 1 In addition to the qualitative predictions derived in the previous section, our speaker model makes direct quantitative predictions about Exp. 1 data. 
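Before turning to those quantitative fits, a quick numerical sanity check of the theorem can be sketched (this toy example is ours and is not part of the preregistered analysis): with one visible distractor and two equally likely candidate hidden objects, of which only one is matched by the more general utterance, the specific utterance gains strictly more utility under the asymmetric objective, as the proof guarantees.

```python
import numpy as np

# Rows: utterances [u0 = more specific, u1 = more general]; columns: objects.
truth_visible = np.array([[1, 0],    # both utterances are true of the target,
                          [1, 0]])   # false of the visible distractor
truth_hidden = np.array([[0, 0],     # u0 is false of both candidate hidden objects
                         [1, 0]])    # u1 is true of the first candidate (an o* in O*)
p_hidden = np.array([0.5, 0.5])      # uniform prior over which object is occluded
target = 0

def listener(truth):                 # literal listener: renormalize truth values over objects
    return truth / truth.sum(axis=1, keepdims=True)

U_basic = np.log(listener(truth_visible)[:, target])
U_asym = sum(p * np.log(listener(np.hstack([truth_visible, truth_hidden[:, [h]]]))[:, target])
             for h, p in enumerate(p_hidden))

# Utility differences correspond to log speaker-probability ratios, so the theorem
# predicts the left-hand side exceeds the right-hand side:
print(U_asym[0] - U_asym[1], ">", U_basic[0] - U_basic[1])   # ~0.347 > 0.0
```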
Here, we describe the details of a Bayesian Data Analysis evaluating this model on the empirical data, and comparing it to an occlusion-blind model which does not reason about possible hidden objects. Because there were no differences observed in production based on the particular levels of target features (e.g. whether the target was blue or red), we collapse across these details and only feed the model which features of each distractor differed from the target on each trial. After this simplification, there were only 4 possible contexts: far contexts, where the distractors differed in every dimension, and three varieties of close contexts, where the critical distractor differed in only shape, shape and color, or shape and texture. In addition, we included in the model information about whether each trial had cells occluded or not. The space of utterances used in our speaker model is derived from our feature annotations: for each trial, the speaker model selected among 7 utterances referring to each combination of features: only mentioning the target's shape, only mentioning the target's color, mentioning the shape and the color, and so on. For the set of alternative objects $\mathcal {O}$ , we used the full 64-object stimulus space used in our experiment design, and we placed a uniform prior over these objects such that the occlusion-sensitive speaker assumed they were equally likely to be hidden. Our model has four free parameters which we infer from the data using Bayesian inference. The speaker optimality parameter, $\alpha $ , is a soft-max temperature such that at $\alpha = 1$ , the speaker produces utterances directly proportional to their utility, and as $\alpha \rightarrow \infty $ the speaker maximizes. In addition, to account for the differential production of the three features (see Fig. 2B), we assume separate production costs for each feature: a texture cost $c_t$ , a color cost $c_c$ , and a shape cost $c_s$ . We use (uninformative) uniform priors for all parameters: $ \begin{array}{rcl} \alpha & \sim & \textrm {Unif}(0,50) \\ c_t, c_c, c_s & \sim & \textrm {Unif}(0,10) \end{array} $ We compute speaker predictions for a particular parameter setting using (nested) enumeration and infer the posterior over parameters using MCMC. We discard 5000 burn-in samples and then take 5000 samples from the posterior with a lag of 2. Our posterior predictives are computed from these posteriors by taking the expected number of features produced by the speaker marginalizing over parameters and possible non-critical distractors in context (this captures the statistics of our experimental contexts, where there was always a distractor sharing the same color or texture but a different shape as the target). Finally, to precisely compute the Bayes Factor, we enumerated over a discrete grid of parameter values in the prior. We implemented our models and conducted inference in the probabilistic programming language WebPPL (Goodman & Stuhlmuller, 2014). All code necessary to reproduce our model results are available at the project github: https://github.com/hawkrobe/pragmatics_of_perspective_taking. Appendix C: Multi-stage bootstrap procedure for Expt. 
2 The statistical dependency structure of our ratings was more complex than standard mixed-effect model packages are designed to handle, and the summary statistic we needed for our test was a simple difference score across conditions, so we instead implemented a simple multi-stage, non-parametric bootstrap scheme to appropriately account for different sources of variance. In particular, we needed to control for effects of judge, item, and speaker. First, to control for the repeated measurements of each judge rating the informativity of all labels, we resampled our set of sixteen judge ids with replacement. For each label, we then computed informativity as the difference between the target and distractor fits within every judge's ratings, and took the mean across our bootstrapped sample of judges. Next, we controlled for item effects by resampling our eight item ids with replacement. Finally, we resampled speakers from pairs within each condition (scripted vs. unscripted), and looked up the mean informativity of each utterance they produced for each of the resampled items. Now, we can take the mean within each condition and compute the difference across conditions, which is our desired test statistic. We repeated this multi-stage resampling procedure 1000 times to get the bootstrapped distribution of our test statistic that we reported in the main text. Individual error bars in Fig. 4 are derived from the same procedure but without taking difference scores.
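One way this resampling could be organized is sketched below. This is our illustration only: the column names (`judge_id`, `label`, `target_fit`, `distractor_fit`, `item_id`, `pair_id`, `condition`) and the data layout are hypothetical, and the released analysis scripts in the project repository remain the authoritative version.

```python
import numpy as np
import pandas as pd

def bootstrap_informativity_difference(ratings, labels, n_boot=1000, seed=0):
    """Multi-stage bootstrap of the unscripted-minus-scripted informativity difference.
    ratings: one row per (judge_id, label) with target_fit and distractor_fit columns.
    labels:  one row per produced utterance with pair_id, condition, item_id, label."""
    rng = np.random.default_rng(seed)
    judges, items = ratings["judge_id"].unique(), labels["item_id"].unique()
    stats = []
    for _ in range(n_boot):
        # Stage 1: resample judges; informativity = target fit minus distractor fit,
        # averaged over the resampled judges for each label.
        j_sample = rng.choice(judges, size=len(judges), replace=True)
        r = pd.concat([ratings[ratings.judge_id == j] for j in j_sample])
        r = r.assign(informativity=r.target_fit - r.distractor_fit)
        label_info = r.groupby("label")["informativity"].mean()

        # Stage 2: resample items.
        i_sample = rng.choice(items, size=len(items), replace=True)

        cond_means = {}
        for cond, grp in labels.groupby("condition"):
            # Stage 3: resample speaker pairs within each condition.
            pairs = grp["pair_id"].unique()
            p_sample = rng.choice(pairs, size=len(pairs), replace=True)
            rows = pd.concat([grp[(grp.pair_id == p) & (grp.item_id == i)]
                              for p in p_sample for i in i_sample])
            cond_means[cond] = label_info.reindex(rows["label"]).mean()
        stats.append(cond_means["unscripted"] - cond_means["scripted"])
    return np.asarray(stats)
```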
No
a60030cfd95d0c10b1f5116c594d50cb96c87ae6
a60030cfd95d0c10b1f5116c594d50cb96c87ae6_0
Q: How long is new model trained on 3400 hours of data? Text: Introduction Recently, with the advancement of deep learning, great progress has been made in end-to-end (E2E) automatic speech recognition (ASR). With the goal of directly mapping a sequence of speech frames to a sequence of output tokens, an E2E ASR system incorporates the acoustic model, language model and pronunciation model of a conventional ASR system into a single deep neural network (DNN). The most dominant approaches for E2E ASR include connectionist temporal classification (CTC) BIBREF0, BIBREF1, recurrent neural network transducer (RNNT) BIBREF2 and attention-based encoder-decoder (AED) models BIBREF3, BIBREF4, BIBREF5. However, the performance of E2E ASR degrades significantly when an acoustic mismatch exists between training and test conditions. An intuitive solution is domain adaptation where a well-trained source-domain E2E model is adapted to the data in the target domain. Different from speaker adaption, domain adaptation allows for the usage of a large amount of adaptation data in both source and target domains. There has been plenty of domain adaptation methods for hybrid systems that we can leverage for adapting E2E systems. One popular approach is the adversarial learning in which an intermediate deep feature BIBREF6, BIBREF7, BIBREF8 or a front-end speech feature BIBREF9, BIBREF10 is learned to be invariant to the shifts between source and target domains. Adversarial domain adaptation is suitable for the situation where no transcription or parallel adaptation data in both domains are available. It can also effectively suppress the environment BIBREF11, BIBREF12, BIBREF13 and speaker BIBREF14, BIBREF15 variability during domain adaptation. However, in speech area, a parallel sequence of target-domain data can be easily simulated from the source-domain data such that the speech from both domains are frame-by-frame synchronized. To take advantage of this, teacher-student (T/S) learning BIBREF16 was proposed for the unsupervised domain adaptation of acoustic models in DNN-hidden Markov model (HMM) hybrid systems BIBREF17. In T/S learning, the Kullback-Leibler (KL) divergence between the output senone distributions of teacher and student acoustic models given parallel source and target domain data at the input is minimized by updating only the student model parameters. T/S training was shown to outperform the cross entropy training directly using the hard label in the target domain BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. One drawback of unsupervised T/S learning is that, the teacher model is not perfect and will sometimes make inaccurate predictions that mislead the student model toward suboptimal directions. To overcome this, one-hot ground-truth labels are used to compensate for teacher's imperfections. Hinton et al. proposed interpolated T/S (IT/S) learning BIBREF22 to interpolate the teacher's soft class posteriors with one-hot ground truth using a pair of globally fixed weights. However, the optimal weights are data-dependent and can only be determined through careful tuning on a dev set. More recently, conditional T/S (CT/S) learning was proposed in BIBREF20 where the student model selectively chooses to learn from either the teacher or the ground truth depending on whether the teacher's prediction is correct or not. 
CT/S does not disturb the statistical relationships among classes naturally embedded in the class posteriors and achieves significant word error rate (WER) improvement over T/S for domain adaptation on CHiME-3 dataset BIBREF23. In this work, we focus on the domain adaptation of AED models for E2E ASR by using T/S learning which was previously applied to learn small-footprint AED models in BIBREF24, BIBREF25, BIBREF26 by distilling knowledge from a large powerful teacher AED. For unsupervised domain adaptation, we extend T/S learning to AED models by introducing a two-level knowledge transfer: in addition to learning from the teacher's soft token posteriors, the student AED also conditions its decoder on the one-best token sequence decoded by the teacher AED. We further propose an adaptive T/S (AT/S) learning method to improve T/S learning using ground-truth labels. By taking advantage of both IT/S and CT/S, AT/S adaptively assigns a pair of weights to the teacher's soft token posteriors and the one-hot ground-truth label at each decoder step depending on the confidence scores on each of the labels. The confidence scores are dynamically estimated as a function of soft and one-hot labels. The student AED learns from an adaptive linear combination of both labels. AT/S inherits the linear interpolation of soft and one-hot labels from IT/S and borrows from CT/S the judgement on the credibility of both knowledge sources before merging them. It is expected to achieve improved performance over the other T/S methods for domain adaptation. As a general deep learning method, AT/S can be widely applied to the domain adaptation or model compression of any DNN. With 3400 hours close-talk and far-field Microsoft Cortana data for domain adaptation, T/S learning achieves up to 24.9% and 6.3% relative WER gains over close-talk and far-field baseline AEDs, respectively. AT/S improves the close-talk and far-field AEDs by 28.2% and 10.3%, respectively, consistently outperforming IT/S and CT/S. Attention-Based Encoder-Decoder (AED) Model In this work, we perform domain adaptation on AED models BIBREF3, BIBREF4, BIBREF5. AED model was first introduced in BIBREF27, BIBREF28 for neural machine translation. Without any conditional independence assumption as in CTC BIBREF0, AED was successfully applied to to E2E ASR in BIBREF3, BIBREF4, BIBREF5 and has recently achieved superior performance to conventional hybrid systems in BIBREF29. AED directly models the conditional probability distribution $P(\mathbf {Y} | \mathbf {X})$ over sequences of output tokens $\mathbf {Y}=\lbrace y_1, \ldots , y_L\rbrace $ given a sequence of input speech frames $\mathbf {X}=\lbrace \mathbf {x}_1, \ldots , \mathbf {x}_N\rbrace $ as below: To achieve this, the AED model incorporates an encoder, a decoder and an attention network. The encoder maps a sequence of input speech frames $\mathbf {X}$ into a sequence of high-level features $\mathbf {H} = \lbrace \mathbf {h}_1, \ldots , \mathbf {h}_N\rbrace $ through an RNN. An attention network is used to determine which encoded features in $\mathbf {H}$ should be attended to predict the output label $y_l$ and to generate a context vector $\mathbf {z}_l$ as a linear combination of $\mathbf {H}$ BIBREF3. A decoder is used to model $P(\mathbf {Y}|\mathbf {H})$ which is equivalent to $P(\mathbf {Y}|\mathbf {X})$. 
At each time step $t$, the decoder RNN takes the sum of the previous token embedding $\mathbf {e}_{l-1}$ and the context vector $\mathbf {z}_{l-1}$ as the input to predict the conditional probability of each token, i.e., $P(u | \mathbf {Y}_{0:l-1}, \mathbf {H}), u \in \mathbb {U}$, at the decoder step $l$, where $\mathbb {U}$ is the set of all the output tokens: In Eq. (DISPLAY_FORM2) and Eq. (), we sum together the $\mathbf {z}_l$ and $\mathbf {q}_l$ (or $\mathbf {e}_t$) instead of concatenation, because, by summation, we get a lower-dimensional combined vector than concatenation, saving the number of parameters by half for the subsequent projection operation. In our experiments, concatenation does not improve the performance even with more parameters. where $\mathbf {q}_l$ is the hidden state of the decoder RNN. bias $\mathbf {b}_y$ and the matrix $K_y$ are learnable parameters. An AED model is trained to minimize the following cross-entropy (CE) loss on the training corpus $\mathbb {T_r}$. where $\mathbf {Y}^G = \lbrace y^G_1, \ldots , y^G_{L^G}\rbrace $ is the sequence of grouth-truth tokens, $L^G$ represents the number of elements in $\mathbf {Y}^G$ and $\theta $ denotes all the model parameters in AED. T/S Learning for Unsupervised Domain Adaptation of AED For unsupervised domain adaptation, we want to make use of a large amount of unlabeled data that is widely available. As shown in Fig. FIGREF4, with T/S learning, only two sequences of parallel data are required: an input sequence of source-domain speech frames to the teacher AED $\mathbf {X}^T=\lbrace \mathbf {x}^T_{1}, \ldots , \mathbf {x}^T_{N}\rbrace $ and an input sequence of target-domain speech frames to the student model $\mathbf {X}^S=\lbrace \mathbf {x}^S_{1}, \ldots , \mathbf {x}^S_{N}\rbrace $. $\mathbf {X}^T$ and $\mathbf {X}^S$ are parallel to each other, i.e, each pair of $\mathbf {x}^S_n$ and $\mathbf {x}^T_n, \forall n \in \lbrace 1, \ldots , N\rbrace $ are frame-by-frame synchronized. For most domain adaptation tasks in ASR, such as adapting from clean to noisy speech, close-talk to far-field speech, wide-band to narrow-band speech, the parallel data in the target domain can be easily simulated from the data in the source domain BIBREF17, BIBREF19. Our goal is to train a student AED that can accurately predict the tokens of the target-domain data by forcing the student to emulate the behaviors of the teacher. To achieve this, we minimize the Kullback-Leibler (KL) divergence between the token-level output distributions of the teacher and the student AEDs given the parrallel data $\mathbf {X}^T$ and $\mathbf {X}^S$ are fed as the input to the AEDs. The KL divergence between the token-level output distributions of the teacher and student AEDs are formulated below where $\mathbf {Y}^{T}=\lbrace y^T_1, \ldots , y^T_{L^T}\rbrace $ is the sequence of one-best token sequence decoded by the teacher AED as follows where $L^T$ is the number of tokens in $\mathbf {Y}^T$, and $\mathbf {\theta }^{T}$, $\mathbf {\theta }^{S}$ denote all the parameters in the teacher and student AED models, respectively. Note that, for unsupervised domain adaptation, the teacher AED can only condition its decoder on the token $y^T_{l-1}$ predicted at the previous step since the ground-truth labels $\mathbf {Y}^G$ are not available. 
We minimize the KL divergence with respect to $\theta ^S$ while keeping $\theta ^T$ fixed on the adaptation data corpus $\mathbb {A}$, which is equivalent to minimizing the token-level T/S loss function below: The steps of token-level T/S learning for unsupervised domain adaptation of AED model are summarized as follows: Clone the student AED from a teacher AED well-trained with transcribed source-domain data by minimizing Eq. (DISPLAY_FORM3). Forward-propagate the source-domain data $\mathbf {X}^T$ through the teacher AED, generate teacher's one-best token sequence $\mathbf {Y}^T$ using Eq. (DISPLAY_FORM6) and teacher's soft posteriors for each decoder step $P(u|\mathbf {Y}^T_{0:l-1},\mathbf {X}^T; \mathbf {\theta }^{T}), u \in \mathbb {U}$ by Eqs. (DISPLAY_FORM2) and (). Forward-propagate the target-domain data $\mathbf {X}^S$ (parallel to $\mathbf {X}^T$) through the student AED, generate student's soft posteriors for each teacher's decoder step $P(u|\mathbf {Y}^T_{0:l-1},$ $\mathbf {X}^S; \mathbf {\theta }^{S}), u \in \mathbb {U}$ by Eqs. (DISPLAY_FORM2) and (). Compute error signal of the T/S loss function in Eq. (DISPLAY_FORM7) , back-propagate the error through student AED and update the parameters of the student AED. Repeat Steps UNKREF9 to UNKREF11 until convergence. After T/S learning, only the adapted student AED is used for testing and the teacher AED is discarded. From Eqs. (DISPLAY_FORM6) and (DISPLAY_FORM7), to extend T/S learning to AED-based E2E models, two levels of knowledge transfer are involved: 1) the student learns from the teacher's soft token posteriors $P(u|\mathbf {Y}^T_{0:l-1}, \mathbf {X}^T; \mathbf {\theta }^T)$ at each decoder step; 2) the student AED conditions its decoder on the previous token $y^T_{l-1}$ predicted by the teacher to make the current prediction. Sequence-level T/S learning BIBREF24, BIBREF30 is another method for unsupervised domain adaptation in which a KL divergence between the sequence-level output distributions of the teacher and student AEDs are minimized. Equivalently, we minimize the sequence-level T/S loss function below with respect to $\theta ^S$ where $\mathbb {V}$ is the set of all possible token sequences and the teacher's sequence-level output distribution $P(\mathbf {V} | \mathbf {X}^T)$ is approximated by $\mathbb {1}[\mathbf {V}=\mathbf {Y}^T]$ for easy implementation. $\mathbb {1}[\cdot ]$ is an indicator function which equals to 1 if the condition in the squared bracket is satisfied and 0 otherwise. From Eq. (DISPLAY_FORM13), we see that only one level of knowledge transfer exists in sequence-level T/S, i.e., the one-best token sequence $\mathbf {Y}^T$ decoded by the teacher AED. The student AED learns from $\mathbf {Y}^T$ and conditions its decoder on it at each step. Different from token-level T/S, in sequence-level T/S, one-hot labels in $\mathbf {Y}^T$ are used as training targets of the student AED instead of the soft token posteriors. Adaptive T/S (AT/S) Learning for Supervised Domain Adaptation of AED In this section, we want to make good use of the ground-truth labels of the adaptation data to further improve the T/S domain adaptation. Note that different from unsupervised T/S in Section SECREF3, in supervised domain adaptation, the teacher AED conditions its decoder on the ground-truth token instead of its previous decoding result because the token transcription $\mathbf {Y}^G$ is available in addition to $\mathbf {X}^S$ and $\mathbf {X}^T$. 
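For reference, a rough PyTorch-style sketch of the token-level T/S objective described in the previous section is shown below. The `teacher`/`student` objects and their methods (`greedy_decode`, `token_posteriors`, `token_log_posteriors`) are hypothetical placeholders standing in for an AED implementation; this is not the authors' code.

```python
import torch
import torch.nn.functional as F

def token_level_ts_loss(teacher, student, x_src, x_tgt):
    """KL(teacher || student) accumulated over the decoder steps of the teacher's
    one-best token sequence; gradients flow only into the student."""
    with torch.no_grad():
        y_teacher = teacher.greedy_decode(x_src)                 # one-best tokens Y^T
        p_teacher = teacher.token_posteriors(x_src, y_teacher)   # (L, |U|) soft targets
    # The student consumes the parallel target-domain input but conditions on Y^T.
    log_p_student = student.token_log_posteriors(x_tgt, y_teacher)  # (L, |U|)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")
```

Minimizing this quantity with respect to the student's parameters, with the teacher frozen, corresponds to the forward-propagation and error back-propagation steps listed above.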
One shortcoming of unsupervised T/S learning is that the teacher model can sporadically predict inaccurate token posteriors which misleads the student AED towards suboptimal performance. One-hot ground-truth labels can be utilized to alleviate this issue. One possible solution is the interpolated T/S (IT/S) learning BIBREF22 in which a weighted sum of teacher's soft posteriors and the one-hot ground truth is used as the target to train the student AED. A pair of global weights summed to be one is applied to each pair of soft and one-hot labels. However, the optimal global weights are hard to determine because they are data-dependent and need to be carefully tuned on a dev set. To address this issue, conditional T/S learning (CT/S) BIBREF20 was proposed recently in which the student selectively chooses to learn from either the teacher AED or the ground truth conditioned on whether the teacher AED can correctly predict the ground-truth labels. CT/S have shown significant WER improvements over T/S and IT/S for both domain and speaker adaptation on CHiME-3 dataset. However, in CT/S, the student is still not “smart” enough because, for each token, the student AED solely relies on either the teacher's posteriors or the ground truth instead of dynamically extracting useful knowledge from both. To further improve the effectiveness of knowledge transfer, we propose an adaptive teacher-student (AT/S) learning method by taking advantage of both CT/S and IT/S. As shown in Fig. FIGREF14, instead of assigning a fixed pair of soft weight $w$ and one-hot weight $(1-w)$ for all the decoder steps, we adaptively weight the teacher's soft posteriors at the $l^\text{th}$ decoder step, $P(u|\mathbf {Y}^G_{0:l-1},\mathbf {X}^T;\mathbf {\theta }^{T}), u\in \mathbb {U}$, by $w_l \in [0,1]$ and the one-hot vector of the $l^\text{th}$ token in the ground-truth sequence $\mathbf {Y}^G$ by $(1-w_l)$. In order to quantify the value of the knowledge to be transferred, $w_l$ should be positively correlated with a confidence score $c_l$ on the teacher's prediction on token posteriors, while $(1-w_l)$ should be positively correlated with a confidence score on the ground truth $d_l$. To achieve this, we compute $w_l$ by normalizing $c_l$ against its summation with $d_l$. It is in general true that the higher posterior $P(y_l^G|\mathbf {Y}^G_{0:l-1},$ $\mathbf {X}^T;\theta ^T)$ a teacher assigns to the correct (ground-truth) token $y_l^G$, the more accurate the teacher's soft posteriors are at this decoder step. Therefore, the confidence score $c_l$ on teacher's soft posteriors $P(u|\mathbf {Y}^G_{0:l-1}, \mathbf {X}^T; \theta ^T), u\in \mathbb {U}$ can be any monotonically increasing function of the correct token posterior predicted by the teacher $P(y^G_l|\mathbf {Y}^G_{0:l-1}, \mathbf {X}^T; \theta ^T)$, while the confidence score $d_l$ on the one-hot ground truth can be any monotonically increasing function of $(1-P(y^G_l|\mathbf {Y}^G_{0:l-1}, \mathbf {X}^T;$ $\theta ^T))$ as follow where both $f_1$ and $f_2$ are any monotonically increasing functions on the interval $[0, 1]$. In this work, we simply assume that $f_1$ and $f_2$ are both power functions of the same form, i.e., $f_1(x) = f_2(x) = x^{\lambda }, \; \lambda > 0 $. Note that $w_l$ equals to $P(y^G_l|\mathbf {Y}^G_{0:l-1}, \mathbf {X}^T; \theta ^T)$ when $\lambda =1$. 
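A minimal sketch of how the adaptive weight $w_l$ and the resulting blended training target could be computed per decoder step is given below (shapes, names, and the simple batching are our own simplifications, not the original implementation):

```python
import torch
import torch.nn.functional as F

def adaptive_ts_targets(p_teacher, y_true, lam=0.25):
    """Blend teacher soft posteriors with the one-hot ground truth using the
    adaptive weight w_l = c_l / (c_l + d_l), with f1(x) = f2(x) = x**lam.
    p_teacher: (L, |U|) teacher posteriors per decoder step; y_true: (L,) token ids."""
    p_correct = p_teacher.gather(1, y_true.unsqueeze(1)).squeeze(1)  # P(y_l^G | ...)
    c = p_correct ** lam                    # confidence in the teacher's posteriors
    d = (1.0 - p_correct) ** lam            # confidence in the one-hot ground truth
    w = (c / (c + d)).unsqueeze(1)          # (L, 1) adaptive weights w_l
    one_hot = F.one_hot(y_true, num_classes=p_teacher.size(1)).float()
    return w * p_teacher + (1.0 - w) * one_hot

def adaptive_ts_loss(log_p_student, p_teacher, y_true, lam=0.25):
    """Cross entropy of the student's log-posteriors against the blended targets."""
    targets = adaptive_ts_targets(p_teacher, y_true, lam)
    return -(targets * log_p_student).sum(dim=1).mean()
```

With `lam=1.0` the weight reduces to the teacher's posterior on the correct token, and as `lam` approaches 0 both confidence scores flatten toward equal weighting of the two knowledge sources.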
In AT/S, a linear combination of the teacher's soft posteriors and the one-hot ground truth weighted by $w_l$ and $(1- w_l)$, respectively, is used as the training target for the student AED at each decoder step. The AT/S loss function is formulated as The steps of AT/S learning for supervised domain adaptation of AED model are summarized as follows: Perform token-level unsupervised T/S adaptation by following the steps in Section SECREF3 as the initialization. Forward-propagate the parallel source and target domain data $\mathbf {X}^T$ and $\mathbf {X}^S$ through the teacher and student AEDs, generate teacher and student's soft posteriors $P(u|\mathbf {Y}^G_{0:l-1},\mathbf {X}^T; \mathbf {\theta }^{T})$ and $P(u|\mathbf {Y}^G_{0:l-1},$ $\mathbf {X}^S; \mathbf {\theta }^{S}), u \in \mathbb {U}$ for each decoder step by Eqs. (DISPLAY_FORM2) and (). Compute the confidence scores $c_l$ and $d_l$ for teacher's soft posteriors and one-hot vector of ground truth $y^G_l$ by Eqs. (DISPLAY_FORM16) and (), compute the adaptive weight $w_l$ by Eq. (DISPLAY_FORM15). Compute error signal of the AT/S loss function in Eq. (DISPLAY_FORM17) , back-propagate the error through student AED and update the parameters of the student AED. Repeat Steps UNKREF9 to UNKREF11 until convergence. AT/S is superior to IT/S in that the combination weights for soft and one-hot labels at each decoder step are adaptively assigned according to the confidence score on both labels. AT/S will degenerate to IT/S if the combination weights $w_l$ are fixed globally. Compared to CT/S, in AT/S, the student always adaptively learns from both the teacher's soft posteriors and the one-hot ground truth rather than choosing either of them depending on the correctness of teacher's prediction. Experiments We adapt a close-talk AED model to the far-field data through various T/S learning methods with parallel close-talk and far-field Microsoft Cortana data for E2E ASR. Experiments ::: Data Preparation For both training and adaptation, close-talk data consisting of 3400 hours of Microsoft live US English Cortana utterances are collected through a number of deployed speech services including voice search and SMD. We simulate 3400 hours of far-field Microsoft Cortana data by convolving the close-talk signal with different room impulse responses and adding various environmental noise for both training and adaptation. The 3400 hours far-field data is parallel with the 3400 hours close-talk data. We collect 17.5k far-field utterances (about 19 hours) from Harman Kardon (HK) speaker as the test set. 80-dimensional log Mel filter bank features are extracted from the training, adaptation and test speech every 10 ms over a 25 ms window. We stack 3 consecutive frames and stride the stacked frame by 30 ms, to form a sequence of 240-dimensional input speech frames. We first generate 34k mixed-units consisting of words and multi-letter units as in BIBREF31 based on the training transcription and then tokenize the training, adaptation transcriptions correspondingly. We insert a special token <space> between every two adjacent words to indicate the word boundary and add <sos>, <eos> to the beginning and end of each utterance, respectively. Experiments ::: AED Baseline System We first train an AED model predicting 34k mixed units with 3400 hours close-talk training data and it ground-truth labels for E2E ASR as in BIBREF32, BIBREF33, BIBREF34. 
The encoder is a bi-directional gated recurrent units (GRU)-recurrent neural network (RNN) BIBREF27, BIBREF35 with 6 hidden layers, each with 512 hidden units. We use GRU instead of long short-term memory (LSTM) BIBREF36, BIBREF37 for RNN because it has less parameters and is trained faster than LSTM with no loss of performance. Layer normalization BIBREF38 is applied for each encoder hidden layer. Each mixed unit is represented as a 512-dimensional embedding vector. The decoder is a uni-directional GRU-RNN with 2 hidden layers, each with 512 hidden units. The 34k-dimensional output layer of the decoder predicts the posteriors of all the mixed units in the vocabulary. During training, scheduled sampling BIBREF39 is applied to the decoder with a sampling probability starting at 0.0 and gradually increasing to 0.4 BIBREF29. Dropout BIBREF40 with a probability of 0.1 is used in both encoder and decoder. A label-smoothed cross-entropy BIBREF41 loss is minimized during training. Greedy decoding is performed to generate the ASR transcription. We use PyTorch BIBREF42 toolkit for the experiments. Table TABREF25 shows that the close-talk AED model achieves 7.58% and 17.39% WERs on a close-talk Cortana test set used in BIBREF33 and the far-field HK speaker test set, respectively. Using the well-trained close-talk AED as the initialization, we then train a far-field AED with 3400 hours far-field data and its ground-truth labels by following the same procedure. When evaluated on the HK speaker test set, the baseline far-field AED achieves 13.93% WER for ASR as in Table TABREF25. Experiments ::: Unsupervised Domain Adaptation with T/S Learning We adapt the close-talk baseline AED to the 3400 hours far-field data using token and sequence level T/S learning as discussed in Section SECREF3. To achieve this, we feed the 3400 hours close-talk adaptation data as the input to the teacher AED and the 3400 hours parallel far-field adaptation data as the input to the student AED. The student AED conditions its decoder on one-best token sequences generated by the teacher AED through greedy decoding. In token-level T/S, the soft posteriors generated by the teacher serve as the training targets of the student while in sequence-level T/S, the one-best sequences decoded by the teacher are used the targets. As shown in Table TABREF25, the token-level T/S achieves 13.06% WER on HK speaker test set, which is 24.9% and 6.25% relative improvements over the close-talk and far-field AED models, respectively. The sequence-level T/S achieves 14.00% WER, which is 19.5% relative improvement over the close-talk AED model. The sequence-level T/S performs slightly worse than the far-field AED trained with ground-truth labels because the one-best decoding from the teacher AED is not always reliable to serve as the training targets for the student model. The sequence-level T/S can be improved by using multiple decoded hypotheses generated by the teacher AED as the training targets as in BIBREF25, BIBREF26. We did not perform N-best decoding because it will drastically increase the computational cost and will consumes much more adaptation time than the other T/S methods. The 6.7% relative WER gain obtained by token-level T/S over sequence-level T/S shows the benefit of using soft posteriors generated by the teacher AED as the training target at each decoder step when a reliable ground-truth transcription is not available. 
The 6.3% relative WER gain of token T/S over far-field AED baseline shows that the unsupervised T/S learning with no ground-truth labels can significantly outperform the supervised domain adaptation with such information available. Compared to the one-hot labels, the soft posteriors accurately models the inherent statistical relationships among different token classes in addition to the token identity encoded by a one-hot vector. It proves to be a more powerful target for the student to learn from which is consistent with what was observed in BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Experiments ::: Supervised Domain Adaptation with AT/S Learning As discussed in Section SECREF4, we want to further improve the T/S learning by using one-hot ground-truth labels when they are available. As in BIBREF22, we perform IT/S learning for supervised domain adaptation by using the linear interpolation of soft posterior and one-hot ground truth as the training target of the student. The interpolation weights are globally fixed at 0.5 and 0.5 for all decoder steps. By following BIBREF20, we also conduct CT/S for supervised domain adaptation where soft posteriors are used as the training target of the student if the teacher's prediction is correct at the current decoder step, otherwise the one-hot ground truth is used as the target. Finally, AT/S domain adaptation is performed by adaptively adjusting the weights assigned to the soft and one-hot labels at each decoder step as in Eqs. (DISPLAY_FORM15) to (). We explore using different power functions as $f_1(x)$ and $f_2(x)$ to compute the confidence scores by adjusting $\lambda $. For all the above supervised T/S learning methods, the 3400 hours close-talk and 3400 hours far-field parallel adaptation data is fed as the input to the teacher and student AEDs, respectively. As shown in Table TABREF25, IT/S with $w = 0.2$ achieves 13.95% WER on HK speaker test set which is 25.5%, 7.0% and 0.8% relative improvements over the close-talk, far-field and token-level T/S adapted AED models, respectively. With a 12.82% WER, CT/S relatively improves the close-talk, far-field and token-level T/S adapted AED models by 26.3%, 8.0% and 1.8% respectively. Among different $\lambda $s for AT/S, the best WER is 12.49%, which is 28.2%, 10.3% and 4.4% relative gains over close-talk, far-field and token-level T/S adapted AEDs. The minimum WER is reached when $\lambda =0.25$ and $c_l = P(y_l|\mathbf {Y}^G_{0:l-1}, \mathbf {X}^T; \theta ^T)^{0.25}$. Compared to $\lambda >1$, AT/S works better for $\lambda \in [0, 1]$ when confidence scores $c_l$, $d_l$ are both concave functions of the correct token posterior and the sum of incorrect token posteriors, respectively. All the IT/S, CT/S and AT/S outperform the unsupervised T/S learning indicating that the one-hot ground truth can further improve T/S domain adaptation when it is properly used. AT/S achieves the largest gain in supervised domain adaptation methods showing the superiority of adaptively extracting useful knowledge from both the soft and one-hot labels depending on their confidence scores. Conclusion In this paper, we extend T/S learning to unsupervised domain adaptation of AED models for E2E ASR. T/S learning requires only unlabeled parallel source and target domain data as the input to the teacher and student AEDs, respectively. In T/S, the student AED conditions its decoder on the one-best token sequences generated by the teacher. 
The teacher's soft posteriors and decoded one-hot tokens are used as the training targets of the student AED for token-level and sequence-level T/S learning, respectively. For supervised domain adaptation, we propose adaptive T/S learning in which the student always learns from a linear combination of the teacher's soft posteriors and the one-hot ground truth. The combination weights are adaptively computed at each decoder step based on the confidence scores on both knowledge sources. Domain adaptation is conducted on 3400 hours of close-talk and 3400 hours of far-field Microsoft Cortana data. Token-level T/S achieves a 6.3% relative WER improvement over the baseline far-field AED model trained with the CE criterion. By making use of the ground-truth labels, AT/S further improves the token-level T/S by 4.4% relative and achieves a total 10.3% relative gain over the far-field AED. AT/S also consistently outperforms IT/S and CT/S, showing the advantage of learning from both the teacher and the ground truth as well as the adaptive adjustment of the combination weights.
Unanswerable
efe49829725cfe54de01405c76149a4fe4d18747
efe49829725cfe54de01405c76149a4fe4d18747_0
Q: How much does HAS-QA improve over baselines? Text: Introduction Open-domain question answering (OpenQA) aims to seek answers for a broad range of questions from a large knowledge sources, e.g., structured knowledge bases BIBREF0 , BIBREF1 and unstructured documents from search engine BIBREF2 . In this paper we focus on the OpenQA task with the unstructured knowledge sources retrieved by search engine. Inspired by the reading comprehension (RC) task flourishing in the area of natural language processing BIBREF3 , BIBREF4 , BIBREF5 , some recent works have viewed OpenQA as an RC task, and directly applied the existing RC models to it BIBREF6 , BIBREF7 , BIBREF3 , BIBREF8 . However, these RC models do not well fit for the OpenQA task. Firstly, they directly omit the paragraphs without answer string. RC task assumes that the given paragraph contains the answer string (Figure 1 top), however, it is not valid for the OpenQA task (Figure 1 bottom). That's because the paragraphs to provide answer for an OpenQA question is collected from a search engine, where each retrieved paragraph is merely relevant to the question. Therefore, it contains many paragraphs without answer string, for instance, in Figure 1 Paragraph2. When applying RC models to OpenQA task, we have to omit these paragraphs in the training phase. However, during the inference phase, when model meets one paragraph without answer string, it will pick out a text span as an answer span with high confidence, since RC model has no evidence to justify whether a paragraph contains the answer string. Secondly, they only consider the first answer span in the paragraph, but omit the remaining rich multiple answer spans. In RC task, the answer and its positions in the paragraph are provided by the annotator in the training data. Therefore RC models only need to consider the unique answer span, e.g., in SQuAD BIBREF9 . However, the OpenQA task only provides the answer string as the ground-truth. Therefore, multiple answer spans are detected in the given paragraph, which cannot be considered by the traditional RC models. Take Figure 1 as an example, all text spans contain `fat' are treated as answer span, so we detect two answer spans in Paragraph1. Thirdly, they assume that the start position and end position of an answer span is independent. However, the end position is evidently related with the start position, especially when there are multiple answer spans in a paragraph. Therefore, it may introduce some problems when using such independence assumption. For example, the detected end position may correspond to another answer span, rather than the answer span located by the start position. In Figure 1 Paragraph1, `fat in their $\cdots $ insulating effect fat' has a high confidence to be an answer span under independence assumption. In this paper, we propose a Hierarchical Answer Span Model, named HAS-QA, based on a new three-level probabilistic formulation of OpenQA task, as shown in Figure 2 . At the question level, the conditional probability of the answer string given a question and a collection of paragraphs, named answer probability, is defined as the product of the paragraph probability and conditional answer probability, based on the law of total probability. At the paragraph level, paragraph probability is defined as the degree to which a paragraph can answer the question. This probability is used to measure the quality of a paragraph and targeted to tackle the first problem mentioned, i.e. identify the useless paragraphs. 
For calculation, we first apply a bidirectional GRU and an attention mechanism on the question aware context embedding to obtain a score. Then, we normalize the scores across the multiple paragraphs. In the training phase, we adopt a negative sampling strategy for optimization. Conditional answer probability is the conditional probability that a text string is the answer given the paragraph. Considering multiple answer spans in a paragraph, the conditional answer probability can be further represented as the aggregation of several span probabilities, defined later. In this paper, four types of functions, i.e. HEAD, RAND, MAX and SUM, are used for aggregation. At the span level, span probability represents the probability that a text span in a paragraph is the answer span. Similarly to previous work BIBREF3 , span probability can be computed as the product of two location probabilities, i.e., location start probability and location end probability. Then a conditional pointer network is proposed to model the probabilistic dependencies between the start and end positions, by making the generation of the end position depend directly on the start position, rather than on an internal representation of the start position BIBREF10 . The contributions of this paper include: 1) a probabilistic formulation of the OpenQA task, based on a three-level hierarchical structure, i.e. the question level, the paragraph level and the answer span level; 2) the proposal of an end-to-end HAS-QA model to implement the three-level probabilistic formulation of the OpenQA task (Section "HAS-QA Model" ), which tackles the three problems of directly applying existing RC models to OpenQA; 3) extensive experiments on QuasarT, TriviaQA and SearchQA datasets, which show that HAS-QA outperforms traditional RC baselines and recent OpenQA baselines. Related Works Research in reading comprehension has grown rapidly, and many successful RC models have been proposed BIBREF11 , BIBREF4 , BIBREF3 in this area. Recently, some works have treated the OpenQA task as an RC task and directly applied existing RC models. In this section, we first review the approach of typical RC models, then introduce some recent OpenQA models which are directly based on the RC approach. RC models typically have two components: a context encoder and an answer decoder. The context encoder is used to obtain the embeddings of questions, paragraphs and their interactions. Most recent works are based on the attention mechanism and its extensions. An efficient way is to treat the question as a key to attend to the paragraph BIBREF3 , BIBREF6 . Adding the attention from paragraph to question BIBREF4 , BIBREF5 enriches the representations of the context encoder. Some works BIBREF12 , BIBREF13 , BIBREF8 find that self-attention is useful for the RC task. The answer decoder aims to generate the answer string based on the context embeddings. There exist two sorts of approaches: generating the answer from the entire word vocabulary BIBREF14 , and retrieving the answer from the current paragraph. Almost all works on the RC task choose the retrieval-based method. Some of them use two independent position classifiers BIBREF6 , BIBREF15 , while the others use pointer networks BIBREF3 , BIBREF4 , BIBREF12 , BIBREF13 . An answer length limitation is applied in these models, i.e., text spans longer than 8 tokens are omitted. We find that relaxing this length constraint leads to a performance drop. Some recent works in OpenQA research directly introduce RC models to build a pure data-driven pipeline.
DrQA BIBREF6 is the earliest work that applies an RC model to the OpenQA task. However, its RC model is trained on the typical RC dataset SQuAD BIBREF9 , and it turns out to be over-confident about its predictions even if the candidate paragraphs contain no answer span. R ${}^3$ BIBREF16 introduces a ranker model to rerank the original paragraph list, so as to improve the input quality of the following RC model. The training data of the RC model is solely limited to the paragraphs containing the answer span, and the location of the first appearing answer span is chosen as the ground truth. Shared-Norm BIBREF8 applied a shared-norm trick which considers paragraphs without an answer span when training the RC model. The trained RC model turns out to be robust to the useless paragraphs and generates lower span scores for them. However, it assumes that the start and the end positions of an answer span are independent, which is not suitable for modeling multiple answer spans in one paragraph. Therefore, we realize that the existing OpenQA models rarely consider the differences between the RC and OpenQA tasks. In this paper, we directly model the OpenQA task based on a probabilistic formulation, in order to identify the useless paragraphs and utilize the multiple answer spans. Probabilistic Views of OpenQA In the OpenQA task, the question $Q$ and its answer string $A$ are given. Entering question $Q$ into a search engine, the top $K$ relevant paragraphs are returned, denoted as a list $\mathbf {P} = [P_1,\dots , P_K]$ . The target of OpenQA is to find the maximum probability of $P(A|Q, \mathbf {P})$ , named the answer probability for short. We can see the following three characteristics of OpenQA: 1) we cannot guarantee that a paragraph retrieved by the search engine contains an answer span for the question, so the paragraphs without an answer span have to be deleted when using the above RC models. However, these paragraphs are useful for distinguishing the quality of paragraphs in training. More importantly, the quality of a paragraph plays an important role in determining the answer probability in the inference phase. It is clear that directly applying RC models fails to meet this requirement. 2) only the answer string is provided, while the location of the answer string is unknown. That means there may be many answer spans in the paragraph. It is well known that traditional RC models are only valid for a single answer span. To tackle this problem, the authors of BIBREF7 propose a distantly supervised method to use the first exact match location of the answer string in the paragraph as the ground-truth answer span. However, this method omits the valuable information from multiple answer spans, which may be important for the calculation of the answer probability. 3) the start and end positions are coupled together to determine a specific answer span, since there may be multiple answer spans. However, existing RC models usually assume that the start and end positions are independent. That's because there is only one answer span in the RC scenario. This may introduce serious problems in the OpenQA task. For example, if we do not consider the relations between the start and end positions, the end position may be another answer span's end position, instead of the one determined by the start position. Therefore, it is not appropriate to assume independence between the start and end positions. In this paper, we propose to tackle the above three problems. Firstly, according to the law of total probability, the answer probability can be rewritten as the following form.
$$P(A|Q, \mathbf {P})\! =\! \sum _{i=1}^{K} P(P_i|Q, \mathbf {P}) P(A|Q, P_i).$$ (Eq. 4) We name $P(P_i|Q, \mathbf {P})$ and $P(A|Q, P_i)$ as the paragraph probability and the conditional answer probability, respectively. We can see that the paragraph probability measures the quality of paragraph $P_i$ across the list $\mathbf {P}$ , while the conditional answer probability measures the probability that string $A$ is an answer string given paragraph $P_i$ . The conditional answer probability can be treated as a function of multiple span probabilities $\lbrace P(L_j(A)|Q, P_i)\rbrace _j$ , as shown in Eq 5 . $$\begin{aligned} P(A|Q, P_i) &:= \mathcal {F}(\lbrace P(L_j(A)|Q, P_i)\rbrace _j), \\ &j \in [1, |\mathcal {L}(A,P_i)|], \end{aligned}$$ (Eq. 5) where the aggregation function $\mathcal {F}$ takes the list of spans $\mathcal {L}(A,P_i)$ as input, and $|\mathcal {L}(A,P_i)|$ denotes the number of text spans that contain the string $A$ . A proper aggregation function makes use of all the answer span information in the OpenQA task. Previous work BIBREF7 can be treated as a special case, which uses the selection of the first matched span as the aggregation function $\mathcal {F}$ . The span probability $P(L_j(A)|Q, P_i)$ represents the probability that a text span $L_j(A)$ in the paragraph $P_i$ is an answer span. We further decompose it into the product of the location start probability $P(L^s_j(A)|Q, P_i)$ and the location end probability $P(L^e_j(A)|Q, P_i, L^s_j(A))$ , shown in Eq 6 . $$\begin{aligned} P(L_j(A)|Q, P_i) = &P(L^s_j(A)|Q, P_i) \\ \cdot &P(L^e_j(A)|Q, P_i, L^s_j(A)). \end{aligned}$$ (Eq. 6) Some previous work such as DrQA BIBREF6 treats them as two independent position classification tasks, thus $L^{s}(A)$ and $L^{e}(A)$ are modeled by two different functions. Match-LSTM BIBREF3 treats them with the pointer networks BIBREF10 . The difference is that $L^{e}(A)$ is a function of the hidden state of $L^{s}(A)$ , denoted as $\mathbf {M^s}$ . However, $L^{s}(A)$ and $L^{e}(A)$ are still independent from a probabilistic view, because $L^{e}(A)$ depends on the hidden state $\mathbf {M^s}$ , not the start position $L^{s}(A)$ . In this paper, the span positions $L^{s}(A)$ and $L^{e}(A)$ are determined by the question $Q$ and the paragraph $P$ . Specially, the end position $L^{e}(A)$ is also conditioned directly on the start position $L^{s}(A)$ . With this conditional probability, we can naturally remove the answer length limitation. With the above formulation, we find that the RC task is a special case of the OpenQA task, where we set the number of paragraphs $K$ to 1, set the paragraph probability to the constant 1, and treat $P(A|Q,P){=}P(L(A)|Q, P)$ , $P(L(A)|Q, P){=}P(L^{s}(A)|Q, P)P(L^{e}(A)|Q, P)$ , where $P$ is the idealized paragraph that contains the answer string $A$ , and the right position $L(A)$ is also known. HAS-QA Model In this section, we propose a Hierarchical Answer Span Model (HAS-QA) for the OpenQA task, based on the probabilistic view of OpenQA in Section "Probabilistic Views of OpenQA" . HAS-QA has four components: a question aware context encoder, a conditional span predictor, a multiple spans aggregator and a paragraph quality estimator. We will introduce them one by one. Question Aware Context Encoder The question aware context embedding $\mathbf {C}$ is generated by the context encoder; HAS-QA does not restrict the choice of context encoder. We choose a simple but efficient context encoder in this paper.
It takes advantage of previous works BIBREF8 , BIBREF3 , which contains the character-level embedding enhancement, the bi-directional attention mechanism BIBREF4 and the self-attention mechanism BIBREF12 . We briefly describe the process below . Word Embeddings: use size 300 pre-trained GloVe BIBREF17 word embeddings. Char Embeddings: encode characters in size 20, which are learnable. Then obtain the embedding of each word by convolutional layer and max pooling layer. Context Embeddings: concatenate word embeddings and char embeddings, and apply bi-directional GRU BIBREF18 to obtain the context embeddings. Both question and paragraph get their own context embeddings. Question Aware Context Embeddings: use bi-directional attention mechanism from the BiDAF BIBREF4 to build question aware context embeddings. Additionally, we subsequently apply a layer of self-attention to get the final question aware context embeddings. After the processes above, we get the final question aware context embeddings, denoted $\mathbf {C} \in \mathbb {R}^{n \times r}$ , where $n$ is the length of the paragraph and $r$ is size of the embedding. Conditional Span Predictor Conditional span predictor defines the span probability for each text span in a paragraph using a conditional pointer network. We first review the answer decoder in traditional RC models. It mainly has two types: two independently position classifiers (IndCls) and the pointer networks (PtrNet). Both of these approaches generate a distribution of start position $\mathbf {p^s} \in \mathbb {R}^n$ and a distribution of end position $\mathbf {p^e} \in \mathbb {R}^n$ , where $n$ is the length of the paragraph. Starting from the context embeddings $\mathbf {C}$ , two intermedia representations $\mathbf {M^s} \in \mathbb {R}^{n \times 2d}$ and $\mathbf {M^e} \in \mathbb {R}^{n \times 2d}$ are generated using two bidirectional GRUs with the output dimension $d$ . $$ \mathbf {M^s} &= \mathrm {BiGRU}(\mathbf {C})\\ \textrm {IndCls:}\; \mathbf {M^e} &= \mathrm {BiGRU}(\mathbf {C}), \\ \textrm {PtrNet:}\; \mathbf {M^e} &= \mathrm {BiGRU}([\mathbf {C}, \mathbf {M^s}]).$$ (Eq. 10) Then an additional Softmax function is used to generate the final positional distributions, $$ \begin{aligned} &\mathbf {p^s}\! =\! \mathrm {softmax}(\mathbf {M^s}w_s), \\ &\mathbf {p^e}\! =\! \mathrm {softmax}(\mathbf {M^e}w_e). \end{aligned}$$ (Eq. 11) where $w_s, w_e \in \mathbb {R}^{2d}$ denotes the linear transformation parameters. As mentioned in Section "Probabilistic Views of OpenQA" , IndCls and PtrNet both treat start and end position as probabilistic independent. Given the independent start and end positions can not distinguish the different answer spans in a paragraph properly, so it is necessary to build a conditional model for them. Therefore, we proposed a conditional pointer network which directly feed the start position to the process of generating the end position: $$ \begin{aligned} \mathbf {M^e_j} &= \mathrm {BiGRU}([\mathbf {C}, \mathbf {M^s}, \mathrm {OneHot}(L^s_j)]), \\ \mathbf {p^e_j} &= \mathrm {softmax}(\mathbf {M^e_j}w_e), \end{aligned}$$ (Eq. 12) where $L^s_j$ denotes the start position selected from the start positional distribution $\mathbf {p^s}$ and $\mathrm {OneHot}(\cdot )$ denotes the transformation from a position index to an one-hot vector. In the training phase, we are given the start and end positions of each answer span, denote as $L^s_j$ and $L^e_j$ . 
The span probability is: $$ P(L_j(A)|Q, P_i) = s_j = \mathbf {p^s}[L^s_j] \cdot \mathbf {p^e_j}[L^e_j].$$ (Eq. 13) In the inference phase, we first select the start position $L^s_j$ from the start distribution $\mathbf {p^s}$ . Then we yield its corresponding end distribution $\mathbf {p^e_j}$ using Eq 12 , and select the end position $L^e_j$ from it. Finally, we get the span probability using Eq 13 . Multiple Spans Aggregator Multiple span aggregator is used to build the relations among multiple answer spans and outputs the conditional answer probability. In this paper, we design four types of aggregation functions $\mathcal {F}$ : $$ \begin{aligned} &\textrm {HEAD:} \; P(A|Q, P_i) = s_1 \\ &\textrm {RAND:} \; P(A|Q, P_i) = \textrm {Random}(s_j) \\ &\textrm {MAX:} \;\;\; P(A|Q, P_i) = \max _j\nolimits (s_j) \\ &\textrm {SUM:} \;\;\; P(A|Q, P_i) = \sum _j\nolimits (s_j) \\ \end{aligned}$$ (Eq. 15) where $s_j$ denotes the span probability defined in Eq 13 , $s_1$ denotes the first match answer span and $\textrm {Random}$ denotes a stochastic function for randomly choosing an answer span. Different aggregation functions represent different assumptions about the distribution of the oracle answer spans in a paragraph. The oracle answer span represents the answer of the question that can be merely determined by its context, e.g. in Figure 1 , the first answer span `fat' is the oracle answer span, while the second one is not, because we could retrieval the answer directly, if we have read `concentrating body fat in their humps'. HEAD operation simply chooses the first match span probability as the conditional answer probability, which simulates the answer preprocessing in previous works BIBREF16 , BIBREF7 . This function only encourages the first match answer span as the oracle, while punishes the others. It can be merely worked in a paragraph with definition, such as first paragraph in WikiPedia. RAND operation randomly chooses a span probability as the conditional answer probability. This function assumes that all answer spans are equally important, and must be treated as oracle. However, balancing the probabilities of answer spans is hard. It can be used in paraphrasing answer spans appear in a list. MAX operation chooses the maximum span probability as the conditional answer probability. This function assumes that only one answer span is the oracle. It can be used in a noisy paragraph, especially for those retrieved by a search engine. SUM operation sums all the span probabilities as the conditional answer probability. This function assumes that one or more answer spans are the oracle. It can be used in a broad range of scenarios, for its relatively weak assumption. In the training phase, all annotated answer spans contain the same answer string $A$ , we directly apply the Eq 15 to obtain the conditional answer probability in paragraph level. In the inference phase, we treat the top $K$ span probabilities $s_j$ as the input of the aggregation function. However, we have to check all possible start and end positions to get the precise top $K$ span probabilities. Instead, we use a beam search strategy BIBREF19 which only consider the top $K_1$ start positions and the top $K_2$ end positions, where $K_1 K_2 \ge K$ . Different span probabilities $s_j$ represent variance answer strings $A_t$ . Following the definition in Eq 15 , we group them by different answer strings respectively. 
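To make the aggregation step concrete, the following is a minimal sketch of Eq. 15 together with the inference-time grouping just described, covering the per-paragraph step only (the paragraph probabilities introduced next are left out). It treats span probabilities as plain floats; the function and variable names are illustrative and not taken from the paper's implementation.

```python
import random
from collections import defaultdict

# Aggregation functions F of Eq. 15: collapse the span probabilities of all
# spans that extract the same answer string into one conditional answer
# probability for that paragraph.
def aggregate(span_probs, mode="SUM"):
    if mode == "HEAD":          # first matching span only
        return span_probs[0]
    if mode == "RAND":          # one randomly chosen matching span
        return random.choice(span_probs)
    if mode == "MAX":           # assumes exactly one oracle span
        return max(span_probs)
    if mode == "SUM":           # assumes one or more oracle spans
        return sum(span_probs)
    raise ValueError("unknown aggregation mode: " + mode)

# Inference-time step within a single paragraph: beam-searched spans arrive
# as (answer_string, span_probability) pairs and are grouped by the string
# they extract before aggregation.
def conditional_answer_probs(spans, mode="SUM"):
    grouped = defaultdict(list)
    for answer_string, s_j in spans:
        grouped[answer_string].append(s_j)
    return {a: aggregate(probs, mode) for a, probs in grouped.items()}

# Toy example: three spans from one paragraph, two of which extract "fat".
beam = [("fat", 0.41), ("fat", 0.22), ("humps", 0.05)]
print(conditional_answer_probs(beam, mode="SUM"))   # {'fat': 0.63, 'humps': 0.05}
```

Note that HEAD, RAND, MAX and SUM differ only in the final reduction, which is why they can be swapped without changing the rest of the pipeline.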
Paragraph Quality Estimator Paragraph quality estimator takes the useless paragraphs into consideration, which implements the paragraph probability $P(P_i|Q, \mathbf {P})$ directly. Firstly, we use an attention-based network to generate a quality score, denotes as $\hat{q}_i$ , in order to measure the quality of the given paragraph $P_i$ . $$ \begin{aligned} &\mathbf {M^c} = \textrm {BiGRU}(\mathbf {C}),\\ &\hat{q}_i = (\mathbf {M^c}^{\top } \cdot \mathbf {p^s}) \cdot w_c. \end{aligned}$$ (Eq. 17) where $\mathbf {M^c} \in \mathbb {R}^{n \times 2d}$ is the intermedia representation obtained by applying bidirectional GRU on the context embedding $\mathbf {C}$ . Then, let start distribution $\mathbf {p^s} \in \mathbb {R}^n$ as a key to attention $\mathbf {M^c}$ and transform it to 1-d value using weight $w_c \in \mathbb {R}^{2d}$ . Finally, we get the quality score $\hat{q}_i$ . Paragraph probabilities $P(P_i|Q, \mathbf {P})$ are generated by normalizing across $\mathbf {P}$ , $$ P(P_i|Q, \mathbf {P})\! =\! q_i =\! \frac{\exp (\hat{q}_i)}{\sum _{P_j \in \mathbf {P}} \exp (\hat{q}_j)}.$$ (Eq. 18) In the training phase, we conduct a negative sampling strategy with one negative sample, for efficient training. Thus a pair of paragraphs, $P^+$ as positive and $P^-$ as negative, are used to approximate $q^+ \approx P(P^+|Q, [P^+, P^-])$ and $q^- \approx P(P^-|Q, [P^+, P^-])$ . In the inference phase, the probability $q_i$ is obtained by normalizing across all the retrieved paragraphs $\mathbf {P}$ . [h] HAS-QA Model in Training Phase [1] $Q$ : question; $A$ : answer string; $\mathbf {P}$ : retrieved paragraphs; $\mathcal {L}$ : loss function $P^+$ , $P^-$ in $\mathbf {P}$ : Get answer locations $\mathbf {L^s}$ , $\mathbf {L^e}$ for $P^+$ ; Get the context embedding $\mathbf {C}$ ; Compute $\mathbf {p^s}$ ; (Eq 11 ) $L^s_j, L^e_j$ in $\mathbf {L^s}, \mathbf {L^e}$ : $P^-$0 ; Compute $P^-$1 ; (Eq 12 ) $P^-$2 ; $P^-$3 ; Apply function: $P^-$4 ; Compute $P^-$5 in $P^-$6 ; (Eq 17 , Eq 18 ) $P^-$7 ; $P^-$8 . [h] HAS-QA Model in Inference Phase [1] $Q$ : question; $\mathbf {P}$ : retrieved paragraphs; $A_{best}$ : answer string $P_i$ in $\mathbf {P}$ : Get the context embedding $\mathbf {C}$ ; Compute $\mathbf {p^s}$ ; (Eq 11 ) $L^s_j$ in Top- $K_1$ $\mathbf {p^s}$ : $p^s_j \leftarrow \mathbf {p^s}[L^s_j]$ ; Compute $\mathbf {p^e_j}$ ; (Eq 12 ) $L^e_{jk}$ in Top- $\mathbf {P}$0 $\mathbf {P}$1 : $\mathbf {P}$2 ; $\mathbf {P}$3 ; Group $\mathbf {P}$4 by extracted answer string $\mathbf {P}$5 ; Apply function: $\mathbf {P}$6 ; Compute $\mathbf {P}$7 ; (Eq 17 ) Normalize $\lbrace \hat{q}_i\rbrace $ get $\lbrace q_i\rbrace $ ; (Eq 18 ) $S(A_t) \leftarrow \sum _i q_i \cdot p^{A_t}_i$ ; $A_{best} \leftarrow \arg \max (S(A_t))$ . Above all, we describe our model with Algorithm "Paragraph Quality Estimator" in the training phase and Algorithm "Paragraph Quality Estimator" in the inference phase. Datasets We evaluate our model on three OpenQA datasets, QuasarT BIBREF21 , TriviaQA BIBREF7 and SearchQA BIBREF22 . QuasarT: consists of 43k open-domain trivia questions whose answers obtained from various internet sources. ClueWeb09 BIBREF23 serves as the background corpus for providing evidences paragraphs. We choose the Long version, which is truncated to 2048 characters and 20 paragraphs for each question. TriviaQA: consists of 95k open-domain question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents from Bing Web Search and Wikipedia, six per question on average. 
We focus on the open-domain setting, which contains unfiltered documents. SearchQA: is based on Jeopardy! questions and collects roughly the top 50 web-page snippets from the Google search engine for each question. As Table 1 shows, there are many negative paragraphs that contain no answer span, especially in TriviaQA and SearchQA, and for all datasets more than four answer spans are obtained per paragraph on average. These statistics confirm that the problems discussed above are present in OpenQA datasets. Experimental Settings For the RC baseline models GA BIBREF11 , BiDAF BIBREF4 and AQA BIBREF20 , the experimental results are taken from published papers BIBREF22 , BIBREF7 . DrQA BIBREF6 , R ${}^3$ BIBREF16 and Shared-Norm BIBREF8 are evaluated using their released code. Our model adopts the same data preprocessing and question aware context encoder presented in BIBREF8 . In training, we use the Adadelta optimizer BIBREF24 with a batch size of 30, and we select the model that performs best on the development set. The hidden dimension of the GRU is 200, and the dropout ratio is 0.8. We use 300-dimensional word embeddings pre-trained with GloVe (released by BIBREF17 ) and do not fine-tune them during training. Additionally, 20-dimensional character embeddings are left as learnable parameters. In inference, we set the answer length limit to 8 for the baseline models, while for our model it is unlimited; different answer length limits are analyzed in the model analysis below. The beam search parameters are $K_1=3$ and $K_2=1$ . Overall Results The experimental results on the three OpenQA datasets are shown in Table 2 and can be summarized as follows. 1) HAS-QA outperforms traditional RC baselines, such as GA, BiDAF and AQA listed in the first part, by a large margin. For example, in QuasarT, it improves 16.8% in EM score and 20.4% in F1 score. Note that the RC task is a special case of the OpenQA task: on the standard SQuAD dev set BIBREF9 , HAS-QA yields EM/F1 of 0.719/0.798, comparable with the best released single model on the leaderboard (dev set), Reinforced Mnemonic Reader BIBREF25 , at EM/F1 of 0.721/0.816. Our performance is slightly lower because Reinforced Mnemonic Reader uses the exact answer span directly, while we use multiple distantly supervised answer spans, which may introduce noise in the SQuAD setting where only one span is accurate. 2) HAS-QA outperforms recent OpenQA baselines, such as DrQA, R ${}^3$ and Shared-Norm listed in the second part. For example, in QuasarT, it improves 4.6% in EM score and 3.5% in F1 score. Model Analysis In this subsection, we analyze our model by answering the following fine-grained questions: 1) What advantage does HAS-QA gain from modeling answer spans with the conditional pointer network? 2) How much does HAS-QA gain from modeling multiple answer spans in a paragraph? 3) How does the paragraph quality estimator help HAS-QA? The following three parts answer these questions in turn. To demonstrate the effect of the conditional pointer network, we compare Shared-Norm, which uses a standard pointer network, with our model. We then gradually relax the answer length limit, from 4 words up to 128 words and finally to no limit (denoted $\infty $ ). Finally, we plot the EM performance and the average predicted answer length under the different answer length limits.
As shown in Figure 3 (TopLeft), the performance of Shared-Norm decreases when removing the answer length limitation, while the performance of HAS-QA first increases then becomes stable. In Figure 3 (TopRight), we find that the average predicted answer length increases in Shared-Norm when removing the answer length limitation. However, our model stably keeps average about 1.8 words, where the oracle average answer length is about 1.9 words. Example in Figure 3 (Bottom) illustrates that start/end pointers in Shared-Norm search their own optimal positions independently, such as two `Louis' in paragraph. It leads to an unreasonable answer span prediction. The effects of utilizing multiple answer spans lay into two aspects, 1) choose the aggregation functions in training phase, and 2) select the parameters of beam search in inference phase. In the training phase, we evaluate four types of aggregation functions introduced in Section "Multiple Spans Aggregator" . The experimental results on QuasarT dataset, shown in Table 3 , demonstrate the superiority of SUM and MAX operations. They take advantages of using multiple answer spans for training and improve about 6% - 10% in EM comparing to the HEAD operation. The performance of MAX operation is a little better than the SUM operation. The failure of RAND operation, mainly comes down to the conflicting training samples. Therefore, simple way to make use of multiple answer spans may not improve the performance. In the inference phase, Table 4 shows the effects of parameters in beam search. We find that the larger $K_1$ yields the better performance, while $K_2$ seems irrelevant to the performance. As a conclusion, we choose the parameters $K_1=3, K_2=1$ to balance the performance and the speed. The paragraph probability is efficient to measure the quality of paragraphs, especially for that containing useless paragraphs. Figure 4 (Left) shows that with the increasing number of given paragraphs which ordered by the rank of a search engine, EM performance of HAS-QA sustainably grows. However, EM performance of Shared-Norm stops increasing at about 15 paragraphs and our model without paragraph quality (denotes PosOnly) stops increasing at about 5 paragraphs. So that with the help of paragraph probability, model performance can be improved by adding more evidence paragraphs. We also evaluate the Mean Average Precision (MAP) score between the predicted scores and the label whether a paragraph contains answer spans (Figure 4 (Right)). The paragraph probability in our model outperforms PosOnly and Shared-Norm, so that it can rank the high quality paragraphs in the front of the given paragraph list. Conclusions In this paper, we point out three distinct characteristics of OpenQA, which make it inappropriate to directly apply existing RC models to this task. In order to tackle these problems, we first propose a new probabilistic formulation of OpenQA, where the answer probability is written as the question, paragraph and span, three-level structure. In this formulation, RC can be treated as a special case. Then, Hierarchical Answer Spans Model (HAS-QA) is designed to implement this structure. Specifically, a paragraph quality estimator makes it robust for the paragraphs without answer spans; a multiple span aggregator points out that it is necessary to combine the contributions of multiple answer spans in a paragraph, and a conditional span predictor is proposed to model the dependence between the start and end positions of each answer span. 
Experiments on public OpenQA datasets, including QuasarT, TriviaQA and SearchQA, show that HAS-QA significantly outperforms traditional RC baselines and recent OpenQA baselines. Acknowledgments This work was funded by the National Natural Science Foundation of China (NSFC) under Grants No. 61773362, 61425016, 61472401, 61722211, and 61872338, the Youth Innovation Promotion Association CAS under Grants No. 20144310, and 2016102, and the National Key R&D Program of China under Grants No. 2016QY02D0405.
Q: What does "explicitly leverages their probabilistic correlation to guide the training process of both models" mean? Text: Introduction Question answering (QA) and question generation (QG) are two fundamental tasks in natural language processing BIBREF0 , BIBREF1 . Both tasks involve reasoning between a question sequence $q$ and an answer sentence $a$ . In this work, we take answer sentence selection BIBREF2 as the QA task, which is a fundamental QA task and is very important for many applications such as search engine and conversational bots. The task of QA takes a question sentence $q$ and a list of candidate answer sentences as the input, and finds the top relevant answer sentence from the candidate list. The task of QG takes a sentence $a$ as input, and generates a question sentence $q$ which could be answered by $a$ . It is obvious that the input and the output of these two tasks are (almost) reverse, which is referred to as “duality” in this paper. This duality connects QA and QG, and potentially could help these two tasks to improve each other. Intuitively, QA could improve QG through measuring the relevance between the generated question and the answer. This QA-specific signal could enhance the QG model to generate not only literally similar question string, but also the questions that could be answered by the answer. In turn, QG could improve QA by providing additional signal which stands for the probability of generating a question given the answer. Moreover, QA and QG have probabilistic correlation as both tasks relate to the joint probability between $q$ and $a$ . Given a question-answer pair $\langle q, a \rangle $ , the joint probability $P(q, a)$ can be computed in two equivalent ways. $$P(q, a) = P(a) P(q|a) = P(q)P(a|q)$$ (Eq. 1) The conditional distribution $P(q|a)$ is exactly the QG model, and the conditional distribution $P(a|q)$ is closely related to the QA model. Existing studies typically learn the QA model and the QG model separately by minimizing their own loss functions, while ignoring the probabilistic correlation between them. Based on these considerations, we introduce a training framework that exploits the duality of QA and QG to improve both tasks. There might be different ways of exploiting the duality of QA and QG. In this work, we leverage the probabilistic correlation between QA and QG as the regularization term to influence the training process of both tasks. Specifically, the training objective of our framework is to jointly learn the QA model parameterized by $\theta _{qa}$ and the QG model parameterized by $\theta _{qg}$ by minimizing their loss functions subject to the following constraint. $$P_a(a) P(q|a;\theta _{qg}) = P_q(q)P(a|q;\theta _{qa})$$ (Eq. 3) $P_a(a)$ and $P_q(q)$ are the language models for answer sentences and question sentences, respectively. We examine the effectiveness of our training criterion by applying it to strong neural network based QA and QG models. Specifically, we implement a generative QG model based on sequence-sequence learning, which takes an answer sentence as input and generates a question sentence in an end-to-end fashion. We implement a discriminative QA model based on recurrent neural network, where both question and answer are represented as continuous vector in a sequential way. As every component in the entire framework is differentiable, all the parameters could be conventionally trained through back propagation. We conduct experiments on three datasets BIBREF2 , BIBREF3 , BIBREF4 . 
Empirical results show that our training framework improves both QA and QG tasks. The improved QA model performs comparably with strong baseline approaches on all three datasets. The Proposed Framework In this section, we first formulate the task of QA and QG, and then present the proposed algorithm for jointly training the QA and QG models. We also describe the connections and differences between this work and existing studies. Task Definition and Notations This work involves two tasks, namely question answering (QA) and question generation (QG). There are different kinds of QA tasks in natural language processing community. In this work, we take answer sentence selection BIBREF2 as the QA task, which takes a question $q$ and a list of candidate answer sentences $A = \lbrace a_1, a_2, ... , a_{|A|}\rbrace $ as input, and outputs one answer sentence $a_i$ from the candidate list which has the largest probability to be the answer. This QA task is typically viewed as a ranking problem. Our QA model is abbreviated as $f_{qa}(a,q;\theta _{qa})$ , which is parameterized by $\theta _{qa}$ and the output is a real-valued scalar. The task of QG takes a sentence $a$ as input, and outputs a question $q$ which could be answered by $a$ . In this work, we regard QG as a generation problem and develop a generative model based on sequence-to-sequence learning. Our QG model is abbreviated as $P_{qg}(q|a;\theta _{qg})$ , which is parameterized by $\theta _{qg}$ and the output is the probability of generating a natural language question $q$ . Algorithm Description We describe the proposed algorithm in this subsection. Overall, the framework includes three components, namely a QA model, a QG model and a regularization term that reflects the duality of QA and QG. Accordingly, the training objective of our framework includes three parts, which is described in Algorithm 1. The QA specific objective aims to minimize the loss function $l_{qa}(f_{qa}(a,q;\theta _{qa}), label)$ , where $label$ is 0 or 1 that indicates whether $a$ is the correct answer of $q$ or not. Since the goal of a QA model is to predict whether a question-answer pair is correct or not, it is necessary to use negative QA pairs whose labels are zero. The details about the QA model will be presented in the next section. For each correct question-answer pair, the QG specific objective is to minimize the following loss function, $$l_{qg}(q, a) = -log P_{qg}(q|a;\theta _{qg})$$ (Eq. 6) where $a$ is the correct answer of $q$ . The negative QA pairs are not necessary because the goal of a QG model is to generate the correct question for an answer. The QG model will be described in the following section. [tb] Algorithm Description Input: Language models $P_a(a)$ and $P_q(q)$ for answer and question, respectively; hyper parameters $\lambda _q$ and $\lambda _a$ ; optimizer $opt$ Output: QA model $f_{qa}(a,q)$ parameterized by $\theta _{qa}$ ; QG model $P_{qg}(q|a)$ parameterized by $\theta _{qg}$ Randomly initialize $\theta _{qa}$ and $P_q(q)$0 Get a minibatch of positive QA pairs $P_q(q)$1 , where $P_q(q)$2 is the answer of $P_q(q)$3 ; Get a minibatch of negative QA pairs $P_q(q)$4 , where $P_q(q)$5 is not the answer of $P_q(q)$6 ; Calculate the gradients for $\theta _{qa}$ and $\theta _{qg}$ . $$\nonumber G_{qa} = \triangledown _{\theta _{qa}} &\frac{1}{m}\sum _{i = 1}^{m}[l_{qa}(f_{qa}(a^p_i,q^p_i;\theta _{qa}), 1) \\ &\nonumber + l_{qa}(f_{qa}(a^n_i,q^n_i;\theta _{qa}),0) \\ & +\lambda _al_{dual}(a^p_i,q^p_i;\theta _{qa}, \theta _{qg})]$$ (Eq. 
7) $$\nonumber G_{qg} = \triangledown _{\theta _{qg}} &\frac{1}{m}\sum _{i = 1}^{m}[\ l_{qg}(q^p_i,a^p_i) \\& + \lambda _ql_{dual}(q^p_i,a^p_i;\theta _{qa}, \theta _{qg})]$$ (Eq. 8) Update $\theta _{qa}$ and $\theta _{qg}$ $\theta _{qa} \leftarrow opt(\theta _{qa}, G_{qa})$ , $\theta _{qg} \leftarrow opt(\theta _{qg}, G_{qg})$ models converged The third objective is the regularization term which satisfies the probabilistic duality constrains as given in Equation 3 . Specifically, given a correct $\langle q, a \rangle $ pair, we would like to minimize the following loss function, $$ \nonumber l_{dual}(a,q;\theta _{qa}, \theta _{qg}) &= [logP_a(a) + log P(q|a;\theta _{qg}) \\ & - logP_q(q) - logP(a|q;\theta _{qa})]^2$$ (Eq. 9) where $P_a(a)$ and $P_q(q)$ are marginal distributions, which could be easily obtained through language model. $P(a|q;\theta _{qg})$ could also be easily calculated with the markov chain rule: $P(q|a;\theta _{qg}) = \prod _{t=1}^{|q|} P(q_t|q_{<t}, a;\theta _{qg})$ , where the function $P(q_t|q_{<t}, a;\theta _{qg})$ is the same with the decoder of the QG model (detailed in the following section). However, the conditional probability $P(a|q;\theta _{qa})$ is different from the output of the QA model $f_{qa}(a,q;\theta _{qa})$ . To address this, given a question $q$ , we sample a set of answer sentences $A^{\prime }$ , and derive the conditional probability $P(a|q;\theta _{qa})$ based on our QA model with the following equation. $$\nonumber &P(a|q;\theta _{qa}) = \\ &\dfrac{exp(f_{qa}(a,q;\theta _{qa}))}{exp(f_{qa}(a,q;\theta _{qa})) + \sum _{a^{\prime } \in A^{\prime }} exp(f_{qa}(a^{\prime },q;\theta _{qa}))}$$ (Eq. 10) In this way, we learn the models of QA and QG by minimizing the weighted combination between the original loss functions and the regularization term. Relationships with Existing Studies Our work differs from BIBREF5 in that they regard reading comprehension (RC) as the main task, and regard question generation as the auxiliary task to boost the main task RC. In our work, the roles of QA and QG are the same, and our algorithm enables QA and QG to improve the performance of each other simultaneously. Our approach differs from Generative Domain-Adaptive Nets BIBREF5 in that we do not pretrain the QA model. Our QA and QG models are jointly learned from random initialization. Moreover, our QA task differs from RC in that the answer in our task is a sentence rather than a text span from a sentence. Our approach is inspired by dual learning BIBREF6 , BIBREF7 , which leverages the duality between two tasks to improve each other. Different from the dual learning BIBREF6 paradigm, our framework learns both models from scratch and does not need task-specific pretraining. The recently introduced supervised dual learning BIBREF7 has been successfully applied to image recognition, machine translation and sentiment analysis. Our work could be viewed as the first work that leveraging the idea of supervised dual learning for question answering. Our approach differs from Generative Adversarial Nets (GAN) BIBREF8 in two respects. On one hand, the goal of original GAN is to learn a powerful generator, while the discriminative task is regarded as the auxiliary task. The roles of the two tasks in our framework are the same. On the other hand, the discriminative task of GAN aims to distinguish between the real data and the artificially generated data, while we focus on the real QA task. 
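As a concrete illustration of the regularizer, here is a minimal sketch of Eq. 9 together with the renormalization of Eq. 10, assuming the language-model and model log-probabilities have already been computed; the function names and the toy values are ours, not part of the described system.

```python
import math

def p_answer_given_question(score_pos, scores_neg):
    """Eq. 10: turn raw QA scores f_qa(a, q) into P(a | q) by renormalizing
    over the candidate answer and a set of sampled answer sentences A'."""
    scores = [score_pos] + list(scores_neg)
    m = max(scores)                                  # stabilize the softmax
    z = sum(math.exp(s - m) for s in scores)
    return math.exp(score_pos - m) / z

def dual_loss(log_pa, log_q_given_a, log_pq, log_a_given_q):
    """Eq. 9: squared violation of log P_a(a) + log P(q|a; theta_qg)
    = log P_q(q) + log P(a|q; theta_qa) for one correct (q, a) pair."""
    return (log_pa + log_q_given_a - log_pq - log_a_given_q) ** 2

# Toy values for one question-answer pair.
log_pa, log_pq = -35.2, -18.7        # answer / question language model scores
log_q_given_a = -20.1                # from the QG model
log_a_given_q = math.log(p_answer_given_question(4.2, [1.1, -0.3, 0.8]))
print(dual_loss(log_pa, log_q_given_a, log_pq, log_a_given_q))
```

Because the squared term is differentiable in both conditional probabilities, its gradient can simply be added to the task-specific gradients in Eq. 7 and Eq. 8.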
The Question Answering Model We describe the details of the question answer (QA) model in this section. Overall, a QA model could be formulated as a function $f_{qa}(q, a;\theta _{qa})$ parameterized by $\theta _{qa}$ that maps a question-answer pair to a scalar. In the inference process, given a $q$ and a list of candidate answer sentences, $f_{qa}(q, a;\theta _{qa})$ is used to calculate the relevance between $q$ and every candidate $a$ . The top ranked answer sentence is regarded as the output. We develop a neural network based QA model. Specifically, we first represent each word as a low dimensional and real-valued vector, also known as word embedding BIBREF9 , BIBREF10 , BIBREF11 . Afterwards, we use recurrent neural network (RNN) to map a question of variable length to a fixed-length vector. To avoid the problem of gradient vanishing, we use gated recurrent unit (GRU) BIBREF12 as the basic computation unit. The approach recursively calculates the hidden vector $h_{t}$ based on the current word vector $e^q_t$ and the output vector $h_{t-1}$ in the last time step, $$&z_i = \sigma (W_{z}e^q_{i} + U_{z}{h}_{i-1}) \\ &r_i = \sigma (W_{r}e^q_{i} + U_{r}{h}_{i-1}) \\ &\widetilde{h}_i = \tanh (W_{h}e^q_{i} + U_{h}(r_i \odot {h}_{i-1})) \\ &{h}_{i} = z_i \odot \widetilde{h}_i + (1-z_i) \odot {h}_{i-1}$$ (Eq. 12) where $z_i$ and $r_i$ are update and reset gates of s, $\odot $ stands for element-wise multiplication, $\sigma $ is sigmoid function. We use a bi-directional RNN to get the meaning of a question from both directions, and use the concatenation of two last hidden states as the final question vector $v_q$ . We compute the answer sentence vector $v_a$ in the same way. After obtaining $v_q$ and $v_a$ , we implement a simple yet effective way to calculate the relevance between question-sentence pair. Specifically, we represent a question-answer pair as the concatenation of four vectors, namely $v(q, a) = [v_q; v_a; v_q \odot v_a ; e_{c(q,a)}]$ , where $\odot $ means element-wise multiplication, $c(q,a)$ is the number of co-occurred words in $q$ and $a$ . We observe that incorporating the embedding of the word co-occurrence $e^c_{c(q,a)}$ could empirically improve the QA performance. We use an additional embedding matrix $L_c \in \mathbb {R}^{d_c \times |V_c|}$ , where $d_c$ is the dimension of word co-occurrence vector and $v_a$0 is vocabulary size. The values of $v_a$1 are jointly learned during training. The output scalar $v_a$2 is calculated by feeding $v_a$3 to a linear layer followed by $v_a$4 . We feed $v_a$5 to a $v_a$6 layer and use negative log-likelihood as the QA specific loss function. The basic idea of this objective is to classify whether a given question-answer is correct or not. We also implemented a ranking based loss function $v_a$7 , whose basic idea is to assign the correct QA pair a higher score than a randomly select QA pair. However, our empirical results showed that the ranking loss performed worse than the negative log-likelihood loss function. We use log-likelihood as the QA loss function in the experiment. The Question Generation Model We describe the question generation (QG) model in this section. The model is inspired by the recent success of sequence-to-sequence learning in neural machine translation. Specifically, the QG model first calculates the representation of the answer sentence with an encoder, and then takes the answer vector to generate a question in a sequential way with a decoder. 
We will present the details of the encoder and the decoder, respectively. The goal of the encoder is to represent a variable-length answer sentence ${a}$ as a fixed-length continuous vector. The encoder could be implemented with different neural network architectures such as convolutional neural network BIBREF13 , BIBREF14 and recurrent neural network (RNN) BIBREF15 , BIBREF16 . In this work, we use bidirectional RNN based on GRU unit, which is consistent with our QA model as described in Section 3. The concatenation of the last hidden vectors from both directions is used as the output of the encoder, which is also used as the initial hidden state of the decoder. The decoder takes the output of the encoder and generates the question sentence. We implement a RNN based decoder, which works in a sequential way and generates one question word at each time step. The decoder generates a word $q_{t}$ at each time step $t$ based on the representation of $a$ and the previously predicted question words $q_{<t}=\lbrace q_1,q_2,...,q_{t-1}\rbrace $ . This process is formulated as follows. $$p(q|a)=\prod ^{|q|}_{t=1}p(q_{t}|q_{<t},a)$$ (Eq. 14) Specifically, we use an attention-based architecture BIBREF17 , which selectively finds relevant information from the answer sentence when generating the question word. Therefore, the conditional probability is calculated as follows. $$p(q_{t}|q_{<t},a)=f_{dec}(q_{t-1},s_{t}, c_t)$$ (Eq. 15) where $s_{t}$ is the hidden state of GRU based RNN at time step $t$ , and $c_t$ is the attention state at time step $t$ . The attention mechanism assigns a probability/weight to each hidden state in the encoder at one time step, and calculates the attention state $c_t$ through weighted averaging the hidden states of the encoder: $c_{t}=\sum ^{|a|}_{i=1}\alpha _{\langle t,i\rangle }h_i$ . When calculating the attention weight of $h_i$ at time step $t$ , we also take into account of the attention distribution in the last time step. Potentially, the model could remember which contexts from answer sentence have been used before, and does not repeatedly use these words to generate the question words. $$\alpha _{\langle t,i\rangle }=\frac{\exp {[z(s_{t},h_i,\sum ^{N}_{j=1}\alpha _{\langle t-1,j\rangle }h_j)]}}{\sum ^{H}_{i^{\prime }=1}\exp {[z(s_{t},h_{i^{\prime }},\sum ^{N}_{j=1}\alpha _{\langle t-1,j\rangle }h_{j})]}}$$ (Eq. 16) Afterwards, we feed the concatenation of $s_t$ and $c_t$ to a linear layer followed by a $softmax$ function. The output dimension of the $softmax$ layer is equal to the number of top frequent question words (e.g. 30K or 50K) in the training data. The output values of the $softmax$ layer form the probability distribution of the question words to be generated. Furthermore, we observe that question sentences typically include informative but low-frequency words such as named entities or numbers. These low-frequency words are closely related to the answer sentence but could not be well covered in the target vocabulary. To address this, we add a simple yet effective post-processing step which replaces each “unknown word” with the most relevant word from the answer sentence. Following BIBREF18 , we use the attention probability as the relevance score of each word from the answer sentence. Copying mechanism BIBREF19 , BIBREF20 is an alternative solution that adaptively determines whether the generated word comes from the target vocabulary or from the answer sentence. 
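The following is a minimal sketch of one attention step in the spirit of Eq. 16. Since the scoring function $z(\cdot )$ is not spelled out above, a simple linear scorer stands in for it, and all shapes and names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def attention_step(s_t, H, alpha_prev, W):
    """One decoder attention step in the spirit of Eq. 16. The score of each
    encoder state h_i also sees the attention-weighted context of the previous
    step, so the decoder can avoid reusing the same answer words.
    s_t: decoder state (d,);  H: encoder states (n, d);  alpha_prev: (n,);
    W: parameters of a linear scorer standing in for z(.), shape (3d,)."""
    prev_context = alpha_prev @ H                        # sum_j alpha_{t-1,j} h_j
    n = H.shape[0]
    feats = np.concatenate([np.tile(s_t, (n, 1)),        # decoder state s_t
                            H,                           # candidate state h_i
                            np.tile(prev_context, (n, 1))], axis=1)
    scores = feats @ W
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                 # softmax over answer positions
    return alpha, alpha @ H                              # new weights and context c_t

rng = np.random.default_rng(0)
n, d = 6, 4
alpha, c_t = attention_step(rng.normal(size=d), rng.normal(size=(n, d)),
                            np.full(n, 1.0 / n), rng.normal(size=3 * d))
print(alpha.round(3), c_t.round(3))
```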
Since every component of the QG model is differentiable, all the parameters could be learned in an end-to-end way with back propagation. Given a question-answer pair $\langle q,a\rangle $ , where $a$ is the correct answer of the question $q$ , the training objective is to minimize the following negative log-likelihood. $$l_{qg}(q,a)=-\sum ^{|q|}_{t=1}\log [p(y_t|y_{<t},a)]$$ (Eq. 17) In the inference process, we use beam search to get the top- $K$ confident results, where $K$ is the beam size. The inference process stops when the model generates the symbol $\langle eos \rangle $ which stands for the end of sentence. Experiment We describe the experimental setting and report empirical results in this section. Experimental Setting We conduct experiments on three datasets, including MARCO BIBREF4 , SQUAD BIBREF3 , and WikiQA BIBREF2 . The MARCO and SQUAD datasets are originally developed for the reading comprehension (RC) task, the goal of which is to answer a question with a text span from a document. Despite our QA task (answer sentence selection) is different from RC, we use these two datasets because of two reasons. The first reason is that to our knowledge they are the QA datasets that contains largest manually labeled question-answer pairs. The second reason is that, we could derive two QA datasets for answer sentence selection from the original MARCO and SQUAD datasets, with an assumption that the answer sentences containing the correct answer span are correct, and vice versa. We believe that our training framework could be easily applied to RC task, but we that is out of the focus of this work. We also conduct experiments on WikiQA BIBREF2 , which is a benchmark dataset for answer sentence selection. Despite its data size is relatively smaller compared with MARCO and SQUAD, we still apply our algorithm on this data and report empirical results to further compare with existing algorithms. It is worth to note that a common characteristic of MARCO and SQUAD is that the ground truth of the test is invisible to the public. Therefore, we randomly split the original validation set into the dev set and the test set. The statistics of SQUAD and MARCO datasets are given in Table 1 . We use the official split of the WikiQA dataset. We apply exactly the same model to these three datasets. We evaluate our QA system with three standard evaluation metrics: Mean Average Precision (MAP), Mean Reciprocal Rank (MRR) and Precision@1 (P@1) BIBREF23 . It is hard to find a perfect way to automatically evaluate the performance of a QG system. In this work, we use BLEU-4 BIBREF24 score as the evaluation metric, which measures the overlap between the generated question and the ground truth. Implementation Details We train the parameters of the QA model and the QG model simultaneously. We randomly initialize the parameters in both models with a combination of the fan-in and fan-out BIBREF25 . The parameters of word embedding matrices are shared in the QA model and the QG model. In order to learn question and answer specific word meanings, we use two different embedding matrices for question words and answer words. The vocabularies are the most frequent 30K words from the questions and answers in the training data. 
We set the dimension of word embedding as 300, the hidden length of encoder and decoder in the QG model as 512, the hidden length of GRU in the QA model as 100, the dimension of word co-occurrence embedding as 10, the vocabulary size of the word co-occurrence embedding as 10, the hidden length of the attention layer as 30. We initialize the learning rate as 2.0, and use AdaDelta BIBREF26 to adaptively decrease the learning rate. We use mini-batch training, and empirically set the batch size as 64. The sampled answer sentences do not come from the same passage. We get 10 batches (640 instances) and sort them by answer length for accelerating the training process. The negative samples come from these 640 instances, which are from different passages. In this work, we use smoothed bigram language models as $p_a(a)$ and $p_q(q)$ . We also tried trigram language model but did not get improved performance. Alternatively, one could also implement neural language model and jointly learn the parameters in the training process. Results and Analysis We first report results on the MARCO and SQUAD datasets. As the dataset is splitted by ourselves, we do not have previously reported results for comparison. We compare with the following four baseline methods. It has been proven that word co-occurrence is a very simple yet effective feature for this task BIBREF2 , BIBREF22 , so the first two baselines are based on the word co-occurrence between a question sentence and the candidate answer sentence. WordCnt and WgtWordCnt use unnormalized and normalized word co-occurrence. The ranker in these two baselines are trained with with FastTree, which performs better than SVMRank and linear regression in our experiments. We also compare with CDSSM BIBREF21 , which is a very strong neural network approach to model the semantic relatedness of a sentence pair. We further compare with ABCNN BIBREF22 , which has been proven very powerful in various sentence matching tasks. Basic QA is our QA model which does not use the duality between QA and QG. Our ultimate model is abbreviated as Dual QA. The QA performance on MARCO and SQUAD datasets are given in Table 2 . We can find that CDSSM performs better than the word co-occurrence based method on MARCO dataset. On the SQUAD dataset, Dual QA achieves the best performance among all these methods. On the MARCO dataset, Dual QA performs comparably with ABCNN. We can find that Dual QA still yields better accuracy than Basic QA, which shows the effectiveness of the joint training algorithm. It is interesting that word co-occurrence based method (WgtWordCnt) is very strong and hard to beat on the MARCO dataset. Incorporating sophisticated features might obtain improved performance on both datasets, however, this is not the focus of this work and we leave it to future work. Results on the WikiQA dataset is given in Table 3 . On this dataset, previous studies typically report results based on their deep features plus the number of words that occur both in the question and in the answer BIBREF2 , BIBREF22 . We also follow this experimental protocol. We can find that our basic QA model is simple yet effective. The Dual QA model achieves comparably to strong baseline methods. To give a quantitative evaluation of our training framework on the QG model, we report BLEU-4 scores on MARCO and SQUAD datasets. The results of our QG model with or without using joint training are given in Table 5 . 
We can find that, despite the overall BLEU-4 scores are relatively low, using our training algorithm could improve the performance of the QG model. We would like to investigate how the joint training process improves the QA and QG models. To this end, we analyze the results of development set on the SQUAD dataset. We randomly sample several cases that the Basic QA model gets the wrong answers while the Dual QA model obtains the correct results. Examples are given in Table 4 . From these examples, we can find that the questions generated by Dual QG tend to have more word overlap with the correct question, despite sometimes the point of the question is not correct. For example, compared with the Basic QG model, the Dual QG model generates more informative words, such as “green” in the first example, “purpose” in the second example, and “how much” in the third example. We believe this helps QA because the QA model is trained to assign a higher score to the question which looks similar with the generated question. It also helps QG because the QA model is trained to give a higher score to the real question-answer pair, so that generating more answer-alike words gives the generated question a higher QA score. Despite the proposed training framework obtains some improvements on QA and QG, we believe the work could be further improved from several directions. We find that our QG model not always finds the point of the reference question. This is not surprising because the questions from these two reading comprehension datasets only focus on some spans of a sentence, rather than the entire sentence. Therefore, the source side (answer sentence) carries more information than the target side (question sentence). Moreover, we do not use the answer position information in our QG model. Accordingly, the model may pay attention to the point which is different from the annotator's direction, and generates totally different questions. We are aware of incorporating the position of the answer span could get improved performance BIBREF29 , however, the focus of this work is a sentence level QA task rather than reading comprehension. Therefore, despite MARCO and SQUAD are of large scale, they are not the desirable datasets for investigating the duality of our QA and QG tasks. Pushing forward this area also requires large scale sentence level QA datasets. Discussion We would like to discuss our understanding about the duality of QA and QG, and also present our observations based on the experiments. In this work, “duality” means that the QA task and the QG task are equally important. This characteristic makes our work different from Generative Domain-Adaptive Nets BIBREF5 and Generative Adversarial Nets (GAN) BIBREF8 , both of which have a main task and regard another task as the auxiliary one. There are different ways to leverage the “duality” of QA and QG to improve both tasks. We categorize them into two groups. The first group is about the training process and the second group is about the inference process. From this perspective, dual learning BIBREF6 is a solution that leverages the duality in the training process. In particular, dual learning first pretrains the models for two tasks separately, and then iteratively fine-tunes the models. Our work also belongs to the first group. Our approach uses the duality as a regularization item to guide the learning of QA and QG models simultaneously from scratch. 
After the QA and QG models are trained, we could also use the duality to improve the inference process, which falls into the second group. The process could be conducted on separately trained models or the models that jointly trained with our approach. This is reasonable because the QA model could directly add one feature to consider $q$ and $q^{\prime }$ , where $q^{\prime }$ is the question generated by the QG model. The first example in Table 4 also motivates this direction. Similarly, the QA model could give each $\langle q^{\prime }, a \rangle $ a score which could be assigned to each generated question $q^{\prime }$ . In this work we do not apply the duality in the inference process. We leave it as a future plan. This work could be improved by refining every component involved in our framework. For example, we use a simple yet effective QA model, which could be improved by using more complex neural network architectures BIBREF30 , BIBREF22 or more external resources. We use a smoothed language model for both question and answer sentences, which could be replaced by designed neural language models whose parameters are jointly learned together with the parameters in QA and QG models. The QG model could be improved as well, for example, by developing more complex neural network architectures to take into account of more information about the answer sentence in the generation process. In addition, it is also very important to investigate an automatic evaluation metric to effectively measure the performance of a QG system. BLEU score only measures the literal similarity between the generated question and the ground truth. However, it does not measure whether the question really looks like a question or not. A desirable evaluation system should also have the ability to judge whether the generated question could be answered by input sentence, even if the generated question use totally different words to express the meaning. Related Work Our work relates to existing studies on question answering (QA) and question generation (QG). There are different types of QA tasks including text-level QA BIBREF31 , knowledge based QA BIBREF32 , community based QA BIBREF33 and the reading comprehension BIBREF3 , BIBREF4 . Our work belongs to text based QA where the answer is a sentence. In recent years, neural network approaches BIBREF30 , BIBREF31 , BIBREF22 show promising ability in modeling the semantic relation between sentences and achieve strong performances on QA tasks. Question generation also draws a lot of attentions in recent years. QG is very necessary in real application as it is always time consuming to create large-scale QA datasets. In literature, BIBREF34 use Minimal Recursion Semantics (MRS) to represent the meaning of a sentence, and then realize the MSR structure into a natural language question. BIBREF35 present a overgenerate-and-rank framework consisting of three stages. They first transform a sentence into a simpler declarative statement, and then transform the statement to candidate questions by executing well-defined syntactic transformations. Finally, a ranker is used to select the questions of high-quality. BIBREF36 focus on generating questions from a topic. They first get a list of texts related to the topic, and then generate questions by exploiting the named entity information and the predicate argument structures of the texts. BIBREF37 propose an ontology-crowd-relevance approach to generate questions from novel text. 
They encode the original text in a low-dimensional ontology, and then align the question templates obtained via crowd-sourcing to that space. A final ranker is used to select the top relevant templates. There also exists some studies on generating questions from knowledge base BIBREF38 , BIBREF39 . For example, BIBREF39 develop a neural network approach which takes a knowledge fact (including a subject, an object, and a predicate) as input, and generates the question with a recurrent neural network. Recent studies also investigate question generation for the reading comprehension task BIBREF40 , BIBREF29 . The approaches are typically based on the encoder-decoder framework, which could be conventionally learned in an end-to-end way. As the answer is a text span from the sentence/passage, it is helpful to incorporate the position of the answer span BIBREF29 . In addition, the computer vision community also pays attention to generating natural language questions about an image BIBREF41 . Conclusion We focus on jointly training the question answering (QA) model and the question generation (QG) model in this paper. We exploit the “duality” of QA and QG tasks, and introduce a training framework to leverage the probabilistic correlation between the two tasks. In our approach, the “duality” is used as a regularization term to influence the learning of QA and QG models. We implement simple yet effective QA and QG models, both of which are neural network based approaches. Experimental results show that the proposed training framework improves both QA and QG on three datasets.
Q: How does this compare to contextual embedding methods? Text: Introduction To model language, we must represent words. We can imagine representing every word with a binary one-hot vector corresponding to a dictionary position. But such a representation contains no valuable semantic information: distances between word vectors represent only differences in alphabetic ordering. Modern approaches, by contrast, learn to map words with similar meanings to nearby points in a vector space BIBREF0 , from large datasets such as Wikipedia. These learned word embeddings have become ubiquitous in predictive tasks. BIBREF1 recently proposed an alternative view, where words are represented by a whole probability distribution instead of a deterministic point vector. Specifically, they model each word by a Gaussian distribution, and learn its mean and covariance matrix from data. This approach generalizes any deterministic point embedding, which can be fully captured by the mean vector of the Gaussian distribution. Moreover, the full distribution provides much richer information than point estimates for characterizing words, representing probability mass and uncertainty across a set of semantics. However, since a Gaussian distribution can have only one mode, the learned uncertainty in this representation can be overly diffuse for words with multiple distinct meanings (polysemies), in order for the model to assign some density to any plausible semantics BIBREF1 . Moreover, the mean of the Gaussian can be pulled in many opposing directions, leading to a biased distribution that centers its mass mostly around one meaning while leaving the others not well represented. In this paper, we propose to represent each word with an expressive multimodal distribution, for multiple distinct meanings, entailment, heavy tailed uncertainty, and enhanced interpretability. For example, one mode of the word `bank' could overlap with distributions for words such as `finance' and `money', and another mode could overlap with the distributions for `river' and `creek'. It is our contention that such flexibility is critical for both qualitatively learning about the meanings of words, and for optimal performance on many predictive tasks. In particular, we model each word with a mixture of Gaussians (Section "Word Representation" ). We learn all the parameters of this mixture model using a maximum margin energy-based ranking objective BIBREF2 , BIBREF1 (Section "Discussion" ), where the energy function describes the affinity between a pair of words. For analytic tractability with Gaussian mixtures, we use the inner product between probability distributions in a Hilbert space, known as the expected likelihood kernel BIBREF3 , as our energy function (Section "Energy Function" ). Additionally, we propose transformations for numerical stability and initialization "Implementation" , resulting in a robust, straightforward, and scalable learning procedure, capable of training on a corpus with billions of words in days. We show that the model is able to automatically discover multiple meanings for words (Section "Word Representation"7 ), and significantly outperform other alternative methods across several tasks such as word similarity and entailment (Section "Word Similarity" , "Word Similarity for Polysemous Words" , "Word Entailment" ). We have made code available at http://github.com/benathi/word2gm, where we implement our model in Tensorflow tensorflow. 
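Before turning to related work, the following minimal sketch gives a concrete picture of the representation being proposed: each word carries $K$ component means, covariances and mixture weights, and its density can be evaluated at any point in the embedding space. The class name, the diagonal-covariance choice and the initialization scale are illustrative assumptions, not the released word2gm implementation.

```python
import numpy as np

class GaussianMixtureEmbedding:
    """One word as a K-component mixture of diagonal Gaussians: component
    means (K, D), log-variances (K, D) and mixture weights (K,)."""

    def __init__(self, K, D, rng):
        self.mu = rng.normal(scale=0.1, size=(K, D))     # one mean per sense
        self.log_var = np.zeros((K, D))                  # start from unit variance
        self.w = np.full(K, 1.0 / K)                     # mixture weights, sum to 1

    def density(self, x):
        """Mixture density f_w(x) at a query vector x of size D."""
        var = np.exp(self.log_var)
        log_norm = -0.5 * (x.size * np.log(2 * np.pi) + self.log_var.sum(axis=1))
        log_comp = log_norm - 0.5 * (((x - self.mu) ** 2) / var).sum(axis=1)
        return float(np.sum(self.w * np.exp(log_comp)))

rng = np.random.default_rng(1)
rock = GaussianMixtureEmbedding(K=2, D=50, rng=rng)      # e.g. stone sense vs. music sense
print(rock.density(rng.normal(size=50)))
```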
Related Work In the past decade, there has been an explosion of interest in word vector representations. word2vec, arguably the most popular word embedding, uses continuous bag of words and skip-gram models, in conjunction with negative sampling for efficient conditional probability estimation BIBREF0 , BIBREF4 . Other popular approaches use feedforward BIBREF5 and recurrent neural network language models BIBREF6 , BIBREF7 , BIBREF8 to predict missing words in sentences, producing hidden layers that can act as word embeddings that encode semantic information. They employ conditional probability estimation techniques, including hierarchical softmax BIBREF9 , BIBREF10 , BIBREF11 and noise contrastive estimation BIBREF12 . A different approach to learning word embeddings is through factorization of word co-occurrence matrices such as GloVe embeddings BIBREF13 . The matrix factorization approach has been shown to have an implicit connection with skip-gram and negative sampling BIBREF14 . Bayesian matrix factorization where row and columns are modeled as Gaussians has been explored in BIBREF15 and provides a different probabilistic perspective of word embeddings. In exciting recent work, BIBREF1 propose a Gaussian distribution to model each word. Their approach is significantly more expressive than typical point embeddings, with the ability to represent concepts such as entailment, by having the distribution for one word (e.g. `music') encompass the distributions for sets of related words (`jazz' and `pop'). However, with a unimodal distribution, their approach cannot capture multiple distinct meanings, much like most deterministic approaches. Recent work has also proposed deterministic embeddings that can capture polysemies, for example through a cluster centroid of context vectors BIBREF16 , or an adapted skip-gram model with an EM algorithm to learn multiple latent representations per word BIBREF17 . BIBREF18 also extends skip-gram with multiple prototype embeddings where the number of senses per word is determined by a non-parametric approach. BIBREF19 learns topical embeddings based on latent topic models where each word is associated with multiple topics. Another related work by BIBREF20 models embeddings in infinite-dimensional space where each embedding can gradually represent incremental word sense if complex meanings are observed. Probabilistic word embeddings have only recently begun to be explored, and have so far shown great promise. In this paper, we propose, to the best of our knowledge, the first probabilistic word embedding that can capture multiple meanings. We use a Gaussian mixture model which allows for a highly expressive distributions over words. At the same time, we retain scalability and analytic tractability with an expected likelihood kernel energy function for training. The model and training procedure harmonize to learn descriptive representations of words, with superior performance on several benchmarks. Methodology In this section, we introduce our Gaussian mixture (GM) model for word representations, and present a training method to learn the parameters of the Gaussian mixture. This method uses an energy-based maximum margin objective, where we wish to maximize the similarity of distributions of nearby words in sentences. We propose an energy function that compliments the GM model by retaining analytic tractability. We also provide critical practical details for numerical stability, hyperparameters, and initialization. 
Word Representation We represent each word $w$ in a dictionary as a Gaussian mixture with $K$ components. Specifically, the distribution of $w$ , $f_w$ , is given by the density $$f_w(\vec{x}) &= \sum _{i=1}^K p_{w,i} \ \mathcal {N}\left[ \vec{x}; \vec{\mu }_{w,i} , \Sigma _{w,i} \right] \\ &= \sum _{i=1}^K \frac{p_{w,i} }{\sqrt{2 \pi | \Sigma _{w,i} | }} e^{-\frac{1}{2} (\vec{x} - \vec{\mu }_{w,i})^{\top } \Sigma _{w,i}^{-1} (\vec{x} - \vec{\mu }_{w,i})} \,, $$ (Eq. 2) where $\sum _{i=1}^K p_{w,i} = 1$ . The mean vectors $\vec{\mu }_{w,i}$ represent the location of the $i^{th}$ component of word $w$ , and are akin to the point embeddings provided by popular approaches like word2vec. $p_{w,i}$ represents the component probability (mixture weight), and $\Sigma _{w,i}$ is the component covariance matrix, containing uncertainty information. Our goal is to learn all of the model parameters $\vec{\mu }_{w,i}, p_{w,i}, \Sigma _{w,i}$ from a corpus of natural sentences to extract semantic information of words. Each Gaussian component's mean vector of word $w$ can represent one of the word's distinct meanings. For instance, one component of a polysemous word such as `rock' should represent the meaning related to `stone' or `pebbles', whereas another component should represent the meaning related to music such as `jazz' or `pop'. Figure 1 illustrates our word embedding model, and the difference between multimodal and unimodal representations, for words with multiple meanings. Skip-Gram The training objective for learning $\theta = \lbrace \vec{\mu }_{w,i}, p_{w,i}, \Sigma _{w,i}\rbrace $ draws inspiration from the continuous skip-gram model BIBREF0 , where word embeddings are trained to maximize the probability of observing a word given another nearby word. This procedure follows the distributional hypothesis that words occurring in natural contexts tend to be semantically related. For instance, the words `jazz' and `music' tend to occur near one another more often than `jazz' and `cat'; hence, `jazz' and `music' are more likely to be related. The learned word representation contains useful semantic information and can be used to perform a variety of NLP tasks such as word similarity analysis, sentiment classification, modelling word analogies, or as a preprocessed input for complex system such as statistical machine translation. Energy-based Max-Margin Objective Each sample in the objective consists of two pairs of words, $(w,c)$ and $(w,c^{\prime })$ . $w$ is sampled from a sentence in a corpus and $c$ is a nearby word within a context window of length $\ell $ . For instance, a word $w = $ `jazz' which occurs in the sentence `I listen to jazz music' has context words (`I', `listen', `to' , `music'). $c^{\prime }$ is a negative context word (e.g. `airplane') obtained from random sampling. The objective is to maximize the energy between words that occur near each other, $w$ and $c$ , and minimize the energy between $w$ and its negative context $c^{\prime }$ . This approach is similar to negative sampling BIBREF0 , BIBREF4 , which contrasts the dot product between positive context pairs with negative context pairs. The energy function is a measure of similarity between distributions and will be discussed in Section "Energy Function" . 
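For concreteness, the following is a minimal sketch of how positive and negative context pairs could be drawn from a tokenized sentence; it deliberately omits the subsampling of frequent words and the distorted unigram negative distribution described below, and the helper names are ours.

```python
import random

def training_triples(sentence, vocab, window=5, seed=0):
    """Yield (w, c, c_neg) triples from one tokenized sentence: c is a word
    inside the context window of w, and c_neg is a word drawn at random as
    the negative context."""
    rng = random.Random(seed)
    for i, w in enumerate(sentence):
        lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                yield w, sentence[j], rng.choice(vocab)

sentence = "i listen to jazz music".split()
vocab = ["airplane", "dog", "finance", "river", "pop"]
for triple in list(training_triples(sentence, vocab, window=2))[:4]:
    print(triple)
```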
We use a max-margin ranking objective BIBREF2 , used for Gaussian embeddings in BIBREF1 , which pushes the similarity of a word and its positive context higher than that of its negative context by a margin $m$ : $$L_\theta (w, c, c^{\prime }) = \max \left(0, \; m - \log E_\theta (w, c) + \log E_\theta (w, c^{\prime }) \right)$$ (Eq. 6) This objective can be minimized by mini-batch stochastic gradient descent with respect to the parameters $\theta = \lbrace \vec{\mu }_{w,i}, p_{w,i}, \Sigma _{w,i}\rbrace $ – the mean vectors, covariance matrices, and mixture weights – of our multimodal embedding in Eq. ( 2 ). We use a word sampling scheme similar to the implementation in word2vec BIBREF0 , BIBREF4 to balance the importance of frequent words and rare words. Frequent words such as `the', `a', `to' are not as meaningful as relatively less frequent words such as `dog', `love', `rock', and we are often more interested in learning the semantics of the less frequently observed words. We use subsampling to improve the performance of learning word vectors BIBREF4 . This technique discards word $w_i$ with probability $P(w_i) = 1 - \sqrt{t/f(w_i)}$ , where $f(w_i)$ is the frequency of word $w_i$ in the training corpus and $t$ is a frequency threshold. To generate negative context words, each word type $w_i$ is sampled according to a distribution $P_n(w_i) \propto U(w_i)^{3/4}$ , which is a distorted version of the unigram distribution $U(w_i)$ that also serves to diminish the relative importance of frequent words. Both subsampling and this choice of negative distribution have proven effective in word2vec training BIBREF4 . Energy Function For vector representations of words, the usual choice of similarity measure (energy function) is a dot product between two vectors. Our word representations are distributions instead of point vectors and therefore need a measure that reflects not only the point similarity, but also the uncertainty. We propose to use the expected likelihood kernel, which is a generalization of an inner product between vectors to an inner product between distributions BIBREF3 . That is, $ E(f,g) = \int f(x) g(x) \ d x = \langle f, g \rangle _{L_2} $ where $\langle \cdot , \cdot \rangle _{L_2} $ denotes the inner product in Hilbert space $L_2$ . We choose this form of energy since it can be evaluated in a closed form given our choice of probabilistic embedding in Eq. ( 2 ). For Gaussian mixtures $f,g$ representing the words $w_f, w_g$ , $f(x) = \sum _{i=1}^K p_i \mathcal {N}(x; \vec{\mu }_{f,i} , \Sigma _{f,i} ) $ and $g(x) = \sum _{i=1}^K q_i \mathcal {N}(x; \vec{\mu }_{g,i} , \Sigma _{g,i} )$ , $\sum _{i =1}^K p_i = 1 $ , and $\sum _{i =1}^K q_i = 1$ , we find (see Section "Derivation of Expected Likelihood Kernel" ) the log energy is $$ \log E_\theta (f,g) = \log \sum _{j=1}^K \sum _{i=1}^K p_i q_j e^{\xi _{i,j}}$$ (Eq. 9) where $$\xi _{i,j} \equiv \log \mathcal {N}(0; \vec{\mu }_{f,i} - \vec{\mu }_{g,j}, \Sigma _{f,i} + \Sigma _{g,j} ) = - \frac{1}{2} \log \det ( \Sigma _{f,i} + \Sigma _{g,j} ) - \frac{D}{2} \log (2 \pi ) - \frac{1}{2} (\vec{\mu }_{f,i} - \vec{\mu }_{g,j} )^\top (\Sigma _{f,i} + \Sigma _{g,j} )^{-1} (\vec{\mu }_{f,i} - \vec{\mu }_{g,j} ) $$ (Eq. 10) We call the term $\xi _{i,j}$ the partial (log) energy. Observe that this term captures the similarity between the $i^{th}$ meaning of word $w_f$ and the $j^{th}$ meaning of word $w_g$ .
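A minimal sketch of Eqs. (6), (9) and (10) for spherical covariances ($\Sigma = \sigma ^2 I$) is given below; the parameter values in the toy usage are placeholders, and the numerical stabilization of the energy (discussed in the implementation notes later) is omitted here for clarity.

```python
import numpy as np

def log_partial_energy(mu_f, var_f, mu_g, var_g):
    """xi_{i,j} = log N(0; mu_{f,i} - mu_{g,j}, (var_f + var_g) I), Eq. (10)."""
    D = mu_f.shape[0]
    s = var_f + var_g                                 # spherical Sigma_f + Sigma_g
    diff = mu_f - mu_g
    return (-0.5 * D * np.log(s) - 0.5 * D * np.log(2.0 * np.pi)
            - 0.5 * np.dot(diff, diff) / s)

def log_energy(means_f, vars_f, p, means_g, vars_g, q):
    """log E(f, g) = log sum_{i,j} p_i q_j exp(xi_{i,j}), Eq. (9)."""
    total = 0.0
    for i in range(len(p)):
        for j in range(len(q)):
            total += p[i] * q[j] * np.exp(
                log_partial_energy(means_f[i], vars_f[i], means_g[j], vars_g[j]))
    return np.log(total)

def max_margin_loss(word, pos, neg, m=1.0):
    """Hinge loss of Eq. (6); each argument is a (means, variances, weights) triple."""
    return max(0.0, m - log_energy(*word, *pos) + log_energy(*word, *neg))

# Toy usage with K=2 components in D=5 dimensions (placeholder values).
rng = np.random.default_rng(0)
make = lambda: (rng.standard_normal((2, 5)), np.full(2, 0.05), np.array([0.5, 0.5]))
print(max_margin_loss(make(), make(), make()))
```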
The total energy in Equation 9 is the sum over all pairs of partial energies, weighted accordingly by the mixture probabilities $p_i$ and $q_j$ . The term $- (\vec{\mu }_{f,i} - \vec{\mu }_{g,j} )^\top (\Sigma _{f,i} + \Sigma _{g,j} )^{-1} (\vec{\mu }_{f,i} - \vec{\mu }_{g,j} ) $ in $\xi _{i,j}$ captures the difference between the mean vectors of the semantic pair $(w_f, i)$ and $(w_g, j)$ . If the semantic uncertainty (covariance) for both pairs is low, this term has more importance relative to other terms, due to the inverse covariance scaling. We observe that the loss function $L_\theta $ in Eq. (6) attains a low value when $E_\theta (w,c)$ is relatively high. High values of $E_\theta (w,c)$ can be achieved when the component means across different words $\vec{\mu }_{f,i}$ and $\vec{\mu }_{g,j}$ are close together (e.g., similar point representations). High energy can also be achieved by large values of $\Sigma _{f,i}$ and $\Sigma _{g,j}$ , which wash out the importance of the mean vector difference. The log-determinant term $- \frac{1}{2} \log \det ( \Sigma _{f,i} + \Sigma _{g,j} )$ serves as a regularizer that prevents the covariances from being pushed too high at the expense of learning a good mean embedding. At the beginning of training, the $\xi _{i,j}$ are roughly on the same scale for all pairs $(i,j)$ . During this time, all components learn the signals from the word occurrences equally. As training progresses and the semantic representation of each mixture becomes clearer, one of the $\xi _{i,j}$ terms can become predominantly higher than the others, giving rise to the semantic pair that is most related. The negative KL divergence is another sensible choice of energy function, providing an asymmetric measure between word distributions. However, unlike the expected likelihood kernel, KL divergence does not have a closed form if the two distributions are Gaussian mixtures. Experiments We have introduced a model for multi-prototype embeddings, which expressively captures word meanings with whole probability distributions. We show that our combination of energy and objective functions, proposed in the preceding sections, enables one to learn interpretable multimodal distributions through unsupervised training, for describing words with multiple distinct meanings. By representing multiple distinct meanings, our model also reduces the unnecessarily large variance of a Gaussian embedding model, and has improved results on word entailment tasks. To learn the parameters of the proposed mixture model, we train on a concatenation of two datasets: UKWAC (2.5 billion tokens) and Wackypedia (1 billion tokens) BIBREF21 . We discard words that occur fewer than 100 times in the corpus, which results in a vocabulary size of $314,129$ words. Our word sampling scheme, described at the end of the max-margin objective section, is similar to that of word2vec, with one negative context word for each positive context word (a small sketch of this scheme is given below). After training, we obtain learned parameters $\lbrace \vec{\mu }_{w,i}, \Sigma _{w,i}, p_i\rbrace _{i=1}^K$ for each word $w$ . We treat the mean vector $\vec{\mu }_{w,i}$ as the embedding of the $i^{\text{th}}$ mixture component, with the covariance matrix $\Sigma _{w,i}$ representing its subtlety and uncertainty. We perform a qualitative evaluation to show that our embeddings learn meaningful multi-prototype representations, and compare to existing models using a quantitative evaluation on word similarity datasets and word entailment. We name our model Word to Gaussian Mixture (w2gm), in contrast to Word to Gaussian (w2g) BIBREF1 .
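As referenced above, a hedged sketch of the word2vec-style sampling scheme (subsampling of frequent words and the distorted-unigram negative distribution) follows. The toy corpus and the threshold value are illustrative placeholders; the actual experiments use $t = 10^{-5}$ on the full training corpus.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Toy corpus stands in for UKWAC + Wackypedia; t is the subsampling threshold.
corpus = "the cat sat on the mat the dog sat on the rug".split()
t = 1e-1
counts = Counter(corpus)
total = sum(counts.values())
freq = {w: c / total for w, c in counts.items()}

def keep(word):
    """Subsampling: discard word w with probability 1 - sqrt(t / f(w))."""
    return rng.random() < np.sqrt(t / freq[word])

subsampled = [w for w in corpus if keep(w)]

# Negative-sampling distribution P_n(w) proportional to U(w)^(3/4) over word types.
types = list(counts.keys())
probs = np.array([freq[w] ** 0.75 for w in types])
probs /= probs.sum()
negative_word = rng.choice(types, p=probs)   # one negative word per positive pair
print(subsampled, negative_word)
```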
Unless stated otherwise, w2g refers to our implementation of the w2gm model with one mixture component. Hyperparameters Unless stated otherwise, we experiment with $K=2$ components for the w2gm model, but we have results and discussion of $K=3$ at the end of section 4.3. We primarily consider the spherical case for computational efficiency. We note that for diagonal or spherical covariances, the energy can be computed very efficiently, since the matrix inversion would simply require $\mathcal {O}(d)$ computation instead of $\mathcal {O}(d^3)$ for a full matrix. Empirically, we have found that diagonal covariance matrices become roughly spherical after training. Indeed, for these relatively high dimensional embeddings, there are sufficient degrees of freedom for the mean vectors to be learned such that the covariance matrices need not be asymmetric. Therefore, we perform all evaluations with spherical covariance models. Models used for evaluation have dimension $D=50$ and use context window $\ell = 10$ unless stated otherwise. We provide additional hyperparameters and training details in the supplementary material ( "Implementation" ). Similarity Measures Since our word embeddings contain multiple vectors and uncertainty parameters per word, we use the following measures that generalize similarity scores. These measures pick out the component pair with maximum similarity and therefore determine the meanings that are most relevant. A natural choice for a similarity score is the expected likelihood kernel, an inner product between distributions, which we discussed in Section "Energy Function" . This metric incorporates the uncertainty from the covariance matrices in addition to the similarity between the mean vectors. A second measure is the maximum cosine similarity, which measures the maximum similarity of mean vectors among all pairs of mixture components between distributions $f$ and $g$ . That is, $\displaystyle d(f,g) = \max _{i,j= 1, \hdots , K} \frac{ \langle \mathbf {\mu }_{f,i}, \mathbf {\mu }_{g,j} \rangle }{ ||\mathbf {\mu }_{f,i}|| \cdot || \mathbf {\mu }_{g,j} || }$ , which corresponds to matching the meanings of $f$ and $g$ that are the most similar. For a Gaussian embedding, maximum similarity reduces to the usual cosine similarity. Cosine similarity is popular for evaluating embeddings. However, our training objective directly involves the Euclidean distance in Eq. ( 10 ), as opposed to the dot product of vectors as in word2vec. Therefore, we also consider the minimum Euclidean distance: $\displaystyle d(f,g) = \min _{i,j= 1, \hdots , K} [ || \mathbf {\mu }_{f,i} - \mathbf {\mu }_{g,j} || ] $ . Qualitative Evaluation In Table 1 , we show examples of polysemous words and their nearest neighbors in the embedding space to demonstrate that our trained embeddings capture multiple word senses. For instance, a word such as `rock' that could mean either `stone' or `rock music' should have each of its meanings represented by a distinct Gaussian component. Our results for a mixture of two Gaussians model confirm this hypothesis: we observe that the 0th component of `rock' is related to (`basalt', `boulders') and the 1st component is related to (`indie', `funk', `hip-hop'). Similarly, the word `bank' has its 0th component representing the river bank and the 1st component representing the financial bank. By contrast, in Table 1 (bottom), we see that for Gaussian embeddings with one mixture component, nearest neighbors of polysemous words are predominantly related to a single meaning.
For instance, `rock' mostly has neighbors related to rock music and `bank' mostly related to the financial bank. The alternative meanings of these polysemous words are not well represented in the embeddings. As a numerical example, the cosine similarity between `rock' and `stone' for the Gaussian representation of BIBREF1 is only $0.029$ , much lower than the cosine similarity $0.586$ between the 0th component of `rock' and `stone' in our multimodal representation. In cases where a word only has a single popular meaning, the mixture components can be fairly close; for instance, one component of `stone' is close to (`stones', `stonework', `slab') and the other to (`carving, `relic', `excavated'), which reflects subtle variations in meanings. In general, the mixture can give properties such as heavy tails and more interesting unimodal characterizations of uncertainty than could be described by a single Gaussian. We provide an interactive visualization as part of our code repository: https://github.com/benathi/word2gm#visualization that allows real-time queries of words' nearest neighbors (in the embeddings tab) for $K=1, 2, 3$ components. We use a notation similar to that of Table 1 , where a token w:i represents the component i of a word w. For instance, if in the $K=2$ link we search for bank:0, we obtain the nearest neighbors such as river:1, confluence:0, waterway:1, which indicates that the 0th component of `bank' has the meaning `river bank'. On the other hand, searching for bank:1 yields nearby words such as banking:1, banker:0, ATM:0, indicating that this component is close to the `financial bank'. We also have a visualization of a unimodal (w2g) for comparison in the $K=1$ link. In addition, the embedding link for our Gaussian mixture model with $K=3$ mixture components can learn three distinct meanings. For instance, each of the three components of `cell' is close to (`keypad', `digits'), (`incarcerated', `inmate') or (`tissue', `antibody'), indicating that the distribution captures the concept of `cellphone', `jail cell', or `biological cell', respectively. Due to the limited number of words with more than 2 meanings, our model with $K=3$ does not generally offer substantial performance differences to our model with $K=2$ ; hence, we do not further display $K=3$ results for compactness. Word Similarity We evaluate our embeddings on several standard word similarity datasets, namely, SimLex BIBREF22 , WS or WordSim-353, WS-S (similarity), WS-R (relatedness) BIBREF23 , MEN BIBREF24 , MC BIBREF25 , RG BIBREF26 , YP BIBREF27 , MTurk(-287,-771) BIBREF28 , BIBREF29 , and RW BIBREF30 . Each dataset contains a list of word pairs with a human score of how related or similar the two words are. We calculate the Spearman correlation BIBREF31 between the labels and our scores generated by the embeddings. The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels. The correlation results are shown in Table 2 using the scores generated from the expected likelihood kernel, maximum cosine similarity, and maximum Euclidean distance. We show the results of our Gaussian mixture model and compare the performance with that of word2vec and the original Gaussian embedding by BIBREF1 . We note that our model of a unimodal Gaussian embedding w2g also outperforms the original model, which differs in model hyperparameters and initialization, for most datasets. 
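To make the evaluation protocol just described concrete, here is a hedged sketch of scoring word pairs with the maximum cosine similarity over mixture components and computing the Spearman correlation against human judgments. The embeddings and benchmark entries below are random, illustrative placeholders, not trained parameters or real dataset entries.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def max_cosine(means_f, means_g):
    """d(f, g) = max over component pairs of cos(mu_{f,i}, mu_{g,j})."""
    best = -np.inf
    for mu_f in means_f:
        for mu_g in means_g:
            cos = mu_f @ mu_g / (np.linalg.norm(mu_f) * np.linalg.norm(mu_g))
            best = max(best, cos)
    return best

# Placeholder "trained" embeddings (K=2 components, D=50) and benchmark entries.
words = ["rock", "stone", "jazz", "bank", "money"]
embeddings = {w: rng.standard_normal((2, 50)) for w in words}
benchmark = [("rock", "stone", 7.2), ("rock", "jazz", 6.8), ("bank", "money", 8.1)]

human = [score for _, _, score in benchmark]
model = [max_cosine(embeddings[w1], embeddings[w2]) for w1, w2, _ in benchmark]
rho, _ = spearmanr(human, model)
print("Spearman correlation:", rho)
```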
Our multi-prototype model w2gm also performs better than skip-gram or Gaussian embedding methods on many datasets, namely, WS, WS-R, MEN, MC, RG, YP, MT-287, RW. The maximum cosine similarity yields the best performance on most datasets; however, the minimum Euclidean distance is a better metric for the datasets MC and RW. These results are consistent for both the single-prototype and the multi-prototype models. We also compare our results on WordSim-353 with the multi-prototype embedding methods by BIBREF16 and BIBREF18 , shown in Table 3 . We observe that our single-prototype model w2g is competitive compared to models by BIBREF16 , even without using a corpus with stop words removed. This could be due to the auto-calibration of importance via the covariance learning, which decreases the importance of very frequent words such as `the', `to', `a', etc. Moreover, our multi-prototype model substantially outperforms the model of BIBREF16 and the MSSG model of BIBREF18 on the WordSim-353 dataset. Word Similarity for Polysemous Words We use the dataset SCWS introduced by BIBREF16 , where word pairs are chosen to have variations in meanings of polysemous and homonymous words. We compare our method with the multi-prototype models by Huang BIBREF16 , Tian BIBREF17 , Chen BIBREF32 , and the MSSG model by BIBREF18 . We note that the Chen model uses an external lexical source, WordNet, which gives it an extra advantage. We use several metrics to calculate the scores for the Spearman correlation. MaxSim refers to the maximum cosine similarity. AveSim is the average of the cosine similarities with respect to the component probabilities. In Table 4 , the model w2g performs the best among all single-prototype models for either 50 or 200 vector dimensions. Our model w2gm performs competitively compared to other multi-prototype models. In SCWS, the gain in flexibility from moving to a probability density approach appears to dominate over the effects of using a multi-prototype. In most other examples, we see w2gm surpass w2g, where the multi-prototype structure is just as important for good performance as the probabilistic representation. Note that other models also use the AvgSimC metric, which exploits context information and can yield better correlation BIBREF16 , BIBREF32 . We report the numbers using AvgSim or MaxSim from the existing models, which are more comparable to our performance with MaxSim. Reduction in Variance of Polysemous Words One motivation for our Gaussian mixture embedding is to model word uncertainty more accurately than Gaussian embeddings, which can have overly large variances for polysemous words (in order to assign some mass to all of the distinct meanings). We see that our Gaussian mixture model does indeed reduce the variances of each component for such words. For instance, we observe that the word rock in w2g has a much higher variance per dimension ( $e^{-1.8} \approx 0.165$ ) compared to that of the Gaussian components of rock in w2gm (which have variances of roughly $e^{-2.5} \approx 0.082$ ). We also see, in the next section, that w2gm has desirable quantitative behavior for word entailment. Word Entailment We evaluate our embeddings on the word entailment dataset from BIBREF33 . The lexical entailment between words is denoted by $w_1 \models w_2$ , which means that all instances of $w_1$ are $w_2$ . The entailment dataset contains positive pairs such as aircraft $\models $ vehicle and negative pairs such as aircraft $\lnot \models $ insect.
We generate entailment scores for word pairs and find the best threshold, measured by Average Precision (AP) or F1 score, which identifies negative versus positive entailment. We use the maximum cosine similarity and the minimum KL divergence, $\displaystyle d(f,g) = \min _{i,j= 1, \hdots , K} KL(f_i \,||\, g_j)$ , where $f_i$ and $g_j$ denote the Gaussian components of $f$ and $g$ , for entailment scores. The minimum KL divergence is similar to the maximum cosine similarity, but also incorporates the embedding uncertainty. In addition, KL divergence is an asymmetric measure, which is more suitable for certain tasks such as word entailment, where the relationship is unidirectional. For instance, $w_1 \models w_2$ does not imply $w_2 \models w_1$ . Indeed, aircraft $\models $ vehicle does not imply vehicle $\models $ aircraft, since all aircraft are vehicles but not all vehicles are aircraft. The difference between $KL(w_1 || w_2)$ and $KL(w_2 || w_1)$ distinguishes which word distribution encompasses another distribution, as demonstrated in Figure 1 . Table 5 shows the results of our w2gm model versus the Gaussian embedding model w2g. For both models, with window sizes 5 and 10, we observe a trend that the KL metric yields improvements (in both AP and F1) over cosine similarity. In addition, w2gm generally outperforms w2g. The multi-prototype model estimates the meaning uncertainty better, since it is no longer constrained to be unimodal, leading to better characterizations of entailment. On the other hand, the Gaussian embedding model suffers from overestimating the variances of polysemous words, which results in less informative word distributions and reduced entailment scores. Discussion We introduced a model that represents words with expressive multimodal distributions formed from Gaussian mixtures. To learn the properties of each mixture, we proposed an analytic energy function for combination with a maximum margin objective. The resulting embeddings capture different semantics of polysemous words, uncertainty, and entailment, and also perform favorably on word similarity benchmarks. Elsewhere, latent probabilistic representations are proving to be exceptionally valuable, able to capture nuances such as face angles with variational autoencoders BIBREF34 or subtleties in painting strokes with the InfoGAN BIBREF35 . Moreover, classically deterministic deep learning architectures are actively being generalized to probabilistic deep models, for full predictive distributions instead of point estimates, and significantly more expressive representations BIBREF36 , BIBREF37 , BIBREF38 , BIBREF39 , BIBREF40 . Similarly, probabilistic word embeddings can capture a range of subtle meanings, and advance the state of the art. Multimodal word distributions naturally represent our belief that words do not have single precise meanings: indeed, the shape of a word distribution can express much more semantic information than any point representation. In the future, multimodal word distributions could open the doors to a new suite of applications in language modelling, where whole word distributions are used as inputs to new probabilistic LSTMs, or in decision functions where uncertainty matters. As part of this effort, we can explore different metrics between distributions, such as KL divergences, which would be a natural choice for order embeddings that model entailment properties. It would also be informative to explore inference over the number of components in mixture models for word distributions.
Such an approach could potentially discover an unbounded number of distinct meanings for words, but also distribute the support of each word distribution to express highly nuanced meanings. Alternatively, we could imagine a dependent mixture model where the distributions over words are evolving with time and other covariates. One could also build new types of supervised language models, constructed to more fully leverage the rich information provided by word distributions. Acknowledgements We thank NSF IIS-1563887 for support. Derivation of Expected Likelihood Kernel We derive the form of expected likelihood kernel for Gaussian mixtures. Let $f,g$ be Gaussian mixture distributions representing the words $w_f, w_g$ . That is, $f(x) = \sum _{i=1}^K p_i \mathcal {N}(x; \mu _{f,i} , \Sigma _{f,i} ) $ and $g(x) = \sum _{i=1}^K q_i \mathcal {N}(x; \mu _{g,i} , \Sigma _{g,i} )$ , $\sum _{i =1}^K p_i = 1 $ , and $\sum _{i =1}^K q_i = 1$ . The expected likelihood kernel is given by $ E_\theta (f,g) &= \int \left( \sum _{i=1}^K p_i \mathcal {N}(x; \mu _{f,i} , \Sigma _{f,i} ) \right) \cdot \\ & \left( \sum _{j=1}^K q_j \mathcal {N}(x; \mu _{g,j} , \Sigma _{g,j} ) \right) \ d x \\ &= \sum _{i=1}^K \sum _{j=1}^K p_i q_j \int \mathcal {N}(x; \mu _{f,i} , \Sigma _{f,i} ) \cdot \mathcal {N}(x; \mu _{g,j} , \Sigma _{g,j} ) \ d x \\ &= \sum _{i=1}^K \sum _{j=1}^K p_i q_j \mathcal {N}(0; \mu _{f,i} - \mu _{g,j} , \Sigma _{f,i} + \Sigma _{g,j} ) \\ &= \sum _{i=1}^K \sum _{j=1}^K p_i q_j e^{\xi _{i,j}} $ where we note that $\int \mathcal {N}(x; \mu _i, \Sigma _i) \mathcal {N}(x; \mu _j, \Sigma _j) \ dx = \mathcal {N}(0, \mu _i - \mu _j , \Sigma _i + \Sigma _j)$ BIBREF1 and $\xi _{i,j}$ is the log partial energy, given by equation 10 . Implementation In this section we discuss practical details for training the proposed model. We use a diagonal $\Sigma $ , in which case inverting the covariance matrix is trivial and computations are particularly efficient. Let $\mathbf {d}^f, \mathbf {d}^g$ denote the diagonal vectors of $\Sigma _f, \Sigma _g$ The expression for $\xi _{i,j}$ reduces to $ \xi _{i,j} = - \frac{1}{2} \sum _{r=1}^D \log ( d^p_r + d^q_r) \\ - \frac{1}{2} \sum \left[ (\mathbf {\mu }_{p,i} - \mathbf {\mu }_{q,j}) \circ \frac{1}{ \mathbf {d^p + d^q} } \circ (\mathbf {\mu }_{p, i} - \mathbf {\mu }_{q,j}) \right] $ where $\circ $ denotes element-wise multiplication. The spherical case which we use in all our experiments is similar since we simply replace a vector $\mathbf {d}$ with a single value. We optimize $\log \mathbf {d}$ since each component of diagonal vector $\mathbf {d}$ is constrained to be positive. Similarly, we constrain the probability $p_i$ to be in $[0,1]$ and sum to 1 by optimizing over unconstrained scores $s_i \in (-\infty , \infty )$ and using a softmax function to convert the scores to probability $p_i = \frac{e^{s_i}}{\sum _{j=1}^K e^{s_j} }$ . The loss computation can be numerically unstable if elements of the diagonal covariances are very small, due to the term $ \log ( d^f_r + d^g_r) $ and $ \frac{1}{ \mathbf {d}^q + \mathbf {d}^p} $ . Therefore, we add a small constant $\epsilon = 10^{-4}$ so that $d^f_r + d^g_r$ and $ \mathbf {d}^q + \mathbf {d}^p $ becomes $d^f_r + d^g_r + \epsilon $ and $ \mathbf {d^q + d^p} + \epsilon $ . In addition, we observe that $\xi _{i,j}$ can be very small which would result in $e^{\xi _{i,j}} \approx 0$ up to machine precision. In order to stabilize the computation in eq. 
9 , we compute its equivalent form $ \log E(f,g) = \xi _{i^{\prime },j^{\prime }} + \log \sum _{j=1}^K \sum _{i=1}^K p_i q_j e^{\xi _{i,j} - \xi _{i^{\prime },j^{\prime }}} $ where $ \xi _{i^{\prime },j^{\prime }} = \max _{i,j} \xi _{i,j}$ . In the loss function $L_\theta $ , we use a margin $m= 1$ and a batch size of 128. We initialize the word embeddings with a uniform distribution over $[ -\sqrt{\frac{3}{D}}, \sqrt{\frac{3}{D}} ]$ so that the expectation of variance is 1 and the mean is zero BIBREF44 . We initialize each dimension of the diagonal matrix (or a single value for spherical case) with a constant value $v = 0.05$ . We also initialize the mixture scores $s_i$ to be 0 so that the initial probabilities are equal among all $K$ components. We use the threshold $t = 10^{-5}$ for negative sampling, which is the recommended value for word2vec skip-gram on large datasets. We also use a separate output embeddings in addition to input embeddings, similar to word2vec implementation BIBREF0 , BIBREF4 . That is, each word has two sets of distributions $q_{I}$ and $q_{O}$ , each of which is a Gaussian mixture. For a given pair of word and context $(w,c)$ , we use the input distribution $q_{I}$ for $w$ (input word) and the output distribution $q_{O}$ for context $c$ (output word). We optimize the parameters of both $q_{I}$ and $q_{O}$ and use the trained input distributions $q_{I}$ as our final word representations. We use mini-batch asynchronous gradient descent with Adagrad BIBREF41 which performs adaptive learning rate for each parameter. We also experiment with Adam BIBREF43 which corrects the bias in adaptive gradient update of Adagrad and is proven very popular for most recent neural network models. However, we found that it is much slower than Adagrad ( $\approx 10$ times). This is because the gradient computation of the model is relatively fast, so a complex gradient update algorithm such as Adam becomes the bottleneck in the optimization. Therefore, we choose to use Adagrad which allows us to better scale to large datasets. We use a linearly decreasing learning rate from $0.05$ to $0.00001$ .
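To make the stabilization of Eq. (9) described at the start of this section concrete, a minimal NumPy sketch follows; the partial-energy values are synthetic and chosen only to illustrate the underflow that the max-subtraction avoids.

```python
import numpy as np

def stable_log_energy(xi, p, q):
    """log E(f,g) = xi_max + log sum_{i,j} p_i q_j exp(xi_{i,j} - xi_max).

    xi:   (K, K) matrix of partial log energies xi_{i,j}
    p, q: (K,)   mixture weights of the two words
    """
    xi_max = xi.max()
    weighted = np.outer(p, q) * np.exp(xi - xi_max)   # all exponents are <= 0
    return xi_max + np.log(weighted.sum())

# Synthetic partial energies so negative that exp() would underflow directly.
xi = np.array([[-900.0, -905.0], [-910.0, -902.0]])
p = q = np.array([0.5, 0.5])
print(stable_log_energy(xi, p, q))
```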
represent each word with an expressive multimodal distribution, for multiple distinct meanings, entailment, heavy tailed uncertainty, and enhanced interpretability. For example, one mode of the word `bank' could overlap with distributions for words such as `finance' and `money', and another mode could overlap with the distributions for `river' and `creek'.
40f87db3a8d1ac49b888ce3358200f7d52903ce7
40f87db3a8d1ac49b888ce3358200f7d52903ce7_0
Q: Does the new system utilize pre-extracted bounding boxes and/or features? Text: Introduction Visual question answering (VQA) comes as a classic task which combines visual and textual modal data into a unified system. Taking an image and a natural language question about it as input, a VQA system is supposed to output the corresponding natural language answer. VQA problem requires image and text understanding, common sense and knowledge inference. The solution of VQA problem will be a great progress in approaching the goal of Visual Turing Test, and is also conducive to tasks such as multi-modal retrieval, image captioning and accessibility facilities. After the first attempt and introduction of VQA BIBREF0 , more than thirty works on VQA have sprung up over the past one year from May, 2015. Over ten VQA datasets and a big VQA challenge BIBREF1 have been proposed so far. Four commonly used datasets (i.e. DAQUAR BIBREF0 , COCO-QA BIBREF2 , COCO-VQA BIBREF1 and Visual7W BIBREF3 ) feature different aspects. The common practice to tackle VQA problem is to translate the words as word embeddings and encode the questions using bag-of-word (BoW) or Long Short Term Memory (LSTM) network, and encode the images using deep convolutional neural networks (CNN). The following important step is to combine the image and question representations through some kind of fusing methods for answer generation, such as concatenation BIBREF4 , BIBREF5 , BIBREF6 , element-wise multiplication BIBREF1 , parameter prediction layer BIBREF7 , episode memory BIBREF8 , attention mechanism BIBREF9 , BIBREF10 , BIBREF11 , etc. Current VQA works focus on the fusion of these two features, yet no one cares about “where we focus” to ask questions on the image. It is a common practice to treat the VQA problem as either a generation, classification or a scoring task, and classification gains more popularity due to its simplicity and easiness for comparison. These works treat VQA as a discriminative model, learning the conditional probability of answer given the image and question. From the generative view, we emulate the behavior that before people ask questions about the given image they first glance at it and find some interesting regions. In terms of a single person, he has unique taste for choosing image regions that interest him. For a large amount of people, there are statistical region-of-interest (RoI) distributions. These region patterns are task-driven, e.g. the picture in Figure. 1 , for VQA task people may focus mostly on the beds, the chairs, the laptop and the notebook regions (namely the RoI patterns) as captured in the weighted image, but for image captioning task they pay attention to more areas including the striped floor. It is very valuable to intensify the interesting region features and suppress others, and this image preprocessing step provides more accurate visual features to the follow-up steps and is missing in current VQA works. By analogy with visual saliency which captures the standing out regions or objects of an image, we propose a region pre-selection mechanism named task-driven visual saliency which attaches interesting regions (more possibly questioned on) with higher weights. Taking advantage of the bidirectional LSTM (BiLSTM) BIBREF12 that the output at an arbitrary time step has complete and sequential information about all time steps before and after it, we compute the weight of interest for each region feature which is relative to all of them. 
To the best of our knowledge, this is the first work that employs and analyzes BiLSTM in VQA models for task-driven saliency detection, and this is the first contribution of our work. As a simple and effective VQA baseline method, BIBREF4 shows that the question feature always contributes more to answer prediction than the image feature. But the image is as important as the question for answer generation. It is necessary to further explore finer-grained image features to achieve better VQA performance, e.g. via attention mechanisms BIBREF13 . Current attention-based models generally use the correlation scores between question and image representations as weights to perform a weighted sum of region features; the resulting visual vector is concatenated to the question vector for final answer generation. The recent “multi-step” attention models (i.e. containing multiple attention layers) BIBREF14 , BIBREF11 dig deeper into image understanding and help achieve better VQA performance than the “regular” attention models. However, the correlation score obtained by the inner product between visual and textual features is essentially the sum of the correlation vector obtained by element-wise multiplication of the two features. Besides, BIBREF1 shows that element-wise multiplication of these features achieves more accurate results than concatenation of them in the baseline model. Hence we propose to employ element-wise multiplication in the attention mechanism; the fused features are directly fed forward to a max pooling layer to obtain the final fused feature. Together with the saliency-like region pre-selection operation, this novel attention method effectively improves VQA performance and is the second contribution of this work. The remainder of the paper is organized as follows. We first briefly review saliency and the attention mechanism. Then, we elaborate our proposed method. We present experiments on some baseline models, compare with state-of-the-art models, and visualize the pre-selection saliency and attention maps. Finally, we summarize our work. Saliency Detection Modeling Saliency generally comes from contrasts between a pixel or an object and its surroundings, describing how much it stands out. It can facilitate learning by focusing on the most pertinent regions. Saliency detection methods mimic human attention in psychology, in both bottom-up and top-down manners BIBREF15 . Typical saliency methods BIBREF16 , BIBREF17 are pixel- or object-oriented, which are not appropriate for VQA due to center bias and the difficulty of collecting large-scale eye-tracking data. We think task-driven saliency on image features could be conducive to solving the VQA problem. What inspires us is that BiLSTMs used for saliency detection have achieved good results on text and video tasks. In sentiment classification tasks, BIBREF18 assigns saliency scores to words related to sentiment for visualizing and understanding the effects of BiLSTM in textual sentences. In video highlight detection, BIBREF19 uses a recurrent auto-encoder configured with BiLSTM cells and extracts video highlight segments effectively. BiLSTM has demonstrated its effectiveness in saliency detection, but to the best of our knowledge it has not been used for visual saliency in the VQA task. Attention in VQA Models The visual attention mechanism has drawn great interest in VQA BIBREF14 , BIBREF3 , BIBREF9 and has improved performance over traditional methods that use holistic image features.
Attention mechanism is typically the weighted sum of the image region features at each spatial location, where the weights describe the correlation and are implemented as the inner products of the question and image features. It explores finer-grained visual features and mimics the behavior that people attend to different areas according to the questions. Focusing on “knowing where to look” for multiple-choice VQA tasks, BIBREF9 uses 99 detected object regions plus a holistic image feature to make correlation with the question encoding, and uses the correlation scores as weights to fuse the features. BIBREF14 uses the last pooling layer features ( $512\times 14\times 14$ ) of VGG-19 BIBREF20 as image region partitions, and adopts two-layer attention to obtain more effective fused features for complex questions. BIBREF21 proposes an ingenious idea to use assembled network modules according to the parsed questions, and achieves multi-step transforming attention by specific rules. However, these attention methods use correlation score (i.e. inner product between visual and textual feature) for each location, which is the sum of the correlation vector representation (i.e. element-wise multiplication between them). Besides, the concatenation of image and question features is less accurate than the element-wise multiplication vector of them shown in the baseline model BIBREF1 . Moreover, there are many answers derived from non-object and background regions, e.g. questions about scenes, hence it is not fit for the object detection based attention methods. Proposed Method Compared with image captioning which generates general descriptions about an image, VQA focuses on specific image regions depending on the question. On the one hand, these regions include non-object and background contents which are hard for object detection based VQA methods. On the other hand, although people may ask questions at any areas of a given image, there are always some region patterns that attract more questions. On the whole, there are statistical region-of-interest (RoI) patterns which represent human-interested areas that are important for later VQA task. We propose a saliency-like region pre-selection and attention-based VQA framework illustrated in Figure. 2 . The VQA is regarded as a classification task, which is simple and easy to transform to a generating or scoring model. Model In this section, we elaborate our model consisting of four parts: (a) image feature pre-selection part which models the tendency where people focus to ask questions, (b) question encoding part which encodes the question words as a condensed semantic embedding, (c) attention-based feature fusion part performs second selection on image features and (d) answer generation part which gives the answer output. As described above, current object detection based VQA methods may not be qualified and the answers may not be derived from these specific object regions in images, for example, when asked “Where is the bird/cat?”, the answers “fence/sink” are not contained in ILSVRC BIBREF22 (200 categories) and Pascal VOC BIBREF23 (20 categories) detection classes. Thus we use a more general pattern detector. In addition, from the generative perspective, we pay attention to where people focus to ask questions. General visual saliency provides analogous useful information of noticeable objects or areas which outstand the surroundings, but it is not the only case for VQA task. Current attention mechanism relates the question to the focusing location. 
As more samples are available, we could yield the region patterns that attract more questions by statistics. From the statistical behavior of large amounts of workers on Amazon Mechanical Turk (AMT) who have labeled the questions, we model the region-of-interest patterns that could attract more questions. We propose to perform saliency-like pre-selection operation to alleviate the problems and model the RoI patterns. The image is first divided into $g\times g$ grids as illustrated in Figure. 2 . Taking $m\times m$ grids as a region, with $s$ grids as the stride, we obtain $n\times n$ regions, where $n=\left\lfloor \frac{g-m}{s}\right\rfloor +1$ . We then feed the regions to a pre-trained ResNet BIBREF24 deep convolutional neural network to produce $n\times n\times d_I$ -dimensional region features, where $d_I$ is the dimension of feature from the layer before the last fully-connected layer. Since the neighboring overlapped regions share some visual contents, the corresponding features are related but focusing on different semantic information. We regard the sequence of regions as the result of eye movement when glancing at the image, and these regions are selectively allocated different degrees of interest. Specifically, the LSTM is a special kind of recurrent neural network (RNN), capable of learning long-term dependencies via the memory cell and the update gates, which endows itself with the ability to retain information of previous time-steps (i.e. the previous region sequence in this case). The update rules of the LSTM at time step $t$ are as follows: $$i_t&=\sigma (W^{(i)}x_t+U^{(i)}h_{t-1}+b^{(i)}),\\ f_t&=\sigma (W^{(f)}x_t+U^{(f)}h_{t-1}+b^{(f)}),\\ o_t&=\sigma (W^{(o)}x_t+U^{(o)}h_{t-1}+b^{(o)}),\\ u_t&=\tanh (W^{(u)}x_t+U^{(u)}h_{t-1}+b^{(u)}),\\ c_t&=u_t\odot i_t+c_{t-1}\odot f_t,\\ h_t&=o_t\odot \tanh (c_t),$$ (Eq. 7) where $i,f,o$ denote the input, forget and output gates, $x,c,h$ are the input region feature, memory cell and hidden unit output, and $W,U,b$ are the parameters to be trained. We activate the gates by the sigmoid nonlinearity $\sigma (x)=1/(1+e^{-x})$ and the cell contents by the hyperbolic tangent $\tanh (x)=(e^x-e^{-x})/(e^x+e^{-x})$ . The gates control the information in the memory cell to be retained or forgotten through element-wise multiplication $\odot $ . Inspired by the information completeness and high performance of BiLSTM, we encode the region features in two directions using BiLSTM and obtain a scalar output per region. The output of the BiLSTM is the summation of the forward and backward LSTM outputs at this region location: $h_t=h_t^{(f)}+h_{n-t+1}^{(b)}$ , where $n$ is the number of regions, $h_t^{(f)},h_{n-t+1}^{(b)}$ are computed using Eq. . Hence, the output at each location is influenced by the region features before and after it, which embodies the correlation among these regions. Note that, although the DMN+ work BIBREF8 uses similar bi-directional gated recurrent units (BiGRU) in the visual input module, their purpose is to produce input facts which contain global information. Besides, their BiGRU takes the features embedded to the textual space as inputs. In contrast, the BiLSTM used in our model takes directly visual CNN features as input, and the main purpose is to output weights for region feature selection. The output values of the BiLSTM are normalized through a softmax layer, and the resulting weights are multiplied by the region features. 
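A hedged sketch of the pre-selection step just described follows: a scalar-output LSTM following Eq. (7) is run forward and backward over the sequence of region features, the two outputs are summed per region, softmax-normalized, and used to re-weight the region features. Random arrays stand in for the learned gate parameters and for the ResNet region features; this illustrates the mechanism, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, d_feat = 9, 2048                      # 3 x 3 regions, ResNet feature size

def lstm_step(x, h, c, W, U, b):
    """One LSTM update following Eq. (7), with a scalar hidden state (size 1)."""
    z = W @ x + U * h + b                        # stacked pre-activations for i, f, o, u
    i, f, o = 1.0 / (1.0 + np.exp(-z[:3]))       # sigmoid input, forget, output gates
    u = np.tanh(z[3])
    c = u * i + c * f
    h = o * np.tanh(c)
    return h, c

def run_lstm(seq, W, U, b):
    h, c, outputs = 0.0, 0.0, []
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
        outputs.append(h)
    return np.array(outputs)

# Stand-ins for region CNN features and the forward/backward LSTM parameters.
region_feats = rng.standard_normal((n_regions, d_feat))
Wf, Uf, bf = 0.01 * rng.standard_normal((4, d_feat)), 0.01 * rng.standard_normal(4), np.zeros(4)
Wb, Ub, bb = 0.01 * rng.standard_normal((4, d_feat)), 0.01 * rng.standard_normal(4), np.zeros(4)

h_fwd = run_lstm(region_feats, Wf, Uf, bf)         # forward pass over the region sequence
h_bwd = run_lstm(region_feats[::-1], Wb, Ub, bb)   # backward pass over the reversed sequence
scores = h_fwd + h_bwd[::-1]                       # h_t = h_t^(f) + h_{n-t+1}^(b)

weights = np.exp(scores - scores.max())
weights /= weights.sum()                           # softmax-normalized degrees of interest
selected = region_feats * weights[:, None]         # pre-selected (re-weighted) region features
print(weights)
```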
We treat the weights as degrees of interest, trained by error back-propagation of the final class cross-entropy losses. Higher weights indicate that the corresponding region patterns attract more questions; in other words, in a statistical sense these region patterns are likely to receive higher attention values in the later interaction with the question embeddings. Questions can be encoded using various kinds of natural language processing (NLP) methods, such as BoW, LSTM, CNN BIBREF25 , BIBREF14 , gated recurrent units (GRU) BIBREF26 , skip-thought vectors BIBREF27 , or they can be parsed by the Stanford Parser BIBREF28 , etc. Since question BoW encodings already dominate the contribution to answer generation compared with the image features BIBREF4 , we simply encode the question words as word2vec embeddings and use an LSTM to encode the questions to match the pre-selected region features. To encode more abstract and higher-level information and achieve better performance, a deeper LSTM BIBREF1 , BIBREF29 for question encoding is adopted in our model. The question encoding LSTM in our model has $l$ hidden layers with $r$ hidden units per layer; the question representation is formed from the last output and the cell units of the LSTM, so its dimension is $d_Q=2\times l\times r$ . The resulting condensed feature vector encodes the semantic and syntactic information of the question. Based on the statistics of the image-question-answer (IQA) training triples, the image feature pre-selection attaches different prior weights to the regions, generating more meaningful region features. But different questions may focus on different aspects of the visual content. It is therefore necessary to use an attention mechanism to perform a second, question-guided selection of regions for more effective features. We propose a novel attention method, which takes the element-wise multiplication vector as the correlation between image and question features at each spatial location. Specifically, given the pre-selected region features and the question embedding, we map the visual and textual features into a common $d_C$ -dimensional space and perform element-wise multiplication between them. The $n\times n\times d_C$ -dimensional fused features contain visual and textual information, and higher responses indicate more correlative features. In traditional attention models, the correlation score (a scalar) obtained by the inner product between the mapped visual and textual features of each region is essentially the sum of the elements of our fused feature. This novel attention method has two noticeable advantages over traditional attention: an information-richer correlation vector instead of a correlation scalar, and a more effective element-wise multiplication vector instead of the concatenation of the visual and textual features. Since higher responses in the fused features indicate more correlative visual and textual features, and the question may focus on only one or two regions, we apply a max pooling operation on the intermediate fused features to pick out the maximum responses. The produced $d_C$ -dimensional fused feature is then fed to the final answer generation part. Compared to the sum/average operation in traditional attention models, the max operation highlights the responses of the final fused feature from every spatial location. Treating the VQA problem as a classification task is simple to implement and evaluate, and it can easily be extended to generation or multiple-choice tasks through network surgery on top of the fused feature from the previous step.
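To make the fusion just described concrete, a minimal sketch follows; the pre-selected region features, the question encoding, and the projection matrices are random stand-ins for the outputs and parameters of the trained network, with $d_Q = 2048$ and $d_C = 1024$ chosen to match the dimensions given in the implementation details.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, d_img, d_q, d_c = 9, 2048, 2048, 1024

# Stand-ins for the pre-selected region features, the LSTM question encoding,
# and the learned projections of both modalities into the common space.
selected = rng.standard_normal((n_regions, d_img))
question = rng.standard_normal(d_q)
W_img = 0.01 * rng.standard_normal((d_img, d_c))
W_q = 0.01 * rng.standard_normal((d_q, d_c))

v = np.tanh(selected @ W_img)      # (n_regions, d_c) visual features in the common space
q = np.tanh(question @ W_q)        # (d_c,) question feature in the common space
fused = v * q                      # element-wise correlation vector per region
pooled = fused.max(axis=0)         # max pooling keeps the strongest response per dimension
print(pooled.shape)                # the d_C-dimensional fused feature for answer generation
```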
We use a linear layer and a softmax layer to map from the fused feature to the answer candidates, of which the entries are the top-1000 answers from the training data. Considering multiple choice VQA problems, e.g. Visual7W BIBREF3 telling questions and COCO-VQA BIBREF1 multiple choice tasks, our model is adaptive to be extended by concatenating the question and answer vectors before fusion with visual features or by using bilinear model between the final fused feature and answer feature BIBREF9 , BIBREF30 , which is a possible future work. Meanwhile, in view of generation VQA problem, we can train an LSTM taking the fused feature as input to obtain answer word lists, phrases or sentences BIBREF5 , BIBREF6 . Training Our framework is trained end-to-end using back-propagation, while the feature extraction part using ResNet is kept fixed to speed up training and avoid the noisy gradients back-propagated from the LSTM as elaborated in BIBREF6 . RMSprop algorithm is employed with low initial learning rate of 3e-4 which is proved important to prevent the softmax from spiking too early and prevent the visual features from dominating too early BIBREF9 . Due to simplicity and proved similar performance as pre-trained word embedding parameters, we initialize the parameters of the network with random numbers. We randomly sample 500 IQA triples per iteration. Experiments In this section, we describe the implementation details and evaluate our model (SalAtt) on the large-scale COCO-VQA dataset. Besides, we visualize and analyze the role of pre-selection and the novel attention method. Implementation Details In our experiment, the input images are first scaled to $448\times 448\times 3$ pixels before we apply $4\times 4$ grids on them. We obtain $3\times 3$ regions by employing $2\times 2$ grids (i.e. $224\times 224\times 3$ pixels) as a region with stride 1 grid. Then we extract the 2048-D feature per region from the layer before the last fully-connected layer of ResNet. The dimension of word embedding is 200, and the weights of the embedding are initialized randomly from a uniform distribution on $[-0.08,0.08)$ due to similar performance as the pre-trained one. The pre-selection BiLSTM for region features has 1 layer and the size is 1, and the LSTM for question uses 2 layers and 512 hidden units per layer. The common space of visual and textual features is 1024-dimensional. We use dropout BIBREF31 after all convolutional and linear layers. The non-linear function is hyperbolic tangent. The training procedure is early stopped when there is no accuracy increase in validation set for $5,000$ iterations where we evaluate every $1,000$ iterations. It takes around 18 hours to train our model on a single NVIDIA Tesla K40 GPU for about $91,000$ iterations. And for evaluation, each sample needs less than 0.5 millisecond. Datasets The COCO-VQA dataset BIBREF1 is the largest among the commonly used VQA datasets, which contains two tasks (i.e. multiple-choice task and open-ended task) on two image datasets (i.e. real image MSCOCO dataset BIBREF32 and abstract scene dataset). We follow the common practice to evaluate models on two tasks on the real image dataset, which includes 248,349 training questions, 121,512 validation questions and 244,302 testing questions. There are many types of questions which require image and question understanding, commonsense knowledge, knowledge inference and even external knowledge. The answers are roughly divided into 3 types, i.e. “yes/no”, “number” and “other”. 
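Before turning to the evaluation protocol, here is a small sketch of the region layout described in the implementation details above: a $448\times 448$ image split into a $4\times 4$ grid, with each region spanning $2\times 2$ grid cells at a stride of one cell, yielding $3\times 3$ regions of $224\times 224$ pixels. The image here is a blank placeholder.

```python
import numpy as np

def extract_regions(image, g=4, m=2, s=1):
    """Tile an image into overlapping regions.

    The image is split into a g x g grid; each region spans m x m grid cells,
    and the window moves with a stride of s cells, giving
    n = floor((g - m) / s) + 1 regions per side.
    """
    H, W, _ = image.shape
    cell_h, cell_w = H // g, W // g
    n = (g - m) // s + 1
    regions = []
    for i in range(n):
        for j in range(n):
            top, left = i * s * cell_h, j * s * cell_w
            regions.append(image[top:top + m * cell_h, left:left + m * cell_w])
    return regions

image = np.zeros((448, 448, 3), dtype=np.uint8)   # placeholder input image
regions = extract_regions(image)
print(len(regions), regions[0].shape)             # 9 regions, each 224 x 224 x 3
```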
To evaluate the results, each answer is compared with 10 human-labeled answers, and the accuracy is computed via the metric $\min (\frac{\#consistent\ human-labeled\ answers}{3},1)$ , i.e. the accuracy is $100\%$ if the predicted answer is consistent with at least 3 human-labeled answers (see the sketch below). The COCO-VQA dataset provides human-labeled answers for the training and validation sets, and results on the testing set can only be obtained through the evaluation server. The whole testing set is named test-standard and can be evaluated once per day and 5 times in total, while a smaller development set named test-dev can be tested 10 times per day and 9999 times in total. In short, the COCO-VQA dataset is large and hard enough for evaluating models, and hence we choose to evaluate our model on it. Compared Models We compare our proposed model (SalAtt) with the function-disabled models listed below, to prove the effectiveness of the region pre-selection via BiLSTM and the novel attention method. holistic: The baseline model, which maps the holistic image feature and the LSTM-encoded question feature to a common space and performs element-wise multiplication between them. TraAtt: The traditional attention model, an implementation of the WTL model BIBREF9 using the same $3\times 3$ regions as the SalAtt model. RegAtt: The region attention model, which employs our novel attention method; the same as the SalAtt model but without region pre-selection. ConAtt: The convolutional region pre-selection attention model, which replaces the BiLSTM in the SalAtt model with a weight-sharing linear mapping, implemented by a convolutional layer. Besides, we also compare our SalAtt model with the popular baseline models, i.e. iBOWIMG BIBREF4 and VQA BIBREF1 , and the state-of-the-art attention-based models, i.e. WTL BIBREF9 , NMN BIBREF21 , SAN BIBREF14 , AMA BIBREF33 , FDA BIBREF34 , D-NMN BIBREF35 , DMN+ BIBREF8 , on two tasks of COCO-VQA. Results and Analysis We train the function-disabled models on the COCO-VQA training set and show the accuracies on the validation set in Table. 1 . From the columns, we can see that: (1) holistic is better than TraAtt, proving the effectiveness of element-wise multiplication feature fusion compared with concatenation of features. (2) RegAtt is better than holistic, indicating that our novel attention method indeed enriches the visual features and improves the performance. (3) SalAtt is better than RegAtt, demonstrating the strength of our region pre-selection mechanism. (4) ConAtt is worse than SalAtt, showing that the BiLSTM is important for the region pre-selection part. From each row, we find consistent improvements from the ResNet features, showing the importance of good CNN features for VQA. We summarize the accuracies on test-dev in Table. 2 and the test-standard results in Table. 3 . Our results are comparable to or higher than those of the attention-based methods, especially on multiple-choice tasks. The results on the answer type “other”, which includes object and scene type questions, demonstrate the competence of our model in RoI detection. Note that we only apply the proposed region pre-selection mechanism to the basic VQA model BIBREF1 ; it can be embedded into any other attention-based model to improve its performance. Due to computation and training time, we use only $3\times 3$ regions, compared with other attention-based methods (e.g. 100 or $14\times 14$ region features).
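As referenced above, a minimal sketch of the COCO-VQA accuracy metric follows; the predicted and human-labeled answers are illustrative examples, not taken from the dataset.

```python
def vqa_accuracy(predicted, human_answers):
    """COCO-VQA metric: min(#human-labeled answers matching the prediction / 3, 1)."""
    matches = sum(1 for answer in human_answers if answer == predicted)
    return min(matches / 3.0, 1.0)

# Example: 3 of the 10 human-labeled answers agree, so the accuracy is 1.0.
print(vqa_accuracy("snowboarding",
                   ["snowboarding", "snowboarding", "snowboarding", "skiing", "skiing",
                    "surfing", "snowboard", "skiing", "skiing", "sledding"]))
```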
Returning to the region granularity noted above: through observation, we find that many small objects cannot be split by the $3\times 3$ regions, which is adverse to counting questions; this could be further improved and is a possible future work. We illustrate three groups of samples produced by our model in Figure. 3 . Each group contains four figures; from left to right and from top to bottom, they are the original image, the pre-selection weights on the image, and two attention maps for different questions, with the corresponding questions (Q), ground-truth answers (A) and predicted answers (P) shown below them. The number in parentheses is the count of that human-labeled answer entry. The weights are normalized to have minimum 0 and maximum 1 for visualization enhancement, i.e. the weight in a dark region is not necessarily 0. Take the first sample for example: the pre-selection operation gives a high weight to the boy's head region, which may be interesting to people and attract more questions (e.g. questions containing the word “boy”). For the question “Is the boy dressed for the weather?”, the attention map focuses on the boy, his clothes and the surrounding regions to get a positive answer. For the question “What is the boy doing?”, it attends to the boy and the snowboard, thus giving the answer “snowboarding”. The third sample gives inaccurate but explainable answers: the birds may live in a park/zoo and come for food provided by tourists, so they may not be classified as pets, and the woman's left hand does indeed hold a phone while the human-labeled answers focus on the right hand. Conclusion In this work, we propose a general VQA solution which integrates region pre-selection and a novel attention method to capture generic-class regions and richer fused feature representations. These two procedures are independent, yet both contribute to better VQA performance. Although the model is simple, it achieves empirical results comparable to or higher than those of state-of-the-art models. Possible future work includes adopting finer-grained grids which capture more precise regions, employing stacked attention layers for multi-step reasoning and more accurate answer localization, and applying the general pre-selection method to other attention-based VQA models. The pre-selection mechanism is valuable and applicable to similar tasks, such as image captioning.
Yes
36383971a852d1542e720d3ea1f5adeae0dbff18
36383971a852d1542e720d3ea1f5adeae0dbff18_0
Q: To which previous papers does this work compare its results? Text: Introduction Visual question answering (VQA) comes as a classic task which combines visual and textual modal data into a unified system. Taking an image and a natural language question about it as input, a VQA system is supposed to output the corresponding natural language answer. VQA problem requires image and text understanding, common sense and knowledge inference. The solution of VQA problem will be a great progress in approaching the goal of Visual Turing Test, and is also conducive to tasks such as multi-modal retrieval, image captioning and accessibility facilities. After the first attempt and introduction of VQA BIBREF0 , more than thirty works on VQA have sprung up over the past one year from May, 2015. Over ten VQA datasets and a big VQA challenge BIBREF1 have been proposed so far. Four commonly used datasets (i.e. DAQUAR BIBREF0 , COCO-QA BIBREF2 , COCO-VQA BIBREF1 and Visual7W BIBREF3 ) feature different aspects. The common practice to tackle VQA problem is to translate the words as word embeddings and encode the questions using bag-of-word (BoW) or Long Short Term Memory (LSTM) network, and encode the images using deep convolutional neural networks (CNN). The following important step is to combine the image and question representations through some kind of fusing methods for answer generation, such as concatenation BIBREF4 , BIBREF5 , BIBREF6 , element-wise multiplication BIBREF1 , parameter prediction layer BIBREF7 , episode memory BIBREF8 , attention mechanism BIBREF9 , BIBREF10 , BIBREF11 , etc. Current VQA works focus on the fusion of these two features, yet no one cares about “where we focus” to ask questions on the image. It is a common practice to treat the VQA problem as either a generation, classification or a scoring task, and classification gains more popularity due to its simplicity and easiness for comparison. These works treat VQA as a discriminative model, learning the conditional probability of answer given the image and question. From the generative view, we emulate the behavior that before people ask questions about the given image they first glance at it and find some interesting regions. In terms of a single person, he has unique taste for choosing image regions that interest him. For a large amount of people, there are statistical region-of-interest (RoI) distributions. These region patterns are task-driven, e.g. the picture in Figure. 1 , for VQA task people may focus mostly on the beds, the chairs, the laptop and the notebook regions (namely the RoI patterns) as captured in the weighted image, but for image captioning task they pay attention to more areas including the striped floor. It is very valuable to intensify the interesting region features and suppress others, and this image preprocessing step provides more accurate visual features to the follow-up steps and is missing in current VQA works. By analogy with visual saliency which captures the standing out regions or objects of an image, we propose a region pre-selection mechanism named task-driven visual saliency which attaches interesting regions (more possibly questioned on) with higher weights. Taking advantage of the bidirectional LSTM (BiLSTM) BIBREF12 that the output at an arbitrary time step has complete and sequential information about all time steps before and after it, we compute the weight of interest for each region feature which is relative to all of them. 
To the best of our knowledge, this is the first work that employs and analyzes BiLSTM in VQA models for task-driven saliency detection, and this is the first contribution of our work. As a simple and effective VQA baseline method, BIBREF4 shows that question feature always contributes more to predict the answer than image feature. But image is as equally important as question for answer generation. It is necessary to further explore finer-grained image features to achieve better VQA performance, e.g. attention mechanism BIBREF13 . Current attention based models generally use the correlation scores between question and image representations as weights to perform weighted sum of region features, the resulting visual vector is concatenated to the question vector for final answer generation. The recent “multi-step” attention models (i.e. containing multiple attention layers) BIBREF14 , BIBREF11 dig deeper into the image understanding and help achieve better VQA performance than the “regular” attention models. However, the correlation score obtained by inner product between visual and textual features is essentially the sum of the correlation vector obtained by element-wise multiplication of the two features. Besides, BIBREF1 shows that element-wise multiplication of these features achieves more accurate results than concatenation of them in the baseline model. Hence we propose to employ element-wise multiplication way in the attention mechanism, the fused features are directly feed forward to a max pooling layer to get the final fused feature. Together with the saliency-like region pre-selection operation, this novel attention method effectively improves VQA performance and is the second contribution of this work. The remainder of the paper is organized as follows. We first briefly review saliency and the attention mechanism. Then, we elaborate our proposed method. We present experiments of some baseline models and compare with state-of-the-art models and visualize the pre-selection saliency and attention maps. Finally we summarize our work. Saliency Detection Modeling Saliency generally comes from contrasts between a pixel or an object and its surroundings, describing how outstanding it is. It could facilitate learning by focusing the most pertinent regions. Saliency detection methods mimic the human attention in psychology, including both bottom-up and top-down manners BIBREF15 . Typical saliency methods BIBREF16 , BIBREF17 are pixel- or object-oriented, which are not appropriate for VQA due to center bias and are difficulty in collecting large scale eye tracking data. We think task-driven saliency on image features could be conductive to solving VQA problem. What inspires us is that BiLSTM used in saliency detection has achieved good results on text and video tasks. In sentiment classification tasks, BIBREF18 assigns saliency scores to words related to sentiment for visualizing and understanding the effects of BiLSTM in textual sentence. While in video highlight detection, BIBREF19 uses a recurrent auto-encoder configured with BiLSTM cells and extracts video highlight segments effectively. BiLSTM has demonstrated its effectiveness in saliency detection, but to the best of our knowledge it has not been used in visual saliency for VQA task. Attention in VQA Models Visual attention mechanism has drawn great interest in VQA BIBREF14 , BIBREF3 , BIBREF9 and gained performance improvement from traditional methods using holistic image features. 
Attention mechanism is typically the weighted sum of the image region features at each spatial location, where the weights describe the correlation and are implemented as the inner products of the question and image features. It explores finer-grained visual features and mimics the behavior that people attend to different areas according to the questions. Focusing on “knowing where to look” for multiple-choice VQA tasks, BIBREF9 uses 99 detected object regions plus a holistic image feature to make correlation with the question encoding, and uses the correlation scores as weights to fuse the features. BIBREF14 uses the last pooling layer features ( $512\times 14\times 14$ ) of VGG-19 BIBREF20 as image region partitions, and adopts two-layer attention to obtain more effective fused features for complex questions. BIBREF21 proposes an ingenious idea to use assembled network modules according to the parsed questions, and achieves multi-step transforming attention by specific rules. However, these attention methods use correlation score (i.e. inner product between visual and textual feature) for each location, which is the sum of the correlation vector representation (i.e. element-wise multiplication between them). Besides, the concatenation of image and question features is less accurate than the element-wise multiplication vector of them shown in the baseline model BIBREF1 . Moreover, there are many answers derived from non-object and background regions, e.g. questions about scenes, hence it is not fit for the object detection based attention methods. Proposed Method Compared with image captioning which generates general descriptions about an image, VQA focuses on specific image regions depending on the question. On the one hand, these regions include non-object and background contents which are hard for object detection based VQA methods. On the other hand, although people may ask questions at any areas of a given image, there are always some region patterns that attract more questions. On the whole, there are statistical region-of-interest (RoI) patterns which represent human-interested areas that are important for later VQA task. We propose a saliency-like region pre-selection and attention-based VQA framework illustrated in Figure. 2 . The VQA is regarded as a classification task, which is simple and easy to transform to a generating or scoring model. Model In this section, we elaborate our model consisting of four parts: (a) image feature pre-selection part which models the tendency where people focus to ask questions, (b) question encoding part which encodes the question words as a condensed semantic embedding, (c) attention-based feature fusion part performs second selection on image features and (d) answer generation part which gives the answer output. As described above, current object detection based VQA methods may not be qualified and the answers may not be derived from these specific object regions in images, for example, when asked “Where is the bird/cat?”, the answers “fence/sink” are not contained in ILSVRC BIBREF22 (200 categories) and Pascal VOC BIBREF23 (20 categories) detection classes. Thus we use a more general pattern detector. In addition, from the generative perspective, we pay attention to where people focus to ask questions. General visual saliency provides analogous useful information of noticeable objects or areas which outstand the surroundings, but it is not the only case for VQA task. Current attention mechanism relates the question to the focusing location. 
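For reference alongside the discussion above, a minimal PyTorch-style sketch of this conventional inner-product attention (the function name and tensor shapes are our own assumptions, not from any of the cited works):

```python
import torch.nn.functional as F

def inner_product_attention(regions, question):
    """Conventional soft attention: one scalar correlation score per region
    via the inner product with the question vector, softmax-normalized
    weights, then a weighted sum of the region features."""
    # regions: (n_regions, d) tensor, question: (d,) tensor, both in a common space
    scores = regions @ question            # scalar score per region
    weights = F.softmax(scores, dim=0)     # attention distribution over regions
    return weights @ regions               # attended visual vector
```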
As more samples are available, we could yield the region patterns that attract more questions by statistics. From the statistical behavior of large amounts of workers on Amazon Mechanical Turk (AMT) who have labeled the questions, we model the region-of-interest patterns that could attract more questions. We propose to perform saliency-like pre-selection operation to alleviate the problems and model the RoI patterns. The image is first divided into $g\times g$ grids as illustrated in Figure. 2 . Taking $m\times m$ grids as a region, with $s$ grids as the stride, we obtain $n\times n$ regions, where $n=\left\lfloor \frac{g-m}{s}\right\rfloor +1$ . We then feed the regions to a pre-trained ResNet BIBREF24 deep convolutional neural network to produce $n\times n\times d_I$ -dimensional region features, where $d_I$ is the dimension of feature from the layer before the last fully-connected layer. Since the neighboring overlapped regions share some visual contents, the corresponding features are related but focusing on different semantic information. We regard the sequence of regions as the result of eye movement when glancing at the image, and these regions are selectively allocated different degrees of interest. Specifically, the LSTM is a special kind of recurrent neural network (RNN), capable of learning long-term dependencies via the memory cell and the update gates, which endows itself with the ability to retain information of previous time-steps (i.e. the previous region sequence in this case). The update rules of the LSTM at time step $t$ are as follows: $$i_t&=\sigma (W^{(i)}x_t+U^{(i)}h_{t-1}+b^{(i)}),\\ f_t&=\sigma (W^{(f)}x_t+U^{(f)}h_{t-1}+b^{(f)}),\\ o_t&=\sigma (W^{(o)}x_t+U^{(o)}h_{t-1}+b^{(o)}),\\ u_t&=\tanh (W^{(u)}x_t+U^{(u)}h_{t-1}+b^{(u)}),\\ c_t&=u_t\odot i_t+c_{t-1}\odot f_t,\\ h_t&=o_t\odot \tanh (c_t),$$ (Eq. 7) where $i,f,o$ denote the input, forget and output gates, $x,c,h$ are the input region feature, memory cell and hidden unit output, and $W,U,b$ are the parameters to be trained. We activate the gates by the sigmoid nonlinearity $\sigma (x)=1/(1+e^{-x})$ and the cell contents by the hyperbolic tangent $\tanh (x)=(e^x-e^{-x})/(e^x+e^{-x})$ . The gates control the information in the memory cell to be retained or forgotten through element-wise multiplication $\odot $ . Inspired by the information completeness and high performance of BiLSTM, we encode the region features in two directions using BiLSTM and obtain a scalar output per region. The output of the BiLSTM is the summation of the forward and backward LSTM outputs at this region location: $h_t=h_t^{(f)}+h_{n-t+1}^{(b)}$ , where $n$ is the number of regions, $h_t^{(f)},h_{n-t+1}^{(b)}$ are computed using Eq. . Hence, the output at each location is influenced by the region features before and after it, which embodies the correlation among these regions. Note that, although the DMN+ work BIBREF8 uses similar bi-directional gated recurrent units (BiGRU) in the visual input module, their purpose is to produce input facts which contain global information. Besides, their BiGRU takes the features embedded to the textual space as inputs. In contrast, the BiLSTM used in our model takes directly visual CNN features as input, and the main purpose is to output weights for region feature selection. The output values of the BiLSTM are normalized through a softmax layer, and the resulting weights are multiplied by the region features. 
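A minimal PyTorch sketch of this pre-selection step as we read it (the class name and batching conventions are ours; per the implementation details given later, the BiLSTM has one layer with hidden size 1, so each direction emits one scalar per region):

```python
import torch.nn as nn
import torch.nn.functional as F

class RegionPreSelection(nn.Module):
    """Saliency-like region pre-selection: a single-layer BiLSTM scores each
    region feature; forward and backward outputs are summed per region,
    softmax-normalized, and used to reweight the region features."""
    def __init__(self, d_region=2048, hidden=1):
        super().__init__()
        self.bilstm = nn.LSTM(d_region, hidden, num_layers=1,
                              batch_first=True, bidirectional=True)

    def forward(self, regions):
        # regions: (batch, n*n, d_region) ResNet features of the n x n regions
        out, _ = self.bilstm(regions)            # (batch, n*n, 2) with hidden=1
        scores = out[..., 0] + out[..., 1]       # forward + backward output per region
        weights = F.softmax(scores, dim=-1)      # degree of interest per region
        return regions * weights.unsqueeze(-1), weights
```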
We treat the weights as degree of interest which are trained by error back-propagation of the final class cross entropy losses, and higher weights embody that the corresponding region patterns will attract more questions, in other words, these region patterns may get higher attention values in the latter interaction with question embeddings in a statistical way. Question can be encoded using various kinds of natural language processing (NLP) methods, such as BoW, LSTM, CNN BIBREF25 , BIBREF14 , gated recurrent units (GRU) BIBREF26 , skip-thought vectors BIBREF27 , or it can be parsed by Stanford Parser BIBREF28 , etc. Since question BoW encodings already dominate the contribution to answer generation compared with the image features BIBREF4 , we simply encode the question word as word2vec embedding, and use LSTM to encode the questions to match the pre-selected region features. To encode more abstract and higher-level information and achieve better performance, a deeper LSTM BIBREF1 , BIBREF29 for question encoding is adopted in our model. The question encoding LSTM in our model has $l$ hidden layers with $r$ hidden units per layer, and the question representation is the last output and the cell units of the LSTM, and the dimension is $d_Q=2\times l\times r$ . The resulting condensed feature vector encodes the semantic and syntactic information of the question. According to the statistic image-question-answer (IQA) training triples, the image feature pre-selection has attached the regions with different prior weights, generating more meaningful region features. But different questions may focus on different aspects of the visual content. It is necessary to use attention mechanism to second select regions by the question for more effective features. We propose a novel attention method, which takes the element-wise multiplication vector as correlation between image and question features at each spatial location. Specifically, given the pre-selected region features and question embedding, we map the visual and textual features into a common space of $d_C$ dimension and perform element-wise multiplication between them. The $n\times n\times d_C$ -dimensional fused features contain visual and textual information, and higher responses indicate more correlative features. In traditional attention models, the correlation score (scalar) achieved by inner product between the mapped visual and textual features per region, is essentially the sum of elements in our fused feature. This novel attention method has two noticeable advantages against traditional attention, i.e. information richer correlation vector versus correlation scalar, more effective element-wise multiplication vector versus the concatenated vector of the visual and textual features. Since higher responses in the fused features indicate more correlative visual and textual features, and the question may only focus on one or two regions. We choose to apply max pooling operation on the intermediate fused features to pick out the maximum responses. The produced $d_C$ -dimensional fused feature is then fed to the final answer generation part. Compared to the sum/average operation in traditional attention models, the max operation highlights the responses of the final fused feature from every spatial location. Taking the VQA problem as a classification task is simple to be implemented and evaluated, and it is easy to be extended to generation or multiple choice tasks through a network surgery using the fused feature in the previous step. 
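The element-wise-multiplication attention with max pooling described above could be sketched roughly as follows in PyTorch (module and argument names are hypothetical; the default dimensions follow the implementation details given later: a 1024-dimensional common space, a 2048-dimensional question encoding, and the top-1000 answer classes):

```python
import torch
import torch.nn as nn

class MulAttentionFusion(nn.Module):
    """Attention by element-wise multiplication: map pre-selected region
    features and the question encoding to a common space, multiply to get a
    correlation *vector* per region (instead of an inner-product scalar),
    then max-pool over regions and classify over the answer candidates."""
    def __init__(self, d_region=2048, d_question=2048, d_common=1024, n_answers=1000):
        super().__init__()
        self.map_v = nn.Linear(d_region, d_common)
        self.map_q = nn.Linear(d_question, d_common)
        self.classifier = nn.Linear(d_common, n_answers)

    def forward(self, regions, question):
        # regions: (batch, n*n, d_region), question: (batch, d_question)
        v = torch.tanh(self.map_v(regions))                 # (batch, n*n, d_common)
        q = torch.tanh(self.map_q(question)).unsqueeze(1)   # (batch, 1, d_common)
        fused = v * q                                       # correlation vectors
        pooled, _ = fused.max(dim=1)                        # strongest response per dim
        return self.classifier(pooled)                      # answer logits
```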
We use a linear layer and a softmax layer to map from the fused feature to the answer candidates, of which the entries are the top-1000 answers from the training data. Considering multiple choice VQA problems, e.g. Visual7W BIBREF3 telling questions and COCO-VQA BIBREF1 multiple choice tasks, our model is adaptive to be extended by concatenating the question and answer vectors before fusion with visual features or by using bilinear model between the final fused feature and answer feature BIBREF9 , BIBREF30 , which is a possible future work. Meanwhile, in view of generation VQA problem, we can train an LSTM taking the fused feature as input to obtain answer word lists, phrases or sentences BIBREF5 , BIBREF6 . Training Our framework is trained end-to-end using back-propagation, while the feature extraction part using ResNet is kept fixed to speed up training and avoid the noisy gradients back-propagated from the LSTM as elaborated in BIBREF6 . RMSprop algorithm is employed with low initial learning rate of 3e-4 which is proved important to prevent the softmax from spiking too early and prevent the visual features from dominating too early BIBREF9 . Due to simplicity and proved similar performance as pre-trained word embedding parameters, we initialize the parameters of the network with random numbers. We randomly sample 500 IQA triples per iteration. Experiments In this section, we describe the implementation details and evaluate our model (SalAtt) on the large-scale COCO-VQA dataset. Besides, we visualize and analyze the role of pre-selection and the novel attention method. Implementation Details In our experiment, the input images are first scaled to $448\times 448\times 3$ pixels before we apply $4\times 4$ grids on them. We obtain $3\times 3$ regions by employing $2\times 2$ grids (i.e. $224\times 224\times 3$ pixels) as a region with stride 1 grid. Then we extract the 2048-D feature per region from the layer before the last fully-connected layer of ResNet. The dimension of word embedding is 200, and the weights of the embedding are initialized randomly from a uniform distribution on $[-0.08,0.08)$ due to similar performance as the pre-trained one. The pre-selection BiLSTM for region features has 1 layer and the size is 1, and the LSTM for question uses 2 layers and 512 hidden units per layer. The common space of visual and textual features is 1024-dimensional. We use dropout BIBREF31 after all convolutional and linear layers. The non-linear function is hyperbolic tangent. The training procedure is early stopped when there is no accuracy increase in validation set for $5,000$ iterations where we evaluate every $1,000$ iterations. It takes around 18 hours to train our model on a single NVIDIA Tesla K40 GPU for about $91,000$ iterations. And for evaluation, each sample needs less than 0.5 millisecond. Datasets The COCO-VQA dataset BIBREF1 is the largest among the commonly used VQA datasets, which contains two tasks (i.e. multiple-choice task and open-ended task) on two image datasets (i.e. real image MSCOCO dataset BIBREF32 and abstract scene dataset). We follow the common practice to evaluate models on two tasks on the real image dataset, which includes 248,349 training questions, 121,512 validation questions and 244,302 testing questions. There are many types of questions which require image and question understanding, commonsense knowledge, knowledge inference and even external knowledge. The answers are roughly divided into 3 types, i.e. “yes/no”, “number” and “other”. 
To evaluate the results, each answer is compared with 10 human-labeled answers, the accuracy is computed via this metric: $min(\frac{\#consistent\ human-labeled\ answers}{3},1)$ , i.e. the accuracy is $100\%$ if the predicted answer is consistent with at least 3 human-labeled answers. The COCO-VQA dataset provide human-labeled answers for the training and validation sets, and the results of testing set can only be tested on the evaluation server. The whole testing set is named test-standard and can be evaluated once per day and 5 times in total, and a smaller development set named test-dev can be tested 10 times per day and 9999 times in total. In short, the COCO-VQA dataset is large and hard enough for evaluating models and hence we choose to evaluate our model on it. Compared Models We compare our propose model (SalAtt) with some function-disabled models listed below, to prove the effectiveness of the region pre-selection via BiLSTM and the novel attention method. holistic: The baseline model which maps the holistic image feature and LSTM-encoded question feature to a common space and perform element-wise multiplication between them. TraAtt: The traditional attention model, implementation of WTL model BIBREF9 using the same $3\times 3$ regions in SalAtt model. RegAtt: The region attention model which employs our novel attention method, same as the SalAtt model but without region pre-selection. ConAtt: The convolutional region pre-selection attention model which replaces the BiLSTM in SalAtt model with a weight-sharing linear mapping, implemented by a convolutional layer. Besides, we also compare our SalAtt model with the popular baseline models i.e. iBOWIMG BIBREF4 , VQA BIBREF1 , and the state-of-the-art attention-based models i.e. WTL BIBREF9 , NMN BIBREF21 , SAN BIBREF14 , AMA BIBREF33 , FDA BIBREF34 , D-NMN BIBREF35 , DMN+ BIBREF8 on two tasks of COCO-VQA. Results and Analysis We train the function-disabled models on COCO-VQA training set and show the accuracies on validation set in Table. 1 . From the columns, we can see that: (1) holistic is better than TraAtt, proving the effectiveness of element-wise multiplication feature fusion compared with concatenation of features. (2) RegAtt is better than holistic, indicating our novel attention method indeed enriches the visual features and improves the performance. (3) SalAtt is better than RegAtt, demonstrating the strength of our region pre-selection mechanism. (4) ConAtt is worse than SalAtt, showing that BiLSTM is important for the region pre-selection part. From each row, we find the consistent improvement by the ResNet features, showing the importance of good CNN features to VQA. We summarize the accuracies on test-dev in Table. 2 and the test-standard results in Table. 3 . Our results are comparative or higher than the attention based methods, especially on multiple-choice tasks. The results on answer type “other”, which includes object and scene type questions, demonstrate the competence of our model in RoI detection. Note that, we only apply the proposed region pre-selection mechanism to the basic VQA model BIBREF1 , it can be embedded into any other attention-based models to improve their performance. Due to computation and training time, we use only $3\times 3$ regions compared with other attention-based methods (e.g. 100 or $14\times 14$ region features). 
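The accuracy metric described above reduces to a one-liner; this sketch keeps only the min(#matches/3, 1) rule quoted in the text (the official evaluation additionally averages over subsets of the ten human answers):

```python
def vqa_accuracy(predicted, human_answers):
    """COCO-VQA accuracy for one question: fully correct when at least 3 of
    the (typically 10) human-labeled answers agree with the prediction."""
    matches = sum(1 for ans in human_answers if ans == predicted)
    return min(matches / 3.0, 1.0)

# vqa_accuracy("snowboarding", ["snowboarding"] * 4 + ["skiing"] * 6)  # -> 1.0
# vqa_accuracy("snowboarding", ["snowboarding"] * 2 + ["skiing"] * 8)  # -> ~0.67
```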
Through observation, we find that many small objects could not be split by the $3\times 3$ regions, which is adverse to the counting questions and could be further improved and is a possible future work. We illustrate three groups of samples produced by our model in Figure. 3 . Each group contains four figures, from left to right and from top to bottom, they are respectively the original image, pre-selection weights on the image, and two attention maps for different questions with the corresponding questions (Q), ground truth answers (A) and the predicted answers (P) shown below them. And the number in the parentheses means the amount for this human-labeled answer entry. The weights are normalized to have minimum 0 and maximum 1 for visualization enhancement, i.e. the weight in the dark region may not necessarily be 0. Take the first sample for example, the pre-selection operation gives high weight to the boy's head region which may be interesting to people and attract more questions (e.g. questions containing the word “boy”). For the question “Is the boy dressed for the weather?”, the attention map focuses on the boy, his clothes and the surrounding regions to get a positive answer. While for question “What is the boy doing?”, it attends the boy and the snowboard, thus giving answer “snowboarding”. The third sample gives inaccurate but explainable answers, i.e. the birds may live in the park/zoo and come for food provided by the tourist so it may not be classified into pets, and the left hand of the woman holds indeed a phone while the human-labeled answers focus on the right hand. Conclusion In this work, we propose a general VQA solution which integrates region pre-selection and a novel attention method to capture generic class region and richer fused feature representation. These two procedures are independent, meanwhile they both contribute to better VQA performance. Although the model is simple, it achieves comparative or higher empirical results than state-of-the-art models. Possible future works include adopting finer-grained grids which capture more precise regions, employing stacked attention layers for multi-step reasoning and more accurate answer location, and applying the general pre-selection method to other attention-based VQA models. The pre-selection mechanism is valuable and applicable to similar task, such as image captioning.
holistic, TraAtt, RegAtt, ConAtt, iBOWIMG, VQA, WTL, NMN, SAN, AMA, FDA, D-NMN, DMN+
1d941d390c0ee365aa7d7c58963e646eea74cbd6
1d941d390c0ee365aa7d7c58963e646eea74cbd6_0
Q: Do they consider other tasks? Text: Introduction Human beings are rational and a major component of rationality is the ability to reason. Reasoning is the process of combining facts and beliefs to make new decisions BIBREF0 , as well as the ability to manipulate knowledge to draw inferences BIBREF1 . Commonsense reasoning utilizes the basic knowledge that reflects our natural understanding of the world and human behaviors, which is common to all humans. Empowering machines with the ability to perform commonsense reasoning has been seen as the bottleneck of artificial general intelligence BIBREF2 . Recently, there have been a few emerging large-scale datasets for testing machine commonsense with various focuses BIBREF3 , BIBREF4 , BIBREF5 . In a typical dataset, CommonsenseQA BIBREF6 , given a question like “Where do adults use glue sticks?”, with the answer choices being {classroom(✗), office (✓), desk drawer (✗)}, a commonsense reasoner is expected to differentiate the correct choice from other “distractive” candidates. False choices are usually highly related to the question context, but just less possible in real-world scenarios, making the task even more challenging. This paper aims to tackle the research question of how we can teach machines to make such commonsense inferences, particularly in the question-answering setting. It has been shown that simply fine-tuning large, pre-trained language models such as Gpt BIBREF7 and Bert BIBREF8 can be a very strong baseline method. However, there still exists a large gap between performance of said baselines and human performance. Reasoning with neural models is also lacking in transparency and interpretability. There is no clear way as to how they manage to answer commonsense questions, thus making their inferences dubious. Merely relying on pre-training large language models on corpora cannot provide well-defined or reusable structures for explainable commonsense reasoning. We argue that it would be more beneficial to propose reasoners that can exploit commonsense knowledge bases BIBREF9 , BIBREF10 , BIBREF11 . Knowledge-aware models can explicitly incorporate external knowledge as relational inductive biases BIBREF12 to enhance their reasoning capacity, as well as to increase the transparency of model behaviors for more interpretable results. Furthermore, a knowledge-centric approach is extensible through commonsense knowledge acquisition techniques BIBREF13 , BIBREF14 . We propose a knowledge-aware reasoning framework for learning to answer commonsense questions, which has two major steps: schema graph grounding (§ "Schema Graph Grounding" ) and graph modeling for inference (§ "Knowledge-Aware Graph Network" ). As shown in Fig. 1 , for each pair of question and answer candidate, we retrieve a graph from external knowledge graphs (e.g. ConceptNet) in order to capture the relevant knowledge for determining the plausibility of a given answer choice. The graphs are named “schema graphs” inspired by the schema theory proposed by Gestalt psychologists BIBREF15 . The grounded schema graphs are usually much more complicated and noisier, unlike the ideal case shown in the figure. Therefore, we propose a knowledge-aware graph network module to further effectively model schema graphs. Our model is a combination of graph convolutional networks BIBREF16 and LSTMs, with a hierarchical path-based attention mechanism, which forms a GCN-LSTM-HPA architecture for path-based relational graph representation. 
Experiments show that our framework achieved a new state-of-the-art performance on the CommonsenseQA dataset. Our model also works better then other methods with limited supervision, and provides human-readable results via intermediate attention scores. Overview In this section, we first formalize the commonsense question answering problem in a knowledge-aware setting, and then introduce the overall workflow of our framework. Problem statement Given a commonsense-required natural language question $q$ and a set of $N$ candidate answers $\lbrace a_i\rbrace $ , the task is to choose one answer from the set. From a knowledge-aware perspective, we additionally assume that the question $q$ and choices $\lbrace a_i\rbrace $ can be grounded as a schema graph (denoted as $g$ ) extracted from a large external knowledge graph $G$ , which is helpful for measuring the plausibility of answer candidates. The knowledge graph $G=(V,E)$ can be defined as a fixed set of concepts $V$ , and typed edges $E$ describing semantic relations between concepts. Therefore, our goal is to effectively ground and model schema graphs to improve the reasoning process. Reasoning Workflow As shown in Fig. 2 , our framework accepts a pair of question and answer (QA-pair) denoted as $q$ and $a$ . It first recognizes the mentioned concepts within them respectively from the concept set $V$ of the knowledge graph. We then algorithmically construct the schema graph $g$ by finding paths between pairs of mentioned concepts (§ "Schema Graph Grounding" ). The grounded schema graph is further encoded with our proposed knowledge-aware graph network module (§ "Knowledge-Aware Graph Network" ). We first use a model-agnostic language encoder, which can either be trainable or a fixed feature extractor, to represent the QA-pair as a statement vector. The statement vector serves as an additional input to a GCN-LSTM-HPA architecture for path-based attentive graph modeling to obtain a graph vector. The graph vector is finally fed into a simple multi-layer perceptron to score this QA-pair into a scalar ranging from 0 to 1, representing the plausibility of the inference. The answer candidate with the maximum plausibility score to the same question becomes the final choice of our framework. Schema Graph Grounding The grounding stage is three-fold: recognizing concepts mentioned in text (§ "Conclusion" ), constructing schema graphs by retrieving paths in the knowledge graph (§ "Schema Graph Construction" ), and pruning noisy paths (§ "Path Pruning via KG Embedding" ). Concept Recognition We match tokens in questions and answers to sets of mentioned concepts ( $\mathcal {C}_q$ and $\mathcal {C}_a$ respectively) from the knowledge graph $G$ (for this paper we chose to use ConceptNet due to its generality). A naive approach to mentioned concept recognition is to exactly match n-grams in sentences with the surface tokens of concepts in $V$ . For example, in the question “Sitting too close to watch tv can cause what sort of pain?”, the exact matching result $\mathcal {C}_q$ would be {sitting, close, watch_tv, watch, tv, sort, pain, etc.}. We are aware of the fact that such retrieved mentioned concepts are not always perfect (e.g. “sort” is not a semantically related concept, “close” is a polysemous concept). How to efficiently retrieve contextually-related knowledge from noisy knowledge resources is still an open research question by itself BIBREF17 , BIBREF18 , and thus most prior works choose to stop here BIBREF19 , BIBREF20 . 
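A minimal sketch of this naive exact-matching step (the helper and the `concept_vocab` set of ConceptNet surface forms are hypothetical names we introduce for illustration):

```python
def ground_concepts(tokens, concept_vocab, max_n=3):
    """Naive mentioned-concept recognition: exact n-gram matching of the
    (lowercased) tokens against a set of ConceptNet surface forms, where
    multi-word concepts use '_' between words (e.g. 'watch_tv')."""
    mentioned = set()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            candidate = "_".join(tokens[i:i + n]).lower()
            if candidate in concept_vocab:
                mentioned.add(candidate)
    return mentioned

# ground_concepts("sitting too close to watch tv can cause what sort of pain".split(),
#                 concept_vocab)  # -> {"sitting", "close", "watch_tv", "watch", "tv", ...}
```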
We enhance this straightforward approach with some rules, such as soft matching with lemmatization and filtering of stop words, and further deal with noise by pruning paths (§ "Path Pruning via KG Embedding" ) and reducing their importance with attention mechanisms (§ "Hierarchical Attention Mechanism" ). Schema Graph Construction ConceptNet. Before diving into the construction of schema graphs, we would like to briefly introduce our target knowledge graph ConceptNet. ConceptNet can be seen as a large set of triples of the form $(h, r, t)$ , like (ice, HasProperty, cold), where $h$ and $t$ represent head and tail concepts in the concept set $V$ and $r$ is a certain relation type from the pre-defined set $R$ . We delete and merge the original 42 relation types into 17 types, in order to increase the density of the knowledge graph for grounding and modeling. Sub-graph Matching via Path Finding. We define a schema graph as a sub-graph $g$ of the whole knowledge graph $G$ , which represents the related knowledge for reasoning a given question-answer pair with minimal additional concepts and edges. One may want to find a minimal spanning sub-graph covering all the question and answer concepts, which is actually the NP-complete “Steiner tree problem” in graphs BIBREF21 . Due to the incompleteness and tremendous size of ConceptNet, we find that it is impractical to retrieve a comprehensive but helpful set of knowledge facts this way. Therefore, we propose a straightforward yet effective graph construction algorithm via path finding among mentioned concepts ( $\mathcal {C}_q \cup \mathcal {C}_a$ ). Specifically, for each question concept $c_i \in \mathcal {C}_q$ and answer concept $c_j \in \mathcal {C}_a$ , we can efficiently find paths between them that are shorter than $k$ concepts. Then, we add edges, if any, between the concept pairs within $\mathcal {C}_q$ or $\mathcal {C}_a$ . Path Pruning via KG Embedding To prune irrelevant paths from potentially noisy schema graphs, we first utilize knowledge graph embedding (KGE) techniques, like TransE BIBREF22 , to pre-train concept embeddings $\mathbf {V}$ and relation type embeddings $\mathbf {R}$ , which are also used as initialization for (§ "Knowledge-Aware Graph Network" ). In order to measure the quality of a path, we decompose it into a set of triples, the confidence of which can be directly measured by the scoring function of the KGE method (i.e. the confidence of triple classification). Thus, we score a path with the multiplication product of the scores of each triple in the path, and then we empirically set a threshold for pruning (§ "Implementation Details of KagNet" ). Knowledge-Aware Graph Network The core component of our reasoning framework is the knowledge-aware graph network module . The first encodes plain structures of schema graphs with graph convolutional networks (§ "Graph Convolutional Networks" ) to accommodate pre-trained concept embeddings in their particular context within schema graphs. It then utilizes LSTMs to encode the paths between $\mathcal {C}_q$ and $\mathcal {C}_a$ , capturing multi-hop relational information (§ "Relational Path Encoding" ). Finally, we apply a hierarchical path-based attention mechanism (§ "Hierarchical Attention Mechanism" ) to complete the GCN-LSTM-HPA architecture, which models relational schema graphs with respect to the paths between question and answer concepts. 
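The grounding steps above — path finding between question and answer concepts followed by KGE-based pruning — might be sketched as follows, assuming the knowledge graph is held in a networkx graph and `triple_score` is a hypothetical callback wrapping a TransE-style confidence:

```python
import itertools
import networkx as nx

def build_schema_graph_paths(kg, q_concepts, a_concepts, k=4):
    """Collect simple paths of at most k concepts (k - 1 edges) between every
    (question concept, answer concept) pair; kg is assumed to be a networkx
    graph whose nodes are ConceptNet concepts."""
    paths = []
    for cq, ca in itertools.product(q_concepts, a_concepts):
        if cq != ca and cq in kg and ca in kg:
            paths += list(nx.all_simple_paths(kg, cq, ca, cutoff=k - 1))
    return paths

def prune_paths(paths, triple_score, threshold=0.15):
    """Keep a path only if the product of its per-triple confidence scores
    (e.g. from a pre-trained TransE model) clears the empirically chosen
    threshold."""
    kept = []
    for path in paths:
        score = 1.0
        for head, tail in zip(path, path[1:]):
            score *= triple_score(head, tail)   # hypothetical KGE-based callback
        if score >= threshold:
            kept.append(path)
    return kept
```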
Graph Convolutional Networks Graph convolutional networks (GCNs) encode graph-structured data by updating node vectors via pooling features of their adjacent nodes BIBREF16 . Our intuition for applying GCNs to schema graphs is to 1) contextually refine the concept vectors and 2) capture structural patterns of schema graphs for generalization. Although we have obtained concept vectors by pre-training (§ "Path Pruning via KG Embedding" ), the representations of concepts still need to be further accommodated to their specific schema graphs context. Think of polysemous concepts such as “close” (§ "Conclusion" ), which can either be a verb concept like in “close the door” or an adjective concept meaning “a short distance apart”. Using GCNs to update the concept vector with their neighbors is thus helpful for disambiguation and contextualized concept embedding. Also, the pattern of schema graph structures provides potentially valuable information for reasoning. For instance, shorter and denser connections between question and answer concepts could mean higher plausibility under specific contexts. As many works show BIBREF23 , BIBREF24 , relational GCNs BIBREF25 usually over-parameterize the model and cannot effectively utilize multi-hop relational information. We thus apply GCNs on the plain version (unlabeled, non-directional) of schema graphs, ignoring relation types on the edges. Specifically, the vector for concept $c_i\in \mathcal {V}_g$ in the schema graph $g$ is initialized by their pre-trained embeddings at first ( $h_i^{(0)} = \mathbf {V}_i$ ). Then, we update them at the $(l+1)$ -th layer by pooling features of their neighboring nodes ( $N_i$ ) and their own at the $l$ -th layer with an non-linear activation function $\sigma $ : $ h_i^{(l+1)} = \sigma (W_{self}^{(l)}h_i^{(l)}+\sum _{j\in N_i}\frac{1}{|N_i|}W^{(l)}h_j^{(l)}) $ Relational Path Encoding In order to capture the relational information in schema graphs, we propose an LSTM-based path encoder on top of the outputs of GCNs. Recall that our graph representation has a special purpose: “to measure the plausibility of a candidate answer to a given question”. Thus, we propose to represent graphs with respect to the paths between question concepts $\mathcal {C}_q$ and answer concepts $\mathcal {C}_a$ . We denote the $k$ -th path between $i$ -th question concept $c_i^{(q)}\in \mathcal {C}_q$ and $j$ -th answer concept $c_j^{(a)}\in \mathcal {C}_a$ as $P_{i,j}[k]$ , which is a sequence of triples: $ P_{i,j}[k] = [(c_i^{(q)}, r_0, t_0),...,(t_{n-1}, r_n,c_j^{(a)} )] $ Note that the relations are represented with trainable relation vectors (initialized with pre-trained relation embeddings), and concept vectors are the GCNs' outputs ( $h^{(l)}$ ). Thus, each triple can be represented by the concatenation of the three corresponding vectors. We employ LSTM networks to encode these paths as sequences of triple vectors, taking the concatenation of the first and the last hidden states: $\vspace{-10.0pt} \mathbf {R}_{i,j}= \frac{1}{|P_{i,j}|}\sum _k \texttt {LSTM}(P_{i,j}[k]) $ The above $\mathbf {R}_{i,j}$ can be viewed as the latent relation between the question concept $c_i^{(q)}$ and the answer concept $c_j^{(a)}$ , for which we aggregate the representations of all the paths between them in the schema graph. 
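A PyTorch sketch of the plain GCN layer defined earlier in this section (the class name, the dense adjacency representation, and the choice of ReLU for the unspecified nonlinearity $\sigma$ are our assumptions):

```python
import torch
import torch.nn as nn

class PlainGCNLayer(nn.Module):
    """One layer of the plain (untyped, undirected) GCN update:
    h_i^(l+1) = sigma(W_self h_i^(l) + (1/|N_i|) * sum_{j in N_i} W h_j^(l)).
    ReLU stands in for the unspecified nonlinearity sigma."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w_self = nn.Linear(d_in, d_out, bias=False)
        self.w_neigh = nn.Linear(d_in, d_out, bias=False)

    def forward(self, h, adj):
        # h: (num_concepts, d_in) node vectors; adj: (num_concepts, num_concepts) 0/1
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)   # |N_i|, avoid div-by-zero
        neigh = (adj @ self.w_neigh(h)) / deg             # mean over neighbors
        return torch.relu(self.w_self(h) + neigh)
```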
Now we can finalize the vector representation of a schema graph $\mathbf {g}$ by aggregating all vectors in the matrix $\mathbf {R}$ using mean pooling: $$\mathbf {T}_{i,j} &= \texttt {MLP}([\mathbf {s}~;~ \mathbf {c}_q^{(i)}~;~ \mathbf {c}_a^{(j)}]), \\ \mathbf {g} &= \frac{\sum _{i,j} [\mathbf {R}_{i,j}~;~ \mathbf {T}_{i,j}]}{|\mathcal {C}_q|\times |\mathcal {C}_a|},$$ where $[\cdot ~;~\cdot ]$ means concatenation of two vectors. The statement vector $\mathbf {s}$ in the above equation is obtained from a language encoder, which can either be a trainable sequence encoder like an LSTM or a fixed feature extractor such as a pre-trained universal language encoder like Gpt/Bert. To encode a question-answer pair with universal language encoders, we simply create a sentence combining the question and the answer with a special token (“question + [sep] + answer”), and then use the vector of `[cls]' as suggested by prior works BIBREF6 . We concatenate $\mathbf {R}_{i,j}$ with an additional vector $\mathbf {T}_{i,j}$ before doing average pooling. The $\mathbf {T}_{i,j}$ is inspired by the Relation Network BIBREF26 , which also encodes the latent relational information, yet from the context in the statement $\mathbf {s}$ instead of the schema graph $g$ . Simply put, we want to combine the relational representations of a pair of question/answer concepts from both the schema graph side (symbolic space) and the language side (semantic space). Finally, the plausibility score of the answer candidate $a$ to the question $q$ can be computed as $\texttt {score}(q,a) = \texttt {sigmoid}(\texttt {MLP}(\mathbf {g}))$ . Hierarchical Attention Mechanism A natural argument against the above GCN-LSTM-mean architecture is that mean pooling over the path vectors does not always make sense, since some paths are more important than others for reasoning. Also, it is usually not the case that all pairs of question and answer concepts contribute equally to the reasoning. Therefore, we propose a hierarchical path-based attention mechanism to selectively aggregate important path vectors and then the more important question-answer concept pairs. This core idea is similar to the work of BIBREF27 (2016), which proposes a document encoder that has two levels of attention mechanisms applied at the word- and sentence-level. In our case, we have path-level and concept-pair-level attention for learning to contextually model graph representations. We learn a parameter matrix $\mathbf {W}_1$ for path-level attention scores, and the importance of the path $P_{i,j}[k]$ is denoted as $\hat{\alpha }_{(i,j,\cdot )}$ : $$\alpha _{(i,j,k)} &= \mathbf {T}_{i,j} ~\mathbf {W}_{1} ~\texttt {LSTM}(P_{i,j}[k]),\\ \hat{\alpha }_{(i,j,\cdot )} &= \texttt {SoftMax}(\alpha _{(i,j,\cdot )}),\\ \hat{\mathbf {R}}_{i,j} &= \sum _k \hat{\alpha }_{(i,j,k)} \cdot \texttt {LSTM}(P_{i,j}[k]).$$ Afterwards, we similarly obtain the attention over concept pairs: $$\beta _{(i,j)} &= \mathbf {s}~\mathbf {W}_{2} ~ \mathbf {T}_{i,j}, \\ \hat{\beta }_{(\cdot ,\cdot )} &= \texttt {SoftMax}(\beta _{(\cdot ,\cdot )}),\\ \hat{\mathbf {g}} &= \sum _{i,j} \hat{\beta }_{(i,j)} [\hat{\mathbf {R}}_{i,j}~;~ \mathbf {T}_{i,j}].$$ The whole GCN-LSTM-HPA architecture is illustrated in Figure 3 .
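A rough PyTorch sketch of the path-level half of this hierarchical attention, for a single concept pair (i, j); padding variable-length paths to a common length, omitting the concept-pair-level attention, and using only the last LSTM hidden state are simplifications of ours:

```python
import torch.nn as nn
import torch.nn.functional as F

class PathLevelAttention(nn.Module):
    """Path-level attention for one concept pair (i, j): encode each path's
    triple-vector sequence with an LSTM, score it against the statement-side
    pair vector T_ij through a bilinear matrix W_1, and aggregate the path
    encodings with the softmax-normalized scores into R_hat_ij."""
    def __init__(self, d_triple, d_pair):
        super().__init__()
        self.path_lstm = nn.LSTM(d_triple, d_triple, batch_first=True)
        self.w1 = nn.Linear(d_triple, d_pair, bias=False)   # bilinear scoring W_1

    def forward(self, path_triples, t_ij):
        # path_triples: (num_paths, path_len, d_triple), padded to equal length
        # t_ij:         (d_pair,)
        out, _ = self.path_lstm(path_triples)
        path_vecs = out[:, -1, :]                            # last hidden state per path
        alpha = F.softmax(self.w1(path_vecs) @ t_ij, dim=0)  # path importance
        return (alpha.unsqueeze(-1) * path_vecs).sum(dim=0)  # R_hat_ij
```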
To sum up, we claim that KagNet is a graph neural network module with the GCN-LSTM-HPA architecture that models relational graphs for relational reasoning under the context of both the knowledge symbolic space and the language semantic space. Experiments We introduce our setups of the CommonsenseQA dataset BIBREF6 , present the baseline methods, and finally analyze experimental results. Dataset and Experiment Setup The CommonsenseQA dataset consists of 12,102 (v1.11) natural language questions in total that require human commonsense reasoning ability to answer, where each question has five candidate answers (hard mode). The authors also release an easy version of the dataset by picking two random terms/phrases for sanity check. CommonsenseQA is directly gathered from real human annotators and covers a broad range of types of commonsense, including spatial, social, causal, physical, temporal, etc. To the best of our knowledge, CommonsenseQA may be the most suitable choice for us to evaluate supervised learning models for question answering. For the comparisons with the reported results in the CommonsenseQA paper and leaderboard, we use the official split (9,741/1,221/1,140) named (OFtrain/OFdev/OFtest). Note that the performance on OFtest can only be tested by submitting predictions to the organizers. To efficiently test other baseline methods and ablation studies, we choose to use randomly selected 1,241 examples from the training data as our in-house data, forming an (8,500/1,221/1,241) split denoted as (IHtrain/IHdev/IHtest). All experiments use the random-split setting as the authors suggested, and three or more random states are tested on development sets to pick the best-performing one. Compared Methods We consider two different kinds of baseline methods as follows: $\bullet $ Knowledge-agnostic Methods. These methods either use no external resources or only use unstructured textual corpora as additional information, including gathering textual snippets from a search engine or large pre-trained language models like Bert-Large. QABilinear, QACompare, and ESIM are three supervised learning models for natural language inference that can be equipped with different word embeddings including GloVe and ELMo. BIDAF++ utilizes Google web snippets as context and is further augmented with a self-attention layer while using ELMo as input features. Gpt/Bert-Large are fine-tuning methods with an additional linear layer for classification as the authors suggested. They both add a special token `[sep]' to the input and use the hidden state of the `[cls]' as the input to the linear layer. More details about them can be found in the dataset paper BIBREF6 . $\bullet $ Knowledge-aware Methods. We also adopt some recently proposed methods of incorporating knowledge graphs for question answering. KV-Mem BIBREF28 is a method that incorporates retrieved triples from ConceptNet at the word level, using a key-value memory module to improve the representation of each token individually by learning an attentive aggregation of related triple vectors. CBPT BIBREF19 is a plug-in method of assembling the predictions of any models with a straightforward method of utilizing pre-trained concept embeddings from ConceptNet. TextGraphCat BIBREF29 concatenates the graph-based and text-based representations of the statement and then feeds them into a classifier.
We create sentence template for generating sentences and then feed retrieved triples as additional text inputs as a baseline method TripleString. BIBREF30 (2019) propose to collect human explanations for commonsense reasoning from annotators as additional knowledge (CoS-E), and then train a language model based on such human annotations for improving the model performance. Implementation Details of KagNet Our best (tested on OFdev) settings of have two GCN layers (100 dim, 50dim respectively), and one bidirectional LSTMs (128dim) . We pre-train KGE using TransE (100 dimension) initialized with GloVe embeddings. The statement encoder in use is Bert-Large, which works as a pre-trained sentence encoder to obtain fixed features for each pair of question and answer candidate. The paths are pruned with path-score threshold set to 0.15, keeping 67.21% of the original paths. We did not conduct pruning on concept pairs with less than three paths. For very few pairs with none path, $\hat{\mathbf {R}}_{(i,j)}$ will be a randomly sampled vector. We learn our models with Adam optimizers BIBREF31 . In our experiments, we found that the recall of ConceptNet on commonsense questions and answers is very high (over 98% of QA-pairs have more than one grounded concepts). Performance Comparisons and Analysis Comparison with standard baselines. As shown in Table 2 , we first use the official split to compare our model with the baseline methods reported on the paper and leaderboard. Bert and Gpt-based pre-training methods are much higher than other baseline methods, demonstrating the ability of language models to store commonsense knowledge in an implicit way. This presumption is also investigated by BIBREF32 (2019) and BIBREF33 (2019). Our proposed framework achieves an absolute increment of 2.2% in accuracy on the test data, a state-of-the-art performance. We conduct the experiments with our in-house splits to investigate whether our can also work well on other universal language encoders (GPT and Bert-Base), particularly with different fractions of the dataset (say 10%, 50%, 100% of the training data). Table 1 shows that our -based methods using fixed pre-trained language encoders outperform fine-tuning themselves in all settings. Furthermore, we find that the improvements in a small data situation (10%) is relatively limited, and we believe an important future research direction is thus few-shot learning for commonsense reasoning. Comparison with knowledge-aware baselines. To compare our model with other adopted baseline methods that also incorporate ConceptNet, we set up a bidirectional LSTM networks-based model for our in-house dataset. Then, we add baseline methods and onto the BLSTMs to compare their abilities to utilize external knowledge. Table 3 shows the comparisons under both easy mode and hard mode, and our methods outperform all knowledge-aware baseline methods by a large margin in terms of accuracy. Note that we compare our model and the CoS-E in Table 2 . Although CoS-E also achieves better result than only fine-tuning BERT by training with human-generated explanations, we argue that our proposed KagNet does not utilize any additional human efforts to provide more supervision. Ablation study on model components. To better understand the effectiveness of each component of our method, we have done ablation study as shown in Table 4 . 
We find that replacing our GCN-LSTM-HPA architecture with traditional relational GCNs, which uses separate weight matrices for different relation types, results in worse performance, due to its over-parameterization. The attention mechanisms matters almost equally in two levels, and pruning also effectively filters noisy paths. Error analysis. In the failed cases, there are three kinds of hard problems that is still not good at. negative reasoning: the grounding stage is not sensitive to the negation words, and thus can choose exactly opposite answers. comparative reasoning strategy: For the questions with more than one highly plausible answers, the commonsense reasoner should benefit from explicitly investigating the difference between different answer candidates, while training method is not capable of doing so. subjective reasoning: Many answers actually depend on the “personality” of the reasoner. For instance, “Traveling from new place to new place is likely to be what?” The dataset gives the answer as “exhilarating” instead of “exhausting”, which we think is more like a personalized subjective inference instead of common sense. Case Study on Interpretibility Our framework enjoys the merit of being more transparent, and thus provides more interpretable inference process. We can understand our model behaviors by analyzing the hierarchical attention scores on the question-answer concept pairs and path between them. Figure 4 shows an example how we can analyze our framework through both pair-level and path-level attention scores. We first select the concept-pairs with highest attention scores and then look at the (one or two) top-ranked paths for each selected pair. We find that paths located in this way are highly related to the inference process and also shows that noisy concepts like `fountain' will be diminished while modeling. Model Transferability. We study the transferability of a model that is trained on CommonsenseQA (CSQA) by directly testing it with another task while fixing its parameters. Recall that we have obtained a Bert-Large model and a model trained on CSQA. Now we denoted them as Csqa-Bl and Csqa-Kn to suggest that they are not trainable anymore. In order to investigate their transferability, we separately test them on SWAG BIBREF3 and WSC BIBREF34 datasets. We first test them the 20k validation examples in SWAG. Csqa-Bl has an accuracy of $56.53\%$ , while our fixed Csqa-Kn model achieves $59.01\%$ . Similarly, we also test both models on the WSC-QA, which is converted from the WSC pronoun resolution to a multi-choice QA task. The Csqa-BL achieves an accuracy of $51.23\%$ , while our model Csqa-KN scores $53.51\%$ . These two comparisons further support our assumption that , as a knowledge-centric model, is more extensible in commonsense reasoning. As we expect for a good knowledge-aware frameworks to behave, our indeed enjoys better transferablity than only fine-tuning large language encoders like Bert. Recent methods on the leaderboard. We argue that the utilizes the ConceptNet as the only external resource and other methods are improving their performance in orthogonal directions: 1) we find that most of the other recent submissions (as of Aug. 2019) with public information on the leaderboard utilize larger additional textual corpora (e.g. top 10 matched sentences in full Wikipedia via information retrieval tools), and fine-tuning on larger pre-trained encoders, such as XLNet BIBREF35 , RoBERTa BIBREF36 . 
2) there are also models using multi-task learning to transfer knowledge from other reading comprehension datasets, such as RACE BIBREF37 and OpenBookQA BIBREF38 . An interesting fact is that the best performance on the OFtest set is still achieved the original fine-tuned RoBERTa model, which is pre-trained with copora much larger than Bert. All other RoBERTa-extended methods have negative improvements. We also use statement vectors from RoBERTa as the input vectors for , and find that the performance on OFdev marginally improves from $77.47\%$ to $77.56\%$ . Based on our above-mentioned failed cases in error analysis, we believe fine-tuning RoBERTa has achieved the limit due to the annotator biases of the dataset and the lack of comparative reasoning strategies. Related Work Commonsense knowledge and reasoning. There is a recent surge of novel large-scale datasets for testing machine commonsense with various focuses, such as situation prediction (SWAG) BIBREF3 , social behavior understanding BIBREF11 , BIBREF4 , visual scene comprehension BIBREF5 , and general commonsense reasoning BIBREF6 , which encourages the study of supervised learning methods for commonsense reasoning. BIBREF39 (2018) find that large language models show promising results in WSC resolution task BIBREF34 , but this approach can hardly be applied in a more general question answering setting and also not provide explicit knowledge used in inference. A unique merit of our method is that it provides grounded explicit knowledge triples and paths with scores, such that users can better understand and put trust in the behaviors and inferences of the model. Injecting external knowledge for NLU. Our work also lies in the general context of using external knowledge to encode sentences or answer questions. BIBREF40 (2017) are the among first ones to propose to encode sentences by keeping retrieving related entities from knowledge bases and then merging their embeddings into LSTM networks computations, to achieve a better performance on entity/event extraction tasks. BIBREF17 (2017), BIBREF28 (2018), and BIBREF41 (2018) follow this line of works to incorporate the embeddings of related knowledge triples at the word-level and improve the performance of natural language understanding tasks. In contrast to our work, they do not explicitly impose graph-structured knowledge into models , but limit its potential within transforming word embeddings to concept embeddings. Some other recent attempts BIBREF19 , BIBREF29 to use ConceptNet graph embeddings are adopted and compared in our experiments (§ "Experiments" ). BIBREF30 (2019) propose to manually collect more human explanations for correct answers as additional supervision for auxiliary training. -based framework focuses on injecting external knowledge as an explicit graph structure, and enjoys the relational reasoning capacity over the graphs. Relational reasoning. can be seen as a knowledge-augmented Relation Network module (RN) BIBREF26 , which is proposed for the visual question answering task requiring relational reasoning (i.e. questions about the relations between multiple 3D-objects in an image). We view the concepts in the questions and answers as objects and effectively utilize external knowledge graphs to model their relations from both semantic and symbolic spaces (§ "Relational Path Encoding" ), while prior methods mainly work on the semantic one. Conclusion We propose a knowledge-aware framework for learning to answer commonsense questions. 
The framework first constructs schema graphs to represent relevant commonsense knowledge, and then models the graphs with our module. The module is based on a GCN-LSTM-HPA architecture, which effectively represents graphs for relational reasoning purposes in a transparent, interpretable way, yielding new state-of-the-art results on a large-scale general dataset for testing machine commonsense. Future directions include better question parsing methods to deal with negation and comparative question answering, as well as incorporating knowledge into visual reasoning. Acknowledgments This work has been supported in part by National Science Foundation SMA 18-29268, DARPA MCS and GAILA, IARPA BETTER, Schmidt Family Foundation, Amazon Faculty Award, Google Research Award, Snapchat Gift and JP Morgan AI Research Award. We would like to thank all the collaborators in the INK research lab for their constructive feedback on the work.
No
3ee976add83e37339715d4ae9d8aa328dd54d052
3ee976add83e37339715d4ae9d8aa328dd54d052_0
Q: What were the model's results on flood detection? Text: Introduction There are various forms of a natural disaster such as flood, earthquake, volcano eruptions, storms, etc. but the flood is one of the lethal and prominent forms of natural disaster according to World Meteorological Organization (WMO) for most of the countries. National Weather Services (NWS) reported 28,826 flash floods events in the United States from October 2007 to October 2015 which resulted in 278 live loss and million-dollar worth crop and property damage BIBREF0. Monitoring and detecting floods in advance and proactively working towards saving peoples live and minimizing damage at the same time is amongst one of the most important tasks nowadays. In recent times, humans are extremely active on social media such as Twitter, Facebook, Youtube, Flickr, Instagram, etc. People use these platform extensively to share crucial information via message, photos and videos in real-time on social media for their interaction and information dissemination on every topic and acts as an active human sensor. It has been observed in the past few years via several case studies that social media also contributes significantly and being used extensively for crisis-related feeds BIBREF1 and extremely helpful in situation awareness towards crisis management BIBREF2, BIBREF3, BIBREF4. Emergency first responders agency, humanitarian organizations, city authorities and other end users are always looking for the right amount and content that would be helpful in the crisis scenarios but generally, social media provides an overwhelming amount of unlabeled data and it is very crucial to filter out the right kind of information using text classification. The advances in Artificial Intelligence (AI) which includes machine learning and Natural Language Processing (NLP) methods can track and focus on humanitarian relief process and extract meaningful insights from the huge amount of social media data generated regularly in a timely manner. One of the major challenge while building a reliable and high accuracy model, it needs a huge amount of labeled data in order to be evaluated properly and achieve higher accuracy. Some of the platforms which uses crowdsourcing services and manually observe the data to label the disaster-related information such as CrisisLexBIBREF5, CrisisNLPBIBREF6, CrisisMMDBIBREF7, AIDRBIBREF8 etc. with already labeled data and pre-trained models, we can efficiently utilize the learned knowledge for the new target domain. In general, to make a good predictive model we need a huge amount of labeled data with specific domain to train that provide accurate, reliable results for the new domain. Transfer learning models efficiently leverage the existing knowledge and perform effectively the intended task by adapting to the new domain. In Figure FIGREF1 shows the comparison of general transfer learning and NLP transfer learning. Transfer learning learns from the source data model and applies the gained knowledge from the source domain to the target domain that requires relatively less labeled data. Social media growth in last decade and availability of existing disaster-related data sources labeled by crowdsourcing platforms provide an opportunity to utilize this data and build a learning model which learns the domain knowledge and transfer the learned knowledge to classify new data with higher accuracy and confidence automatically. 
This can effectively solve some of the important problems in disaster management such as flood detection, executing rescue operations, sending feedback and contextual warnings to authorities, improved situation awareness, etc. Transfer learning contains various type of knowledge sharing such as inductive, transductive depending on the source and target domain data distribution and source/target task relatedness BIBREF9. Figure FIGREF1 shows basic transfer Learning concept in NLP is slightly different than the general transfer learning. In general transfer learning, we have source domain and target domain, the model build and learned from the source domain data is used to transfer the knowledge to the target domain task model. Whereas, in NLP the source domain is the general understanding of the text learned from not only one domain but from a giant corpus of text, build a language model known as a pre-trained language model. These pre-trained language models are further used for different downstream task such as text classification, spam detection, question answering, etc. We are using here the inductive transfer learning where we have a pre-trained model as source task and improve the performance of the target task (flood tweet classification). We present in this study that using a pre-trained model and very few labeled flood tweets we can achieve great accuracy effectively in no time. The main contributions of this work are as follows: We propose to use the inductive transfer learning method and adapt the ULMFiT Pre-train model for text classification. We fine-tune the target model parameters by knowledge obtained from the source domain for quick and efficient flood tweet classification. We show that ULMFiT method needs a very small amount of labeled data (5%) to achieve high accuracy and performance. This study demonstrates that this model can be applied in real-time flood detection and information extraction with very small training data for new application domain. Related Work Growing active user base on social media and has been created a great opportunity for extracting crucial information in real-time for various events and topics. Social media is being vigorously used as the communication channel in the time of any crisis or any natural disaster in order to convey the actionable information to the emergency responders to help them by more situational awareness context so that they make a better decision for rescue operations, sending alerts, reaching out people right on time. There have been numerous works proposed related to crisis management using social media content which is discussed in the following section. Social media for crisis management Mainly in the analysis of social media content related to crisis situations data type such as images, geolocation, videos, text, etc. but most of the focus of these work has been images and geolocation towards crisis management BIBREF2, BIBREF3, BIBREF4, BIBREF10. Processing social media content is itself a huge challenge and comes with great challenges as well such as information processing, cleaning, filtering, summarizing, extracting, etc. There has been some progress lately in developing methods to extract meaningful information during a crisis for better situation awareness and better decision making BIBREF11. The text domain of the social media data has not been exploited to its fullest and it is generally the most valuable and available data on social media. 
Text processing can provide great amount of details which can be useful for situation awareness and help towards extracting actionable insights. Identifying relevant text data would eventually result in major event detection which is difficult to correctly track in a short amount of time and fast processing is needed in these scenarios. BIBREF11, BIBREF10. Domain adaptation for crisis management Transfer learning is very popular and active research area of machine learning. This learning method is known for learning the domain knowledge while solving the task and transfer its knowledge from one domain (source) to another domain (target) to solve the task in the new domain. We need to know these basic things while applying transfer learning (1). What needs to be transferred? (2). When to transfer the learned knowledge? (3). How to transfer knowledge? There are few basic transfer learning algorithm principles that include few simple steps as follows: (i) it aims to minimize the error measure by reweighting the source label sample such that it appears as a target. (ii) Adapt the methods iteratively and label target example using these common steps (a) model learned from labeled example, (b) labels some target example (c) New model learns from new labels BIBREF12, BIBREF13. Transfer learning has been explored and applied in various classification problems for high quality and reliable results with less labeled data in the target domain. It has also been used for feature selection, pedestrian detection, improving visual tracking and subtractive bias removal in medial domainBIBREF12. Some of the other example where transfer learning have been used are text classificationBIBREF13, sentiment classification BIBREF14, BIBREF15, domain adaptation BIBREF16, object classificationBIBREF17. Data Collection and Processing In this section, we explain about our data collection and cleaning process of the data followed by some data visualization for better understanding of the data. The text data are decidedly very crucial and if leveraged carefully in time, it can assist in various emergency response services. It could greatly benefit the authorities in their decision-making process, rescue operation, increase situational awareness and early warnings. We are using Twitter data since it is one of the widely used social media platform in recent times. Data Collection: We are using the disaster data from BIBREF5. It contains various dataset including the CrisiLexT6 dataset which contains six crisis events related to English tweets in 2012 and 2013, labeled by relatedness (on-topic and off-topic) of respective crisis. Each crisis event tweets contain almost 10,000 labeled tweets but we are only focused on flood-related tweets thus, we experimented with only two flood event i.e. Queensland flood in Queensland, Australia and Alberta flood in Alberta, Canada and relabeled all on-topic tweets as Related and Off-topic as Unrelated for implicit class labels understanding in this case. The data collection process and duration of CrisisLex data is described in BIBREF5 details. Data cleaning: The tweets, in general, are very noisy and we need to clean the tweets in order to use them for efficient model building. We removed the stop words, numerical, special symbols and characters, punctuation, white space, random alphabets, and URLs, etc. We also transform all the tweets into lower case alphabet to normalize it and remove the redundancy in the data. 
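To make the cleaning step described above concrete, here is a minimal Python sketch of such a routine; the regular expressions, the tiny stop-word list, and the example tweet are illustrative assumptions rather than the authors' actual implementation.

```python
import re

# Tiny illustrative stop-word list; a real pipeline would use a fuller list
# (e.g., NLTK's English stop words).
STOP_WORDS = {"the", "a", "an", "in", "of", "to", "is", "are", "and", "at", "on"}

def clean_tweet(text):
    """Approximate the cleaning described above: lowercase, drop URLs,
    user mentions, digits, symbols, punctuation, stray single letters,
    extra whitespace, and stop words."""
    text = text.lower()
    text = re.sub(r"http\S+|www\.\S+", " ", text)   # URLs
    text = re.sub(r"@\w+", " ", text)               # user mentions
    text = re.sub(r"[^a-z\s]", " ", text)           # digits, symbols, punctuation
    text = re.sub(r"\b[a-z]\b", " ", text)          # random single alphabets
    tokens = [t for t in text.split() if t not in STOP_WORDS]
    return " ".join(tokens)

print(clean_tweet("RT @abc: #Flood waters rising in Brisbane!! http://t.co/xyz"))
# -> "rt flood waters rising brisbane"
```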
After cleaning the tweets we performed some data visualization next for better data insights. Data Visualization: Our focus here is to understand the basic characteristics of tweets and demonstrate the power of transfer learning method in this application. Although both of the datasets are similar in distribution thus, we have selected Queensland flood dataset for elaboration. Table TABREF6 shows the fairly equal class distribution in Queensland flood tweets with 5414 related flood tweet and 4619 unrelated flood tweets. Figure FIGREF7 shows the number of words in a tweet which ranges from 5 words up to 30 words in a single tweet. Figure FIGREF7 shows the tweet length distribution contains from 30 characters up to 140 characters in a tweet. Figure FIGREF10, FIGREF10, FIGREF10 shows the top 20 most frequent words, bi-gram and tri-gram respectively of the tweet dataset. By visual inspection of these most frequent words, bigram and trigram, we have a general understanding of the major topics and themes in the data. Tweets characteristics are generally similar in most of the cases so it is highly probable that it can be effectively applied for other scenarios or new location as well. Methodology It is well known that numerous state-of-the-art models in NLP require huge data to be trained on from scratch to achieve reasonable results. These models take paramount of memory and immensely time-consuming. NLP researchers have been looking into various successful methods/models in computer vision (CV) and to attain similar success in NLP. A major breakthrough in CV was transferring knowledge obtained from pre-trained models on ImageNet BIBREF18 as a source task to target tasks for efficient results. There has been a huge advancement in the area of transfer learning in NLP due to the introduction of the pre-trained language models such as ULMFITBIBREF19, ELMO BIBREF20,GLUE BIBREF21, BERT BIBREF22, Attention-net BIBREF23, XL-Net BIBREF24 and many more to come etc. These pre-trained models have acquired state-of-the-art performance for many NLP task since they use a huge amount of training data for language understanding as their source models and fine-tune the model to achieve the high accuracy in the target task. We are using ULMFiT in this study since it has been shown significant performance for target domain classification task with minimal labelled data along with less training time with reasonable hardware requirement. Whereas, other models such as BERT, XL-Net etc. are much bigger and complex that need large training time and higher hardware architecture. Methodology ::: Universal Language Model Fine-tuning (ULMFiT) This method ULMFiT BIBREF19 was introduced by Howard and Ruder which can effectively be applied as a transfer learning method for various NLP task. In inductive transfer learning the source task (Language model) is generally different than the target task (Flood detection) and requires labeled data in the target domain. ULMFiT is very suitable for efficient and text classification BIBREF19 is a pre-trained model. This model significantly outperformed in text classification, reducing error by 18-24% on various datasets and achieving accuracy with very small labeled data. Some of the examples where researchers have used ULMFiT to solve a specific problem using power of transfer learning are BIBREF25, BIBREF26. Although, ULMFiT has the capability to handle any type of classification task such as topic classification, question classification, etc. 
but we are specifically targeting the flood-related tweet classification. Methodology ::: ULMFiT adaptation for Flood Tweet Classification Text classification in any new area generally suffers from no or very little labeled data to work with initially. Inductive transfer learning addressees this very same challenge and ULMFiT method is primarily based on this concept. We have used the pre-trained language model ULMFiT to do the classification for the target task and classify the related and unrelated flood tweets coming from different location social media (Twitter). As shown in Figure FIGREF16 our overall framework adapted from BIBREF19 to do the flood tweet classification. As shown in Figure FIGREF16 we are using the ULMFIiT architecture to solve the flood tweet classification problem. The source domain here is trained on the paramount of text data corpus from WikiText-103 dataset which contains 103 million words, 400 dimensional embedding size, 3 layers neural network architecture (AWD-LSTM) and 1150 hidden activations per layer that creates a general domain language model for general domain LM pretraining to predict the next word in the sequence, learns general features of the language. AWD-LSTM BIBREF27 is a regular LSTM, used for the Language Modeling with various regularization and optimization techniques that produce state-of-the-art results. Next step is Target Task LM Fine-Tuning which entertain the transfer learning idea by gaining the knowledge from the previous step and utilize it in the target task. Here the target task is flood tweet detection which has different data distribution and features so the general model fine-tunes according to the target task and adapt to the new domain (target) by learning the target task-specific features of the language. It is done using discriminative fine-tuning and slanted triangular learning rates for fine-tuning the LM. Finally, Target Task Classifier provide classification results as the probability distribution over flood class labels (related and unrelated) which is a very critical part of transfer learning method. it needs to be very balanced (not too slow or fast fine-tuned) using the gradual unfreezing for fine-tuning the classifier. We used some of the same hyperparameters for this task. Experimental Results and Discussion In this section, we will discuss our experimental results of the text classification. As described above in the methodology section that our source domain model comes from the ULMFiT and the target domain data is Queensland flood data which has almost 10,000 tweets labeled as flood Related and Unrelated. The pre-train ULMFiT model uses the AWD-LSTM language model with embedding size of 400, 3 layers, 1150 hidden activations per layer with a batch size of 70 and a back propagation through time (BPTT) BIBREF19. Dropout here has been used as 0.7 to language model learner and 0.7 to text classifier learner. A base learning rate of 0.01 for LM fine-tuning and multiple values ranging from 0.00001 to 0.1 of learning rate have been used for target classifier fine-tuning for various instances. We have used gradual unfreezing of the model layers in this case to avoid the risk of catastrophic forgetting. It starts fine-tuning of the last layer (minimal general knowledge) to the next lower layer on wards in every iterations to attain the highest performance of the model. We have used the following hardware for the experimentation: Windows 10 Education desktop consisting of intel core i-7 processor and 16GB RAM. 
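As a rough illustration of the pipeline just described, the sketch below uses the fastai v1 text API, which implements ULMFiT's discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing; the DataFrames `train_df`/`valid_df` with `text` and `label` columns and the specific learning rates are assumptions for illustration, not the authors' exact code or settings.

```python
from fastai.text import (TextLMDataBunch, TextClasDataBunch, AWD_LSTM,
                         language_model_learner, text_classifier_learner)

# Target-task LM fine-tuning, starting from the WikiText-103 AWD-LSTM weights
data_lm = TextLMDataBunch.from_df(".", train_df, valid_df, text_cols="text")
lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.7, pretrained=True)
lm.fit_one_cycle(1, 1e-2)            # slanted triangular learning-rate schedule
lm.unfreeze()
lm.fit_one_cycle(1, 1e-3)
lm.save_encoder("flood_lm_encoder")  # carries target-domain language knowledge forward

# Target-task classifier with gradual unfreezing
data_clas = TextClasDataBunch.from_df(".", train_df, valid_df, text_cols="text",
                                      label_cols="label", vocab=data_lm.vocab, bs=70)
clf = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.7)
clf.load_encoder("flood_lm_encoder")
clf.fit_one_cycle(1, 1e-2)                                   # last layer only
clf.freeze_to(-2); clf.fit_one_cycle(1, slice(5e-3, 1e-2))   # unfreeze one more layer group
clf.unfreeze();    clf.fit_one_cycle(2, slice(1e-4, 1e-3))   # full model
preds, _ = clf.get_preds()          # probabilities over Related / Unrelated
```

The encoder saved after LM fine-tuning is what transfers the target-domain language knowledge into the classifier, which is then unfrozen layer by layer to avoid catastrophic forgetting.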
We have used python 3.6 and Google colab notebook to execute our model and obtained the results discussed below: The train and test data have divided into 70-30 ratio and we got these results as shown in Table TABREF17 for the individual dataset and the combination of both. The pre-trained network was already trained and we used the target data Queensland flood which provided 96% accuracy with 0.118 Test loss in only 11 seconds provided we used only 70% of training labeled data. The second target data is Alberta flood with the same configuration of train-test split which provided 95% accuracy with 0.118 Test loss in just 19 seconds. As we can see it takes very less time to work with 20,000 of tweets (combined) and at times of emergency it can handle a huge amount of unlabeled data to classify into meaningful categories in minutes. Here, Our focus is localized flood detection thus we are not merging multiple datasets, we will leave the combination for our future work and staying with one Queensland flood data and explore that in details. As it can be seen in Table TABREF18 that event with the 5% of data which is only 500 labeled tweets as target labeled data the model can adapt and fine-tuned the classification model wit 95% accuracy. This model is very efficient and effective when we have a time-sensitive application and instead of training a model from scratch with huge data we can use the pre-trained model and successfully applies to the target domain application. The Table TABREF18 also depicts that even with the very small labeled training data the model was able to achieve the accuracy almost equivalent to the 80% of the training data. There is generally a direct relation which says the more training data is the better but here increased labeled data the accuracy did not contribute significantly towards the accuracy improvement. There are some more measures for accessing the quality of the classification such as training/testing loss and average precision to avoid the bias in the accuracy. Thus, Figure FIGREF19 shows the learning rate adjusting according to the target classifier model, showing with the specific learning rate it achieves the low amount of loss which is called as the slanted triangular Learning rate. Figure FIGREF19 shows the Precision-Recall curve for a particular classification instance where the average Precision is 0.94. It shows that the overall quality of the classification is fairly good and does not favor one class over another. As described above and based on the experimental results we can use a very low amount of labeled data and solved the localized flood disaster situation efficiently for any new location. We faced some limitations in this work and plan to include in our future work described in the next section. Limitation and Future Work We have been focused on a specific type of disaster (Flood) here and did not explore other disaster types since we wanted to capture specific kind of disaster characteristics and learn from it for another flood disaster. We plan to perform extensive experimentation with some other kind of disaster data as well in the future. We have explored and experimented with the twitter dataset only so far because it is widely available and accessible for everyone but we would attempt to include different kinds of data sources such as other social media platforms, news feeds, blogs, text, images, etc. as well to make it a multimodel transfer learning approach in our future models. 
There are other state-of-the-art pre-trained language model such as BERT, GPT-2, Transformer-XL, etc. for text classification available and we would want to compare this adaptation with other models as well for the most time effective models in the given situation. There can be many more application where multi-class classification including various classes such damage, rescue, buildings, transportation, medical, etc. can be labeled with a small amount in order to build a very efficient classification model. We also have the plan to formulate this a multi-class problem in order to deeply address the problems in disaster management. This opens up a new door for cyber-physical-social systems that would rely on social media feeds coming from human sensors along with wireless physical/environmental sensors in tandem for various applications to create another layer of smart sensors that can achieve the high quality, more reliable and fault-tolerant system. Conclusion As we are aware of the calamity due to flood/ flash flood situation which needs close monitoring and detail attention. With the exponential growth in social media users, there is an ample amount of data which can be extremely useful in flood detection. Transfer learning is very helpful in these applications where we need to train with general knowledge along with little target domain knowledge to attain a highly effective model. We have discovered that inductive transfer learning methods are very useful for social media flood detection data with minimal labeled data. We used Queensland Twitter data as one of the flood locations and used the pre-trained model ULMFiT to successfully classify with accuracy 95% the flood-related tweets with only 5% of labeled target samples under 10 seconds whereas in general, it takes thousands of labeled tweets and huge time to achieve the similar performance. The usage of pre-trained models with minimal space and time complexity, it can be a huge advantage to the time-sensitive application where we need to process millions of tweets efficiently and classify them accordingly with high performance without compromising on the accuracy. Acknowledgment This research is funded by the National Science Foundation (NSF) grant number 1640625. I would like to thank my mentor and advisor Dr. Nirmalya Roy for their motivation, support, and feedback for my research. I am grateful for Dr. Aryya Gangopadhyay (co-advisor) for the discussion and continuous encouragement towards my work.
Queensland flood which provided 96% accuracy, Alberta flood with the same configuration of train-test split which provided 95% accuracy
ef04182b6ae73a83d52cb694cdf4d414c81bf1dc
ef04182b6ae73a83d52cb694cdf4d414c81bf1dc_0
Q: What dataset did they use? Text: Introduction There are various forms of a natural disaster such as flood, earthquake, volcano eruptions, storms, etc. but the flood is one of the lethal and prominent forms of natural disaster according to World Meteorological Organization (WMO) for most of the countries. National Weather Services (NWS) reported 28,826 flash floods events in the United States from October 2007 to October 2015 which resulted in 278 live loss and million-dollar worth crop and property damage BIBREF0. Monitoring and detecting floods in advance and proactively working towards saving peoples live and minimizing damage at the same time is amongst one of the most important tasks nowadays. In recent times, humans are extremely active on social media such as Twitter, Facebook, Youtube, Flickr, Instagram, etc. People use these platform extensively to share crucial information via message, photos and videos in real-time on social media for their interaction and information dissemination on every topic and acts as an active human sensor. It has been observed in the past few years via several case studies that social media also contributes significantly and being used extensively for crisis-related feeds BIBREF1 and extremely helpful in situation awareness towards crisis management BIBREF2, BIBREF3, BIBREF4. Emergency first responders agency, humanitarian organizations, city authorities and other end users are always looking for the right amount and content that would be helpful in the crisis scenarios but generally, social media provides an overwhelming amount of unlabeled data and it is very crucial to filter out the right kind of information using text classification. The advances in Artificial Intelligence (AI) which includes machine learning and Natural Language Processing (NLP) methods can track and focus on humanitarian relief process and extract meaningful insights from the huge amount of social media data generated regularly in a timely manner. One of the major challenge while building a reliable and high accuracy model, it needs a huge amount of labeled data in order to be evaluated properly and achieve higher accuracy. Some of the platforms which uses crowdsourcing services and manually observe the data to label the disaster-related information such as CrisisLexBIBREF5, CrisisNLPBIBREF6, CrisisMMDBIBREF7, AIDRBIBREF8 etc. with already labeled data and pre-trained models, we can efficiently utilize the learned knowledge for the new target domain. In general, to make a good predictive model we need a huge amount of labeled data with specific domain to train that provide accurate, reliable results for the new domain. Transfer learning models efficiently leverage the existing knowledge and perform effectively the intended task by adapting to the new domain. In Figure FIGREF1 shows the comparison of general transfer learning and NLP transfer learning. Transfer learning learns from the source data model and applies the gained knowledge from the source domain to the target domain that requires relatively less labeled data. Social media growth in last decade and availability of existing disaster-related data sources labeled by crowdsourcing platforms provide an opportunity to utilize this data and build a learning model which learns the domain knowledge and transfer the learned knowledge to classify new data with higher accuracy and confidence automatically. 
This can effectively solve some of the important problems in disaster management such as flood detection, executing rescue operations, sending feedback and contextual warnings to authorities, improved situation awareness, etc. Transfer learning contains various type of knowledge sharing such as inductive, transductive depending on the source and target domain data distribution and source/target task relatedness BIBREF9. Figure FIGREF1 shows basic transfer Learning concept in NLP is slightly different than the general transfer learning. In general transfer learning, we have source domain and target domain, the model build and learned from the source domain data is used to transfer the knowledge to the target domain task model. Whereas, in NLP the source domain is the general understanding of the text learned from not only one domain but from a giant corpus of text, build a language model known as a pre-trained language model. These pre-trained language models are further used for different downstream task such as text classification, spam detection, question answering, etc. We are using here the inductive transfer learning where we have a pre-trained model as source task and improve the performance of the target task (flood tweet classification). We present in this study that using a pre-trained model and very few labeled flood tweets we can achieve great accuracy effectively in no time. The main contributions of this work are as follows: We propose to use the inductive transfer learning method and adapt the ULMFiT Pre-train model for text classification. We fine-tune the target model parameters by knowledge obtained from the source domain for quick and efficient flood tweet classification. We show that ULMFiT method needs a very small amount of labeled data (5%) to achieve high accuracy and performance. This study demonstrates that this model can be applied in real-time flood detection and information extraction with very small training data for new application domain. Related Work Growing active user base on social media and has been created a great opportunity for extracting crucial information in real-time for various events and topics. Social media is being vigorously used as the communication channel in the time of any crisis or any natural disaster in order to convey the actionable information to the emergency responders to help them by more situational awareness context so that they make a better decision for rescue operations, sending alerts, reaching out people right on time. There have been numerous works proposed related to crisis management using social media content which is discussed in the following section. Social media for crisis management Mainly in the analysis of social media content related to crisis situations data type such as images, geolocation, videos, text, etc. but most of the focus of these work has been images and geolocation towards crisis management BIBREF2, BIBREF3, BIBREF4, BIBREF10. Processing social media content is itself a huge challenge and comes with great challenges as well such as information processing, cleaning, filtering, summarizing, extracting, etc. There has been some progress lately in developing methods to extract meaningful information during a crisis for better situation awareness and better decision making BIBREF11. The text domain of the social media data has not been exploited to its fullest and it is generally the most valuable and available data on social media. 
Text processing can provide great amount of details which can be useful for situation awareness and help towards extracting actionable insights. Identifying relevant text data would eventually result in major event detection which is difficult to correctly track in a short amount of time and fast processing is needed in these scenarios. BIBREF11, BIBREF10. Domain adaptation for crisis management Transfer learning is very popular and active research area of machine learning. This learning method is known for learning the domain knowledge while solving the task and transfer its knowledge from one domain (source) to another domain (target) to solve the task in the new domain. We need to know these basic things while applying transfer learning (1). What needs to be transferred? (2). When to transfer the learned knowledge? (3). How to transfer knowledge? There are few basic transfer learning algorithm principles that include few simple steps as follows: (i) it aims to minimize the error measure by reweighting the source label sample such that it appears as a target. (ii) Adapt the methods iteratively and label target example using these common steps (a) model learned from labeled example, (b) labels some target example (c) New model learns from new labels BIBREF12, BIBREF13. Transfer learning has been explored and applied in various classification problems for high quality and reliable results with less labeled data in the target domain. It has also been used for feature selection, pedestrian detection, improving visual tracking and subtractive bias removal in medial domainBIBREF12. Some of the other example where transfer learning have been used are text classificationBIBREF13, sentiment classification BIBREF14, BIBREF15, domain adaptation BIBREF16, object classificationBIBREF17. Data Collection and Processing In this section, we explain about our data collection and cleaning process of the data followed by some data visualization for better understanding of the data. The text data are decidedly very crucial and if leveraged carefully in time, it can assist in various emergency response services. It could greatly benefit the authorities in their decision-making process, rescue operation, increase situational awareness and early warnings. We are using Twitter data since it is one of the widely used social media platform in recent times. Data Collection: We are using the disaster data from BIBREF5. It contains various dataset including the CrisiLexT6 dataset which contains six crisis events related to English tweets in 2012 and 2013, labeled by relatedness (on-topic and off-topic) of respective crisis. Each crisis event tweets contain almost 10,000 labeled tweets but we are only focused on flood-related tweets thus, we experimented with only two flood event i.e. Queensland flood in Queensland, Australia and Alberta flood in Alberta, Canada and relabeled all on-topic tweets as Related and Off-topic as Unrelated for implicit class labels understanding in this case. The data collection process and duration of CrisisLex data is described in BIBREF5 details. Data cleaning: The tweets, in general, are very noisy and we need to clean the tweets in order to use them for efficient model building. We removed the stop words, numerical, special symbols and characters, punctuation, white space, random alphabets, and URLs, etc. We also transform all the tweets into lower case alphabet to normalize it and remove the redundancy in the data. 
After cleaning the tweets we performed some data visualization next for better data insights. Data Visualization: Our focus here is to understand the basic characteristics of tweets and demonstrate the power of transfer learning method in this application. Although both of the datasets are similar in distribution thus, we have selected Queensland flood dataset for elaboration. Table TABREF6 shows the fairly equal class distribution in Queensland flood tweets with 5414 related flood tweet and 4619 unrelated flood tweets. Figure FIGREF7 shows the number of words in a tweet which ranges from 5 words up to 30 words in a single tweet. Figure FIGREF7 shows the tweet length distribution contains from 30 characters up to 140 characters in a tweet. Figure FIGREF10, FIGREF10, FIGREF10 shows the top 20 most frequent words, bi-gram and tri-gram respectively of the tweet dataset. By visual inspection of these most frequent words, bigram and trigram, we have a general understanding of the major topics and themes in the data. Tweets characteristics are generally similar in most of the cases so it is highly probable that it can be effectively applied for other scenarios or new location as well. Methodology It is well known that numerous state-of-the-art models in NLP require huge data to be trained on from scratch to achieve reasonable results. These models take paramount of memory and immensely time-consuming. NLP researchers have been looking into various successful methods/models in computer vision (CV) and to attain similar success in NLP. A major breakthrough in CV was transferring knowledge obtained from pre-trained models on ImageNet BIBREF18 as a source task to target tasks for efficient results. There has been a huge advancement in the area of transfer learning in NLP due to the introduction of the pre-trained language models such as ULMFITBIBREF19, ELMO BIBREF20,GLUE BIBREF21, BERT BIBREF22, Attention-net BIBREF23, XL-Net BIBREF24 and many more to come etc. These pre-trained models have acquired state-of-the-art performance for many NLP task since they use a huge amount of training data for language understanding as their source models and fine-tune the model to achieve the high accuracy in the target task. We are using ULMFiT in this study since it has been shown significant performance for target domain classification task with minimal labelled data along with less training time with reasonable hardware requirement. Whereas, other models such as BERT, XL-Net etc. are much bigger and complex that need large training time and higher hardware architecture. Methodology ::: Universal Language Model Fine-tuning (ULMFiT) This method ULMFiT BIBREF19 was introduced by Howard and Ruder which can effectively be applied as a transfer learning method for various NLP task. In inductive transfer learning the source task (Language model) is generally different than the target task (Flood detection) and requires labeled data in the target domain. ULMFiT is very suitable for efficient and text classification BIBREF19 is a pre-trained model. This model significantly outperformed in text classification, reducing error by 18-24% on various datasets and achieving accuracy with very small labeled data. Some of the examples where researchers have used ULMFiT to solve a specific problem using power of transfer learning are BIBREF25, BIBREF26. Although, ULMFiT has the capability to handle any type of classification task such as topic classification, question classification, etc. 
but we are specifically targeting the flood-related tweet classification. Methodology ::: ULMFiT adaptation for Flood Tweet Classification Text classification in any new area generally suffers from no or very little labeled data to work with initially. Inductive transfer learning addressees this very same challenge and ULMFiT method is primarily based on this concept. We have used the pre-trained language model ULMFiT to do the classification for the target task and classify the related and unrelated flood tweets coming from different location social media (Twitter). As shown in Figure FIGREF16 our overall framework adapted from BIBREF19 to do the flood tweet classification. As shown in Figure FIGREF16 we are using the ULMFIiT architecture to solve the flood tweet classification problem. The source domain here is trained on the paramount of text data corpus from WikiText-103 dataset which contains 103 million words, 400 dimensional embedding size, 3 layers neural network architecture (AWD-LSTM) and 1150 hidden activations per layer that creates a general domain language model for general domain LM pretraining to predict the next word in the sequence, learns general features of the language. AWD-LSTM BIBREF27 is a regular LSTM, used for the Language Modeling with various regularization and optimization techniques that produce state-of-the-art results. Next step is Target Task LM Fine-Tuning which entertain the transfer learning idea by gaining the knowledge from the previous step and utilize it in the target task. Here the target task is flood tweet detection which has different data distribution and features so the general model fine-tunes according to the target task and adapt to the new domain (target) by learning the target task-specific features of the language. It is done using discriminative fine-tuning and slanted triangular learning rates for fine-tuning the LM. Finally, Target Task Classifier provide classification results as the probability distribution over flood class labels (related and unrelated) which is a very critical part of transfer learning method. it needs to be very balanced (not too slow or fast fine-tuned) using the gradual unfreezing for fine-tuning the classifier. We used some of the same hyperparameters for this task. Experimental Results and Discussion In this section, we will discuss our experimental results of the text classification. As described above in the methodology section that our source domain model comes from the ULMFiT and the target domain data is Queensland flood data which has almost 10,000 tweets labeled as flood Related and Unrelated. The pre-train ULMFiT model uses the AWD-LSTM language model with embedding size of 400, 3 layers, 1150 hidden activations per layer with a batch size of 70 and a back propagation through time (BPTT) BIBREF19. Dropout here has been used as 0.7 to language model learner and 0.7 to text classifier learner. A base learning rate of 0.01 for LM fine-tuning and multiple values ranging from 0.00001 to 0.1 of learning rate have been used for target classifier fine-tuning for various instances. We have used gradual unfreezing of the model layers in this case to avoid the risk of catastrophic forgetting. It starts fine-tuning of the last layer (minimal general knowledge) to the next lower layer on wards in every iterations to attain the highest performance of the model. We have used the following hardware for the experimentation: Windows 10 Education desktop consisting of intel core i-7 processor and 16GB RAM. 
We have used python 3.6 and Google colab notebook to execute our model and obtained the results discussed below: The train and test data have divided into 70-30 ratio and we got these results as shown in Table TABREF17 for the individual dataset and the combination of both. The pre-trained network was already trained and we used the target data Queensland flood which provided 96% accuracy with 0.118 Test loss in only 11 seconds provided we used only 70% of training labeled data. The second target data is Alberta flood with the same configuration of train-test split which provided 95% accuracy with 0.118 Test loss in just 19 seconds. As we can see it takes very less time to work with 20,000 of tweets (combined) and at times of emergency it can handle a huge amount of unlabeled data to classify into meaningful categories in minutes. Here, Our focus is localized flood detection thus we are not merging multiple datasets, we will leave the combination for our future work and staying with one Queensland flood data and explore that in details. As it can be seen in Table TABREF18 that event with the 5% of data which is only 500 labeled tweets as target labeled data the model can adapt and fine-tuned the classification model wit 95% accuracy. This model is very efficient and effective when we have a time-sensitive application and instead of training a model from scratch with huge data we can use the pre-trained model and successfully applies to the target domain application. The Table TABREF18 also depicts that even with the very small labeled training data the model was able to achieve the accuracy almost equivalent to the 80% of the training data. There is generally a direct relation which says the more training data is the better but here increased labeled data the accuracy did not contribute significantly towards the accuracy improvement. There are some more measures for accessing the quality of the classification such as training/testing loss and average precision to avoid the bias in the accuracy. Thus, Figure FIGREF19 shows the learning rate adjusting according to the target classifier model, showing with the specific learning rate it achieves the low amount of loss which is called as the slanted triangular Learning rate. Figure FIGREF19 shows the Precision-Recall curve for a particular classification instance where the average Precision is 0.94. It shows that the overall quality of the classification is fairly good and does not favor one class over another. As described above and based on the experimental results we can use a very low amount of labeled data and solved the localized flood disaster situation efficiently for any new location. We faced some limitations in this work and plan to include in our future work described in the next section. Limitation and Future Work We have been focused on a specific type of disaster (Flood) here and did not explore other disaster types since we wanted to capture specific kind of disaster characteristics and learn from it for another flood disaster. We plan to perform extensive experimentation with some other kind of disaster data as well in the future. We have explored and experimented with the twitter dataset only so far because it is widely available and accessible for everyone but we would attempt to include different kinds of data sources such as other social media platforms, news feeds, blogs, text, images, etc. as well to make it a multimodel transfer learning approach in our future models. 
There are other state-of-the-art pre-trained language model such as BERT, GPT-2, Transformer-XL, etc. for text classification available and we would want to compare this adaptation with other models as well for the most time effective models in the given situation. There can be many more application where multi-class classification including various classes such damage, rescue, buildings, transportation, medical, etc. can be labeled with a small amount in order to build a very efficient classification model. We also have the plan to formulate this a multi-class problem in order to deeply address the problems in disaster management. This opens up a new door for cyber-physical-social systems that would rely on social media feeds coming from human sensors along with wireless physical/environmental sensors in tandem for various applications to create another layer of smart sensors that can achieve the high quality, more reliable and fault-tolerant system. Conclusion As we are aware of the calamity due to flood/ flash flood situation which needs close monitoring and detail attention. With the exponential growth in social media users, there is an ample amount of data which can be extremely useful in flood detection. Transfer learning is very helpful in these applications where we need to train with general knowledge along with little target domain knowledge to attain a highly effective model. We have discovered that inductive transfer learning methods are very useful for social media flood detection data with minimal labeled data. We used Queensland Twitter data as one of the flood locations and used the pre-trained model ULMFiT to successfully classify with accuracy 95% the flood-related tweets with only 5% of labeled target samples under 10 seconds whereas in general, it takes thousands of labeled tweets and huge time to achieve the similar performance. The usage of pre-trained models with minimal space and time complexity, it can be a huge advantage to the time-sensitive application where we need to process millions of tweets efficiently and classify them accordingly with high performance without compromising on the accuracy. Acknowledgment This research is funded by the National Science Foundation (NSF) grant number 1640625. I would like to thank my mentor and advisor Dr. Nirmalya Roy for their motivation, support, and feedback for my research. I am grateful for Dr. Aryya Gangopadhyay (co-advisor) for the discussion and continuous encouragement towards my work.
disaster data from BIBREF5, Queensland flood in Queensland, Australia and Alberta flood in Alberta, Canada
decb07f9be715de024236e50dc7011a132363480
decb07f9be715de024236e50dc7011a132363480_0
Q: What exactly is new about this stochastic gradient descent algorithm? Text: Introduction Emergency events such as natural or man-made disasters bring unique challenges for humanitarian response organizations. Particularly, sudden-onset crisis situations demand officials to make fast decisions based on minimum information available to deploy rapid crisis response. However, information scarcity during time-critical situations hinders decision-making processes and delays response efforts BIBREF0 , BIBREF1 . During crises, people post updates regarding their statuses, ask for help and other useful information, report infrastructure damages, injured people, etc., on social media platforms like Twitter BIBREF2 . Humanitarian organizations can use this citizen-generated information to provide relief if critical information is easily available in a timely fashion. In this paper, we consider the classification of the social media posts into different humanitarian categories to fulfill different information needs of humanitarian organizations. Specifically, we address two types of information needs described as follows: Informativeness of social media posts: Information posted on social networks during crises vary greatly in value. Most messages contain irrelevant information not useful for disaster response and management. Humanitarian organizations do not want a deluge of noisy messages that are of a personal nature or those that do not contain any useful information. They want clean data that consists of messages containing potentially useful information. They can then use this information for various purposes such as situational awareness. In order to assist humanitarian organizations, we perform binary classification. That is, we aim to classify each message into one of the two classes i.e. “informative" vs. “not informative". Information types of social media posts Furthermore, humanitarian organizations are interested in sorting social media posts into different categories. Identifying social media posts by category assists humanitarian organizations in coordinating their response. Categories such as infrastructure damage, reports of deceased or injured, urgent need for shelter, food and water, or donations of goods or services could therefore be directed to different relief functions. In this work, we show how we can classify tweets into multiple classes. Automatic classification of short crisis-related messages such as tweets is a challenging task due to a number of reasons. Tweets are short (only 140 characters), informal, often contain abbreviations, spelling variations and mistakes, and, therefore, they are hard to understand without enough context. Despite advances in natural language processing (NLP), interpreting the semantics of short informal texts automatically remains a hard problem. Traditional classification approaches rely on manually engineered features like cue words and TF-IDF vectors for learning BIBREF1 . Due to the high variability of the data during a crisis, adapting the model to changes in features and their importance manually is undesirable (and often infeasible). To overcome these issues, we use Deep Neural Networks (DNNs) to classify the tweets. DNNs are usually trained using online learning and have the flexibility to adaptively learn the model parameters as new batches of labeled data arrive, without requiring to retrain the model from scratch. 
DNNs use distributed condensed representation of words and learn the representation as well as higher level abstract features automatically for the classification task. Distributed representation (as opposed to sparse discrete representation) generalizes well. This can be a crucial advantage at the beginning of a new disaster, when there is not enough event-specific labeled data. We can train a reasonably good DNN model using previously labeled data from other events, and then the model is fine-tuned adaptively as newly labeled data arrives in small batches. In this paper, we use Deep Neural Network (DNN) to address two types of information needs of response organizations: identifying informative tweets and classifying them into topical classes. DNNs use distributed representation of words and learn the representation as well as higher level features automatically for the classification task. We propose a new online algorithm based on stochastic gradient descent to train DNNs in an online fashion during disaster situations. Moreover, we make our source code publicly available for crisis computing community for further research at: https://github.com/CrisisNLP/deep-learning-for-big-crisis-data In the next section, we provide details regarding DNNs we use and the online learning algorithm. Section "Dataset and Experimental Settings" describes datasets and online learning settings. In Section "Results" , we describe results of our models. Section "Related work" presents related-work and we conclude our paper in Section "Conclusions" . Deep Neural Network As argued before, deep neural networks (DNNs) can be quite effective in classifying tweets during a disaster situation because of their distributed representation of words and automatic feature learning capabilities. Furthermore, DNNs are usually trained using online algorithms, which nicely suits the needs of a crisis response situation. Our main hypothesis is that in order to effectively classify tweets, which are short and informal, a classification model should learn the key features at different levels of abstraction. To this end, we use a Convolutional Neural Network (CNN), which has been shown to be effective for sentence-level classification tasks BIBREF3 . Convolutional Neural Network Figure 1 demonstrates how a CNN works with an example tweet. Each word in the vocabulary $V$ is represented by a $D$-dimensional vector in a shared look-up table $L \in \mathbb{R}^{|V| \times D}$. $L$ is considered a model parameter to be learned. We can initialize $L$ randomly or using pretrained word embedding vectors like word2vec BIBREF4 . Given an input tweet $\mathbf{s} = (w_1, \cdots , w_T)$, we first transform it into a feature sequence by mapping each word token $w_t \in \mathbf{s}$ to an index in $L$. The look-up layer then creates an input vector $\mathbf{x}_t \in \mathbb{R}^{D}$ for each token $w_t$, which are passed through a sequence of convolution and pooling operations to learn high-level abstract features. A convolution operation involves applying a filter $\mathbf{u} \in \mathbb{R}^{L \cdot D}$ to a window of $L$ words to produce a new feature $$h_t = f(\mathbf{u} \cdot \mathbf{x}_{t:t+L-1} + b_t)$$ (Eq. 5) where $\mathbf{x}_{t:t+L-1}$ denotes the concatenation of $L$ input vectors, $b_t$ is a bias term, and $f$ is a nonlinear activation function (e.g., $\mathrm{sigmoid}$, $\tanh$). A filter is also known as a kernel or a feature detector.
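Before continuing with pooling and the output layer, the snippet below sketches one way the look-up table $L$ could be initialized from pretrained word2vec vectors, as mentioned above; the embedding file name, the vocabulary object, and the random-initialization range are assumptions for illustration, not part of the released code.

```python
import numpy as np
from gensim.models import KeyedVectors

def build_lookup_table(vocab, embedding_path="crisis_word2vec.bin", dim=300):
    """Build L (|V| x D): copy pretrained 300-dim vectors where available,
    initialize the remaining rows randomly; L remains trainable."""
    w2v = KeyedVectors.load_word2vec_format(embedding_path, binary=True)
    L = np.random.uniform(-0.25, 0.25, size=(len(vocab), dim)).astype("float32")
    for index, word in enumerate(vocab):
        if word in w2v:
            L[index] = w2v[word]
    return L
```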
We apply this filter to each possible $L$-word window in the tweet to generate a feature map $\mathbf{h}_i = [h_1, \cdots , h_{T+L-1}]$. We repeat this process $N$ times with $N$ different filters to get $N$ different feature maps. We use a wide convolution BIBREF5 (as opposed to narrow), which ensures that the filters reach the entire sentence, including the boundary words. This is done by performing zero-padding, where out-of-range (i.e., $t < 1$ or $t > T$) vectors are assumed to be zero. After the convolution, we apply a max-pooling operation to each feature map. $$\mathbf{m} = [\mu_p(\mathbf{h}_1), \cdots , \mu_p(\mathbf{h}_N)]$$ (Eq. 6) where $\mu_p(\mathbf{h}_i)$ refers to the $\max$ operation applied to each window of $p$ features in the feature map $\mathbf{h}_i$. For instance, with $p=2$, this pooling gives the same number of features as in the feature map (because of the zero-padding). Intuitively, the filters compose local $n$-grams into higher-level representations in the feature maps, and max-pooling reduces the output dimensionality while keeping the most important aspects from each feature map. Since each convolution-pooling operation is performed independently, the features extracted become invariant in locations (i.e., where they occur in the tweet), thus acting like bag-of-$n$-grams. However, keeping the order information could be important for modeling sentences. In order to model interactions between the features picked up by the filters and the pooling, we include a dense layer of hidden nodes on top of the pooling layer $$\mathbf{z} = f(V\mathbf{m} + \mathbf{b_h})$$ (Eq. 7) where $V$ is the weight matrix, $\mathbf{b_h}$ is a bias vector, and $f$ is a non-linear activation. The dense layer naturally deals with variable sentence lengths by producing fixed size output vectors $\mathbf{z}$, which are fed to the output layer for classification. Depending on the classification tasks, the output layer defines a probability distribution. For binary classification tasks, it defines a Bernoulli distribution: $$p(y|\mathbf{s}, \theta) = \mathrm{Ber}\big(y \mid \sigma(\mathbf{w}^T \mathbf{z} + b)\big)$$ (Eq. 8) where $\sigma$ refers to the sigmoid function, and $\mathbf{w}$ are the weights from the dense layer to the output layer and $b$ is a bias term. For multi-class classification the output layer uses a softmax function. Formally, the probability of the $k$-th label in the output for classification into $K$ classes is: $$P(y = k|\mathbf{s}, \theta) = \frac{\exp(\mathbf{w}_k^T\mathbf{z} + b_k)}{\sum_{j=1}^{K} \exp(\mathbf{w}_j^T\mathbf{z} + b_j)}$$ (Eq. 9) where $\mathbf{w}_k$ are the weights associated with class $k$ in the output layer. We fit the models by minimizing the cross-entropy between the predicted distributions $\hat{y}_{n\theta} = p(y_n|\mathbf{s}_n, \theta)$ and the target distributions $y_n$ (i.e., the gold labels). The objective function $f(\theta)$ can be written as: $$f(\theta) = \sum_{n=1}^{N} \sum_{k=1}^{K} y_{nk} \log P(y_n = k|\mathbf{s}_n, \theta)$$ (Eq. 11) where $N$ is the number of training examples and $y_{nk} = \mathbb{I}(y_n = k)$ is an indicator variable encoding the gold labels, i.e., $y_{nk}=1$ if the gold label $y_n=k$, and 0 otherwise. Online Learning DNNs are usually trained with first-order online methods like stochastic gradient descent (SGD). This method yields a crucial advantage in crisis situations, where retraining the whole model each time a small batch of labeled data arrives is impractical.
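Before turning to the online training procedure, here is a minimal PyTorch sketch of the architecture described above (Eq. 5-9); it simplifies to a single filter width and global max-pooling over time, and the layer sizes are illustrative defaults rather than the tuned configuration reported later.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TweetCNN(nn.Module):
    """Look-up layer -> wide convolution (Eq. 5) -> max-pooling (Eq. 6)
    -> dense layer (Eq. 7) -> class scores for the output layer (Eq. 8/9)."""
    def __init__(self, vocab_size, num_classes, embed_dim=300, num_filters=150,
                 filter_width=3, dense_units=150, pretrained_L=None):
        super().__init__()
        self.lookup = nn.Embedding(vocab_size, embed_dim)           # table L
        if pretrained_L is not None:                                # word2vec initialization
            self.lookup.weight.data.copy_(torch.as_tensor(pretrained_L))
        self.conv = nn.Conv1d(embed_dim, num_filters, filter_width,
                              padding=filter_width - 1)             # zero-padding = wide conv
        self.dense = nn.Linear(num_filters, dense_units)
        self.out = nn.Linear(dense_units, num_classes)

    def forward(self, token_ids):                   # token_ids: (batch, T)
        x = self.lookup(token_ids).transpose(1, 2)  # (batch, D, T)
        h = F.relu(self.conv(x))                    # feature maps, Eq. 5
        m = F.max_pool1d(h, h.size(2)).squeeze(2)   # max over time, Eq. 6
        z = F.relu(self.dense(m))                   # Eq. 7
        return self.out(z)                          # scores fed to softmax / sigmoid
```

Training these class scores with `F.cross_entropy` corresponds to minimizing Equation 11, with the softmax of Equation 9 folded into the loss.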
Algorithm "Online Learning" demonstrates how our CNN model can be trained in a purely online setting. We first initialize the model parameters $\theta_0$ (line 1), which can be a trained model from other disaster events or it can be initialized randomly to start from scratch. As a new batch of labeled tweets $B_t = \lbrace \mathbf{s}_1 \ldots \mathbf{s}_n \rbrace$ arrives, we first compute the log-loss (cross entropy) in Equation 11 for $B_t$ with respect to the current parameters $\theta_t$ (line 2a). Then, we use backpropagation to compute the gradients $f^{\prime}(\theta_{t})$ of the loss with respect to the current parameters (line 2b). Finally, we update the parameters with the learning rate $\eta_t$ and the mean of the gradients (line 2c). We take the mean of the gradients to deal with minibatches of different sizes. Notice that we take only the current minibatch into account to get an updated model. Choosing a proper learning rate $\eta_t$ can be difficult in practice. Several adaptive methods such as ADADELTA BIBREF6 , ADAM BIBREF7 , etc., have been proposed to overcome this issue. In our model, we use ADADELTA. Algorithm 1 (Online learning of CNN): 1. Initialize the model parameters $\theta_0$; 2. For each minibatch $B_t = \lbrace \mathbf{s}_1 \ldots \mathbf{s}_n \rbrace$ arriving at time $t$: a. Compute the loss $f(\theta_{t})$ in Equation 11; b. Compute gradients of the loss $f^{\prime}(\theta_{t})$ using backpropagation; c. Update: $\theta_{t+1} = \theta_{t} - \eta_t \frac{1}{n} f^{\prime}(\theta_{t})$. Word Embedding and Fine-tuning As mentioned before, we can initialize the word embeddings $L$ randomly, and learn them as part of the model parameters by backpropagating the errors to the look-up layer. Random initialization may lead the training algorithm to get stuck in a local minimum. One can plug the readily available embeddings from external sources (e.g., Google embeddings BIBREF4 ) in the neural network model and use them as features without further task-specific tuning. However, the latter approach does not exploit the automatic feature learning capability of DNN models, which is one of the main motivations of using them. In our work, we use pre-trained word embeddings (see below) to better initialize our models, and we fine-tune them for our task, which turns out to be beneficial. Mikolov et al. BIBREF4 propose two log-linear models for computing word embeddings from large (unlabeled) corpuses efficiently: a bag-of-words model CBOW that predicts the current word based on the context words, and a skip-gram model that predicts surrounding words given the current word. They released their pre-trained 300-dimensional word embeddings trained by the skip-gram model on a Google news dataset. Since we work on disaster related tweets, which are quite different from news, we have trained domain-specific embeddings of 300 dimensions (vocabulary size 20 million) using the Skip-gram model of the word2vec tool BIBREF8 from a large corpus of disaster related tweets. The corpus contains $57,908$ tweets and $9.4$ million tokens. Dataset and Experimental Settings In this section, we describe the datasets used for the classification tasks and the settings for CNN and online learning. Dataset and Preprocessing We use CrisisNLP BIBREF9 labeled datasets. The CNN models were trained online using a labeled dataset related to the 2015 Nepal Earthquake and the rest of the datasets are used to train an initial model ( $\theta_0$ in Algorithm "Online Learning" ) upon which the online learning is performed.
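To make Algorithm 1 concrete before describing the data, a minimal online training loop over such an initial model might look as follows; it reuses the `TweetCNN` sketch above, and `stream_minibatches()` is a hypothetical placeholder for whatever yields newly labeled batches of (token-id, label) tensors.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Module()  # placeholder; in practice: TweetCNN(vocab_size=20000, num_classes=2)
model = TweetCNN(vocab_size=20000, num_classes=2)    # theta_0, or a model trained on past events
optimizer = torch.optim.Adadelta(model.parameters()) # adaptive learning rate, as described above

def online_step(x_batch, y_batch):
    """One iteration of Algorithm 1 on a newly arrived minibatch B_t."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_batch), y_batch)  # Equation 11, averaged over B_t
    loss.backward()                                  # gradients f'(theta_t) via backpropagation
    optimizer.step()                                 # update theta_t using only the current batch
    return loss.item()

for x_batch, y_batch in stream_minibatches():        # hypothetical stream of labeled batches
    online_step(x_batch, y_batch)
```

Because each step touches only the newly arrived minibatch, the model never has to be retrained from scratch as labels accumulate during an event.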
The Nepal earthquake dataset consists of approximately 12k labeled tweets collected from Twitter during the event using different keywords such as NepalEarthquake. Of all the labeled tweets, 9k were labeled by trained volunteers during the actual event using the AIDR platform BIBREF10, and the remaining 3k tweets were labeled using the CrowdFlower crowdsourcing platform. The dataset is labeled into different informative classes (e.g., affected individuals, infrastructure damage, donations, etc.) and one “not-related” or “irrelevant” class. Table 1 provides a one-line description of each class and the total number of labels in each class. “Other useful information” and “Not related or irrelevant” are the most frequent classes in the dataset. Data Preprocessing: We normalize all characters to their lower-cased forms, truncate elongations to two characters, map every digit to D, every Twitter username to userID, and every URL to HTTP. We remove all punctuation marks except periods, semicolons, question marks and exclamation marks. We further tokenize the tweets using the CMU TweetNLP tool BIBREF11. Online Training Settings Before performing the online learning, we assume that an initial model $\theta_0$ exists. In our case, we train the initial model using all the datasets from CrisisNLP except the Nepal earthquake. For online training, we sort the Nepal labeled data based on the time stamps of the tweets. This brings the tweets into their posting order. Next, the dataset $D$ is divided into consecutive time intervals $d_t$, so that $D = \sum_{t=1}^T d_t$ with $d_t = 200$ tweets per interval. For each time interval $t$, we divide the available labeled dataset into a train set (70%), a dev set (10%), and a test set (20%) using the scikit-learn toolkit BIBREF12, which ensures that the class distribution remains reasonably balanced in each subset. Based on the data splitting strategy mentioned above, we start online learning to train a binary and a multi-class classifier. For the binary classifier training, we merge all the informative classes to create one general Informative class. We train CNN models by optimizing the cross-entropy in Equation 11 using the gradient-based online learning algorithm ADADELTA BIBREF6. The learning rate and the other parameters were set to the values suggested by the authors. The maximum number of epochs was set to 25. To avoid overfitting, we use dropout BIBREF13 on the hidden units and early stopping based on the accuracy on the validation set. We experimented with $\lbrace 0.0, 0.2, 0.4, 0.5\rbrace$ dropout rates and $\lbrace 32, 64, 128\rbrace$ minibatch sizes. We limit the vocabulary ($V$) to the most frequent $P\%$ ($P\in \lbrace 80, 85, 90\rbrace$) words in the training corpus. The word vectors in $L$ were initialized with the pre-trained embeddings. We use rectified linear units (ReLU) for the activation functions ($f$), $\lbrace 100, 150, 200\rbrace$ filters each having a window size ($L$) of $\lbrace 2, 3, 4\rbrace$, a pooling length ($p$) of $\lbrace 2, 3, 4\rbrace$, and $\lbrace 100, 150, 200\rbrace$ dense layer units. All the hyperparameters are tuned on the development set. Results In this section, we present our results for the binary and multi-class classification tasks. Binary Classification Figure 2 shows the results for the “informative” vs. “not informative” binary classification task using online learning. The performance of the model is quite inconsistent as the size of the in-event training data varies.
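The normalization steps described in the preprocessing paragraph above can be sketched as follows; the exact regular expressions and the sample tweet are assumptions for illustration, not the authors' code.

```python
import re

# Sketch of the tweet normalization described above (order matters: replace URLs
# before digits so the D mapping does not alter the URL placeholder).
def normalize_tweet(text: str) -> str:
    text = text.lower()                                   # lower-case all characters
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)            # truncate elongations to two characters
    text = re.sub(r"https?://\S+", "HTTP", text)          # URLs -> HTTP
    text = re.sub(r"@\w+", "userID", text)                # usernames -> userID
    text = re.sub(r"\d", "D", text)                       # digits -> D
    text = re.sub(r"[^\w\s.;?!]", " ", text)              # drop punctuation except . ; ? !
    return re.sub(r"\s+", " ", text).strip()

print(normalize_tweet("Sooooo sad!!! @relief_org sent 100 tents, see http://t.co/xyz"))
# -> "soo sad!! userID sent DDD tents see HTTP"
```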
We observe an improvement in performance initially. However, the results drop when the training size is between 2,200 and 3,900 tweets. We investigated this surprising result and found that it could be due to inconsistencies in the annotation procedure and the data sources. In our in-event (Nepal Earthquake) training data, the first 3,000 tweets are from CrowdFlower and the rest are from AIDR. Tweets in CrowdFlower were annotated by paid workers, whereas AIDR tweets were annotated by volunteers. We speculate that these inconsistencies can affect the performance at the beginning, but as the model sees more AIDR data (4,000+), the performance stabilizes. Multi-Class Classification Figure 3 summarizes the results of online training for the multi-class classification task. Since multi-class classification is a harder task than binary classification, the first training run yields very low accuracy, and the results continue to drop until a sufficient number of training examples is available, which in this case is approximately 2,200 labeled tweets. As in the binary classification case, after the initial dip in performance, once over 3,000 tweets are available, the performance of the classifier improves and remains stable after that. The key benefit of online learning methods such as our CNN over offline learning methods used in classifiers like SVM, Naive Bayes, and logistic regression is incremental training. The labeled data comes in batches, and retraining a model on the complete data every time newly labeled data is added is an expensive task. Online training methods learn in small batches, which suits the situation at hand. Another advantage of neural network methods is automatic feature extraction, which does not require any manual feature engineering: the models take labeled tweets as input and automatically learn features based on distributed representations of words. Discussion Rapid analysis of social media posts during time-critical situations is important for humanitarian response organizations to make timely decisions and to launch relief efforts. This work proposes solutions to two main challenges that humanitarian organizations face while incorporating social media data into crisis response: first, how to filter out noisy and irrelevant messages from big crisis data, and second, how to categorize the informative messages into different classes of interest. By utilizing labeled data from past crises, we show the performance of DNNs trained using the proposed online learning algorithm for binary and multi-class classification tasks. We observe that past labeled data helps when no event-specific data is available in the early hours of a crisis; however, labeled data from the event itself always helps improve the classification accuracy. Related work Recent studies have shown the usefulness of crisis-related data on social media for disaster response and management BIBREF14, BIBREF15, BIBREF16. A number of systems have been developed to classify, extract, and summarize BIBREF17 crisis-relevant information from social media; for a detailed survey see BIBREF1. Cameron et al. describe a platform for emergency situation awareness BIBREF18. They classify interesting tweets using an SVM classifier. Verma et al. use Naive Bayes and MaxEnt classifiers to find situational awareness tweets from several crises BIBREF19. Imran et al. implemented AIDR to classify a Twitter data stream during crises BIBREF10. They use a random forest classifier in an offline setting.
After receiving each mini-batch of 50 training examples, they replace the older model with a new one. In BIBREF20, the authors show the performance of a number of non-neural-network classifiers trained on labeled data from past crisis events. However, they do not use DNNs in their comparison. DNNs and word embeddings have been applied successfully to address NLP problems BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. The emergence of tools such as word2vec BIBREF8 and GloVe BIBREF26 has enabled NLP researchers to learn word embeddings efficiently and use them to train better models. Collobert et al. BIBREF21 presented a unified DNN architecture for solving various NLP tasks including part-of-speech tagging, chunking, named entity recognition and semantic role labeling. They showed that DNNs outperform traditional models in most of these tasks. They also proposed a multi-task learning framework for solving the tasks jointly. Kim BIBREF3 and Kalchbrenner et al. BIBREF5 used convolutional neural networks (CNNs) for sentence-level classification tasks (e.g., sentiment/polarity classification, question classification) and showed that CNNs outperform traditional methods (e.g., SVMs, MaxEnt). Caragea, Silvescu, and Tapia used CNNs to identify informative messages during disasters BIBREF22. However, to the best of our knowledge, no previous research has shown the efficacy of CNNs for both binary and multi-class classification using online learning. Conclusions We presented a convolutional neural network model, trained with online learning, for classifying tweets in a disaster response scenario. We proposed a new online learning algorithm for training CNNs in an online fashion. We showed that online training of the model is well suited to the disaster response setting. We assume that a base model trained on labeled data from past crises exists and that the event-specific labeled data arrive in small batches, which are used to perform online learning. The neural network models bring the added advantage of automatic feature extraction, which eases the training process when compared with offline learning methods like SVMs and logistic regression: the model uses only labeled tweets for training and automatically learns features from them. We reported the results of two classification tasks (i.e., binary and multi-class). Moreover, we also provide the source code for the online learning of CNN models to the research community for further extensions.
CNN model can be trained in a purely online setting. We first initialize the model parameters $\theta _0$ (line 1), which can be a trained model from other disaster events or it can be initialized randomly to start from scratch. As a new batch of labeled tweets $B_t= \lbrace \mathbf {s}_1 \ldots \mathbf {s}_n \rbrace $ arrives, we first compute the log-loss (cross entropy) in Equation 11 for $B_t$ with respect to the current parameters $\theta _t$ (line 2a). Then, we use backpropagation to compute the gradients $f^{\prime }(\theta _{t})$ of the loss with respect to the current parameters (line 2b). Finally, we update the parameters with the learning rate $\eta _t$ and the mean of the gradients (line 2c). We take the mean of the gradients to deal with minibatches of different sizes. Notice that we take only the current minibatch into account to get an updated model.
63eb31f613a41a3ddd86f599e743ed10e1cd07ba
63eb31f613a41a3ddd86f599e743ed10e1cd07ba_0
Q: What codemixed language pairs are evaluated? Text: Multilingual Models for Sequence Labeling We discuss two core models for addressing sequence labeling problems and describe, for each, training them in a single-model multilingual setting: (1) the Meta-LSTM BIBREF0 , an extremely strong baseline for our tasks, and (2) a multilingual BERT-based model BIBREF1 . Meta-LSTM The Meta-LSTM is the best-performing model of the CoNLL 2018 Shared Task BIBREF2 for universal part-of-speech tagging and morphological features. The model is composed of 3 LSTMs: a character-BiLSTM, a word-BiLSTM and a single joint BiLSTM which takes the output of the character and word-BiLSTMs as input. The entire model structure is referred to as Meta-LSTM. To set up multilingual Meta-LSTM training, we take the union of all the word embeddings from the bojanowski2017enriching embeddings model on Wikipedia in all languages. For out-of-vocabulary words, a special unknown token is used in place of the word. The model is then trained as usual with cross-entropy loss. The char-BiLSTM and word-biLSTM are first trained independently. And finally we train the entire Meta-LSTM. Multilingual BERT BERT is a transformer-based model BIBREF3 pretrained with a masked-LM task on millions of words of text. In this paper our BERT-based experiments make use of the cased multilingual BERT model available on GitHub and pretrained on 104 languages. Models fine-tuned on top of BERT models achieve state-of-the-art results on a variety of benchmark and real-world tasks. To train a multilingual BERT model for our sequence prediction tasks, we add a softmax layer on top of the the first wordpiece BIBREF4 of each token and finetune on data from all languages combined. During training, we concatenate examples from all treebanks and randomly shuffle the examples. Small and Practical Models The results in Table TABREF1 make it clear that the BERT-based model for each task is a solid win over a Meta-LSTM model in both the per-language and multilingual settings. However, the number of parameters of the BERT model is very large (179M parameters), making deploying memory intensive and inference slow: 230ms on an Intel Xeon CPU. Our goal is to produce a model fast enough to run on a single CPU while maintaining the modeling capability of the large model on our tasks. Size and speed We choose a three-layer BERT, we call MiniBERT, that has the same number of layers as the Meta-LSTM and has fewer embedding parameters and hidden units than both models. Table TABREF7 shows the parameters of each model. The Meta-LSTM has the largest number of parameters dominated by the large embeddings. BERT's parameters are mostly in the hidden units. The MiniBERT has the fewest total parameters. The inference-speed bottleneck for Meta-LSTM is the sequential character-LSTM-unrolling and for BERT is the large feedforward layers and attention computation that has time complexity quadratic to the sequence length. Table TABREF8 compares the model speeds. BERT is much slower than both MetaLSTM and MiniBERT on CPU. However, it is faster than Meta-LSTM on GPU due to the parallel computation of the transformer. The MiniBERT is significantly faster than the other models on both GPU and CPU. Distillation For model distillation BIBREF6 , we extract sentences from Wikipedia in languages for which public multilingual is pretrained. 
For each sentence, we use the open-source BERT wordpiece tokenizer BIBREF4 , BIBREF1 and compute cross-entropy loss for each wordpiece: INLINEFORM0 where INLINEFORM0 is the cross-entropy function, INLINEFORM1 is the softmax function, INLINEFORM2 is the BERT model's logit of the current wordpiece, INLINEFORM3 is the small BERT model's logits and INLINEFORM4 is a temperature hyperparameter, explained in Section SECREF11 . To train the distilled multilingual model mMiniBERT, we first use the distillation loss above to train the student from scratch using the teacher's logits on unlabeled data. Afterwards, we finetune the student model on the labeled data the teacher is trained on. Data We use universal part-of-speech tagging and morphology data from the The CoNLL 2018 Shared Task BIBREF7 , BIBREF8 . For comparison simplicity, we remove the languages that the multilingual BERT public checkpoint is not pretrained on. For segmentation, we use a baseline segmenter (UDPipe v2.2) provided by the shared task organizer to segment raw text. We train and tune the models on gold-segmented data and apply the segmenter on the raw test of test data before applying our models. The part-of-speech tagging task has 17 labels for all languages. For morphology, we treat each morphological group as a class and union all classes as a output of 18334 labels. Tuning For Meta-LSTM, we use the public repository's hyperparameters. Following devlin2019, we use a smaller learning rate of 3e-5 for fine-tuning and a larger learning rate of 1e-4 when training from scratch and during distillation. Training batch size is set to 16 for finetuning and 256 for distillation. For distillation, we try temperatures INLINEFORM0 and use the teacher-student accuracy for evaluation. We observe BERT is very confident on its predictions, and using a large temperature INLINEFORM1 to soften the distribution consistently yields the best result. Multilingual Models We compare per-language models trained on single language treebanks with multilingual models in Table TABREF1 and Table TABREF14 . In the experimental results we use a prefix INLINEFORM0 to denote the model is a single multilingual model. We compare Meta-LSTM, BERT, and MiniBERT. mBERT performs the best among all multilingual models. The smallest and fastest model, mMiniBERT, performs comparably to mBERT, and outperforms mMeta-LSTM, a state-of-the-art model for this task. When comparing with per-language models, the multilingual models have lower F1. DBLP:journals/corr/abs-1904-02099 shows similar results. Meta-LSTM, when trained in a multilingual fashion, has bigger drops than BERT in general. Most of the Meta-LSTM drop is due to the character-LSTM, which drops by more than 4 points F1. Low Resource Languages We pick languages with fewer than 500 training examples to investigate the performance of low-resource languages: Tamil (ta), Marathi (mr), Belarusian (be), Lithuanian (lt), Armenian (hy), Kazakh (kk). Table TABREF15 shows the performance of the models. While DBLP:journals/corr/abs-1904-09077 shows effective zero-shot crosslingual transfer from English to other high-resource languages, we show that cross-lingual transfer is even effective on low-resource languages when we train on all languages as mBERT is significantly better than BERT when we have fewer than 50 examples. In these cases, the mMiniBERT distilled from the multilingual mBERT yields results better than training individual BERT models. The gains becomes less significant when we have more training data. 
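As a concrete illustration of the temperature-based distillation objective described above (the per-wordpiece cross-entropy between teacher and student logits), here is a small sketch; it is a generic reconstruction with assumed function and variable names, not the authors' training code.

```python
import numpy as np

def log_softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=axis, keepdims=True))

def distillation_loss(teacher_logits, student_logits, temperature=3.0):
    """Temperature-softened cross-entropy between teacher and student,
    averaged over wordpieces. A sketch of the objective described above,
    not the authors' implementation."""
    soft_targets = np.exp(log_softmax(teacher_logits / temperature))   # teacher distribution
    student_logp = log_softmax(student_logits / temperature)           # student log-probabilities
    return -(soft_targets * student_logp).sum(axis=-1).mean()

# Example: 4 wordpieces, 17 POS labels, random logits standing in for model outputs.
rng = np.random.default_rng(0)
print(distillation_loss(rng.normal(size=(4, 17)), rng.normal(size=(4, 17))))
```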
The multilingual baseline mMeta-LSTM does not do well on low-resource languages. On the contrary, mMiniBERT performs well and outperforms the state-of-the-art Meta-LSTM on the POS tagging task and on four out of six languages of the Morphology task. Codemixed Input We use the Universal Dependencies' Hindi-English codemixed data set BIBREF9 to test the model's ability to label code-mixed data. This dataset is based on code-switching tweets from multilingual Hindi-English speakers. We use the Devanagari script provided by the data set as input tokens. In the Universal Dependencies labeling guidelines, code-switched or foreign-word tokens are labeled as X, along with other tokens that cannot be labeled. The trained model learns to partition the languages in a codemixed input by labeling tokens in one language with X, and tokens in the other language with any of the other POS tags. It turns out that the 2nd-most likely label is usually the correct label in this case; we evaluate on this label when the 1-best is X. Table TABREF25 shows that all multilingual models handle codemixed data reasonably well without supervised codemixed training data. Conclusion We have described the benefits of multilingual models over models trained on a single language for a single task, and have shown that it is possible to resolve a major concern of deploying large BERT-based models by distilling our multilingual model into one that maintains the quality wins while being fast enough to run on a single CPU. Our distilled model outperforms a multilingual version of a very strong baseline model, and for most languages yields performance comparable to or better than a large BERT model. Training Hyperparameters We use exactly the same hyperparameters as the public multilingual BERT for finetuning our models. We train the part-of-speech tagging task for 10 epochs and the morphology task for 50 epochs. For distillation, we use the following hyperparameters for all tasks: learning rate 1e-4, temperature 3, batch size 256, and 24 epochs. We take the Wikipedia pretraining data as is and drop sentences with fewer than 10 characters. Small BERT structure We use the vocab and wordpiece model included with the cased public multilingual model on GitHub. We use the BERT configuration of the public multilingual BERT with the following modifications for mMiniBERT: hidden size 256, intermediate layer size 1024, 4 attention heads, and 3 layers. The Importance of Distillation To understand the importance of distillation in training mMiniBERT, we compare it to a model with the MiniBERT structure trained from scratch using only the labeled multilingual data the teacher is trained on. Table TABREF37 shows that distillation plays an important role in closing the accuracy gap between teacher and student. Per-Language Results We show per-language F1 results for each model in Table SECREF38 and Table SECREF38. For per-language models, no models are trained for treebanks without tuning data, and metrics for those languages are not reported. All macro-averaged results reported exclude those languages.
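The "second-best label when the 1-best is X" evaluation rule for code-mixed input described above could be implemented along these lines; the label inventory and scores are illustrative assumptions.

```python
import numpy as np

# Sketch of the code-mixed backoff rule: when the 1-best prediction is the
# catch-all label X, fall back to the 2nd-most likely label.
LABELS = ["NOUN", "VERB", "PRON", "ADP", "X"]
X_ID = LABELS.index("X")

def backoff_predictions(label_scores: np.ndarray) -> np.ndarray:
    """label_scores: (num_tokens, num_labels) model scores; returns label ids."""
    order = np.argsort(-label_scores, axis=-1)      # labels sorted best-first per token
    best, second = order[:, 0], order[:, 1]
    return np.where(best == X_ID, second, best)     # use 2nd-best wherever 1-best is X

scores = np.array([[0.1, 0.2, 0.1, 0.1, 0.5],       # 1-best is X -> back off to VERB
                   [0.6, 0.1, 0.1, 0.1, 0.1]])      # 1-best is NOUN -> keep it
print([LABELS[i] for i in backoff_predictions(scores)])   # ['VERB', 'NOUN']
```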
[Table of per-treebank POS tagging F1 scores for BERT, Meta-LSTM, mBERT, mMeta-LSTM, and mMiniBERT.] POS tagging F1 of all models.
[Table of per-treebank Morphology F1 scores for BERT, Meta-LSTM, mBERT, mMeta-LSTM, and mMiniBERT.] Morphology F1 of all models.
Hindi-English
d2804ac0f068e9c498e33582af9c66906b26cac3
d2804ac0f068e9c498e33582af9c66906b26cac3_0
Q: How do they compress the model? Text: Multilingual Models for Sequence Labeling We discuss two core models for addressing sequence labeling problems and describe, for each, training them in a single-model multilingual setting: (1) the Meta-LSTM BIBREF0 , an extremely strong baseline for our tasks, and (2) a multilingual BERT-based model BIBREF1 . Meta-LSTM The Meta-LSTM is the best-performing model of the CoNLL 2018 Shared Task BIBREF2 for universal part-of-speech tagging and morphological features. The model is composed of 3 LSTMs: a character-BiLSTM, a word-BiLSTM and a single joint BiLSTM which takes the output of the character and word-BiLSTMs as input. The entire model structure is referred to as Meta-LSTM. To set up multilingual Meta-LSTM training, we take the union of all the word embeddings from the bojanowski2017enriching embeddings model on Wikipedia in all languages. For out-of-vocabulary words, a special unknown token is used in place of the word. The model is then trained as usual with cross-entropy loss. The char-BiLSTM and word-biLSTM are first trained independently. And finally we train the entire Meta-LSTM. Multilingual BERT BERT is a transformer-based model BIBREF3 pretrained with a masked-LM task on millions of words of text. In this paper our BERT-based experiments make use of the cased multilingual BERT model available on GitHub and pretrained on 104 languages. Models fine-tuned on top of BERT models achieve state-of-the-art results on a variety of benchmark and real-world tasks. To train a multilingual BERT model for our sequence prediction tasks, we add a softmax layer on top of the the first wordpiece BIBREF4 of each token and finetune on data from all languages combined. During training, we concatenate examples from all treebanks and randomly shuffle the examples. Small and Practical Models The results in Table TABREF1 make it clear that the BERT-based model for each task is a solid win over a Meta-LSTM model in both the per-language and multilingual settings. However, the number of parameters of the BERT model is very large (179M parameters), making deploying memory intensive and inference slow: 230ms on an Intel Xeon CPU. Our goal is to produce a model fast enough to run on a single CPU while maintaining the modeling capability of the large model on our tasks. Size and speed We choose a three-layer BERT, we call MiniBERT, that has the same number of layers as the Meta-LSTM and has fewer embedding parameters and hidden units than both models. Table TABREF7 shows the parameters of each model. The Meta-LSTM has the largest number of parameters dominated by the large embeddings. BERT's parameters are mostly in the hidden units. The MiniBERT has the fewest total parameters. The inference-speed bottleneck for Meta-LSTM is the sequential character-LSTM-unrolling and for BERT is the large feedforward layers and attention computation that has time complexity quadratic to the sequence length. Table TABREF8 compares the model speeds. BERT is much slower than both MetaLSTM and MiniBERT on CPU. However, it is faster than Meta-LSTM on GPU due to the parallel computation of the transformer. The MiniBERT is significantly faster than the other models on both GPU and CPU. Distillation For model distillation BIBREF6 , we extract sentences from Wikipedia in languages for which public multilingual is pretrained. 
For each sentence, we use the open-source BERT wordpiece tokenizer BIBREF4 , BIBREF1 and compute cross-entropy loss for each wordpiece: INLINEFORM0 where INLINEFORM0 is the cross-entropy function, INLINEFORM1 is the softmax function, INLINEFORM2 is the BERT model's logit of the current wordpiece, INLINEFORM3 is the small BERT model's logits and INLINEFORM4 is a temperature hyperparameter, explained in Section SECREF11 . To train the distilled multilingual model mMiniBERT, we first use the distillation loss above to train the student from scratch using the teacher's logits on unlabeled data. Afterwards, we finetune the student model on the labeled data the teacher is trained on. Data We use universal part-of-speech tagging and morphology data from the The CoNLL 2018 Shared Task BIBREF7 , BIBREF8 . For comparison simplicity, we remove the languages that the multilingual BERT public checkpoint is not pretrained on. For segmentation, we use a baseline segmenter (UDPipe v2.2) provided by the shared task organizer to segment raw text. We train and tune the models on gold-segmented data and apply the segmenter on the raw test of test data before applying our models. The part-of-speech tagging task has 17 labels for all languages. For morphology, we treat each morphological group as a class and union all classes as a output of 18334 labels. Tuning For Meta-LSTM, we use the public repository's hyperparameters. Following devlin2019, we use a smaller learning rate of 3e-5 for fine-tuning and a larger learning rate of 1e-4 when training from scratch and during distillation. Training batch size is set to 16 for finetuning and 256 for distillation. For distillation, we try temperatures INLINEFORM0 and use the teacher-student accuracy for evaluation. We observe BERT is very confident on its predictions, and using a large temperature INLINEFORM1 to soften the distribution consistently yields the best result. Multilingual Models We compare per-language models trained on single language treebanks with multilingual models in Table TABREF1 and Table TABREF14 . In the experimental results we use a prefix INLINEFORM0 to denote the model is a single multilingual model. We compare Meta-LSTM, BERT, and MiniBERT. mBERT performs the best among all multilingual models. The smallest and fastest model, mMiniBERT, performs comparably to mBERT, and outperforms mMeta-LSTM, a state-of-the-art model for this task. When comparing with per-language models, the multilingual models have lower F1. DBLP:journals/corr/abs-1904-02099 shows similar results. Meta-LSTM, when trained in a multilingual fashion, has bigger drops than BERT in general. Most of the Meta-LSTM drop is due to the character-LSTM, which drops by more than 4 points F1. Low Resource Languages We pick languages with fewer than 500 training examples to investigate the performance of low-resource languages: Tamil (ta), Marathi (mr), Belarusian (be), Lithuanian (lt), Armenian (hy), Kazakh (kk). Table TABREF15 shows the performance of the models. While DBLP:journals/corr/abs-1904-09077 shows effective zero-shot crosslingual transfer from English to other high-resource languages, we show that cross-lingual transfer is even effective on low-resource languages when we train on all languages as mBERT is significantly better than BERT when we have fewer than 50 examples. In these cases, the mMiniBERT distilled from the multilingual mBERT yields results better than training individual BERT models. The gains becomes less significant when we have more training data. 
The multilingual baseline mMeta-LSTM does not do well on low-resource languages. On the contrary, mMiniBERT performs well and outperforms the state-of-the-art Meta-LSTM on the POS tagging task and on four out of six languages of the Morphology task. Codemixed Input We use the Universal Dependencies' Hindi-English codemixed data set BIBREF9 to test the model's ability to label code-mixed data. This dataset is based on code-switching tweets from multilingual Hindi-English speakers. We use the Devanagari script provided by the data set as input tokens. In the Universal Dependencies labeling guidelines, code-switched or foreign-word tokens are labeled as X, along with other tokens that cannot be labeled. The trained model learns to partition the languages in a codemixed input by labeling tokens in one language with X, and tokens in the other language with any of the other POS tags. It turns out that the 2nd-most likely label is usually the correct label in this case; we evaluate on this label when the 1-best is X. Table TABREF25 shows that all multilingual models handle codemixed data reasonably well without supervised codemixed training data. Conclusion We have described the benefits of multilingual models over models trained on a single language for a single task, and have shown that it is possible to resolve a major concern of deploying large BERT-based models by distilling our multilingual model into one that maintains the quality wins while being fast enough to run on a single CPU. Our distilled model outperforms a multilingual version of a very strong baseline model, and for most languages yields performance comparable to or better than a large BERT model. Training Hyperparameters We use exactly the same hyperparameters as the public multilingual BERT for finetuning our models. We train the part-of-speech tagging task for 10 epochs and the morphology task for 50 epochs. For distillation, we use the following hyperparameters for all tasks: learning rate 1e-4, temperature 3, batch size 256, and 24 epochs. We take the Wikipedia pretraining data as is and drop sentences with fewer than 10 characters. Small BERT structure We use the vocab and wordpiece model included with the cased public multilingual model on GitHub. We use the BERT configuration of the public multilingual BERT with the following modifications for mMiniBERT: hidden size 256, intermediate layer size 1024, 4 attention heads, and 3 layers. The Importance of Distillation To understand the importance of distillation in training mMiniBERT, we compare it to a model with the MiniBERT structure trained from scratch using only the labeled multilingual data the teacher is trained on. Table TABREF37 shows that distillation plays an important role in closing the accuracy gap between teacher and student. Per-Language Results We show per-language F1 results for each model in Table SECREF38 and Table SECREF38. For per-language models, no models are trained for treebanks without tuning data, and metrics for those languages are not reported. All macro-averaged results reported exclude those languages.
[Table of per-treebank POS tagging F1 scores for BERT, Meta-LSTM, mBERT, mMeta-LSTM, and mMiniBERT.] POS tagging F1 of all models.
[Table of per-treebank Morphology F1 scores for BERT, Meta-LSTM, mBERT, mMeta-LSTM, and mMiniBERT.] Morphology F1 of all models.
we extract sentences from Wikipedia in languages for which public multilingual is pretrained. For each sentence, we use the open-source BERT wordpiece tokenizer BIBREF4 , BIBREF1 and compute cross-entropy loss for each wordpiece: INLINEFORM0
e24fbcc8be922c43f6b6037cdf2bfd4c0a926c08
e24fbcc8be922c43f6b6037cdf2bfd4c0a926c08_0
Q: What is the multilingual baseline? Text: Multilingual Models for Sequence Labeling We discuss two core models for addressing sequence labeling problems and describe, for each, training them in a single-model multilingual setting: (1) the Meta-LSTM BIBREF0 , an extremely strong baseline for our tasks, and (2) a multilingual BERT-based model BIBREF1 . Meta-LSTM The Meta-LSTM is the best-performing model of the CoNLL 2018 Shared Task BIBREF2 for universal part-of-speech tagging and morphological features. The model is composed of 3 LSTMs: a character-BiLSTM, a word-BiLSTM and a single joint BiLSTM which takes the output of the character and word-BiLSTMs as input. The entire model structure is referred to as Meta-LSTM. To set up multilingual Meta-LSTM training, we take the union of all the word embeddings from the bojanowski2017enriching embeddings model on Wikipedia in all languages. For out-of-vocabulary words, a special unknown token is used in place of the word. The model is then trained as usual with cross-entropy loss. The char-BiLSTM and word-biLSTM are first trained independently. And finally we train the entire Meta-LSTM. Multilingual BERT BERT is a transformer-based model BIBREF3 pretrained with a masked-LM task on millions of words of text. In this paper our BERT-based experiments make use of the cased multilingual BERT model available on GitHub and pretrained on 104 languages. Models fine-tuned on top of BERT models achieve state-of-the-art results on a variety of benchmark and real-world tasks. To train a multilingual BERT model for our sequence prediction tasks, we add a softmax layer on top of the the first wordpiece BIBREF4 of each token and finetune on data from all languages combined. During training, we concatenate examples from all treebanks and randomly shuffle the examples. Small and Practical Models The results in Table TABREF1 make it clear that the BERT-based model for each task is a solid win over a Meta-LSTM model in both the per-language and multilingual settings. However, the number of parameters of the BERT model is very large (179M parameters), making deploying memory intensive and inference slow: 230ms on an Intel Xeon CPU. Our goal is to produce a model fast enough to run on a single CPU while maintaining the modeling capability of the large model on our tasks. Size and speed We choose a three-layer BERT, we call MiniBERT, that has the same number of layers as the Meta-LSTM and has fewer embedding parameters and hidden units than both models. Table TABREF7 shows the parameters of each model. The Meta-LSTM has the largest number of parameters dominated by the large embeddings. BERT's parameters are mostly in the hidden units. The MiniBERT has the fewest total parameters. The inference-speed bottleneck for Meta-LSTM is the sequential character-LSTM-unrolling and for BERT is the large feedforward layers and attention computation that has time complexity quadratic to the sequence length. Table TABREF8 compares the model speeds. BERT is much slower than both MetaLSTM and MiniBERT on CPU. However, it is faster than Meta-LSTM on GPU due to the parallel computation of the transformer. The MiniBERT is significantly faster than the other models on both GPU and CPU. Distillation For model distillation BIBREF6 , we extract sentences from Wikipedia in languages for which public multilingual is pretrained. 
For each sentence, we use the open-source BERT wordpiece tokenizer BIBREF4 , BIBREF1 and compute cross-entropy loss for each wordpiece: INLINEFORM0 where INLINEFORM0 is the cross-entropy function, INLINEFORM1 is the softmax function, INLINEFORM2 is the BERT model's logit of the current wordpiece, INLINEFORM3 is the small BERT model's logits and INLINEFORM4 is a temperature hyperparameter, explained in Section SECREF11 . To train the distilled multilingual model mMiniBERT, we first use the distillation loss above to train the student from scratch using the teacher's logits on unlabeled data. Afterwards, we finetune the student model on the labeled data the teacher is trained on. Data We use universal part-of-speech tagging and morphology data from the The CoNLL 2018 Shared Task BIBREF7 , BIBREF8 . For comparison simplicity, we remove the languages that the multilingual BERT public checkpoint is not pretrained on. For segmentation, we use a baseline segmenter (UDPipe v2.2) provided by the shared task organizer to segment raw text. We train and tune the models on gold-segmented data and apply the segmenter on the raw test of test data before applying our models. The part-of-speech tagging task has 17 labels for all languages. For morphology, we treat each morphological group as a class and union all classes as a output of 18334 labels. Tuning For Meta-LSTM, we use the public repository's hyperparameters. Following devlin2019, we use a smaller learning rate of 3e-5 for fine-tuning and a larger learning rate of 1e-4 when training from scratch and during distillation. Training batch size is set to 16 for finetuning and 256 for distillation. For distillation, we try temperatures INLINEFORM0 and use the teacher-student accuracy for evaluation. We observe BERT is very confident on its predictions, and using a large temperature INLINEFORM1 to soften the distribution consistently yields the best result. Multilingual Models We compare per-language models trained on single language treebanks with multilingual models in Table TABREF1 and Table TABREF14 . In the experimental results we use a prefix INLINEFORM0 to denote the model is a single multilingual model. We compare Meta-LSTM, BERT, and MiniBERT. mBERT performs the best among all multilingual models. The smallest and fastest model, mMiniBERT, performs comparably to mBERT, and outperforms mMeta-LSTM, a state-of-the-art model for this task. When comparing with per-language models, the multilingual models have lower F1. DBLP:journals/corr/abs-1904-02099 shows similar results. Meta-LSTM, when trained in a multilingual fashion, has bigger drops than BERT in general. Most of the Meta-LSTM drop is due to the character-LSTM, which drops by more than 4 points F1. Low Resource Languages We pick languages with fewer than 500 training examples to investigate the performance of low-resource languages: Tamil (ta), Marathi (mr), Belarusian (be), Lithuanian (lt), Armenian (hy), Kazakh (kk). Table TABREF15 shows the performance of the models. While DBLP:journals/corr/abs-1904-09077 shows effective zero-shot crosslingual transfer from English to other high-resource languages, we show that cross-lingual transfer is even effective on low-resource languages when we train on all languages as mBERT is significantly better than BERT when we have fewer than 50 examples. In these cases, the mMiniBERT distilled from the multilingual mBERT yields results better than training individual BERT models. The gains becomes less significant when we have more training data. 
The multilingual baseline mMeta-LSTM does not do well on low-resource languages. On the contrary, mMiniBERT performs well and outperforms the state-of-the-art Meta-LSTM on the POS tagging task and on four out of six languages of the Morphology task. Codemixed Input We use the Universal Dependencies' Hindi-English codemixed data set BIBREF9 to test the model's ability to label code-mixed data. This dataset is based on code-switching tweets from multilingual Hindi-English speakers. We use the Devanagari script provided by the data set as input tokens. In the Universal Dependencies labeling guidelines, code-switched or foreign-word tokens are labeled as X, along with other tokens that cannot be labeled. The trained model learns to partition the languages in a codemixed input by labeling tokens in one language with X, and tokens in the other language with any of the other POS tags. It turns out that the 2nd-most likely label is usually the correct label in this case; we evaluate on this label when the 1-best is X. Table TABREF25 shows that all multilingual models handle codemixed data reasonably well without supervised codemixed training data. Conclusion We have described the benefits of multilingual models over models trained on a single language for a single task, and have shown that it is possible to resolve a major concern of deploying large BERT-based models by distilling our multilingual model into one that maintains the quality wins while being fast enough to run on a single CPU. Our distilled model outperforms a multilingual version of a very strong baseline model, and for most languages yields performance comparable to or better than a large BERT model. Training Hyperparameters We use exactly the same hyperparameters as the public multilingual BERT for finetuning our models. We train the part-of-speech tagging task for 10 epochs and the morphology task for 50 epochs. For distillation, we use the following hyperparameters for all tasks: learning rate 1e-4, temperature 3, batch size 256, and 24 epochs. We take the Wikipedia pretraining data as is and drop sentences with fewer than 10 characters. Small BERT structure We use the vocab and wordpiece model included with the cased public multilingual model on GitHub. We use the BERT configuration of the public multilingual BERT with the following modifications for mMiniBERT: hidden size 256, intermediate layer size 1024, 4 attention heads, and 3 layers. The Importance of Distillation To understand the importance of distillation in training mMiniBERT, we compare it to a model with the MiniBERT structure trained from scratch using only the labeled multilingual data the teacher is trained on. Table TABREF37 shows that distillation plays an important role in closing the accuracy gap between teacher and student. Per-Language Results We show per-language F1 results for each model in Table SECREF38 and Table SECREF38. For per-language models, no models are trained for treebanks without tuning data, and metrics for those languages are not reported. All macro-averaged results reported exclude those languages.
[Table of per-treebank POS tagging F1 scores for BERT, Meta-LSTM, mBERT, mMeta-LSTM, and mMiniBERT.] POS tagging F1 of all models.
[Table of per-treebank Morphology F1 scores for BERT, Meta-LSTM, mBERT, mMeta-LSTM, and mMiniBERT.] Morphology F1 of all models.
the Meta-LSTM BIBREF0
e8c0fabae0d29491471e37dec34f652910302928
e8c0fabae0d29491471e37dec34f652910302928_0
Q: Which features do they use? Text: Introduction Dialogue Act Recognition (DAR) is an essential problem in modeling and detecting discourse structure. The goal of DAR is to attach semantic labels to each utterance in a conversation and recognize the speaker's intention, which can be regarded as a sequence labeling task. Many applications have benefited from the use of automatic dialogue act recognition such as dialogue systems, machine translation, automatic speech recognition, topic identification and talking avatars BIBREF0 BIBREF1 BIBREF2 . One of the primary applications of DAR is to support task-oriented discourse agent system. Knowing the past utterances of DA can help ease the prediction of the current DA state, thus help to narrow the range of utterance generation topics for the current turn. For instance, the "Greeting" and "Farewell" acts are often followed with another same type utterances, the "Answer" act often responds to the former "Question" type utterance. Thus if we can correctly recognize the current dialogue act, we can easily predict the following utterance act and generate a corresponding response. Table 1 shows a snippet of the kind of discourse structure in which we are interested. The essential problem of DAR lies on predicting the utterance's act by referring to contextual utterances with act labels. Most of existing models adopt handcrafted features and formulate the DAR as a multi-classification problem. However, these methods which adopt feature engineering process and multi-classification algorithms reveal deadly weakness from two aspects: First, they are labor intensive and can not scale up well across different datasets. Furthermore, they abandon the useful correlation information among contextual utterances. Typical multi-classification algorithms like SVM, Naive Bayes BIBREF3 BIBREF4 BIBREF5 can not account for the contextual dependencies and classify the DA label in isolation. It is evident that during a conversation, the speaker's intent is influenced by the former utterance such as the previous "Greeting" and "Farewell" examples. To tackle these two problems, some works have turn to structured prediction algorithm along with deep learning tactics such as DRLM-Conditional BIBREF6 , LSTM-Softmax BIBREF0 and RCNN BIBREF7 . However, most of them failed to utilize the empirical effectiveness of attention in the graphical structured network and relies completely on the hidden layers of the network, which may cause the structural bias. A further limitation is that although these works claim they have considered the contextual correlations, in fact they view the whole conversation as a flat sequence and neglect the dual dependencies in the utterance level and act level BIBREF8 BIBREF9 BIBREF10 . Until now, the achieved performances in DAR field are still far behind human annotator's accuracy. In this paper, we present the problem of DAR from the viewpoint of extending richer CRF-attentive structural dependencies along with neural network without abandoning end-to-end training. For simplicity, we call the framework as CRF-ASN (CRF-Attentive Structured Network). Specifically, we propose the hierarchical semantic inference integrated with memory mechanism on the utterance modeling. The memory mechanism is adopted in order to enable the model to look beyond localized features and have access to the entire sequence. The hierarchical semantic modeling learns different levels of granularity including word level, utterance level and conversation level. 
We then develop internal structured attention network on the linear-chain conditional random field (CRF) to specify structural dependencies in a soft manner. This approach generalizes the soft-selection attention on the structural CRF dependencies and takes into account the contextual influence on the nearing utterances. It is notably that the whole process is differentiable thus can be trained in an end-to-end manner. The main contributions of this paper are as follows: The rest of this paper is organized as follows. In section 2, we introduce the problem of dialogue act recognition from the viewpoint of introducing CRF-structured attention, and propose the CRF-attentive structural network with hierarchical semantic inference and memory mechanism. A variety of experimental results are presented in Section 3. We have a comprehensive analysis on the experiment results and conduct the ablations to prove the availability of our model. We then provide a brief review of the related work about dialogue act recognition problem in Section 4. Finally, we provide some concluding remarks in Section 5. CRF-attentive Structured Network In this section, we study the problem of dialogue act recognition from the viewpoint of extending rich CRF-attentive structural dependencies. We first present the hierarchical semantic inference with memory mechanism from three levels: word level, utterance level and conversation level. We then develop graphical structured attention to the linear chain conditional random field to fully utilize the contextual dependencies. The problem Before presenting the problem, we first introduce some basic mathematical notions and terminologies for dialogue act recognition. Formally, we assume the input is in the form of sequence pairs: INLINEFORM0 with INLINEFORM1 . INLINEFORM2 is the input of the INLINEFORM3 -th conversation in dataset INLINEFORM4 and INLINEFORM5 is the INLINEFORM6 -th targeted dialogue act type. Each conversation INLINEFORM7 is composed of a sequence of utterances which denoted as INLINEFORM8 with aligned act types INLINEFORM9 . We have each dialogue act type assigned to utterance INLINEFORM10 and each associated INLINEFORM11 denoted the possible dialogue act belongs to INLINEFORM12 act types. Again each utterance consists of a sequence of diverse words INLINEFORM13 . Most of the previous models do not leverage the implicit and intrinsic dependencies among dialogue act and utterances. They just consider a conversation as a flat structure with an extremely long chain of words. However, such a construction suffers vanishing gradient problem as the extremely long words become impractical in the neural network back-propagation training process. To alleviate this problem, we consider the conversation to be a hierarchical structure composed of three level encoders: first encode each word in a fine grained manner, and the second encoder operates at the utterance level, the last encoder encode each utterance in the conversation level. Each encoder is based on the previous one thus can make sure the output of the previous one can capture the dependencies across the conversation. Here we take an example to illustrate the sequence structure in Figure 1. Apart from hierarchical neural encoders, we also integrate external memory to allow the model to have unrestricted access to the whole sequence rather than localized features as in RNNs. 
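For concreteness, the input format just described can be sketched as a list of tokenized utterances paired with an aligned list of act labels. The type name and the example tags below are illustrative assumptions, not part of the original corpus tooling.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Conversation:
    """One conversation: a sequence of utterances with aligned dialogue-act labels."""
    utterances: List[List[str]]   # each utterance is a list of word tokens
    acts: List[str]               # one act label per utterance

# toy example with SwDA-style tags (illustrative only)
conv = Conversation(
    utterances=[["hello", "there"], ["hi", ",", "how", "are", "you", "?"]],
    acts=["fp", "qy"],
)
assert len(conv.utterances) == len(conv.acts)
```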
Naturally the dialogue act recognition problem can be regarded as a sequence labeling task which can be assigned dialogue act through multi-classification method or the structured prediction algorithms. In our formulation, we adopt the linear chain conditional random field (CRF) along with hierarchical attentive encoders for the structured prediction. Instead of labeling each utterance in isolation, structured prediction models such as HMM, CRF can better capture the contextual dependencies among utterances. In our model, we define the structured attention model as being an extended attention model which provides an alternative approach to incorporate the machinery of structural inference directly into our neural network. Hierarchical Semantic Network Due to the hierarchical nature of conversations, our proposed model is constructed at multiple levels of granularity, e.g. word level, utterance level and conversation level. The representation of a conversation can be composed by each utterance INLINEFORM0 , and each INLINEFORM1 can be obtained by combining the representations of constituent words INLINEFORM2 . Taking inspiration from Memory Networks and incorporate so-called memory hops, we adopt the memory enhanced contextual representations in order to have unrestricted access to the whole sequence rather than localized features as former recurrent neural network. Here we include the memory enhanced hierarchical representation in Figure 2 to depict the conversation level representation. As illustrated in Figure 2, the hierarchical semantic network can be divided into two parts: (1) fine grained embedding layer (2) memory enhanced contextual representation layer. The second part can be further broken down into three main components: (a) the input memory INLINEFORM0 which takes in the output from the word embedding layer (b) the contextual attention which takes the consideration of the former utterance and the latter one. (c) the output memory INLINEFORM1 which is obtained from the input memory connected with the attention mechanism. The weights are determined by measuring the similarity between the input memory and the current utterance input. Fine Grained Embedding: For a given conversation, each utterance INLINEFORM0 is encoded by a fine grained embedding layer. We first try to utilize the rich lexical factors and linguistic properties to enhance the word representation. For each word token INLINEFORM1 in each utterance, we initialized the word embedding using pretrained embeddings such as Word2vec or Glove. Furthermore, in order to tackle the out-of-vocabulary (OOV) problem, we adopt the character-level word embedding via CNN to combine with pretrained word level embeddings. We also extend the lexical factors via POS tag and NER tag to enhance the utterance understanding. The obtained four factors are concatenated to form a rich lexical representation as: INLINEFORM2 Since we consider the bidirectional GRU to encode the representation of each utterance, we concatenate the outputs from the forward and backward GRU hidden representations at the time step. For each utterance INLINEFORM0 which consists a sequence of words INLINEFORM1 , the original semantic representation is as follows: INLINEFORM2 Here we utilize INLINEFORM0 and INLINEFORM1 to represent the word level embedding function and utterance level encoder in our hierarchical model. 
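As a rough PyTorch-style sketch of the fine-grained embedding and utterance-level Bi-GRU encoder described above: pretrained word vectors, a character-CNN embedding, and POS/NER tag embeddings are concatenated, then encoded with a bidirectional GRU. All module names, dimensions, and the pooling choice are assumptions made for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    """Fine-grained word embedding (word + char-CNN + POS + NER) followed by a Bi-GRU."""
    def __init__(self, word_vecs, n_chars, n_pos, n_ner, char_dim=100, tag_dim=25, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding.from_pretrained(word_vecs, freeze=False)  # e.g. GloVe
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_cnn = nn.Conv1d(char_dim, char_dim, kernel_size=3, padding=1)
        self.pos_emb = nn.Embedding(n_pos, tag_dim)
        self.ner_emb = nn.Embedding(n_ner, tag_dim)
        in_dim = word_vecs.size(1) + char_dim + 2 * tag_dim
        self.gru = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, words, chars, pos, ner):
        # chars: (batch, seq_len, max_word_len)
        b, t, w = chars.shape
        c = self.char_emb(chars.view(b * t, w)).transpose(1, 2)              # (b*t, char_dim, w)
        c = torch.relu(self.char_cnn(c)).max(dim=2).values.view(b, t, -1)    # max-pool over characters
        x = torch.cat([self.word_emb(words), c, self.pos_emb(pos), self.ner_emb(ner)], dim=-1)
        out, _ = self.gru(x)      # (batch, seq_len, 2*hidden): forward/backward states concatenated
        return out.mean(dim=1)    # one vector per utterance (pooling choice is an assumption)
```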
After obtaining the original semantic representation of each utterance, we then apply the memory-enhanced contextual layer to further explore the correlations between utterances. Memory Enhanced Contextual Representation: Every utterance in a conversation is encoded with INLINEFORM0 , where INLINEFORM1 is the Bi-GRU encoding function that maps the input words into a vector INLINEFORM2 . The original sequence of utterances is denoted as INLINEFORM3 , and this original semantic representation serves as the input component of the memory network. In order to tackle the drawback of insensitivity to temporal information between memory cells, we adopt the approach of injecting a temporal signal into the memory using a contextual recurrent encoding: INLINEFORM4 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are learnable parameters. The new sequence INLINEFORM0 can be seen as a contextually integrated representation that takes into account both the preceding and the following utterances. The injected temporal signal further captures the contextual influence on the current input utterance. We can thus use the obtained INLINEFORM1 as an alternative representation INLINEFORM2 that is more sensitive to contextual influence. For the current input utterance INLINEFORM0 , in memory networks, the input is required to be in the same space as the input memory. Here we adopt the popular attention mechanism in the memory by measuring the relevance between the current input utterance INLINEFORM1 and the new contextual representation INLINEFORM2 . The relevance is measured with a softmax function: INLINEFORM3 Once the attention weights have been computed, the output memory is used to generate the final output of the memory layer in the form of a weighted sum over the attention and the input utterance: INLINEFORM0 The output allows the model to have unrestricted access to elements in previous steps, as opposed to a single hidden state INLINEFORM0 in recurrent neural networks. Thereby we can effectively capture long-range dependencies among utterances in a conversation. To further support complex reasoning over multiple supporting facts from memory, we adopt a stacking operation which combines the original utterance semantic representation INLINEFORM0 and the k-th output hop INLINEFORM1 to form the input to the INLINEFORM2 -th hop: INLINEFORM3 where INLINEFORM0 encodes not only information at the current step ( INLINEFORM1 ), but also relevant knowledge from the contextual memory ( INLINEFORM2 ). Note that in the scope of this work, we limit the number of hops to 1 to ease the computational cost. Structured CRF-Attention Network Traditional attention networks have proven to be an effective approach for embedding categorical inference within a deep neural network. However, in the DAR problem we need to further explore the structural dependencies among utterances and dialogue acts. Utterances in a conversation do not exist independently: a later utterance may be the answer to an earlier question, or a chunk of utterances may share the same act type. Here we generalize selection to chunk-level selection attention, and propose structured attention to model richer dependencies by incorporating structural distributions within the network. Such structured attention can be interpreted as a soft selection that considers all possible structures over the utterance input.
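Before turning to the CRF layer, the memory-enhanced contextual layer described above can be sketched roughly as follows: the original utterance vectors form the input memory, a recurrent pass over them injects the temporal signal, attention weights come from a softmax over the relevance between utterances and the contextual memory, and the output is the attention-weighted sum combined with the original input. This is a minimal single-hop sketch with assumed shapes and parameter names, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryContextLayer(nn.Module):
    """Single-hop memory attention over the utterances of one conversation."""
    def __init__(self, dim):
        super().__init__()
        # contextual recurrent encoding of the input memory (temporal signal)
        self.context_rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, utt_vecs):
        # utt_vecs: (n_utts, dim) -- original semantic representations of the utterances
        ctx, _ = self.context_rnn(utt_vecs.unsqueeze(0))   # (1, n_utts, dim)
        ctx = ctx.squeeze(0)                               # contextual memory
        scores = utt_vecs @ ctx.t()                        # relevance of each utterance to each memory slot
        attn = F.softmax(scores, dim=-1)
        out = attn @ utt_vecs                              # weighted sum over the input memory
        return out + utt_vecs                              # combine with the original input (single-hop simplification)
```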
In our paper, we formulate the DAR as a sequence labeling problem. It is a natural choice to assign a label to each element in the sequence via linear chain CRF, which enable us to model dependencies among labels. Here we do not directly apply the original linear chain CRF to the learned utterance. Although the dependencies among utterances have been captured by the former hierarchical semantic networks, we still need to further explore the dialogue act dependencies in the label level. For dialogue act sequence labeling problem, greedily predicting the dialogue act at each time-step might not optimal the solution. Instead, it is better to look into the correlations in both utterance level and the dialogue act level in order to jointly decode the best chain of dialogue acts. Formally, let INLINEFORM0 represent a sequence of utterance inputs, let INLINEFORM1 be the corresponding dialogue act sequence. Variable INLINEFORM2 are discrete latent act variables INLINEFORM3 with sample space INLINEFORM4 that encodes the desired selection among these inputs. The aim of the structured attention is to produce a sequence aware INLINEFORM5 INLINEFORM6 based on the utterances INLINEFORM7 and the dialogue act sequence INLINEFORM8 . We assume the attentive distribution INLINEFORM9 , where we condition INLINEFORM10 on the input utterances INLINEFORM11 and the dialogue act sequence INLINEFORM12 . Here we assume the utterances in the conversation as an undirected graph structure with INLINEFORM13 vertices. The CRF is parameterized with clique potentials INLINEFORM14 , indicating the subset of INLINEFORM15 give by clique INLINEFORM16 . Under this definition, the attention probability is defined as INLINEFORM17 . For symmetry, we use the softmax in a general sense, i.e. INLINEFORM18 , where INLINEFORM19 is the implied recognition function. Here INLINEFORM20 comes from the former memory enhanced deep model over utterances INLINEFORM21 and corresponding dialogue acts INLINEFORM22 . The INLINEFORM0 INLINEFORM1 over the utterances and dialogue acts is defined as expectation: INLINEFORM2 where we assume the annotation function INLINEFORM0 factors into INLINEFORM1 . The annotation function is defined to simply return the selected hidden state. The INLINEFORM2 INLINEFORM3 can be interpreted as an dialogue act aware attentive conversation as taking the expectation of the annotation function with respect to a latent variable INLINEFORM4 , where INLINEFORM5 is parameterized to be function of utterances INLINEFORM6 and dialogue acts INLINEFORM7 . The expectation is a linear combination of the input representation and represents how much attention will be focused on each utterance according to the dialogue act sequence. We can model the structural dependencies distribution over the latent INLINEFORM0 with a linear chain CRF with n states: INLINEFORM1 where INLINEFORM0 is the pairwise potential for INLINEFORM1 and INLINEFORM2 . Notice that the utterance INLINEFORM3 and the dialogue act sequence INLINEFORM4 are both obtained from downstream learned representation. The marginal distribution INLINEFORM5 can be calculated efficiently in linear time via the forward-backward algorithm. These marginals further allow us to implicitly sum over the linear chain conditional random field. We refer to this type of attention layer as a INLINEFORM6 INLINEFORM7 INLINEFORM8 , where we can explicitly look into the undirected graphical CRF structure to find which utterances are in the same chunk or in isolation. 
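The two inference routines this layer relies on, computing node marginals with the forward-backward algorithm and jointly decoding the best act sequence with Viterbi, are both linear-time dynamic programs over the same log potentials. Below is a minimal numpy sketch of the textbook versions, with unary and transition log potentials assumed to be supplied by the upstream network; it illustrates the standard algorithms rather than the paper's exact parameterization.

```python
import numpy as np
from scipy.special import logsumexp

def crf_marginals(unary, trans):
    """Node marginals of a linear-chain CRF.
    unary: (T, K) log node potentials; trans: (K, K) log transition potentials."""
    T, K = unary.shape
    alpha = np.zeros((T, K))                      # forward log-messages
    beta = np.zeros((T, K))                       # backward log-messages
    alpha[0] = unary[0]
    for t in range(1, T):
        alpha[t] = unary[t] + logsumexp(alpha[t - 1][:, None] + trans, axis=0)
    for t in range(T - 2, -1, -1):
        beta[t] = logsumexp(trans + unary[t + 1] + beta[t + 1], axis=1)
    log_z = logsumexp(alpha[-1])                  # log partition function
    return np.exp(alpha + beta - log_z)           # (T, K); each row sums to 1

def viterbi(unary, trans):
    """Most likely act sequence under the same linear-chain potentials."""
    T, K = unary.shape
    score, backptr = np.zeros((T, K)), np.zeros((T, K), dtype=int)
    score[0] = unary[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + trans      # cand[i, j]: previous act i -> current act j
        backptr[t] = cand.argmax(axis=0)
        score[t] = unary[t] + cand.max(axis=0)
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t][path[-1]]))
    return path[::-1]
```

During training the marginals (and the partition function) enter the log-likelihood, while at test time the decoder returns the jointly decoded act sequence.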
Here we define the node potentials with a unary CRF setting: INLINEFORM0 where for each utterance we summarize the possible dialogue act to perform sequential reasoning. Given the potential, we compute the structural marginals INLINEFORM0 using the forward-backward algorithm, which is then used to compute the final probability of predicting the sequence of dialogue acts as: INLINEFORM1 End-to-End Training We adopt the maximum likelihood training estimation to learn the CRF-attentive structured parameters. Given the training set INLINEFORM0 with INLINEFORM1 conversation pairs, the log likelihood can be written as: INLINEFORM2 where we denote the INLINEFORM0 as the set of parameters within neural networks from hierarchical layers: word embedding layer, memory enhanced utterance modeling layer, CRF-attentive structured layer. We define the objective function in training process: DISPLAYFORM0 INLINEFORM0 is a hyper-parameter to trade-off the training loss and regularization. By using SGD optimization with the diagonal variant of AdaGrad, at time step t, the parameter INLINEFORM1 is updated as follows: DISPLAYFORM0 where INLINEFORM0 is the initial learning rate and INLINEFORM1 is the sub-gradient at time t. Notice that one of our contributions is to apply CRF structural attention as the final layer of deep models. The whole model can be trained in an end-to-end manner. Here we consider the standard Viterbi algorithm for computing the distribution INLINEFORM0 . The main procedure is summarized in Algorithm 1. For testing, we adopt Viterbi algorithm to obtain the optimal sequence by using dynamic programming techniques. The testing procedure can be written as: INLINEFORM0 [t] Viterbi algorithm for CRF-ASN [1] The observation space INLINEFORM0 The state space INLINEFORM0 The observation sequence INLINEFORM0 The probabilities INLINEFORM0 The most likely hidden state sequence INLINEFORM0 Construct transition matrix INLINEFORM0 , each element stores the transition probability of transiting from state INLINEFORM1 to state INLINEFORM2 Construct emission matrix INLINEFORM3 , each element stores the probability of observing INLINEFORM4 from state INLINEFORM5 each state INLINEFORM6 INLINEFORM7 INLINEFORM8 each observation INLINEFORM9 each state INLINEFORM10 INLINEFORM11 INLINEFORM12 INLINEFORM13 INLINEFORM14 INLINEFORM15 INLINEFORM16 INLINEFORM17 X Experiments In this section, we conduct several experiments on two public DA datasets SwDA and MRDA, and show the effectiveness of our approach CRF-ASN for dialogue act recognition. Data Preparation We evaluate the performance of our method on two benchmark DA datasets: Switchboard Dialogue Act Corpus (SwDA) and The ICSI Meeting Recorder Dialogue Act Corpus (MRDA). These two datasets have been widely used to conduct the dialogue act recognition or the dialogue act classification tasks by several prior studies. SwDA: Switchboard Dialogue Act Corpus is a large hand-labeled dataset of 1155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. Each conversation involved two randomly selected strangers who had been charged with talking informally about one of several, self-selected general interest topics. For each utterance, together with a variety of automatic and semiautomatic tools, the tag set distinguishes 42 mutually exclusive utterance types via DAMSL taxonomy. The top five frequent DA types include STATEMENT, BACKCHANNEL / ACKNOWLEDGE, OPINION, ABANDONED / UNINTERPRETABLE, AGREEMENT / ACCEPT. 
We list the percentages of the top five utterance types in the overall corpus in Table 2. MRDA: The ICSI Meeting Recorder Dialogue Act Corpus consists of hand-annotated dialogue act, adjacency pair, and hotspot labels for the 75 meetings in the ICSI meeting corpus. The MRDA scheme provides several class-maps and corresponding scripts for grouping related tags together into a smaller number of DAs. In this work we use the most widely used class-map, which groups all tags into 5 DAs: Disruption (D) indicates that the current dialogue act is interrupted. BackChannel (B) covers utterances which are not made directly by a speaker as a response and do not function in a way that elicits a response either. FloorGrabber (F) covers dialogue acts for grabbing or maintaining the floor. Question (Q) is for eliciting listener feedback. Finally, unless an utterance is completely indecipherable or can be further described by a general tag, its default status is Statement (S). We list the percentages of the five general dialogue acts in Table 3. From Tables 2 and 3, we can see that the datasets are highly imbalanced in terms of label distribution. The dialogue act type STATEMENT occupies the largest proportion in both datasets, followed by the BACKCHANNEL act type, which to some extent reflects the speaker's speech style. We now present the data preparation procedure used to obtain a clean dataset. For both datasets, we performed pre-processing steps to filter out noise and some of the informal characteristics of the utterances. We first strip exclamations and commas, and then convert all characters to lower-case. Note that for SwDA only training and test sets are provided; in order to smooth training and tune the parameters, we split the original training set into two parts, a larger part used for training and a smaller part used as the validation set. We list the detailed statistics of the two datasets in Table 4. Evaluation Criteria We evaluate the performance of our proposed CRF-ASN method with the widely used evaluation criterion for dialogue act recognition, Accuracy. Accuracy is the normalized criterion for assessing the quality of the predicted dialogue acts on the testing utterance set INLINEFORM0 . Given the testing conversation INLINEFORM1 with its ground-truth dialogue acts INLINEFORM2 , we denote the predicted dialogue acts from our CRF-ASN method by INLINEFORM3 . We now introduce the evaluation criterion below. INLINEFORM4 Implementation Details We preprocess each utterance using the nltk library BIBREF11 and use the popular pretrained GloVe word embeddings with 100-dimensional vectors BIBREF12 . The character-level embedding is also set to 100 dimensions and is obtained with CNN filters following Kim BIBREF13 . The Gated Recurrent Unit BIBREF14 , a variant of the LSTM BIBREF15 , is employed throughout our model. We adopt the AdaDelta BIBREF16 optimizer for training with an initial learning rate of 0.005. We also apply dropout BIBREF17 between layers with a dropout rate of 0.2. For the memory-enhanced reasoning, we set the number of hops to 1 to learn preliminary contextual dependencies among utterances; we do not use more hops, as increasing the number of GRU layers reduced the accuracy of the model. Early stopping is also used on the validation set with a patience of 5 epochs.
Conversations with the same number of utterances were grouped together into mini-batches, and each utterance in a mini-batch was padded to the maximum length for that batch. The maximum batch-size allowed was 48. During training, we set the moving averages of all weights as the exponential decay rate of 0.999 BIBREF18 . The whole training process takes approximately 14 hours on a single 1080Ti GPU. All the hyper-parameters were selected by tuning one hyper-parameter at a time while keeping the others fixed. Performance Comparisons We compare our propose method with other several state-of-the-art methods for the problem of dialogue act recognition as follows: Bi-LSTM-CRF BIBREF19 method builds a hierarchical bidirectional LSTM as a base unit and the conditional random field as the top layer to do the dialogue act recognition task. DRLM-Conditional BIBREF20 method combines postive aspects of neural network architectures with probabilistic graphical models. The model combines a recurrent neural network language model with a latent variable model over shallow discourse structure. LSTM-Softmax BIBREF0 method applies a deep LSTM structure to classify dialogue acts via softmax operation. The authors claim that the word embeddings, dropout, weight decay and number of LSTM layers all have large effect on the final performance. RCNN BIBREF8 method composes both sentence model and discourse model to extend beyond the single sentence. The authors propose hierarchical CNN on sentence model and RNN on the contextual discourses. CNN BIBREF21 method incorporates the preceding short texts to classify dialogue act. The authors demonstrate that adding sequential information improves the quality of the predictions. HMM BIBREF5 method treats the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. CRF Simple baseline which applies the text encoding and CRF-based structure prediction on the DAR problem. SVM Simple baseline which applies the text encoding and multi-classification algorithm on the DAR problem. Among them, The former five approaches eg. Bi-LSTM-CRF, DRLM-Conditional, LSTM-Softmax, RCNN, CNN all adopt the deep neural network model in order to better capture the utterances semantic representations. The latter three methods (HMM, CRF, SVM) just employ the simple feature selection on the text processing. About half of the baselines including Bi-LSTM-CRF, DRLM-Conditional, HMM, CRF consider the graphical structured prediction while the others eg. RCNN, CNN, LSTM-Softmax, SVM just adopt the traditional multi-classification algorithms. Table 5 and Table 6 respectively show the experimental Accuracy results of the methods on the SwDA and MRDA datasets. The hyper-parameters and parameters which achieve the best performance on the validation set are chosen to conduct the testing evaluation. The experiments reveal some interesting points: The results show that our proposed model CRF-ASN obviously outperforms the state-of-the-art baselines on both SwDA and MRDA datasets. Numerically, Our model improves the DAR accuracy over Bi-LSTM-CRF by 2.1% and 0.8% on SwDA and MRDA respectively. It is remarkable that our CRF-ASN method is nearly close to the human annotators' performance on SwDA, which is very convincing to prove the superiority of our model. The deep neural networks outperform the other feature-based models. We can see the last three non-deep models obtain worse performance than the top five deep-based methods. 
This suggests that the performance of dialogue act recognition can be improved significantly with discriminative deep neural networks, either in convolutional neural network or the recurrent neural network. Apart from deep learning tactics, the problem formulations are also critical to the DAR problem. We see structured prediction approaches eg. CRF-ASN, Bi-LSTM-CRF obtain better results than multi-classification eg. LSTM-Softmax. What's more, under the same text encoding situation, the CRF-based model achieves much better results than the SVM-based method. Which can fully prove the superiority of the structured prediction formulation. We also notice that CRF is better than HMM when adopted to the DAR task. The major differences between our proposed model CRF-ASN and the strong baseline BI-LSTM-CRF lie in two aspects: First we adopt a more fine grained manner to encode the utterances and utilize the memory enhanced mechanism to compute the contextual dependencies. Second we employ an adapted structured attention network on the CRF layer, rather than directly apply the original CRF on the utterances. These two modifications are essential and improve the performance significantly. Ablation Results We respectively evaluate the individual contribution of the proposed module in our model. We conduct thorough ablation experiments on the SwDA dataset, which are recorded on the table 7. To make it fair, we only modify one module at a time and fix the other components to be in the same settings. We replace the proposed structured CRF-attention layer to simple CRF, the results show structured CRF-attention layer results in major improvement in the accuracy, approximately over 2.1% absolute points. We further replace the structure prediction formulation to multi-classification on SVM, the results drop dramatically, which illustrate the benefit of considering structural dependencies among utterances. We replace the fine-grained word INLINEFORM0 to the simple Glove vector. The results suggest that fine grained word embedding is useful to represent a text. We also adapt the context state INLINEFORM1 to only care its neighbor utterances. The result is not satisfying, which conveys us that the basic text understanding is critical in the semantic representations. We replace the memory network to directly apply CRF layer to the utterance layer. We also conduct a comparing experiment which plus the original utterance to memory enhanced output. The two results show the designed hierarchical memory-enhanced components are helpful in the utterance understanding and modeling the contextual influence. Visualization In Figure 3, we visualize of the output edge marginals produced by the CRF-ASN model for a conversation. In this instance, the actual dialogue act recognition procedure is displayed as INLINEFORM0 . In the testing step, the model is uncertain and select the most attentive path to maximize the true dialogue act recognition. Here we can see from the marginal edges the path INLINEFORM1 occupies more attentive weights than the path INLINEFORM2 in predicting the dialogue act label. Thus we ultimately select the right way to recognize the dialogue act. Figure 4 shows the confusion heatmap of our proposed CRF-ASN model for the SwDA dataset. Each element in the heatmap denotes the rate that the predicted label is the same to the true label. We can see from the diagonal, the <sd,sd> <b,b> pairs achieve the most satisfying matching score while <qyd, qyd> is much worse than other pairs. 
This can be explained by the fact that sd (statement) and b (acknowledge) have clear self-identifying characteristics, while qyd (Declarative Yes-No-Question) is more easily mis-recognized. We can see that the pair <qyd,qy>, which represents (Declarative Yes-No-Question, Yes-No-Question), is indeed hard to recognize, since the two dialogue act types are very similar to each other. We also notice that, due to bias in the ground truth, there are some cases where we predict the dialogue act correctly while the ground-truth label is wrong. For one thing, classifying so many fine-grained dialogue act labels is not easy for human annotators; for another, human subjectivity plays an important role in recognizing dialogue acts. Related Work In this section, we briefly review related work on dialogue act recognition and attention networks. Dialogue Act Recognition The main task of dialogue act recognition is to assign an act label to each utterance in a conversation, which can be framed as a supervised problem since each utterance has a corresponding act label. Most of the existing work on dialogue act recognition can be categorized into the following two groups. Treating DAR as a multi-classification problem. Reithinger et al. BIBREF22 deal with dialogue act classification using a statistically based language model. Webb et al. BIBREF23 apply diverse intra-utterance features involving word n-gram cue phrases to understand the utterance and perform the classification. Geertzen et al. BIBREF24 propose a multidimensional approach to distinguish and annotate units in dialogue act segmentation and classification. Grau et al. BIBREF3 focus on dialogue act classification using a Bayesian approach. Serafin et al. BIBREF25 employ Latent Semantic Analysis (LSA), both standard and augmented, for dialogue act classification. Chen et al. BIBREF26 conduct an empirical investigation of sparse log-linear models for improved dialogue act classification. Milajevs et al. BIBREF27 investigate a series of compositional distributional semantic models for dialogue act classification. Treating DAR as a sequence labeling problem. Stolcke et al. BIBREF5 treat the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Tavafi et al. BIBREF28 study the effectiveness of the supervised SVM-HMM learning algorithm for DA modeling across a comprehensive set of conversations. Similarly, Surendran et al. BIBREF29 use a combination of linear support vector machines and hidden Markov models for dialogue act tagging on the HCRC MapTask corpus. Lendvai et al. BIBREF30 explore two sequence learners, a memory-based tagger and conditional random fields, on turn-internal DA chunks. Boyer et al. BIBREF31 also apply HMMs to discover internal dialogue strategies inherent in the structure of the sequenced dialogue acts. Galley et al. BIBREF32 use skip-chain conditional random fields to model non-local pragmatic dependencies between paired utterances. Zimmermann et al. BIBREF33 investigate the use of conditional random fields for joint segmentation and classification of dialogue acts, exploiting both word and prosodic features. Recently, approaches based on deep learning have improved many state-of-the-art techniques in NLP, including DAR accuracy on open-domain conversations BIBREF7 BIBREF34 BIBREF6 BIBREF35 BIBREF21 . Kalchbrenner et al. BIBREF7 used a mixture of CNNs and RNNs.
CNNs were used to extract local features from each utterance and RNNs were used to create a general view of the whole dialogue. Khanpour et al. BIBREF0 design a deep neural network model that benefits from pre-trained word embeddings combined with a variation of the RNN structure for the DA classification task. Ji et al. BIBREF6 also investigate the performance of standard RNNs and CNNs on DA classification and obtain cutting-edge results on the MRDA corpus using a CNN. Lee et al. BIBREF21 propose a model based on CNNs and RNNs that incorporates preceding short texts as context to classify current DAs. Zhou et al. BIBREF34 combine heterogeneous information with conditional random fields for Chinese dialogue act recognition. Kumar et al. BIBREF35 build a hierarchical encoder with a CRF to learn multiple levels of utterance and act dependencies. Unlike the previous studies, we formulate the problem from the viewpoint of integrating contextual dependencies at both the utterance level and the act label level. We not only consider fine-grained multi-level semantic representations, but also integrate the structured attention network to further capture the structural dependencies in the CRF layer. Attention Network The attention mechanism has become an essential component of text understanding in recent years. Since the first work by Bahdanau et al. BIBREF36 , which adopted the attention mechanism in neural machine translation, attention-based neural networks have become a major trend in diverse text-processing fields, such as machine comprehension BIBREF37 BIBREF38 BIBREF39 BIBREF40 , machine translation BIBREF41 BIBREF42 , abstractive summarization BIBREF43 BIBREF44 , text classification BIBREF45 BIBREF46 BIBREF47 , and so on. The principle of the attention mechanism is to select the most pertinent pieces of information, rather than using all available information, a large part of which is irrelevant for computing the neural response. In our work, we propose the CRF-attentive structured network in order to encode the internal utterance inference together with the dialogue acts. Structured attention is a more general attention mechanism that takes account of graphical dependencies and extends attention beyond the standard soft-selection approach. The work most similar to ours is that of Kim et al. BIBREF48 , who also experiment with two different classes of structured attention networks: subsequence selection and syntactic selection. However, the objective of those two networks is to segment structural dependencies, which is quite different from our DAR task. In the DAR task we care more about the influence of dialogue acts on the overall conversation structure, so the former structured attention may not be suitable for our problem. Conclusion In this paper, we formulate the problem of dialogue act recognition from the viewpoint of capturing hierarchical, rich utterance representations and generalizing richer CRF-attentive graphical structural dependencies without abandoning end-to-end training. We propose the CRF-Attentive Structured Network (CRF-ASN) for the problem. We implement the model in two steps. We first encode a rich semantic representation at the utterance level by incorporating hierarchical granularity and a memory-enhanced inference mechanism. The learned utterance representation can capture long-term dependencies across the conversation.
We next adopt the internal structured attention network to compute the influence of dialogue acts and specify structural dependencies in a soft manner. This approach enables soft-selection attention over the structural CRF dependencies and takes account of the contextual influence of neighboring utterances. We demonstrate the efficacy of our method on the well-known public datasets SwDA and MRDA. Extensive experiments demonstrate that our model achieves better performance than several state-of-the-art solutions to the problem.
beyond localized features and have access to the entire sequence
cafa6103e609acaf08274a2f6d8686475c6b8723
cafa6103e609acaf08274a2f6d8686475c6b8723_0
Q: By how much do they outperform state-of-the-art solutions on SWDA and MRDA? Text: Introduction Dialogue Act Recognition (DAR) is an essential problem in modeling and detecting discourse structure. The goal of DAR is to attach semantic labels to each utterance in a conversation and recognize the speaker's intention, which can be regarded as a sequence labeling task. Many applications have benefited from the use of automatic dialogue act recognition such as dialogue systems, machine translation, automatic speech recognition, topic identification and talking avatars BIBREF0 BIBREF1 BIBREF2 . One of the primary applications of DAR is to support task-oriented discourse agent system. Knowing the past utterances of DA can help ease the prediction of the current DA state, thus help to narrow the range of utterance generation topics for the current turn. For instance, the "Greeting" and "Farewell" acts are often followed with another same type utterances, the "Answer" act often responds to the former "Question" type utterance. Thus if we can correctly recognize the current dialogue act, we can easily predict the following utterance act and generate a corresponding response. Table 1 shows a snippet of the kind of discourse structure in which we are interested. The essential problem of DAR lies on predicting the utterance's act by referring to contextual utterances with act labels. Most of existing models adopt handcrafted features and formulate the DAR as a multi-classification problem. However, these methods which adopt feature engineering process and multi-classification algorithms reveal deadly weakness from two aspects: First, they are labor intensive and can not scale up well across different datasets. Furthermore, they abandon the useful correlation information among contextual utterances. Typical multi-classification algorithms like SVM, Naive Bayes BIBREF3 BIBREF4 BIBREF5 can not account for the contextual dependencies and classify the DA label in isolation. It is evident that during a conversation, the speaker's intent is influenced by the former utterance such as the previous "Greeting" and "Farewell" examples. To tackle these two problems, some works have turn to structured prediction algorithm along with deep learning tactics such as DRLM-Conditional BIBREF6 , LSTM-Softmax BIBREF0 and RCNN BIBREF7 . However, most of them failed to utilize the empirical effectiveness of attention in the graphical structured network and relies completely on the hidden layers of the network, which may cause the structural bias. A further limitation is that although these works claim they have considered the contextual correlations, in fact they view the whole conversation as a flat sequence and neglect the dual dependencies in the utterance level and act level BIBREF8 BIBREF9 BIBREF10 . Until now, the achieved performances in DAR field are still far behind human annotator's accuracy. In this paper, we present the problem of DAR from the viewpoint of extending richer CRF-attentive structural dependencies along with neural network without abandoning end-to-end training. For simplicity, we call the framework as CRF-ASN (CRF-Attentive Structured Network). Specifically, we propose the hierarchical semantic inference integrated with memory mechanism on the utterance modeling. The memory mechanism is adopted in order to enable the model to look beyond localized features and have access to the entire sequence. 
The hierarchical semantic modeling learns different levels of granularity including word level, utterance level and conversation level. We then develop internal structured attention network on the linear-chain conditional random field (CRF) to specify structural dependencies in a soft manner. This approach generalizes the soft-selection attention on the structural CRF dependencies and takes into account the contextual influence on the nearing utterances. It is notably that the whole process is differentiable thus can be trained in an end-to-end manner. The main contributions of this paper are as follows: The rest of this paper is organized as follows. In section 2, we introduce the problem of dialogue act recognition from the viewpoint of introducing CRF-structured attention, and propose the CRF-attentive structural network with hierarchical semantic inference and memory mechanism. A variety of experimental results are presented in Section 3. We have a comprehensive analysis on the experiment results and conduct the ablations to prove the availability of our model. We then provide a brief review of the related work about dialogue act recognition problem in Section 4. Finally, we provide some concluding remarks in Section 5. CRF-attentive Structured Network In this section, we study the problem of dialogue act recognition from the viewpoint of extending rich CRF-attentive structural dependencies. We first present the hierarchical semantic inference with memory mechanism from three levels: word level, utterance level and conversation level. We then develop graphical structured attention to the linear chain conditional random field to fully utilize the contextual dependencies. The problem Before presenting the problem, we first introduce some basic mathematical notions and terminologies for dialogue act recognition. Formally, we assume the input is in the form of sequence pairs: INLINEFORM0 with INLINEFORM1 . INLINEFORM2 is the input of the INLINEFORM3 -th conversation in dataset INLINEFORM4 and INLINEFORM5 is the INLINEFORM6 -th targeted dialogue act type. Each conversation INLINEFORM7 is composed of a sequence of utterances which denoted as INLINEFORM8 with aligned act types INLINEFORM9 . We have each dialogue act type assigned to utterance INLINEFORM10 and each associated INLINEFORM11 denoted the possible dialogue act belongs to INLINEFORM12 act types. Again each utterance consists of a sequence of diverse words INLINEFORM13 . Most of the previous models do not leverage the implicit and intrinsic dependencies among dialogue act and utterances. They just consider a conversation as a flat structure with an extremely long chain of words. However, such a construction suffers vanishing gradient problem as the extremely long words become impractical in the neural network back-propagation training process. To alleviate this problem, we consider the conversation to be a hierarchical structure composed of three level encoders: first encode each word in a fine grained manner, and the second encoder operates at the utterance level, the last encoder encode each utterance in the conversation level. Each encoder is based on the previous one thus can make sure the output of the previous one can capture the dependencies across the conversation. Here we take an example to illustrate the sequence structure in Figure 1. Apart from hierarchical neural encoders, we also integrate external memory to allow the model to have unrestricted access to the whole sequence rather than localized features as in RNNs. 
Naturally the dialogue act recognition problem can be regarded as a sequence labeling task which can be assigned dialogue act through multi-classification method or the structured prediction algorithms. In our formulation, we adopt the linear chain conditional random field (CRF) along with hierarchical attentive encoders for the structured prediction. Instead of labeling each utterance in isolation, structured prediction models such as HMM, CRF can better capture the contextual dependencies among utterances. In our model, we define the structured attention model as being an extended attention model which provides an alternative approach to incorporate the machinery of structural inference directly into our neural network. Hierarchical Semantic Network Due to the hierarchical nature of conversations, our proposed model is constructed at multiple levels of granularity, e.g. word level, utterance level and conversation level. The representation of a conversation can be composed by each utterance INLINEFORM0 , and each INLINEFORM1 can be obtained by combining the representations of constituent words INLINEFORM2 . Taking inspiration from Memory Networks and incorporate so-called memory hops, we adopt the memory enhanced contextual representations in order to have unrestricted access to the whole sequence rather than localized features as former recurrent neural network. Here we include the memory enhanced hierarchical representation in Figure 2 to depict the conversation level representation. As illustrated in Figure 2, the hierarchical semantic network can be divided into two parts: (1) fine grained embedding layer (2) memory enhanced contextual representation layer. The second part can be further broken down into three main components: (a) the input memory INLINEFORM0 which takes in the output from the word embedding layer (b) the contextual attention which takes the consideration of the former utterance and the latter one. (c) the output memory INLINEFORM1 which is obtained from the input memory connected with the attention mechanism. The weights are determined by measuring the similarity between the input memory and the current utterance input. Fine Grained Embedding: For a given conversation, each utterance INLINEFORM0 is encoded by a fine grained embedding layer. We first try to utilize the rich lexical factors and linguistic properties to enhance the word representation. For each word token INLINEFORM1 in each utterance, we initialized the word embedding using pretrained embeddings such as Word2vec or Glove. Furthermore, in order to tackle the out-of-vocabulary (OOV) problem, we adopt the character-level word embedding via CNN to combine with pretrained word level embeddings. We also extend the lexical factors via POS tag and NER tag to enhance the utterance understanding. The obtained four factors are concatenated to form a rich lexical representation as: INLINEFORM2 Since we consider the bidirectional GRU to encode the representation of each utterance, we concatenate the outputs from the forward and backward GRU hidden representations at the time step. For each utterance INLINEFORM0 which consists a sequence of words INLINEFORM1 , the original semantic representation is as follows: INLINEFORM2 Here we utilize INLINEFORM0 and INLINEFORM1 to represent the word level embedding function and utterance level encoder in our hierarchical model. 
After obtained the original semantic representations on each utterance, we later apply the memory enhanced contextual layer to further explore the correlations between utterances. Memory Enhanced Contextual Representation: Every utterance in a conversation is encoded with INLINEFORM0 , where INLINEFORM1 is the encoding function via Bi-GRU to map the input words into a vector INLINEFORM2 . The original sequence utterances are denoted as INLINEFORM3 . While this original semantic representation can be the input component in the context of memory network. In order to tackle the drawback of insensitivity to temporal information between memory cells, we adopt the approach in injecting temporal signal into the memory using a contextual recurrent encoding: INLINEFORM4 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are learnable parameters. It is a remarkable fact that the new sequence INLINEFORM0 can be seen as the contextual integrated representations which take consider of the former utterances and the latter ones. The injected temporal signal can further explore the contextual influence on the current input utterance. We thus can make use of this obtained INLINEFORM1 to represent another INLINEFORM2 which cares more about the context influence. For the current input utterance INLINEFORM0 , in memory networks, the input is required to be in the same space as the input memory. Here we adopt the popular attention mechanism in the memory by measuring the relevance between current input utterance INLINEFORM1 and the contextual new representation INLINEFORM2 . The relevance is measured with a softmax function: INLINEFORM3 Once the attention weights have been computed, the output memory can be used to generate the final output of the memory layer in the form of a weighted sum over the attention and the input utterance: INLINEFORM0 The output allows the model to have unrestricted access to elements in previous steps as opposed to a single hidden state INLINEFORM0 in recurrent neural networks. Thereby we can effectively detect the long range dependencies among utterances in a conversation. To further extend the complex reasoning over multiple supporting facts from memory, we adopt a stacking operation which stacks hops between the original utterance semantic representation INLINEFORM0 and the k-th output hop INLINEFORM1 to be the input to the INLINEFORM2 th hop: INLINEFORM3 where INLINEFORM0 encodes not only information at the current step ( INLINEFORM1 ), but also relevant knowledge from the contextual memory ( INLINEFORM2 ). Note that in the scope of this work, we limit the number of hops to 1 to ease the computational cost. Structured CRF-Attention Network Traditional attention networks have proven to be an effective approach for embedding categorical inference within a deep neural network. However, In DAR problem, we need to further explore the structural dependencies among utterances and dialogue acts. As we see, utterances in a conversation are not exist independently. The latter utterance may be the responding answer to the former question, or that the chunk of utterances are in the same act type. Here we consider generalizing selection to types of chunks selecting attention, and propose the structured attention to model richer dependencies by incorporating structural distributions within networks. Such a structured attention can be interpreted as using soft-selection that considers all possible structures over the utterance input. 
In our paper, we formulate the DAR as a sequence labeling problem. It is a natural choice to assign a label to each element in the sequence via linear chain CRF, which enable us to model dependencies among labels. Here we do not directly apply the original linear chain CRF to the learned utterance. Although the dependencies among utterances have been captured by the former hierarchical semantic networks, we still need to further explore the dialogue act dependencies in the label level. For dialogue act sequence labeling problem, greedily predicting the dialogue act at each time-step might not optimal the solution. Instead, it is better to look into the correlations in both utterance level and the dialogue act level in order to jointly decode the best chain of dialogue acts. Formally, let INLINEFORM0 represent a sequence of utterance inputs, let INLINEFORM1 be the corresponding dialogue act sequence. Variable INLINEFORM2 are discrete latent act variables INLINEFORM3 with sample space INLINEFORM4 that encodes the desired selection among these inputs. The aim of the structured attention is to produce a sequence aware INLINEFORM5 INLINEFORM6 based on the utterances INLINEFORM7 and the dialogue act sequence INLINEFORM8 . We assume the attentive distribution INLINEFORM9 , where we condition INLINEFORM10 on the input utterances INLINEFORM11 and the dialogue act sequence INLINEFORM12 . Here we assume the utterances in the conversation as an undirected graph structure with INLINEFORM13 vertices. The CRF is parameterized with clique potentials INLINEFORM14 , indicating the subset of INLINEFORM15 give by clique INLINEFORM16 . Under this definition, the attention probability is defined as INLINEFORM17 . For symmetry, we use the softmax in a general sense, i.e. INLINEFORM18 , where INLINEFORM19 is the implied recognition function. Here INLINEFORM20 comes from the former memory enhanced deep model over utterances INLINEFORM21 and corresponding dialogue acts INLINEFORM22 . The INLINEFORM0 INLINEFORM1 over the utterances and dialogue acts is defined as expectation: INLINEFORM2 where we assume the annotation function INLINEFORM0 factors into INLINEFORM1 . The annotation function is defined to simply return the selected hidden state. The INLINEFORM2 INLINEFORM3 can be interpreted as an dialogue act aware attentive conversation as taking the expectation of the annotation function with respect to a latent variable INLINEFORM4 , where INLINEFORM5 is parameterized to be function of utterances INLINEFORM6 and dialogue acts INLINEFORM7 . The expectation is a linear combination of the input representation and represents how much attention will be focused on each utterance according to the dialogue act sequence. We can model the structural dependencies distribution over the latent INLINEFORM0 with a linear chain CRF with n states: INLINEFORM1 where INLINEFORM0 is the pairwise potential for INLINEFORM1 and INLINEFORM2 . Notice that the utterance INLINEFORM3 and the dialogue act sequence INLINEFORM4 are both obtained from downstream learned representation. The marginal distribution INLINEFORM5 can be calculated efficiently in linear time via the forward-backward algorithm. These marginals further allow us to implicitly sum over the linear chain conditional random field. We refer to this type of attention layer as a INLINEFORM6 INLINEFORM7 INLINEFORM8 , where we can explicitly look into the undirected graphical CRF structure to find which utterances are in the same chunk or in isolation. 
Here we define the node potentials with a unary CRF setting: INLINEFORM0 where for each utterance we summarize the possible dialogue act to perform sequential reasoning. Given the potential, we compute the structural marginals INLINEFORM0 using the forward-backward algorithm, which is then used to compute the final probability of predicting the sequence of dialogue acts as: INLINEFORM1 End-to-End Training We adopt the maximum likelihood training estimation to learn the CRF-attentive structured parameters. Given the training set INLINEFORM0 with INLINEFORM1 conversation pairs, the log likelihood can be written as: INLINEFORM2 where we denote the INLINEFORM0 as the set of parameters within neural networks from hierarchical layers: word embedding layer, memory enhanced utterance modeling layer, CRF-attentive structured layer. We define the objective function in training process: DISPLAYFORM0 INLINEFORM0 is a hyper-parameter to trade-off the training loss and regularization. By using SGD optimization with the diagonal variant of AdaGrad, at time step t, the parameter INLINEFORM1 is updated as follows: DISPLAYFORM0 where INLINEFORM0 is the initial learning rate and INLINEFORM1 is the sub-gradient at time t. Notice that one of our contributions is to apply CRF structural attention as the final layer of deep models. The whole model can be trained in an end-to-end manner. Here we consider the standard Viterbi algorithm for computing the distribution INLINEFORM0 . The main procedure is summarized in Algorithm 1. For testing, we adopt Viterbi algorithm to obtain the optimal sequence by using dynamic programming techniques. The testing procedure can be written as: INLINEFORM0 [t] Viterbi algorithm for CRF-ASN [1] The observation space INLINEFORM0 The state space INLINEFORM0 The observation sequence INLINEFORM0 The probabilities INLINEFORM0 The most likely hidden state sequence INLINEFORM0 Construct transition matrix INLINEFORM0 , each element stores the transition probability of transiting from state INLINEFORM1 to state INLINEFORM2 Construct emission matrix INLINEFORM3 , each element stores the probability of observing INLINEFORM4 from state INLINEFORM5 each state INLINEFORM6 INLINEFORM7 INLINEFORM8 each observation INLINEFORM9 each state INLINEFORM10 INLINEFORM11 INLINEFORM12 INLINEFORM13 INLINEFORM14 INLINEFORM15 INLINEFORM16 INLINEFORM17 X Experiments In this section, we conduct several experiments on two public DA datasets SwDA and MRDA, and show the effectiveness of our approach CRF-ASN for dialogue act recognition. Data Preparation We evaluate the performance of our method on two benchmark DA datasets: Switchboard Dialogue Act Corpus (SwDA) and The ICSI Meeting Recorder Dialogue Act Corpus (MRDA). These two datasets have been widely used to conduct the dialogue act recognition or the dialogue act classification tasks by several prior studies. SwDA: Switchboard Dialogue Act Corpus is a large hand-labeled dataset of 1155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. Each conversation involved two randomly selected strangers who had been charged with talking informally about one of several, self-selected general interest topics. For each utterance, together with a variety of automatic and semiautomatic tools, the tag set distinguishes 42 mutually exclusive utterance types via DAMSL taxonomy. The top five frequent DA types include STATEMENT, BACKCHANNEL / ACKNOWLEDGE, OPINION, ABANDONED / UNINTERPRETABLE, AGREEMENT / ACCEPT. 
We list the percentages of the top five utterance types in the overall corpus in Table 2. MRDA: the ICSI Meeting Recorder Dialogue Act Corpus consists of hand-annotated dialogue act, adjacency pair, and hotspot labels for the 75 meetings in the ICSI meeting corpus. The MRDA scheme provides several class-maps and corresponding scripts for grouping related tags together into a smaller number of DAs. In this work we use the most widely used class-map, which groups all tags into 5 DAs: Disruption (D) indicates that the current dialogue act is interrupted. BackChannel (B) covers utterances which are not made directly by a speaker as a response and do not function in a way that elicits a response either. FloorGrabber (F) covers dialogue acts for grabbing or maintaining the floor. Question (Q) is for eliciting listener feedback. Finally, unless an utterance is completely indecipherable or can be further described by a general tag, its default status is Statement (S). We list the percentages of the five general dialogue acts in Table 3. From Table 2 and Table 3, we can see that the datasets are highly imbalanced in terms of their label distributions. The dialogue act type STATEMENT occupies the largest proportion in both datasets. The second most frequent is the BACKCHANNEL act type, which to some extent reflects the speaker's speech style. We now describe the data preparation procedure used to obtain clean datasets. For both datasets, we performed pre-processing steps in order to filter out noise and some of the informal characteristics of the utterances. We first strip exclamation marks and commas, and then convert all characters to lower case. Note that for SwDA, only training and testing sets are provided. In order to stabilize the training step and tune the parameters, we split the original training set into two parts, a larger part used for training and a smaller part used as the validation set. We list the detailed statistics of the two datasets in Table 4. Evaluation Criteria We mainly evaluate the performance of our proposed CRF-ASN method with the widely used evaluation criterion for dialogue act recognition, accuracy. Accuracy is the normalized criterion for assessing the quality of the predicted dialogue acts on the testing utterance set INLINEFORM0. Given a testing conversation INLINEFORM1 with its ground-truth dialogue acts INLINEFORM2, we denote the predicted dialogue acts from our CRF-ASN method by INLINEFORM3. We introduce the evaluation criterion below. INLINEFORM4 Implementation Details We preprocess each utterance using the NLTK library BIBREF11 and use the popular pretrained GloVe word embeddings with 100-dimensional vectors BIBREF12. The character-level embeddings are also 100-dimensional and are obtained with CNN filters following Kim BIBREF13. The Gated Recurrent Unit BIBREF14, a variant of the LSTM BIBREF15, is employed throughout our model. We adopt the AdaDelta BIBREF16 optimizer for training, with an initial learning rate of 0.005. We also apply dropout BIBREF17 between layers with a dropout rate of 0.2. For the memory-network-enhanced reasoning, we set the number of hops to 1 to learn preliminary contextual dependencies among utterances. We do not use more hops, as increasing the number of GRU layers reduced the accuracy of the model. Early stopping is also used on the validation set with a patience of 5 epochs.
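The following is a hypothetical configuration sketch that mirrors these settings (100-dimensional GloVe and character-CNN features, a GRU encoder, dropout of 0.2, AdaDelta with learning rate 0.005); the module names, shapes and kernel size are our own illustrative choices, not the authors' code.

```python
import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    def __init__(self, vocab_size, n_chars, hidden_size=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, 100)   # initialized from GloVe in practice
        self.char_emb = nn.Embedding(n_chars, 100)
        self.char_cnn = nn.Conv1d(100, 100, kernel_size=3, padding=1)
        self.dropout = nn.Dropout(0.2)
        self.gru = nn.GRU(200, hidden_size, batch_first=True, bidirectional=True)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len); char_ids: (batch, seq_len, word_len)
        b, s, w = char_ids.shape
        c = self.char_emb(char_ids.view(b * s, w)).transpose(1, 2)        # (b*s, 100, w)
        c = torch.relu(self.char_cnn(c)).max(dim=2).values.view(b, s, 100)
        x = torch.cat([self.word_emb(word_ids), c], dim=-1)               # (b, s, 200)
        out, _ = self.gru(self.dropout(x))
        return out

model = UtteranceEncoder(vocab_size=20000, n_chars=80)
optimizer = torch.optim.Adadelta(model.parameters(), lr=0.005)
```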
Conversations with the same number of utterances were grouped together into mini-batches, and each utterance in a mini-batch was padded to the maximum length for that batch. The maximum batch size allowed was 48. During training, we maintain exponential moving averages of all weights with a decay rate of 0.999 BIBREF18. The whole training process takes approximately 14 hours on a single 1080Ti GPU. All the hyper-parameters were selected by tuning one hyper-parameter at a time while keeping the others fixed. Performance Comparisons We compare our proposed method with several state-of-the-art methods for the problem of dialogue act recognition, as follows: The Bi-LSTM-CRF BIBREF19 method builds a hierarchical bidirectional LSTM as the base unit and a conditional random field as the top layer for the dialogue act recognition task. The DRLM-Conditional BIBREF20 method combines positive aspects of neural network architectures with probabilistic graphical models. The model combines a recurrent neural network language model with a latent variable model over shallow discourse structure. The LSTM-Softmax BIBREF0 method applies a deep LSTM structure to classify dialogue acts via a softmax operation. The authors report that the word embeddings, dropout, weight decay and number of LSTM layers all have a large effect on the final performance. The RCNN BIBREF8 method composes a sentence model and a discourse model to extend beyond the single sentence. The authors use a hierarchical CNN for the sentence model and an RNN over the contextual discourse. The CNN BIBREF21 method incorporates the preceding short texts to classify dialogue acts. The authors demonstrate that adding sequential information improves the quality of the predictions. The HMM BIBREF5 method treats the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. CRF is a simple baseline which applies text encoding and CRF-based structured prediction to the DAR problem. SVM is a simple baseline which applies text encoding and a multi-class classification algorithm to the DAR problem. Among them, the first five approaches, i.e., Bi-LSTM-CRF, DRLM-Conditional, LSTM-Softmax, RCNN and CNN, all adopt deep neural network models in order to better capture the semantic representations of utterances. The last three methods (HMM, CRF, SVM) employ only simple feature-based text processing. About half of the baselines, including Bi-LSTM-CRF, DRLM-Conditional, HMM and CRF, use graphical structured prediction, while the others, e.g., RCNN, CNN, LSTM-Softmax and SVM, adopt traditional multi-class classification. Table 5 and Table 6 show the experimental accuracy of the methods on the SwDA and MRDA datasets, respectively. The hyper-parameters and parameters that achieve the best performance on the validation set are used for the test evaluation. The experiments reveal some interesting points. The results show that our proposed model CRF-ASN clearly outperforms the state-of-the-art baselines on both the SwDA and MRDA datasets. Numerically, our model improves the DAR accuracy over Bi-LSTM-CRF by 2.1% on SwDA and 0.8% on MRDA. It is remarkable that our CRF-ASN method comes close to the human annotators' performance on SwDA, which is strong evidence for the superiority of our model. The deep neural networks outperform the other feature-based models. We can see that the last three non-deep models perform worse than the top five deep-learning-based methods.
This suggests that the performance of dialogue act recognition can be improved significantly with discriminative deep neural networks, whether convolutional or recurrent. Apart from deep learning techniques, the problem formulation is also critical to the DAR problem. We see that structured prediction approaches, e.g., CRF-ASN and Bi-LSTM-CRF, obtain better results than multi-class classification approaches, e.g., LSTM-Softmax. Moreover, with the same text encoding, the CRF-based model achieves much better results than the SVM-based method, which further demonstrates the superiority of the structured prediction formulation. We also notice that CRF performs better than HMM when applied to the DAR task. The major differences between our proposed model CRF-ASN and the strong baseline Bi-LSTM-CRF lie in two aspects. First, we encode the utterances in a more fine-grained manner and utilize the memory-enhanced mechanism to compute contextual dependencies. Second, we employ an adapted structured attention network on the CRF layer, rather than directly applying the original CRF to the utterances. These two modifications are essential and improve the performance significantly. Ablation Results We evaluate the individual contribution of each proposed module in our model. We conduct thorough ablation experiments on the SwDA dataset, which are reported in Table 7. To make the comparison fair, we modify only one module at a time and keep the other components in the same settings. When we replace the proposed structured CRF-attention layer with a simple CRF, the results show that the structured CRF-attention layer yields a major improvement in accuracy, over approximately 2.1 absolute points. When we further replace the structured prediction formulation with multi-class classification using an SVM, the results drop dramatically, which illustrates the benefit of considering structural dependencies among utterances. We replace the fine-grained word representation INLINEFORM0 with the simple GloVe vector. The results suggest that fine-grained word embeddings are useful for representing text. We also restrict the context state INLINEFORM1 to attend only to its neighboring utterances. The result is not satisfying, which tells us that basic text understanding is critical for the semantic representations. We remove the memory network and directly apply the CRF layer to the utterance layer. We also conduct a comparison experiment which adds the original utterance to the memory-enhanced output. The two results show that the designed hierarchical memory-enhanced components are helpful for utterance understanding and for modeling contextual influence. Visualization In Figure 3, we visualize the output edge marginals produced by the CRF-ASN model for a conversation. In this instance, the actual dialogue act recognition procedure is displayed as INLINEFORM0. At test time, the model is uncertain and selects the most attentive path so as to recognize the true dialogue acts. Here we can see from the edge marginals that the path INLINEFORM1 receives more attentive weight than the path INLINEFORM2 in predicting the dialogue act label. Thus the model ultimately selects the right path to recognize the dialogue act. Figure 4 shows the confusion heatmap of our proposed CRF-ASN model for the SwDA dataset. Each element in the heatmap denotes the rate at which the predicted label matches the true label. We can see from the diagonal that the <sd,sd> and <b,b> pairs achieve the most satisfying matching scores, while <qyd,qyd> is much worse than the other pairs.
This can be explained by the fact that sd (statement) and b (acknowledge) are clearly identifiable, while qyd (Declarative Yes-No-Question) is much easier to misrecognize. We can see that the pair <qyd,qy>, which represents (Declarative Yes-No-Question, Yes-No-Question), is indeed hard to distinguish since the two dialogue act types are very similar to each other. In addition, we notice that due to bias in the ground truth, there are some cases where we predict the dialogue act correctly while the ground-truth label is wrong. Indeed, classifying so many fine-grained dialogue act labels is not easy for human annotators, and human subjectivity plays an important role in recognizing dialogue acts. Related Work In this section, we briefly review some related work on dialogue act recognition and attention networks. Dialogue Act Recognition The main task of dialogue act recognition is to assign an act label to each utterance in a conversation, which can be defined as a supervised problem since each utterance has a corresponding act label. Most of the existing work on dialogue act recognition can be categorized into the following two groups. Treating DAR as a multi-class classification problem. Reithinger et al. BIBREF22 address dialogue act classification using a statistically based language model. Webb et al. BIBREF23 apply diverse intra-utterance features involving word n-gram cue phrases to understand the utterance and perform the classification. Geertzen et al. BIBREF24 propose a multidimensional approach to distinguish and annotate units in dialogue act segmentation and classification. Grau et al. BIBREF3 focus on dialogue act classification using a Bayesian approach. Serafin et al. BIBREF25 employ both standard and augmented Latent Semantic Analysis (LSA) for dialogue act classification. Chen et al. BIBREF26 conduct an empirical investigation of sparse log-linear models for improved dialogue act classification. Milajevs et al. BIBREF27 investigate a series of compositional distributional semantic models for dialogue act classification. Treating DAR as a sequence labeling problem. Stolcke et al. BIBREF5 treat the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Tavafi et al. BIBREF28 study the effectiveness of the supervised SVM-HMM learning algorithm for DA modeling across a comprehensive set of conversations. Similarly, Surendran et al. BIBREF29 use a combination of linear support vector machines and hidden Markov models for dialogue act tagging in the HCRC MapTask corpus. Lendvai et al. BIBREF30 explore two sequence learners, a memory-based tagger and conditional random fields, over turn-internal DA chunks. Boyer et al. BIBREF31 also apply HMMs to discover internal dialogue strategies inherent in the structure of the sequenced dialogue acts. Galley et al. BIBREF32 use a skip-chain conditional random field to model non-local pragmatic dependencies between paired utterances. Zimmermann et al. BIBREF33 investigate the use of conditional random fields for joint segmentation and classification of dialogue acts, exploiting both word and prosodic features. Recently, approaches based on deep learning have improved the state of the art in many NLP tasks, including DAR accuracy on open-domain conversations BIBREF7 BIBREF34 BIBREF6 BIBREF35 BIBREF21. Kalchbrenner et al. BIBREF7 used a mixture of CNNs and RNNs.
CNNs were used to extract local features from each utterance and RNNs were used to create a general view of the whole dialogue. Khanpour et al. BIBREF0 design a deep neural network model that benefits from pre-trained word embeddings combined with a variation of the RNN structure for the DA classification task. Ji et al. BIBREF6 also investigated the performance of standard RNNs and CNNs on DA classification and obtained cutting-edge results on the MRDA corpus using a CNN. Lee et al. BIBREF21 propose a model based on CNNs and RNNs that incorporates preceding short texts as context to classify current DAs. Zhou et al. BIBREF34 combine heterogeneous information with conditional random fields for Chinese dialogue act recognition. Kumar et al. BIBREF35 build a hierarchical encoder with a CRF to learn multiple levels of utterance and act dependencies. Unlike these previous studies, we formulate the problem from the viewpoint of integrating contextual dependencies at both the utterance level and the act-label level. We not only consider fine-grained multi-level semantic representations, but also integrate a structured attention network to further capture the structural dependencies in the CRF layer. Attention Network The attention mechanism has become an essential component of text understanding in recent years. Since the first work by Bahdanau et al. BIBREF36, which adopted the attention mechanism for neural machine translation, attention-based neural networks have become a major trend in diverse text research fields, such as machine comprehension BIBREF37 BIBREF38 BIBREF39 BIBREF40, machine translation BIBREF41 BIBREF42, abstractive summarization BIBREF43 BIBREF44, text classification BIBREF45 BIBREF46 BIBREF47, and so on. The principle of the attention mechanism is to select the most pertinent pieces of information, rather than using all available information, a large part of which is irrelevant for computing the neural response. In our work, we propose the CRF-attentive structured network in order to encode internal utterance inference together with dialogue acts. Structured attention is a more general attention mechanism which takes graphical dependencies into account and allows attention to be extended beyond the standard soft-selection approach. The most similar work to our model is proposed by Kim et al. BIBREF48. They experiment with two different classes of structured attention networks: subsequence selection and syntactic selection. However, the objective of these two networks is to segment structural dependencies, which is quite different from our DAR task. In the DAR task we care more about the influence of dialogue acts on the overall conversation structure, so the aforementioned forms of structured attention may not be suitable for our problem. Conclusion In this paper, we formulate the problem of dialogue act recognition from the viewpoint of capturing rich hierarchical utterance representations and generalizing richer CRF-attentive graphical structural dependencies without abandoning end-to-end training. We propose the CRF-Attentive Structured Network (CRF-ASN) for this problem. We implement the model in two steps. We first encode rich semantic representations at the utterance level by incorporating hierarchical granularity and a memory-enhanced inference mechanism. The learned utterance representations can capture long-term dependencies across the conversation.
We then adopt the internal structured attention network to compute the influence of dialogue acts and to specify structural dependencies in a soft manner. This approach enables soft-selection attention over the structural CRF dependencies and takes into account the contextual influence of neighboring utterances. We demonstrate the efficacy of our method on the well-known public datasets SwDA and MRDA. The extensive experiments demonstrate that our model can achieve better performance than several state-of-the-art solutions to the problem.
improves the DAR accuracy over Bi-LSTM-CRF by 2.1% and 0.8% on SwDA and MRDA respectively
7f2fd7ab968de720082133c42c2052d351589a67
7f2fd7ab968de720082133c42c2052d351589a67_0
Q: What type and size of word embeddings were used? Text: Introduction Microblogging environments, which allow users to post short messages, have gained increased popularity in the last decade. Twitter, which is one of the most popular microblogging platforms, has become an interesting platform for exchanging ideas, following recent developments and trends, or discussing any possible topic. Since Twitter has an enormously wide range of users with varying interests and sharing preferences, a significant amount of content is being created rapidly. Therefore, mining such platforms can extract valuable information. As a consequence, extracting information from Twitter has become a hot topic of research. For Twitter text mining, one popular research area is opinion mining or sentiment analysis, which is surely useful for companies or political parties to gather information about their services and products BIBREF0 . Another popular research area is content analysis, or more specifically topic modeling, which is useful for text classification and filtering applications on Twitter BIBREF1 . Moreover, event monitoring and trend analysis are also other examples of useful application areas on microblog texts BIBREF2 . In order to build successful social media analysis applications, it is necessary to employ successful processing tools for Natural Language Processing (NLP) tasks such as Named Entity Recognition (NER). NER is a critical stage for various NLP applications including machine translation, question answering and opinion mining. The aim of NER is to classify and locate atomic elements in a given text into predefined categories like the names of the persons, locations, and organizations (PLOs). NER on well-written texts is accepted as a solved problem for well-studied languages like English. However, it still needs further work for morphologically rich languages like Turkish due to their complex structure and relatively scarce language processing tools and data sets BIBREF3 . In addition, most of the NER systems are designed for formal texts. The performance of such systems drops significantly when applied on informal texts. To illustrate, the state-of-the-art Turkish NER system has CoNLL F-score of 91.94% on news data, but the performance drops to F-score of 19.28% when this system is adopted to Twitter data BIBREF4 . There are several challenges for NER on tweets, which are also summarized in Kucuk-2014-1, due to the very short text length and informal structure of the language used. Missing proper grammar rules and punctuation, lack of capitalization and apostrophes, usage of hashtags, abbreviations, and slang words are some of those challenges. In Twitter, using contracted forms and metonymic expressions instead of full organization or location names is very common as well. The usage of non-diacritic characters and the limited annotated data bring additional challenges for processing Turkish tweets. Due to the dynamic language used in Twitter, heavy feature engineering is not feasible for Twitter NER. Demir-2014 developed a semi-supervised approach for Turkish NER on formal (newswire) text using word embeddings obtained from unlabeled data. They obtained promising results without using any gazetteers and language dependent features. We adopted this approach for informal texts and evaluated it on Turkish tweets, where we achieved the state-of-the-art F-score performance. 
Our results show that using word embeddings for Twitter NER in Turkish can result in better F-score performance compared to using text normalization as a pre-processing step. In addition, utilizing in-domain word embeddings can be a promising approach for Twitter NER. Related Work There are various important studies of NER on Twitter for English. Ritter-2011 presented a two-phase NER system for tweets, T-NER, using Conditional Random Fields (CRF) and including tweet-specific features. Liu-2011 proposed a hybrid NER approach based on K-Nearest Neighbors and linear CRF. Liu-2012 presented a factor graph-based method for NER on Twitter. Li-2012 described an unsupervised approach for tweets, called TwiNER. Bontcheva-2013 described an NLP pipeline for tweets, called TwitIE. Very recently, Cherry-2015 have shown the effectiveness of Brown clusters and word vectors on Twitter NER for English. For Turkish NER on formal texts, Tur-2003 presented the first study with a Hidden Markov Model based approach. Tatar-2011 presented an automatic rule learning system. Yeniterzi-2011 used CRF for Turkish NER, and Kucuk-2012 proposed a hybrid approach. A CRF-based model by Seker-2012 is the state-of-the-art Turkish NER system with CoNLL F-score of 91.94%, using gazetteers. Demir-2014 achieved a similar F-score of 91.85%, without gazetteers and language dependent features, using a semi-supervised model with word embeddings. For Turkish NER on Twitter, Celikkaya-2013 presented the first study by adopting the CRF-based NER of Seker-2012 with a text normalizer. Kucuk-2014-1 adopted a multilingual rule-based NER by extending the resources for Turkish. Kucuk-2014-2 adopted a rule-based approach for Turkish tweets, where diacritics-based expansion to lexical resources and relaxing the capitalization yielded an F-score of 48% with strict CoNLL-like metric. NER for Turkish Tweets using Semi-supervised Learning To build a NER model with a semi-supervised learning approach on Turkish tweets, we used a neural network based architecture consisting of unsupervised and supervised stages. Unsupervised Stage In the unsupervised stage, our aim is to learn distributed word representations, or word embeddings, in continuous vector space where semantically similar words are expected to be close to each other. Word vectors trained on large unlabeled Turkish corpus can provide additional knowledge base for NER systems trained with limited amount of labeled data in the supervised stage. A word representation is usually a vector associated with each word, where each dimension represents a feature. The value of each dimension is defined to be representing the amount of activity for that specific feature. A distributed representation represents each word as a dense vector of continuous values. By having lower dimensional dense vectors, and by having real values at each dimension, distributed word representations are helpful to solve the sparsity problem. Distributed word representations are trained with a huge unlabeled corpus using unsupervised learning. If this unlabeled corpus is large enough, then we expect that the distributed word representations will capture the syntactic and semantic properties of each word and this will provide a mechanism to obtain similar representations for semantically and syntactically close words. Vector space distributed representations of words are helpful for learning algorithms to reach better results in many NLP tasks, since they provide a method for grouping similar words together. 
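A minimal sketch of training such embeddings on an unlabeled corpus with gensim's word2vec is given below; it is our own illustration, not the authors' code, and the corpus path and tokenization are placeholders. The settings mirror those reported in the next paragraphs: the Skip-gram model, 200-dimensional vectors, and a surrounding-word range of 5.

```python
from gensim.models import Word2Vec

# One pre-tokenized, lower-cased sentence (or tweet) per line.
sentences = [line.split() for line in open("unlabeled_corpus.txt", encoding="utf-8")]

model = Word2Vec(
    sentences,
    sg=1,             # Skip-gram
    vector_size=200,  # embedding dimension (gensim >= 4.0; `size` in older versions)
    window=5,         # the reported range of surrounding words
    min_count=5,
    workers=4,
)
model.wv.save_word2vec_format("tr_embeddings.vec")
# Illustrative check on a word assumed to be frequent in the corpus:
print(model.wv.most_similar("ankara")[:3])
```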
The idea of using distributed word representations in vector space is applied to statistical language modeling for the first time by using a neural network based approach with a significant success by Bengio-2003. The approach is based on learning a distributed representation of each word, where each dimension of such a word embedding represents a hidden feature of this word and is used to capture the word's semantic and grammatical properties. Later on, Collobert-2011 proposed to use distributed word representations together with the supervised neural networks and achieved state-of-the art results in different NLP tasks, including NER for English. We used the public tool, word2vec, released by Mikolov-2013 to obtain the word embeddings. Their neural network approach is similar to the feed-forward neural networks BIBREF5 , BIBREF6 . To be more precise, the previous words to the current word are encoded in the input layer and then projected to the projection layer with a shared projection matrix. After that, the projection is given to the non-linear hidden layer and then the output is given to softmax in order to receive a probability distribution over all the words in the vocabulary. However, as suggested by Mikolov-2013, removing the non-linear hidden layer and making the projection layer shared by all words is much faster, which allowed us to use a larger unlabeled corpus and obtain better word embeddings. Among the methods presented in Mikolov-2013, we used the continuous Skip-gram model to obtain semantic representations of Turkish words. The Skip-gram model uses the current word as an input to the projection layer with a log-linear classifier and attempts to predict the representation of neighboring words within a certain range. In the Skip-gram model architecture we used, we have chosen 200 as the dimension of the obtained word vectors. The range of surrounding words is chosen to be 5, so that we will predict the distributed representations of the previous 2 words and the next 2 words using the current word. Our vector size and range decisions are aligned with the choices made in the previous study for Turkish NER by Demir-2014. The Skip-gram model architecture we used is shown in Figure FIGREF3 . Supervised Stage In this stage, a comparably smaller amount of labeled data is used for training the final NER models. We used the publicly available neural network implementation by Turian-2010, which actually follows the study by Ratinov-2009, where a regularized averaged multiclass perceptron is used. Note that although non-local features are proven to be useful for the NER task on formal text types such as news articles, their usage and benefit is questionable for informal and short text types. Due to the fact that each tweet is treated as a single document with only 140 characters, it is difficult to make use of non-local features such as context aggregation and prediction history for the NER task on tweets. On the other hand, local features are mostly related to the previous and next tokens of the current token. With this motivation, we explored both local and non-local features but observed that we achieve better results without non-local features. As a result, to construct our NER model on tweets, we used the following local features: Context: All tokens in the current window of size two. Capitalization: Boolean feature indicating whether the first character of a token is upper-case or not. This feature is generated for all the tokens in the current window. 
Previous tags: Named entity tag predictions of the previous two tokens. Word type information: Type information of tokens in the current window, i.e. all-capitalized, is-capitalized, all-digits, contains-apostrophe, and is-alphanumeric. Token prefixes: First characters with length three and four, if exists, of current token. Token suffixes: Last characters with length one to four, if exists, of current token. Word embeddings: Vector representations of words in the current window. In addition to tailoring the features used by Ratinov-2009 for tweets, there are other Twitter-specific aspects of our NER system such as using word embeddings trained on an unlabeled tweet corpus, applying normalization on labeled tweets, and extracting Twitter-specific keywords like hashtags, mentions, smileys, and URLs from both labeled and unlabeled Turkish tweets. For text normalization as a pre-processing step of our system, we used the Turkish normalization interface developed for social media text with ill formed word detection and candidate word generation BIBREF8 . Along with the features used, the representation scheme for named entities is also important in terms of performance for a NER system. Two popular such encoding schemes are BIO and BILOU. The BIO scheme identifies the Beginning, the Inside and the Outside of the named entities, whereas the BILOU scheme identifies the Beginning, the Inside and the Last tokens of multi-token named entities, plus the Outside if it is not a named entity and the Unit length if the entity has single token. Since it is shown by Ratinov-2009 that BILOU representation scheme significantly outperforms the BIO encoding scheme, we make use of BILOU encoding for tagging named entities in our study. Furthermore, we applied normalization to numerical expressions as described in Turian-2010, which helps to achieve a degree of abstraction to numerical expressions. Unlabeled Data In the unsupervised stage, we used two types of unlabeled data to obtain Turkish word embeddings. The first one is a Turkish news-web corpus containing 423M words and 491M tokens, namely the BOUN Web Corpus BIBREF9 , BIBREF10 . The second one is composed of 21M Turkish tweets with 241M words and 293M tokens, where we combined 1M tweets from TS TweetS by Sezer-2013 and 20M Turkish Tweets by Bolat and Amasyalı. We applied tokenization on both Turkish news-web corpus and Turkish tweets corpus using the publicly available Zemberek tool developed for Turkish. We have also applied lower-casing on both corpora in order to limit the number of unique words. Since our combined tweets corpus is composed of Twitter-specific texts, we applied what we call Twitter processing where we replaced mentions, hashtags, smileys and URLs with certain keywords. Labeled Data In the supervised stage, we used two types of labeled data to train and test our NER models. The first one is Turkish news data annotated with ENAMEX-type named entities, or PLOs BIBREF11 . It includes 14481 person, 9409 location, and 9034 organization names in the training partition of 450K words. This data set is popularly used for performance evaluation of NER systems for Turkish, including the ones presented by Seker-2012, by Yeniterzi-2011 and by Demir-2014. The second type of labeled data is annotated Turkish tweets, where we used two different sets. The first set, TwitterDS-1, has around 5K tweets with 54K tokens and 1336 annotated PLOs BIBREF4 . 
The second set, TwitterDS-2, which is publicly available, has 2320 tweets with around 21K tokens and 980 PLOs in total BIBREF12 . The counts for each of the ENAMEX-type named entities for these Turkish Twitter data sets are provided in Table TABREF21 . Experiments and Results We designed a number of experimental settings to investigate their effects on Turkish Twitter NER. These settings are as follows: the text type of annotated data used for training, the text type of unlabeled data used to learn the word embeddings, using the capitalization feature or not, and applying text normalization. We evaluated all models on ENAMEX types with the CoNLL metric and reported phrase-level overall F-score performance results. To be more precise, the F-score values presented in Table TABREF23 , Table TABREF26 and Table TABREF27 are micro-averaged over the classes using the strict metric. NER Models Trained on News Most of our NER models are trained on annotated Turkish news data by Tur-2003 and tested on tweets, due to the limited amount of annotated Turkish tweets. In addition to using TwitterDS-1 and TwitterDS-2 as test sets, we detected 291 completely non-Turkish tweets out of 5040 in TwitterDS-1 and filtered them out using the isTurkish tool BIBREF13 to obtain TwitterDS-1_FT. We also used the normalized versions of these data sets. As shown in Table TABREF23 , turning off the capitalization feature is better when text normalization is not applied (bold entries), but the best results are achieved when normalization is applied and the capitalization feature is used (underlined bold entries). To observe the effects of the type of the source text used to learn the word embeddings, we have three models as Web, Twt, and Web+Twt where we used the Turkish web corpus, tweet corpus, and their combination respectively to learn the word embeddings. Including in-domain data from a relatively smaller tweet corpus together with a larger web corpus yields in better Twitter NER performance. We examined the effects of word embeddings on the performance of our NER models, and compared them to the improvements achieved by applying normalization on Turkish tweets. The baseline NER model is built by using the features explained in section 3.2, except the capitalization and word embeddings features. Using word embeddings obtained with unsupervised learning from a large corpus of web articles and tweets results in better NER performance than applying a Twitter-specific text normalizer, as shown in Table TABREF26 . This is crucial since Turkish text normalization for unstructured data is a challenging task and requires successful morphological analysis, whereas extracting word embeddings for any language or domain is much easier, yet more effective. NER Models Trained on Tweets Although an ideal Turkish NER model for Twitter should be trained on similar informal texts, all previous Turkish Twitter NER systems are trained on news data due to the limited amount of annotated Turkish tweets. We also experimented training NER models on relatively smaller labeled Twitter data with 10-fold cross-validation. Our best phrase-level F-score of 46.61% achieved on TwitterDS-1_FT is increased to 48.96% when trained on the much smaller tweets data, TwitterDS-2, instead of news data. Comparison with the State-of-the-art The best F-scores of the previously published Turkish Twitter NER systems BIBREF4 , BIBREF12 , BIBREF14 as well as our proposed NER system are shown in Table TABREF27 . 
We used the same training set with the first system BIBREF4 in our study, but the second NER system BIBREF12 uses a different multilingual news data and the third system BIBREF14 , which is rule based, does not have a training phase at all. All of these previous NER systems use gazetteer lists for named entities, which are manually constructed and highly language-dependent, whereas our system does not. Note that there is no publicly available gazetteer lists in Turkish. Kucuk-2014-2 achieved the state-of-the-art performance results for Turkish Twitter NER with their best model settings (shown in italic). These settings are namely using gazetteers list, with capitalization feature turned off, and with no normalization, together by expanding their gazetteer lists of named entities with diacritics variations. Our proposed system outperforms the state-of-the-art results on both Turkish Twitter data sets, even without using gazetteers (shown in bold). We achieved our best performance results with Turkish word embeddings obtained from our Web+Tweets corpus, when we apply normalization on tweets and keep the capitalization as a feature. Conclusion We adopted a neural networks based semi-supervised approach using word embeddings for the NER task on Turkish tweets. At the first stage, we attained distributed representations of words by employing a fast unsupervised learning method on a large unlabeled corpus. At the second stage, we exploited these word embeddings together with language independent features in order to train our neural network on labeled data. We compared our results on two different Turkish Twitter data sets with the state-of-the-art NER systems proposed for Twitter data in Turkish and showed that our system outperforms the state-of-the-art results on both data sets. Our results also show that using word embeddings from an unlabeled corpus can lead to better performance than applying Twitter-specific text normalization. We discussed the promising benefits of using in-domain data to learn word embeddings at the unsupervised stage as well. Since the only language dependent part of our Turkish Twitter NER system is text normalization, and since even without text normalization it outperforms the previous state-of-the-art results, we believe that our approach can be adapted to other morphologically rich languages. Our Turkish Twitter NER system, namely TTNER, is publicly available. We believe that there is still room for improvement for NLP tasks on Turkish social media data. As a future work, we aim to construct a much larger in-domain resource, i.e., unlabeled Turkish tweets corpus, and investigate the full benefits of attaining word embeddings from in-domain data on Twitter NER. Acknowledgements This research is partially supported by Boğaziçi University Research Fund Grant Number 11170. We would also like to thank The Scientific and Technological Research Council of Turkey (TÜBİTAK), The Science Fellowships and Grant Programmes Department (BİDEB) for providing financial support with 2210 National Scholarship Programme for MSc Students.
word2vec, 200 as the dimension of the obtained word vectors
369b0a481a4b75439ade0ec4f12b44414c4e5164
369b0a481a4b75439ade0ec4f12b44414c4e5164_0
Q: What data was used to build the word embeddings? Text: Introduction Microblogging environments, which allow users to post short messages, have gained increased popularity in the last decade. Twitter, which is one of the most popular microblogging platforms, has become an interesting platform for exchanging ideas, following recent developments and trends, or discussing any possible topic. Since Twitter has an enormously wide range of users with varying interests and sharing preferences, a significant amount of content is being created rapidly. Therefore, mining such platforms can extract valuable information. As a consequence, extracting information from Twitter has become a hot topic of research. For Twitter text mining, one popular research area is opinion mining or sentiment analysis, which is surely useful for companies or political parties to gather information about their services and products BIBREF0 . Another popular research area is content analysis, or more specifically topic modeling, which is useful for text classification and filtering applications on Twitter BIBREF1 . Moreover, event monitoring and trend analysis are also other examples of useful application areas on microblog texts BIBREF2 . In order to build successful social media analysis applications, it is necessary to employ successful processing tools for Natural Language Processing (NLP) tasks such as Named Entity Recognition (NER). NER is a critical stage for various NLP applications including machine translation, question answering and opinion mining. The aim of NER is to classify and locate atomic elements in a given text into predefined categories like the names of the persons, locations, and organizations (PLOs). NER on well-written texts is accepted as a solved problem for well-studied languages like English. However, it still needs further work for morphologically rich languages like Turkish due to their complex structure and relatively scarce language processing tools and data sets BIBREF3 . In addition, most of the NER systems are designed for formal texts. The performance of such systems drops significantly when applied on informal texts. To illustrate, the state-of-the-art Turkish NER system has CoNLL F-score of 91.94% on news data, but the performance drops to F-score of 19.28% when this system is adopted to Twitter data BIBREF4 . There are several challenges for NER on tweets, which are also summarized in Kucuk-2014-1, due to the very short text length and informal structure of the language used. Missing proper grammar rules and punctuation, lack of capitalization and apostrophes, usage of hashtags, abbreviations, and slang words are some of those challenges. In Twitter, using contracted forms and metonymic expressions instead of full organization or location names is very common as well. The usage of non-diacritic characters and the limited annotated data bring additional challenges for processing Turkish tweets. Due to the dynamic language used in Twitter, heavy feature engineering is not feasible for Twitter NER. Demir-2014 developed a semi-supervised approach for Turkish NER on formal (newswire) text using word embeddings obtained from unlabeled data. They obtained promising results without using any gazetteers and language dependent features. We adopted this approach for informal texts and evaluated it on Turkish tweets, where we achieved the state-of-the-art F-score performance. 
Our results show that using word embeddings for Twitter NER in Turkish can result in better F-score performance compared to using text normalization as a pre-processing step. In addition, utilizing in-domain word embeddings can be a promising approach for Twitter NER. Related Work There are various important studies of NER on Twitter for English. Ritter-2011 presented a two-phase NER system for tweets, T-NER, using Conditional Random Fields (CRF) and including tweet-specific features. Liu-2011 proposed a hybrid NER approach based on K-Nearest Neighbors and linear CRF. Liu-2012 presented a factor graph-based method for NER on Twitter. Li-2012 described an unsupervised approach for tweets, called TwiNER. Bontcheva-2013 described an NLP pipeline for tweets, called TwitIE. Very recently, Cherry-2015 have shown the effectiveness of Brown clusters and word vectors on Twitter NER for English. For Turkish NER on formal texts, Tur-2003 presented the first study with a Hidden Markov Model based approach. Tatar-2011 presented an automatic rule learning system. Yeniterzi-2011 used CRF for Turkish NER, and Kucuk-2012 proposed a hybrid approach. A CRF-based model by Seker-2012 is the state-of-the-art Turkish NER system with CoNLL F-score of 91.94%, using gazetteers. Demir-2014 achieved a similar F-score of 91.85%, without gazetteers and language dependent features, using a semi-supervised model with word embeddings. For Turkish NER on Twitter, Celikkaya-2013 presented the first study by adopting the CRF-based NER of Seker-2012 with a text normalizer. Kucuk-2014-1 adopted a multilingual rule-based NER by extending the resources for Turkish. Kucuk-2014-2 adopted a rule-based approach for Turkish tweets, where diacritics-based expansion to lexical resources and relaxing the capitalization yielded an F-score of 48% with strict CoNLL-like metric. NER for Turkish Tweets using Semi-supervised Learning To build a NER model with a semi-supervised learning approach on Turkish tweets, we used a neural network based architecture consisting of unsupervised and supervised stages. Unsupervised Stage In the unsupervised stage, our aim is to learn distributed word representations, or word embeddings, in continuous vector space where semantically similar words are expected to be close to each other. Word vectors trained on large unlabeled Turkish corpus can provide additional knowledge base for NER systems trained with limited amount of labeled data in the supervised stage. A word representation is usually a vector associated with each word, where each dimension represents a feature. The value of each dimension is defined to be representing the amount of activity for that specific feature. A distributed representation represents each word as a dense vector of continuous values. By having lower dimensional dense vectors, and by having real values at each dimension, distributed word representations are helpful to solve the sparsity problem. Distributed word representations are trained with a huge unlabeled corpus using unsupervised learning. If this unlabeled corpus is large enough, then we expect that the distributed word representations will capture the syntactic and semantic properties of each word and this will provide a mechanism to obtain similar representations for semantically and syntactically close words. Vector space distributed representations of words are helpful for learning algorithms to reach better results in many NLP tasks, since they provide a method for grouping similar words together. 
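The sparsity point made above can be illustrated with a toy example of our own (not from the paper): a one-hot representation grows with the vocabulary and places all words at the same distance from one another, while a dense embedding lookup gives low-dimensional continuous vectors in which related words can end up close together.

```python
import numpy as np

vocab = {"ankara": 0, "istanbul": 1, "kedi": 2}   # hypothetical toy vocabulary
V, d = len(vocab), 4

def one_hot(word):
    v = np.zeros(V)
    v[vocab[word]] = 1.0
    return v

rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(V, d))          # learned from data in practice

def dense(word):
    return embedding_table[vocab[word]]

print(one_hot("ankara"))   # [1. 0. 0.] -- dimension grows with |V|
print(dense("ankara"))     # 4-dimensional continuous vector
```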
The idea of using distributed word representations in vector space is applied to statistical language modeling for the first time by using a neural network based approach with a significant success by Bengio-2003. The approach is based on learning a distributed representation of each word, where each dimension of such a word embedding represents a hidden feature of this word and is used to capture the word's semantic and grammatical properties. Later on, Collobert-2011 proposed to use distributed word representations together with the supervised neural networks and achieved state-of-the art results in different NLP tasks, including NER for English. We used the public tool, word2vec, released by Mikolov-2013 to obtain the word embeddings. Their neural network approach is similar to the feed-forward neural networks BIBREF5 , BIBREF6 . To be more precise, the previous words to the current word are encoded in the input layer and then projected to the projection layer with a shared projection matrix. After that, the projection is given to the non-linear hidden layer and then the output is given to softmax in order to receive a probability distribution over all the words in the vocabulary. However, as suggested by Mikolov-2013, removing the non-linear hidden layer and making the projection layer shared by all words is much faster, which allowed us to use a larger unlabeled corpus and obtain better word embeddings. Among the methods presented in Mikolov-2013, we used the continuous Skip-gram model to obtain semantic representations of Turkish words. The Skip-gram model uses the current word as an input to the projection layer with a log-linear classifier and attempts to predict the representation of neighboring words within a certain range. In the Skip-gram model architecture we used, we have chosen 200 as the dimension of the obtained word vectors. The range of surrounding words is chosen to be 5, so that we will predict the distributed representations of the previous 2 words and the next 2 words using the current word. Our vector size and range decisions are aligned with the choices made in the previous study for Turkish NER by Demir-2014. The Skip-gram model architecture we used is shown in Figure FIGREF3 . Supervised Stage In this stage, a comparably smaller amount of labeled data is used for training the final NER models. We used the publicly available neural network implementation by Turian-2010, which actually follows the study by Ratinov-2009, where a regularized averaged multiclass perceptron is used. Note that although non-local features are proven to be useful for the NER task on formal text types such as news articles, their usage and benefit is questionable for informal and short text types. Due to the fact that each tweet is treated as a single document with only 140 characters, it is difficult to make use of non-local features such as context aggregation and prediction history for the NER task on tweets. On the other hand, local features are mostly related to the previous and next tokens of the current token. With this motivation, we explored both local and non-local features but observed that we achieve better results without non-local features. As a result, to construct our NER model on tweets, we used the following local features: Context: All tokens in the current window of size two. Capitalization: Boolean feature indicating whether the first character of a token is upper-case or not. This feature is generated for all the tokens in the current window. 
Previous tags: Named entity tag predictions of the previous two tokens. Word type information: Type information of tokens in the current window, i.e. all-capitalized, is-capitalized, all-digits, contains-apostrophe, and is-alphanumeric. Token prefixes: First characters with length three and four, if exists, of current token. Token suffixes: Last characters with length one to four, if exists, of current token. Word embeddings: Vector representations of words in the current window. In addition to tailoring the features used by Ratinov-2009 for tweets, there are other Twitter-specific aspects of our NER system such as using word embeddings trained on an unlabeled tweet corpus, applying normalization on labeled tweets, and extracting Twitter-specific keywords like hashtags, mentions, smileys, and URLs from both labeled and unlabeled Turkish tweets. For text normalization as a pre-processing step of our system, we used the Turkish normalization interface developed for social media text with ill formed word detection and candidate word generation BIBREF8 . Along with the features used, the representation scheme for named entities is also important in terms of performance for a NER system. Two popular such encoding schemes are BIO and BILOU. The BIO scheme identifies the Beginning, the Inside and the Outside of the named entities, whereas the BILOU scheme identifies the Beginning, the Inside and the Last tokens of multi-token named entities, plus the Outside if it is not a named entity and the Unit length if the entity has single token. Since it is shown by Ratinov-2009 that BILOU representation scheme significantly outperforms the BIO encoding scheme, we make use of BILOU encoding for tagging named entities in our study. Furthermore, we applied normalization to numerical expressions as described in Turian-2010, which helps to achieve a degree of abstraction to numerical expressions. Unlabeled Data In the unsupervised stage, we used two types of unlabeled data to obtain Turkish word embeddings. The first one is a Turkish news-web corpus containing 423M words and 491M tokens, namely the BOUN Web Corpus BIBREF9 , BIBREF10 . The second one is composed of 21M Turkish tweets with 241M words and 293M tokens, where we combined 1M tweets from TS TweetS by Sezer-2013 and 20M Turkish Tweets by Bolat and Amasyalı. We applied tokenization on both Turkish news-web corpus and Turkish tweets corpus using the publicly available Zemberek tool developed for Turkish. We have also applied lower-casing on both corpora in order to limit the number of unique words. Since our combined tweets corpus is composed of Twitter-specific texts, we applied what we call Twitter processing where we replaced mentions, hashtags, smileys and URLs with certain keywords. Labeled Data In the supervised stage, we used two types of labeled data to train and test our NER models. The first one is Turkish news data annotated with ENAMEX-type named entities, or PLOs BIBREF11 . It includes 14481 person, 9409 location, and 9034 organization names in the training partition of 450K words. This data set is popularly used for performance evaluation of NER systems for Turkish, including the ones presented by Seker-2012, by Yeniterzi-2011 and by Demir-2014. The second type of labeled data is annotated Turkish tweets, where we used two different sets. The first set, TwitterDS-1, has around 5K tweets with 54K tokens and 1336 annotated PLOs BIBREF4 . 
The second set, TwitterDS-2, which is publicly available, has 2320 tweets with around 21K tokens and 980 PLOs in total BIBREF12 . The counts for each of the ENAMEX-type named entities for these Turkish Twitter data sets are provided in Table TABREF21 . Experiments and Results We designed a number of experimental settings to investigate their effects on Turkish Twitter NER. These settings are as follows: the text type of annotated data used for training, the text type of unlabeled data used to learn the word embeddings, using the capitalization feature or not, and applying text normalization. We evaluated all models on ENAMEX types with the CoNLL metric and reported phrase-level overall F-score performance results. To be more precise, the F-score values presented in Table TABREF23 , Table TABREF26 and Table TABREF27 are micro-averaged over the classes using the strict metric. NER Models Trained on News Most of our NER models are trained on annotated Turkish news data by Tur-2003 and tested on tweets, due to the limited amount of annotated Turkish tweets. In addition to using TwitterDS-1 and TwitterDS-2 as test sets, we detected 291 completely non-Turkish tweets out of 5040 in TwitterDS-1 and filtered them out using the isTurkish tool BIBREF13 to obtain TwitterDS-1_FT. We also used the normalized versions of these data sets. As shown in Table TABREF23 , turning off the capitalization feature is better when text normalization is not applied (bold entries), but the best results are achieved when normalization is applied and the capitalization feature is used (underlined bold entries). To observe the effects of the type of the source text used to learn the word embeddings, we have three models as Web, Twt, and Web+Twt where we used the Turkish web corpus, tweet corpus, and their combination respectively to learn the word embeddings. Including in-domain data from a relatively smaller tweet corpus together with a larger web corpus yields in better Twitter NER performance. We examined the effects of word embeddings on the performance of our NER models, and compared them to the improvements achieved by applying normalization on Turkish tweets. The baseline NER model is built by using the features explained in section 3.2, except the capitalization and word embeddings features. Using word embeddings obtained with unsupervised learning from a large corpus of web articles and tweets results in better NER performance than applying a Twitter-specific text normalizer, as shown in Table TABREF26 . This is crucial since Turkish text normalization for unstructured data is a challenging task and requires successful morphological analysis, whereas extracting word embeddings for any language or domain is much easier, yet more effective. NER Models Trained on Tweets Although an ideal Turkish NER model for Twitter should be trained on similar informal texts, all previous Turkish Twitter NER systems are trained on news data due to the limited amount of annotated Turkish tweets. We also experimented training NER models on relatively smaller labeled Twitter data with 10-fold cross-validation. Our best phrase-level F-score of 46.61% achieved on TwitterDS-1_FT is increased to 48.96% when trained on the much smaller tweets data, TwitterDS-2, instead of news data. Comparison with the State-of-the-art The best F-scores of the previously published Turkish Twitter NER systems BIBREF4 , BIBREF12 , BIBREF14 as well as our proposed NER system are shown in Table TABREF27 . 
We used the same training set with the first system BIBREF4 in our study, but the second NER system BIBREF12 uses a different multilingual news data and the third system BIBREF14 , which is rule based, does not have a training phase at all. All of these previous NER systems use gazetteer lists for named entities, which are manually constructed and highly language-dependent, whereas our system does not. Note that there is no publicly available gazetteer lists in Turkish. Kucuk-2014-2 achieved the state-of-the-art performance results for Turkish Twitter NER with their best model settings (shown in italic). These settings are namely using gazetteers list, with capitalization feature turned off, and with no normalization, together by expanding their gazetteer lists of named entities with diacritics variations. Our proposed system outperforms the state-of-the-art results on both Turkish Twitter data sets, even without using gazetteers (shown in bold). We achieved our best performance results with Turkish word embeddings obtained from our Web+Tweets corpus, when we apply normalization on tweets and keep the capitalization as a feature. Conclusion We adopted a neural networks based semi-supervised approach using word embeddings for the NER task on Turkish tweets. At the first stage, we attained distributed representations of words by employing a fast unsupervised learning method on a large unlabeled corpus. At the second stage, we exploited these word embeddings together with language independent features in order to train our neural network on labeled data. We compared our results on two different Turkish Twitter data sets with the state-of-the-art NER systems proposed for Twitter data in Turkish and showed that our system outperforms the state-of-the-art results on both data sets. Our results also show that using word embeddings from an unlabeled corpus can lead to better performance than applying Twitter-specific text normalization. We discussed the promising benefits of using in-domain data to learn word embeddings at the unsupervised stage as well. Since the only language dependent part of our Turkish Twitter NER system is text normalization, and since even without text normalization it outperforms the previous state-of-the-art results, we believe that our approach can be adapted to other morphologically rich languages. Our Turkish Twitter NER system, namely TTNER, is publicly available. We believe that there is still room for improvement for NLP tasks on Turkish social media data. As a future work, we aim to construct a much larger in-domain resource, i.e., unlabeled Turkish tweets corpus, and investigate the full benefits of attaining word embeddings from in-domain data on Twitter NER. Acknowledgements This research is partially supported by Boğaziçi University Research Fund Grant Number 11170. We would also like to thank The Scientific and Technological Research Council of Turkey (TÜBİTAK), The Science Fellowships and Grant Programmes Department (BİDEB) for providing financial support with 2210 National Scholarship Programme for MSc Students.
Turkish news-web corpus, TS TweetS by Sezer-2013 and 20M Turkish Tweets by Bolat and Amasyalı
e97545f4a5e7bc96515e60f2f9b23d8023d1eed9
e97545f4a5e7bc96515e60f2f9b23d8023d1eed9_0
Q: How are templates discovered from training data? Text: Introduction Abstractive summarization aims to shorten a source article or paragraph by rewriting while preserving the main idea. Due to the difficulties in rewriting long documents, a large body of research on this topic has focused on paragraph-level article summarization. Among them, sequence-to-sequence models have become the mainstream and some have achieved state-of-the-art performance BIBREF0 , BIBREF1 , BIBREF2 . In general, the only available information for these models during decoding is simply the source article representations from the encoder and the generated words from the previous time steps BIBREF2 , BIBREF3 , BIBREF4 , while the previous words are also generated based on the article representations. Since natural language text is complicated and verbose in nature, and training data is insufficient in size to help the models distinguish important article information from noise, sequence-to-sequence models tend to deteriorate with the accumulation of word generation, e.g., they generate irrelevant and repeated words frequently BIBREF5 . Template-based summarization BIBREF6 is an effective approach to traditional abstractive summarization, in which a number of hard templates are manually created by domain experts, and key snippets are then extracted and populated into the templates to form the final summaries. The advantage of such approach is it can guarantee concise and coherent summaries in no need of any training data. However, it is unrealistic to create all the templates manually since this work requires considerable domain knowledge and is also labor-intensive. Fortunately, the summaries of some specific training articles can provide similar guidance to the summarization as hard templates. Accordingly, these summaries are referred to as soft templates, or templates for simplicity, in this paper. Despite their potential in relieving the verbosity and insufficiency problems of natural language data, templates have not been exploited to full advantage. For example, cao2018retrieve simply concatenated template encoding after the source article in their summarization work. To this end, we propose a Bi-directional Selective Encoding with Template (BiSET) model for abstractive sentence summarization. Our model involves a novel bi-directional selective layer with two gates to mutually select key information from an article and its template to assist with summary generation. Due to the limitations in obtaining handcrafted templates, we further propose a multi-stage process for automatic retrieval of high-quality templates from training corpus. Extensive experiments were conducted on the Gigaword dataset BIBREF0 , a public dataset widely used for abstractive sentence summarization, and the results appear to be quite promising. Merely using the templates selected by our approach as the final summaries, our model can already achieve superior performance to some baseline models, demonstrating the effect of our templates. This may also indicate the availability of many quality templates in the corpus. Secondly, the template-equipped summarization model, BiSET, outperforms all the state-of-the-art models significantly. 
To evaluate the importance of the bi-directional selective layer and the two gates, we conducted an ablation study by discarding them respectively, and the results show that, while both of the gates are necessary, the template-to-article (T2A) gate tends to be more important than the article-to-template (A2T) gate. A human evaluation further validates the effectiveness of our model in generating informative, concise and readable summaries. 1.0 The contributions of this work include: The Framework Our framework includes three key modules: Retrieve, Fast Rerank, and BiSET. For each source article, Retrieve aims to return a few candidate templates from the training corpus. Then, the Fast Rerank module quickly identifies a best template from the candidates. Finally, BiSET mutually selects important information from the source article and the template to generate an enhanced article representation for summarization. Retrieve This module starts with a standard information retrieval library to retrieve a small set of candidates for fine-grained filtering as cao2018retrieve. To do that, all non-alphabetic characters (e.g., dates) are removed to eliminate their influence on article matching. The retrieval process starts by querying the training corpus with a source article to find a few (5 to 30) related articles, the summaries of which will be treated as candidate templates. Fast Rerank The above retrieval process is essentially based on superficial word matching and cannot measure the deep semantic relationship between two articles. Therefore, the Fast Rerank module is developed to identify a best template from the candidates based on their deep semantic relevance with the source article. We regard the candidate with highest relevance as the template. As illustrated in Figure FIGREF6 , this module consists of a Convolution Encoder Block, a Similarity Matrix and a Pooling Layer. Convolution Encoder Block. This block maps the input article and its candidate templates into high-level representations. The popular ways to this are either by using recurrent neural network (RNN) or a stack of convolutional neural network (CNN), while none of them are suitable for our problem. This is because a source article is usually much longer than a template, and both RNN and CNN may lead to semantic irrelevance after encodings. Instead, we implement a new convolution encoder block which includes a word embedding layer, a 1-D convolution followed by a non-linearity function, and residual connections BIBREF7 . Formally, given word embeddings INLINEFORM0 of an article, we use a 1-D convolution with kernel INLINEFORM1 and bias INLINEFORM2 to extract the n-gram features: DISPLAYFORM0 where INLINEFORM0 . We pad both sides of an article/template with zeros to keep fixed length. After that, we employ the gated linear unit (GLU) BIBREF8 as our activation function to control the proportion of information to pass through. GLU takes half the dimension of INLINEFORM1 as input and reduces the input dimension to INLINEFORM2 . Let INLINEFORM3 , where INLINEFORM4 , we have: DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 is the sigmoid function, and INLINEFORM2 means element-wise multiplication. To retain the original information, we add residual connections from the input of the convolution layer to the output of this block: INLINEFORM3 . Similarity Matrix. The above encoder block generates a high-level representation for each source article/candidate template. 
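As a rough illustration of the convolution encoder block just described (a 1-D convolution over word embeddings, a GLU activation that halves the channel dimension, and a residual connection from the block input), the following PyTorch sketch uses the embedding size of 300 and kernel size of 3 reported later in the experimental settings. It is a simplified reading of the description, not the authors' implementation.

```python
# Minimal sketch of one convolution encoder block: 1-D conv -> GLU -> residual add.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoderBlock(nn.Module):
    def __init__(self, d_model=300, kernel_size=3):
        super().__init__()
        # Produce 2 * d_model channels so that GLU can gate one half with the other.
        self.conv = nn.Conv1d(d_model, 2 * d_model, kernel_size,
                              padding=kernel_size // 2)  # zero-pad to keep the length fixed

    def forward(self, x):
        # x: (batch, seq_len, d_model) word embeddings of an article or template
        residual = x
        h = self.conv(x.transpose(1, 2))      # (batch, 2*d_model, seq_len)
        h = F.glu(h, dim=1)                   # gated linear unit -> (batch, d_model, seq_len)
        return h.transpose(1, 2) + residual   # residual connection from the block input

# Usage: encode a batch of 8 articles of 50 tokens into high-level representations.
article = torch.randn(8, 50, 300)
encoded = ConvEncoderBlock()(article)         # (8, 50, 300)
```

The similarity computation over these encoded representations is described next.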
Then, a similarity matrix INLINEFORM0 is calculated for a given article representation, INLINEFORM1 , and a template representation, INLINEFORM2 : DISPLAYFORM0 where INLINEFORM0 is the similarity function, and the common options for INLINEFORM1 include: DISPLAYFORM0 Most previous work uses dot product or bilinear function BIBREF9 for the similarity, yet we find the family of Euclidean distance perform much better for our task. Therefore, we define the similarity function as: DISPLAYFORM0 Pooling Layer. This layer is intended to filter out unnecessary information in the matrix INLINEFORM0 . Before applying such pooling operations as max-pooling and k-max pooling BIBREF10 over the similarity matrix, we note there are repeated words in the source article, which we only want to count once. For this reason, we first identify some salient weights from INLINEFORM1 : DISPLAYFORM0 where INLINEFORM0 is a column-wise maximum function. We then apply k-max pooling over INLINEFORM1 to select INLINEFORM2 most important weights, INLINEFORM3 . Finally, we apply a two-layer feed-forward network to output a similarity score for the source article and the candidate template: DISPLAYFORM0 As mentioned before, the role of Fast Rerank is to re-rank the initial search results and return a best template for summarization. To examine the effect of this module, we studied its ranking quality under different ranges as in Section SECREF38 . The original rankings by Retrieve are presented for comparison with the NDCG metric. We regard the ROUGE-2 score of each candidate template with the reference summary as the ground truth. As shown in Figure FIGREF42 , Fast Rerank consistently provides enhanced rankings over the original. Traditional Methodologies In this section, we explore three traditional approaches to taking advantage of the templates for summarization. They share the same encoder and decoder layers, but own different interaction layers for combination of a source article and template. The encoder layer uses a standard bi-directional RNN (BiRNN) to separately encode the source article and the template into hidden states INLINEFORM0 and INLINEFORM1 . Concatenation. This approach directly concatenates the hidden state, INLINEFORM0 , of a template after the article representation, INLINEFORM1 , to form a new article representation, INLINEFORM2 . This approach is similar to INLINEFORM3 BIBREF11 but uses our Fast Rerank and summary generation modules. Concatenation+Self-Attention. This approach adds a multi-head self-attention BIBREF12 layer with 4 heads on the basis of the above direct concatenation. DCN Attention. Initially introduced for machine reading comprehension BIBREF13 , this interaction approach is employed here to create template-aware article representations. First, we compute a similarity matrix, INLINEFORM0 , for each pair of article and template words by INLINEFORM1 , where `;' is the concatenation operation. We then normalize each row and column of INLINEFORM2 by softmax, giving rise to two new matrices INLINEFORM3 and INLINEFORM4 . After that, the Dynamic Coattention Network (DCN) attention is applied to compute the bi-directional attention: INLINEFORM5 and INLINEFORM6 , where INLINEFORM7 denotes article-to-template attention and INLINEFORM8 is template-to-article attention. 
Finally, we obtain the template-aware article representation INLINEFORM9 : DISPLAYFORM0 BiSET Inspired by the research in machine reading comprehension BIBREF13 and selective mechanism BIBREF14 , we propose a novel Bi-directional Selective Encoding with Template (BiSET) model for abstractive sentence summarization. The core idea behind BiSET is to involve templates to assist with article representation and summary generation. As shown in Figure FIGREF17 , BiSET contains two selective gates: Template-to-Article (T2A) gate and Article-to-Template (A2T) gate. The role of T2A is to use a template to filter the source article representation: DISPLAYFORM0 where INLINEFORM0 is the concatenation of the last forward hidden state, INLINEFORM1 , and the first backward hidden state, INLINEFORM2 , of the template. On the other hand, the purpose of A2T is to control the proportion of INLINEFORM0 in the final article representation. We assume the source article is credible and use its representation INLINEFORM1 together with INLINEFORM2 to calculate a confidence degree, where INLINEFORM3 is obtained in a similar way as INLINEFORM4 . The confidence degree INLINEFORM5 is computed by: DISPLAYFORM0 The final source article representation is calculated as the weighted sum of INLINEFORM0 and INLINEFORM1 : DISPLAYFORM0 which allows a flexible manner for template incorporation and helps to resist errors when low-quality templates are given. The decoder layer. This layer includes an ordinary RNN decoder BIBREF15 . At each time step INLINEFORM0 , the decoder reads the word INLINEFORM1 and hidden state INLINEFORM2 generated in the previous step, and gives a new hidden state for the current step: DISPLAYFORM0 where the hidden state is initialized with the original source article representation, INLINEFORM0 . We then compute the attention between INLINEFORM1 and the final article representation INLINEFORM2 to obtain a context vector INLINEFORM3 : DISPLAYFORM0 After that, a simple concatenation layer is used to combine the hidden state INLINEFORM0 and the context vector INLINEFORM1 into a new hidden state INLINEFORM2 : DISPLAYFORM0 which will be mapped to a new representation of vocabulary size and fed through a softmax layer to output the target word distribution: DISPLAYFORM0 The overall performance of all the studied models is shown in Table TABREF46 . The results show that our model significantly outperforms all the baseline models and sets a new state of the art for abstractive sentence summarization. To evaluate the impact of templates on our model, we also implemented BiSET with two other types of templates: randomly-selected templates and best templates identified by Fast Rank under different ranges. As shown in Table TABREF47 , the performance of our model improves constantly with the improvement of template quality (larger ranges lead to better chances for good templates). Even with randomly-selected templates, our model still works with stable performance, demonstrating its robustness. Training The Retrieve module involves an unsupervised process with traditional indexing and retrieval techniques. For Fast Rerank, since there is no ground truth available, we use ROUGE-1 BIBREF16 to evaluate the saliency of a candidate template with respect to the gold summary of current source article. 
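Stepping back to the bi-directional selective layer described earlier in this section: its displayed gate equations are not reproduced in this text, so the PyTorch sketch below only illustrates one plausible reading of the description, a sigmoid Template-to-Article gate that filters the article states and a scalar Article-to-Template confidence that mixes the filtered and original representations. The gate forms, shapes, and the hidden size of 500 (taken from the reported settings) are assumptions of this sketch, not the authors' exact model.

```python
# Hypothetical sketch of the bi-directional selective layer (T2A and A2T gates).
import torch
import torch.nn as nn

class BiSelectiveLayer(nn.Module):
    def __init__(self, d=500):
        super().__init__()
        self.t2a = nn.Linear(2 * d, d)   # Template-to-Article gate parameters (assumed form)
        self.a2t = nn.Linear(2 * d, 1)   # Article-to-Template confidence parameters (assumed form)

    def forward(self, h_art, s_art, s_tpl):
        # h_art: (batch, len, d) article hidden states from the BiRNN encoder
        # s_art, s_tpl: (batch, d) summary vectors of the article / template
        #   (last forward + first backward states, bidirectional bookkeeping simplified)
        gate = torch.sigmoid(self.t2a(
            torch.cat([h_art, s_tpl.unsqueeze(1).expand_as(h_art)], dim=-1)))
        h_filtered = gate * h_art                                   # T2A: template filters the article
        d_conf = torch.sigmoid(self.a2t(torch.cat([s_art, s_tpl], dim=-1)))  # A2T confidence degree
        d_conf = d_conf.unsqueeze(1)                                # (batch, 1, 1) for broadcasting
        return d_conf * h_filtered + (1.0 - d_conf) * h_art         # weighted sum of the two views
```

The training objectives for Fast Rerank and BiSET are given next.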
Therefore, the loss function is defined as: DISPLAYFORM0 where INLINEFORM0 is a score predicted by Equation EQREF16 , and INLINEFORM1 is the product of the training set size, INLINEFORM2 , and the number of retrieved templates for each article. For the BiSET module, the loss function is chosen as the negative log-likelihood between the generated summary, INLINEFORM0 , and the true summary, INLINEFORM1 : DISPLAYFORM0 where INLINEFORM0 is the length of the true summary, INLINEFORM1 contains all the trainable variables, and INLINEFORM2 and INLINEFORM3 denote the source article and the template, respectively. Experiments In this section, we introduce our evaluations on a standard dataset. Dataset and Implementation The dataset used for evaluation is Annotated English Gigaword BIBREF17 , a parallel corpus formed by pairing the first sentence of an article with its headline. For a fair comparison, we use the version preprocessed by Rush2015A as previous work. During training, both the Fast Rerank and BiSET modules have a batch size of 64 with the Adam optimizer BIBREF18 . We also apply grad clipping BIBREF19 with a range of [-5,5]. The differences of the two modules in settings are listed below. Fast Rerank. We set the size of word embeddings to 300, the convolution encoder block number to 1, and the kernel size of CNN to 3. The weights are shared between the article and template encoders. The INLINEFORM0 of k-max pooling is set to 10. L2 weight decay with INLINEFORM1 is performed over all trainable variables. The initial learning rate is 0.001 and multiplied by 0.1 every 10K steps. Dropout between layers is applied. BiSET. A two-layer BiLSTM is used as the encoder, and another two-layer LSTM as the decoder. The sizes of word embeddings and LSTM hidden states are both set to 500. We only apply dropout in the LSTM stack with a rate of 0.3. The learning rate is set to 0.001 for the first 50K steps and halved every 10K steps. Beam search with size 5 is applied to search for optimal answers. Evaluation Metrics Following previous work BIBREF2 , BIBREF14 , BIBREF11 , we use the standard F1 scores of ROUGE-1, ROUGE-2 and ROUGE-L BIBREF16 to evaluate the selected templates and generated summaries, where the official ROUGE script is applied. We employ the normalized discounted cumulative gain (NDCG) BIBREF20 from information retrieval to evaluate the Fast Rerank module. Results and Analysis In this section, we report our experimental results with thorough analysis and discussions. Performance of Retrieve The Retrieve module is intended to narrow down the search range for a best template. We evaluated this module by considering three types of templates: (a) Random means a randomly selected summary from the training corpus; (b) Retrieve-top is the highest-ranked summary by Retrieve; (c) N-Optimal means among the INLINEFORM0 top search results, the template is specified as the summary with largest ROUGE score with gold summary. As the results show in Table TABREF40 , randomly selected templates are totally irrelevant and unhelpful. When they are replaced by the Retrieve-top templates, the results improve apparently, demonstrating the relatedness of top-ranked summaries to gold summaries. Furthermore, when the N-Optimal templates are used, additional improvements can be observed as INLINEFORM0 grows. This trend is also confirmed by Figure FIGREF39 , in which the ROUGE scores increase before 30 and stabilize afterwards. 
These results suggest that the ranges given by Retrieve indeed help to find quality templates. Interaction Approaches In Section SECREF20 , we also explored three alternative approaches to integrating an article with its template. The results are shown in Table TABREF44 , from which we can note that none of these approaches help yield satisfactory performance. Even though DCN Attention works impressively in machine reading comprehension, it performs even worse in this task than the simple concatenation. We conjecture the reason is that the DCN Attention attempts to fuse the template information into an article as in machine reading comprehension, rather than selects key information from the two to form an enhanced article representation. Speed Comparison Our model is designed for both accuracy and efficiency. Due to the parallelizable nature of CNN, the Fast Rerank module only takes about 30 minutes for training and 3 seconds for inference on the whole test set. The BiSET model takes about 8 hours for training (GPU:GTX 1080), 6 times faster than INLINEFORM0 BIBREF11 . Ablation Study The purpose of this study is to examine the roles of the bi-directional selective layer and its two gates. Firstly, we removed the selective layer and replaced it with the direct concatenation of an article with its template representation. As the results show in Table TABREF51 , the model performs even worse than some ordinary sequence-to-sequence models in Table TABREF46 . The reason might be that templates would overwhelm the original article representations and become noise after concatenation. Then, we removed the Template-to-Article (T2A) gate, and as a result the model shows a great decline in performance, indicating the importance of templates in article representations. Finally, when we removed the Article-to-Template (A2T) gate, whose role is to control the weight of T2A in article representations, only a small performance decline is observed. This may suggest that the T2A gate alone can already capture most of the important article information, while A2T plays some supplemental role. Human Evaluation We then carried out a human evaluation to evaluate the generated summaries from another perspective. Our evaluators include 8 graduate students and 4 senior undergraduates, while the dataset is 100 randomly-selected articles from the test set. Each sample in this dataset also includes: 1 reference summary, 5 summaries generated by Open-NMT BIBREF21 , INLINEFORM0 BIBREF11 and BiSET under three settings, respectively, and 3 randomly-selected summaries for trapping. We asked the evaluators to independently rate each summary on a scale of 1 to 5, with respect to its quality in informativity, conciseness, and readability. While collecting the results, we rejected the samples in which more than half evaluators rate the informativity of the reference summary below 3. We also rejected the samples in which the informativity of a randomly-selected summary is scored higher than 3. Finally, we obtained 43 remaining samples and calculated an average score for each aspect. As the results show in Table TABREF55 , our model not only performs much better than the baselines, it also shows quite comparable performance with the reference summaries. 
In Table TABREF56 we present two real examples, which show the templates found by our model are indeed related to the source articles, and with their aid, our model succeeds to keep the main content of the source articles for summarization while discarding unrelated words like `US' and `Olympic Games'. Related Work Abstractive sentence summarization, a task analogous to headline generation or sentence compression, aims to generate a brief summary given a short source article. Early studies in this problem mainly focus on statistical or linguistic-rule-based methods, including those based on extractive and compression BIBREF23 , BIBREF24 , BIBREF25 , templates BIBREF6 and statistical machine translation BIBREF26 . The advent of large-scale summarization corpora accelerates the development of various neural network methods. Rush2015A first applied an attention-based sequence-to-sequence model for abstractive summarization, which includes a convolutional neural network (CNN) encoder and a feed-forward network decoder. Chopra2016Abstractive replaced the decoder with a recurrent neural network (RNN). Nallapati2016Abstractive further changed the sequence-to-sequence model to a fully RNN-based model. Besides, Gu2016Incorporating found that this task benefits from copying words from the source articles and proposed the CopyNet correspondingly. With a similar purpose, Gulcehre2016Pointing proposed to use a switch gate to control when to copy from the source article and when to generate from the vocabulary. Zhou2017Selective employed a selective gate to filter out unimportant information when encoding. Some other work attempts to incorporate external knowledge for abstractive summarization. For example, Nallapati2016Abstractive proposed to enrich their encoder with handcrafted features such as named entities and part-of-speech (POS) tags. guu2018generating also attempted to encode human-written sentences to improve neural text generation. Similar to our work, cao2018retrieve proposed to retrieve a related summary from the training set as soft template to assist with the summarization. However, their approach tends to oversimplify the role of the template, by directly concatenating a template after the source article encoding. In contrast, our bi-directional selective mechanism exhibits a novel attempt to selecting key information from the article and the template in a mutual manner, offering greater flexibility in using the template. Conclusion In this paper, we presented a novel Bi-directional Selective Encoding with Template (BiSET) model for abstractive sentence summarization. To counteract the verbosity and insufficiency of training data, we proposed to retrieve high-quality existing summaries as templates to assist with source article representations through an ingenious bi-directional selective layer. The enhanced article representations are expected to contribute towards better summarization eventually. We also developed the corresponding retrieval and re-ranking modules for obtaining quality templates. Extensive evaluations were conducted on a standard benchmark dataset and experimental results show that our model can quickly pick out high-quality templates from the training corpus, laying key foundation for effective article representations and summary generations. The results also show that our model outperforms all the baseline models and sets a new state of the art. 
An ablation study validates the role of the bi-directional selective layer, and a human evaluation further confirms that our model can generate informative, concise, and readable summaries. Acknowledgement The paper was partially supported by the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No. 2017ZT07X355) and the Key R&D Program of Guangdong Province (2019B010120001).
For each source article, Retrieve aims to return a few candidate templates from the training corpus. Then, the Fast Rerank module quickly identifies a best template from the candidates.
aaed6e30cf16727df0075b364873df2a4ec7605b
aaed6e30cf16727df0075b364873df2a4ec7605b_0
Q: What is WNGT 2019 shared task? Text: Introduction The Transformer network BIBREF3 is a neural sequence-to-sequence model that has achieved state-of-the-art results in machine translation. However, Transformer models tend to be very large, typically consisting of hundreds of millions of parameters. As the number of parameters directly corresponds to secondary storage requirements and memory consumption during inference, using Transformer networks may be prohibitively expensive in scenarios with constrained resources. For the 2019 Workshop on Neural Generation of Text (WNGT) Efficiency shared task BIBREF0, the Notre Dame Natural Language Processing (NDNLP) group looked at a method of inducing sparsity in parameters called auto-sizing in order to reduce the number of parameters in the Transformer at the cost of a relatively minimal drop in performance. Auto-sizing, first introduced by BIBREF1, uses group regularizers to encourage parameter sparsity. When applied over neurons, it can delete neurons in a network and shrink the total number of parameters. A nice advantage of auto-sizing is that it is independent of model architecture; although we apply it to the Transformer network in this task, it can easily be applied to any other neural architecture. NDNLP's submission to the 2019 WNGT Efficiency shared task uses a standard, recommended baseline Transformer network. Following BIBREF2, we investigate the application of auto-sizing to various portions of the network. Differing from their work, the shared task used a significantly larger training dataset from WMT 2014 BIBREF4, as well as the goal of reducing model size even if it impacted translation performance. Our best system was able to prune over 25% of the parameters, yet had a BLEU drop of only 1.1 points. This translates to over 25 million parameters pruned and saves almost 100 megabytes of disk space to store the model. Auto-sizing Auto-sizing is a method that encourages sparsity through use of a group regularizer. Whereas the most common applications of regularization will act over parameters individually, a group regularizer works over groupings of parameters. For instance, applying a sparsity inducing regularizer to a two-dimensional parameter tensor will encourage individual values to be driven to 0.0. A sparsity-inducing group regularizer will act over defined sub-structures, such as entire rows or columns, driving the entire groups to zero. Depending on model specifications, one row or column of a tensor in a neural network can correspond to one neuron in the model. Following the discussion of BIBREF1 and BIBREF2, auto-sizing works by training a neural network while using a regularizer to prune units from the network, minimizing: $W$ are the parameters of the model and $R$ is a regularizer. Here, as with the previous work, we experiment with two regularizers: The optimization is done using proximal gradient descent BIBREF5, which alternates between stochastic gradient descent steps and proximal steps: Auto-sizing the Transformer The Transformer network BIBREF3 is a sequence-to-sequence model in which both the encoder and the decoder consist of stacked self-attention layers. The multi-head attention uses two affine transformations, followed by a softmax layer. Each layer has a position-wise feed-forward neural network (FFN) with a hidden layer of rectified linear units. Both the multi-head attention and the feed-forward neural network have residual connections that allow information to bypass those layers. 
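Returning briefly to the auto-sizing formulation described above: the displayed objective and update rules are not reproduced in this text, so the following is the standard form of a group-regularized objective and the closed-form $\ell _{2,1}$ proximal step, under the assumption (consistent with deleting neurons) that the groups are rows of a parameter matrix.

```latex
% Standard l_{2,1} group-regularized objective and its proximal update
% (row grouping assumed; eta is the learning rate of the preceding SGD step).
\min_{W}\; \mathcal{L}(W) + \lambda \sum_{g} \lVert W_g \rVert_2 ,
\qquad
W_g \;\leftarrow\; W_g \cdot \max\!\left(0,\; 1 - \frac{\eta\lambda}{\lVert W_g \rVert_2}\right).
```

This block soft-thresholding drives entire rows to exactly zero, which is what allows auto-sizing to delete whole neurons rather than individual weights; the proximal operator for the $\ell _{\infty ,1}$ regularizer is more involved and is omitted here.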
In addition, there are also word and position embeddings. Figure FIGREF1, taken from the original paper, shows the architecture. NDNLP's submission focuses on the $N$ stacked encoder and decoder layers. The Transformer has demonstrated remarkable success on a variety of datasets, but it is highly over-parameterized. For example, the baseline Transformer model has more than 98 million parameters, but the English portion of the training data in this shared task has only 116 million tokens and 816 thousand types. Early NMT models such as BIBREF6 have most of their parameters in the embedding layers, but the transformer has a larger percentage of the model in the actual encoder and decoder layers. Though the group regularizers of auto-sizing can be applied to any parameter matrix, here we focus on the parameter matrices within the encoder and decoder layers. We note that there has been some work recently on shrinking networks through pruning. However, these differ from auto-sizing as they frequently require an arbitrary threshold and are not included during the training process. For instance, BIBREF7 prunes networks based off a variety of thresholds and then retrains a model. BIBREF8 also look at pruning, but of attention heads specifically. They do this through a relaxation of an $\ell _0$ regularizer in order to make it differentiable. This allows them to not need to use a proximal step. This method too starts with pre-trained model and then continues training. BIBREF9 also look at pruning attention heads in the transformer. However, they too use thresholding, but only apply it at test time. Auto-sizing does not require a thresholding value, nor does it require a pre-trained model. Of particular interest are the large, position-wise feed-forward networks in each encoder and decoder layer: $W_1$ and $W_2$ are two large affine transformations that take inputs from $D$ dimensions to $4D$, then project them back to $D$ again. These layers make use of rectified linear unit activations, which were the focus of auto-sizing in the work of BIBREF1. No theory or intuition is given as to why this value of $4D$ should be used. Following BIBREF2, we apply the auto-sizing method to the Transformer network, focusing on the two largest components, the feed-forward layers and the multi-head attentions (blue and orange rectangles in Figure FIGREF1). Remember that since there are residual connections allowing information to bypass the layers we are auto-sizing, information can still flow through the network even if the regularizer drives all the neurons in a layer to zero – effectively pruning out an entire layer. Experiments All of our models are trained using the fairseq implementation of the Transformer BIBREF10. For the regularizers used in auto-sizing, we make use of an open-source, proximal gradient toolkit implemented in PyTorch BIBREF2. For each mini-batch update, the stochastic gradient descent step is handled with a standard PyTorch forward-backward call. Then the proximal step is applied to parameter matrices. Experiments ::: Settings We used the originally proposed transformer architecture – with six encoder and six decoder layers. Our model dimension was 512 and we used 8 attention heads. The feed-forward network sub-components were of size 2048. All of our systems were run using subword units (BPE) with 32,000 merge operations on concatenated source and target training data BIBREF11. 
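As an illustration of the per-mini-batch update described at the start of this section (a standard forward-backward pass and optimizer step, followed by a proximal step applied to parameter matrices), here is a simplified, self-contained sketch. The parameter-selection rule, names, and hyperparameters are illustrative assumptions; this is not the fairseq-integrated implementation or the open-source proximal toolkit itself.

```python
# Simplified auto-sizing training step: SGD/Adam update, then an l_{2,1} proximal step
# on the chosen matrices (e.g., the position-wise feed-forward weights W1 and W2).
import torch

def l21_prox_(weight: torch.Tensor, strength: float) -> None:
    """In-place group soft-thresholding over the rows of a 2-D parameter matrix."""
    with torch.no_grad():
        row_norms = weight.norm(dim=1, keepdim=True)                  # (rows, 1)
        scale = torch.clamp(1.0 - strength / (row_norms + 1e-12), min=0.0)
        weight.mul_(scale)                                            # rows scaled to 0 are pruned

def train_step(model, batch, loss_fn, optimizer, lam=1.0, lr=1e-4):
    optimizer.zero_grad()
    loss = loss_fn(model(batch["src"], batch["tgt_in"]), batch["tgt_out"])
    loss.backward()
    optimizer.step()                                                  # stochastic gradient step
    # Proximal step only on the auto-sized matrices; the name filter is an assumption.
    for name, param in model.named_parameters():
        if "ffn" in name and param.dim() == 2:
            l21_prox_(param, strength=lam * lr)
    return loss.item()
```

Note that the effective pruning strength couples the regularization coefficient with the learning rate and, through gradient accumulation, the batch size, which the settings paragraph below also points out.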
We clip norms at 0.1, use label smoothed cross-entropy with value 0.1, and an early stopping criterion when the learning rate is smaller than $10^{-5}$. We used the Adam optimizer BIBREF12, a learning rate of $10^{-4}$, and dropout of 0.1. Following recommendations in the fairseq and tensor2tensor BIBREF13 code bases, we apply layer normalization before a sub-component as opposed to after. At test time, we decoded using a beam of 5 with length normalization BIBREF14 and evaluate using case-sensitive, tokenized BLEU BIBREF15. For the auto-sizing experiments, we looked at both $\ell _{2,1}$ and $\ell _{\infty ,1}$ regularizers. We experimented over a range of regularizer coefficient strengths, $\lambda $, that control how large the proximal gradient step will be. Similar to BIBREF1, but differing from BIBREF16, we use one value of $\lambda $ for all parameter matrices in the network. We note that different regularization coefficient values are suited for different types or regularizers. Additionally, all of our experiments use the same batch size, which is also related to $\lambda $. Experiments ::: Auto-sizing sub-components We applied auto-sizing to the sub-components of the encoder and decoder layers, without touching the word or positional embeddings. Recall from Figure FIGREF1, that each layer has multi-head attention and feed-forward network sub-components. In turn, each multi-head attention sub-component is comprised of two parameter matrices. Similarly, each feed-forward network has two parameter matrices, $W_1$ and $W_2$. We looked at three main experimental configurations: All: Auto-sizing is applied to every multi-head attention and feed-forward network sub-component in every layer of the encoder and decoder. Encoder: As with All, auto-sizing is applied to both multi-head attention and feed-forward network sub-components, but only in the encoder layers. The decoder remains the same. FFN: Auto-sizing applied only to the feed-forward network sub-components $W_1$ and $W_2$, but not to the multi-head portions. This too is applied to both the encoder and decoder. Experiments ::: Results Our results are presented in Table TABREF6. The baseline system has 98.2 million parameters and a BLEU score of 27.9 on newstest2015. It takes up 375 megabytes on disk. Our systems that applied auto-sizing only to the feed-forward network sub-components of the transformer network maintained the best BLEU scores while also pruning out the most parameters of the model. Overall, our best system used $\ell _{2,1}=1.0$ regularization for auto-sizing and left 73.1 million parameters remaining. On disk, the model takes 279 megabytes to store – roughly 100 megabytes less than the baseline. The performance drop compared to the baseline is 1.1 BLEU points, but the model is over 25% smaller. Applying auto-sizing to the multi-head attention and feed-forward network sub-components of only the encoder also pruned a substantial amount of parameters. Though this too resulted in a smaller model on disk, the BLEU scores were worse than auto-sizing just the feed-forward sub-components. Auto-sizing the multi-head attention and feed-forward network sub-components of both the encoder and decoder actually resulted in a larger model than the encoder only, but with a lower BLEU score. Overall, our results suggest that the attention portion of the transformer network is more important for model performance than the feed-forward networks in each layer. 
Conclusion In this paper, we have investigated the impact of using auto-sizing on the transformer network of the 2019 WNGT efficiency task. We were able to delete more than 25% of the parameters in the model while only suffering a modest BLEU drop. In particular, focusing on the parameter matrices of the feed-forward networks in every layer of the encoder and decoder yielded the smallest models that still performed well. A nice aspect of our proposed method is that the proximal gradient step of auto-sizing can be applied to a wide variety of parameter matrices. Whereas for the transformer, the largest impact was on feed-forward networks within a layer, should a new architecture emerge in the future, auto-sizing can be easily adapted to the trainable parameters. Overall, NDNLP's submission has shown that auto-sizing is a flexible framework for pruning parameters in a large NMT system. With an aggressive regularization scheme, large portions of the model can be deleted with only a modest impact on BLEU scores. This in turn yields a much smaller model on disk and at run-time. Acknowledgements This research was supported in part by University of Southern California, subcontract 67108176 under DARPA contract HR0011-15-C-0115.
efficiency task aimed at reducing the number of parameters while minimizing drop in performance