Dataset columns: id (string, length 10), title (string, 12 to 156 characters), abstract (string, 279 to 2.02k characters), full_text (sequence), qas (sequence), figures_and_tables (sequence).
1909.01013
Duality Regularization for Unsupervised Bilingual Lexicon Induction
Unsupervised bilingual lexicon induction naturally exhibits duality, which results from symmetry in back-translation. For example, EN-IT and IT-EN induction can be mutually primal and dual problems. Current state-of-the-art methods, however, consider the two tasks independently. In this paper, we propose to train primal and dual models jointly, using regularizers to encourage consistency in back translation cycles. Experiments across 6 language pairs show that the proposed method significantly outperforms competitive baselines, obtaining the best-published results on a standard benchmark.
{ "section_name": [ "Introduction", "Related Work", "Approach", "Approach ::: Baseline Adversarial Model", "Approach ::: Regularizers for Dual Models", "Approach ::: Model Selection", "Experiments", "Experiments ::: Experimental Settings", "Experiments ::: The Effectiveness of Dual Learning", "Experiments ::: Comparison with the State-of-the-art", "Conclusion" ], "paragraphs": [ [ "Unsupervised bilingual lexicon induction (UBLI) has been shown to benefit NLP tasks for low resource languages, including unsupervised NMT BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, information retrieval BIBREF5, BIBREF6, dependency parsing BIBREF7, and named entity recognition BIBREF8, BIBREF9.", "Recent research has attempted to induce unsupervised bilingual lexicons by aligning monolingual word vector spaces BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15. Given a pair of languages, their word alignment is inherently a bi-directional problem (e.g. English-Italian vs Italian-English). However, most existing research considers mapping from one language to another without making use of symmetry. Our experiments show that separately learned UBLI models are not always consistent in opposite directions. As shown in Figure 1a, when the model of BIBREF11 Conneau18a is applied to English and Italian, the primal model maps the word “three” to the Italian word “tre”, but the dual model maps “tre” to “two” instead of “three”.", "We propose to address this issue by exploiting duality, encouraging forward and backward mappings to form a closed loop (Figure 1b). In particular, we extend the model of BIBREF11 Conneau18a by using a cycle consistency loss BIBREF16 to regularize two models in opposite directions. Experiments on two benchmark datasets show that the simple method of enforcing consistency gives better results in both directions. Our model significantly outperforms competitive baselines, obtaining the best published results. We release our code at xxx." ], [ "UBLI. A typical line of work uses adversarial training BIBREF17, BIBREF10, BIBREF18, BIBREF11, matching the distributions of source and target word embeddings through generative adversarial networks BIBREF19. Non-adversarial approaches have also been explored. For instance, BIBREF15 Mukherjee18EMNLP use squared-loss mutual information to search for optimal cross-lingual word pairing. BIBREF13 and BIBREF20 exploit the structural similarity of word embedding spaces to learn word mappings. In this paper, we choose BIBREF11 Conneau18a as our baseline as it is theoretically attractive and gives strong results on large-scale datasets.", "Cycle Consistency. Forward-backward consistency has been used to discover the correspondence between unpaired images BIBREF21, BIBREF22. In machine translation, similar ideas were exploited, BIBREF23, BIBREF24 and BIBREF25 use dual learning to train two “opposite” language translators by minimizing the reconstruction loss. BIBREF26 consider back-translation, where a backward model is used to build synthetic parallel corpus and a forward model learns to generate genuine text based on the synthetic output.", "Closer to our method, BIBREF27 jointly train two autoencoders to learn supervised bilingual word embeddings. BIBREF28 use sinkhorn distance BIBREF29 and back-translation to align word embeddings. However, they cannot perform fully unsupervised training, relying on WGAN BIBREF30 for providing initial mappings. 
Concurrent with our work, BIBREF31 build an adversarial autoencoder with cycle consistency loss and post-cycle reconstruction loss. In contrast to these works, our method is fully unsupervised, simpler, and empirically more effective." ], [ "We take BIBREF11 as our baseline, introducing a novel regularizer to enforce cycle consistency. Let $X=\\lbrace x_1,...,x_n\\rbrace $ and $Y=\\lbrace y_1,...,y_m\\rbrace $ be two sets of $n$ and $m$ word embeddings for a source and a target language, respectively. The primal UBLI task aims to learn a linear mapping $\\mathcal {F}:X\\rightarrow Y$ such that for each $x_i$, $\\mathcal {F}(x_i)$ corresponds to its translation in $Y$. Similarly, a linear mapping $\\mathcal {G}:Y\\rightarrow X$ is defined for the dual task. In addition, we introduce two language discriminators $D_x$ and $D_y$, which are trained to discriminate between the mapped word embeddings and the original word embeddings." ], [ "BIBREF11 align two word embedding spaces through generative adversarial networks, in which two networks are trained simultaneously. Specifically, taking the primal UBLI task as an example, the linear mapping $\\mathcal {F}$ tries to generate “fake” word embeddings $\\mathcal {F}(x)$ that look similar to word embeddings from $Y$, while the discriminator $D_y$ aims to distinguish between “fake” and real word embeddings from $Y$. Formally, this idea can be expressed as the minmax game min$_{\\mathcal {F}}$max$_{D_y}\\ell _{adv}(\\mathcal {F},D_y,X,Y)$, where", "$P_{D_y}(src|y_j)$ is the probability assigned by $D_y$ that the word embedding $y_j$ comes from the target language (src = 1) rather than from the primal mapping $\\mathcal {F}$ (src = 0). Similarly, the dual UBLI problem can be formulated as min$_{\\mathcal {G}}$max$_{D_x}\\ell _{adv}(\\mathcal {G},D_x,Y,X)$, where $\\mathcal {G}$ is the dual mapping, and $D_x$ is a source discriminator.", "Theoretically, a unique solution for the above minmax game exists, with the mapping and the discriminator reaching a Nash equilibrium. Since the adversarial training happens at the distribution level, no cross-lingual supervision is required." ], [ "We train $\\mathcal {F}$ and $\\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\\mathcal {G}(\\mathcal {F}(X))$ is similar to $X$ and $\\mathcal {F}(\\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\\ell _{adv}$) for each model, as in the baseline; ii) a cycle consistency loss ($\\ell _{cycle}$) on each side to prevent $\\mathcal {F}$ and $\\mathcal {G}$ from contradicting each other. The overall architecture of our model is illustrated in Figure FIGREF4.", "Cycle Consistency Loss. We introduce", "where $\\Delta $ denotes the discrepancy criterion, which is set as the average cosine similarity in our model.", "Full objective. The final objective is:" ], [ "We follow BIBREF11, using an unsupervised criterion to perform model selection. In preliminary experiments, we find that in adversarial training the single-direction criterion $S(\\mathcal {F}, X, Y)$ of BIBREF11 does not always work well. To address this, we make a simple extension by calculating the weighted average of the forward and backward scores:", "where $\\lambda $ is a hyperparameter to control the importance of the two objectives.
Here $S$ first generates bilingual lexicons by learned mappings, and then computes the average cosine similarity of these translations." ], [ "We perform two sets of experiments, to investigate the effectiveness of our duality regularization in isolation (Section SECREF16) and to compare our final models with the state-of-the-art methods in the literature (Section SECREF18), respectively." ], [ "Dataset and Setup. Our datasets includes: (i) The Multilingual Unsupervised and Supervised Embeddings (MUSE) dataset released by BIBREF11 Conneau18a. (ii) the more challenging Vecmap dataset from BIBREF32 Dinu15 and the extensions of BIBREF33 Artetxe17ACL. We follow the evaluation setups of BIBREF11, utilizing cross-domain similarity local scaling (CSLS) for retrieving the translation of given source words. Following a standard evaluation practice BIBREF34, BIBREF35, BIBREF11, we report precision at 1 scores (P@1). Given the instability of existing methods, we follow BIBREF13 to perform 10 runs for each method and report the best and the average accuracies." ], [ "We compare our method with BIBREF11 (Adv-C) under the same settings. As shown in Table TABREF12, our model outperforms Adv-C on both MUSE and Vecmap for all language pairs (except ES-EN). In addition, the proposed approach is less sensitive to initialization, and thus more stable than Adv-C over multiple runs. These results demonstrate the effectiveness of dual learning. Our method is also superior to Adv-C for the low-resource language pairs English $\\leftrightarrow $ Malay (MS) and English $\\leftrightarrow $ English-Esperanto (EO). Adv-C gives low performances on ES-EN, DE-EN, but much better results on the opposite directions on Vecmap. This is likely because the separate models are highly under-constrained, and thus easy to get stuck in poor local optima. In contrast, our method gives comparable results on both directions for the two languages, thanks to the use of information symmetry.", "Table TABREF13 shows the inconsistency rates of back translation between Adv-C and our method on MUSE. Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12. Table TABREF14 gives several word translation examples. In the first three cases, our regularizer successfully fixes back translation errors. In the fourth case, ensuring cycle consistency does not lead to the correct translation, which explains some errors by our system. In the fifth case, our model finds a related word but not the same word in the back translation, due to the use of cosine similarity for regularization." ], [ "In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learn a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, which learns the new mapping matrix by constructing a bilingual lexicon iteratively. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with a robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines sinkhorn distance BIBREF29 and back-translation. 
For fair comparison, we integrate our model with two iterative refinement methods (Procrustes and GeoMM$_{semi}$).", "Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. Our model based on procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning is helpful for providing good initiations, so that the procrustes solution is not likely to fall in poor local optima. The reason why Unsup-SL gives strong results on all language pairs is that it uses a robust self-learning framework, which contains several techniques to avoid poor local optima.", "Additionally, we observe that our unsupervised method performs competitively and even better compared with strong supervised and semi-supervised approaches. Ours-Procrustes obtains comparable results with Procrustes on EN-IT and gives strong results on EN-DE, EN-FI, EN-ES and the opposite directions. Ours-GeoMM$_{semi}$ obtains the state-of-the-art results on all tested language pairs except EN-FI, with the additional advantage of being fully unsupervised." ], [ "We investigated a regularization method to enhance unsupervised bilingual lexicon induction, by encouraging symmetry in lexical mapping between a pair of word embedding spaces. Results show that strengthening bi-directional mapping consistency significantly improves the effectiveness over the state-of-the-art method, leading to the best results on a standard benchmark." ] ] }
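To make the duality regularizer described in the Approach section above concrete, the following is a minimal C++ sketch of the cycle-consistency term for the X → F(X) → G(F(X)) → X loop. The container types, the apply/cosine/cycle_loss names, and the convention of returning one minus the average cosine similarity are assumptions made for illustration; the paper only states that the discrepancy criterion Δ is the average cosine similarity between embeddings and their round-trip reconstructions.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;  // row-major linear mapping

// Apply a linear mapping W to an embedding v (returns W * v).
Vec apply(const Mat& W, const Vec& v) {
    Vec out(W.size(), 0.0);
    for (std::size_t i = 0; i < W.size(); ++i)
        for (std::size_t j = 0; j < v.size(); ++j)
            out[i] += W[i][j] * v[j];
    return out;
}

// Cosine similarity between two embeddings.
double cosine(const Vec& a, const Vec& b) {
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-12);
}

// Cycle-consistency penalty for X -> F(X) -> G(F(X)) -> X:
// one minus the average cosine similarity between each x_i and its
// round-trip reconstruction (the "1 -" convention is an assumption).
double cycle_loss(const std::vector<Vec>& X, const Mat& F, const Mat& G) {
    double sim = 0.0;
    for (const Vec& x : X)
        sim += cosine(x, apply(G, apply(F, x)));
    return 1.0 - sim / static_cast<double>(X.size());
}
```

Applying the same function with the roles of F and G swapped gives the regularizer for the Y-side loop.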
{ "question": [ "What regularizers were used to encourage consistency in back translation cycles?", "What are new best results on standard benchmark?", "How better is performance compared to competitive baselines?", "How big is data used in experiments?", "What 6 language pairs is experimented on?", "What are current state-of-the-art methods that consider the two tasks independently?" ], "question_id": [ "3a8d65eb8e1dbb995981a0e02d86ebf3feab107a", "d0c79f4a5d5c45fe673d9fcb3cd0b7dd65df7636", "54c7fc08598b8b91a8c0399f6ab018c45e259f79", "5112bbf13c7cf644bf401daecb5e3265889a4bfc", "03ce42ff53aa3f1775bc57e50012f6eb1998c480", "ebeedbb8eecdf118d543fdb5224ae610eef212c8" ], "nlp_background": [ "zero", "zero", "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "an adversarial loss ($\\ell _{adv}$) for each model as in the baseline", "a cycle consistency loss ($\\ell _{cycle}$) on each side" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We train $\\mathcal {F}$ and $\\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\\mathcal {G}(\\mathcal {F}(X))$ is similar to $X$ and $\\mathcal {F}(\\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\\ell _{adv}$) for each model as in the baseline. ii) a cycle consistency loss ($\\ell _{cycle}$) on each side to avoid $\\mathcal {F}$ and $\\mathcal {G}$ from contradicting each other. The overall architecture of our model is illustrated in Figure FIGREF4." ], "highlighted_evidence": [ "We train $\\mathcal {F}$ and $\\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\\mathcal {G}(\\mathcal {F}(X))$ is similar to $X$ and $\\mathcal {F}(\\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\\ell _{adv}$) for each model as in the baseline. ii) a cycle consistency loss ($\\ell _{cycle}$) on each side to avoid $\\mathcal {F}$ and $\\mathcal {G}$ from contradicting each other." ] } ], "annotation_id": [ "b9a984425cbc2d5d4e9ee47b1389f956badcb464" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "New best results of accuracy (P@1) on Vecmap:\nOurs-GeoMMsemi: EN-IT 50.00 IT-EN 42.67 EN-DE 51.60 DE-EN 47.22 FI-EN 39.62 EN-ES 39.47 ES-EN 36.43", "evidence": [ "Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. Our model based on procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. 
A possible explanation is that dual learning is helpful for providing good initiations, so that the procrustes solution is not likely to fall in poor local optima. The reason why Unsup-SL gives strong results on all language pairs is that it uses a robust self-learning framework, which contains several techniques to avoid poor local optima.", "FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs." ], "highlighted_evidence": [ "Table TABREF15 shows the final results on Vecmap.", "FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs." ] } ], "annotation_id": [ "0e8bac71d1d4d344b19e68d3a517f0602009c7b8" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Proposed method vs best baseline result on Vecmap (Accuracy P@1):\nEN-IT: 50 vs 50\nIT-EN: 42.67 vs 42.67\nEN-DE: 51.6 vs 51.47\nDE-EN: 47.22 vs 46.96\nEN-FI: 35.88 vs 36.24\nFI-EN: 39.62 vs 39.57\nEN-ES: 39.47 vs 39.30\nES-EN: 36.43 vs 36.06", "evidence": [ "FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs.", "Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. Our model based on procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning is helpful for providing good initiations, so that the procrustes solution is not likely to fall in poor local optima. The reason why Unsup-SL gives strong results on all language pairs is that it uses a robust self-learning framework, which contains several techniques to avoid poor local optima." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs.", "Table TABREF15 shows the final results on Vecmap." ] } ], "annotation_id": [ "208ff0e360529ceb1220d1c11abc0b48d2208cd3" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "55e2519b0e80ebeca6f4334336688963a9a7da25" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "EN<->ES\nEN<->DE\nEN<->IT\nEN<->EO\nEN<->MS\nEN<->FI", "evidence": [ "Table TABREF13 shows the inconsistency rates of back translation between Adv-C and our method on MUSE. Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12. Table TABREF14 gives several word translation examples. 
In the first three cases, our regularizer successfully fixes back translation errors. In the fourth case, ensuring cycle consistency does not lead to the correct translation, which explains some errors by our system. In the fifth case, our model finds a related word but not the same word in the back translation, due to the use of cosine similarity for regularization.", "FLOAT SELECTED: Table 1: Accuracy on MUSE and Vecmap." ], "highlighted_evidence": [ "Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12.", "FLOAT SELECTED: Table 1: Accuracy on MUSE and Vecmap." ] } ], "annotation_id": [ "259abfe9d7fa091be049c2554871e822c006e168" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Procrustes", "GPA", "GeoMM", "GeoMM$_{semi}$", "Adv-C-Procrustes", "Unsup-SL", "Sinkhorn-BT" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learn a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, which learns the new mapping matrix by constructing a bilingual lexicon iteratively. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with a robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines sinkhorn distance BIBREF29 and back-translation. For fair comparison, we integrate our model with two iterative refinement methods (Procrustes and GeoMM$_{semi}$)." ], "highlighted_evidence": [ "In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learn a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, which learns the new mapping matrix by constructing a bilingual lexicon iteratively. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with a robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines sinkhorn distance BIBREF29 and back-translation." ] } ], "annotation_id": [ "a2a38b25d3dca1acd3bc852e88bb4ee8038f3cee" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: (a) Inconsistency between primal model F and the dual model G. (b) An ideal scenario.", "Figure 2: The proposed framework. (a)X → F(X)→ G(F(X))→ X; (b) Y → G(Y )→ F(G(Y ))→ Y .", "Table 1: Accuracy on MUSE and Vecmap.", "Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "3-Table1-1.png", "4-Table4-1.png" ] }
1901.02534
Team Papelo: Transformer Networks at FEVER
We develop a system for the FEVER fact extraction and verification challenge that uses a high precision entailment classifier based on transformer networks pretrained with language modeling, to classify a broad set of potential evidence. The precision of the entailment classifier allows us to enhance recall by considering every statement from several articles to decide upon each claim. We include not only the articles best matching the claim text by TFIDF score, but read additional articles whose titles match named entities and capitalized expressions occurring in the claim text. The entailment module evaluates potential evidence one statement at a time, together with the title of the page the evidence came from (providing a hint about possible pronoun antecedents). In preliminary evaluation, the system achieves .5736 FEVER score, .6108 label accuracy, and .6485 evidence F1 on the FEVER shared task test set.
{ "section_name": [ "Introduction", "Transformer network", "Reframing entailment", "Improving retrieval", "Discussion" ], "paragraphs": [ [ "The release of the FEVER fact extraction and verification dataset BIBREF0 provides a large-scale challenge that tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem. Systems are evaluated on the accuracy of the claim predictions, with credit only given when correct evidence is submitted.", "As entailment data, premises in FEVER data differ substantially from those in the image caption data used as the basis for the Stanford Natural Language Inference (SNLI) BIBREF1 dataset. Sentences are longer (31 compared to 14 words on average), vocabulary is more abstract, and the prevalence of named entities and out-of-vocabulary terms is higher.", "The retrieval aspect of FEVER is not straightforward either. A claim may have small word overlap with the relevant evidence, especially if the claim is refuted by the evidence.", "Our approach to FEVER is to fix the most obvious shortcomings of the baseline approaches to retrieval and entailment, and to train a sharp entailment classifier that can be used to filter a broad set of retrieved potential evidence. For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . The transformer network naturally supports out-of-vocabulary words and gives substantially higher performance than the other methods." ], [ "The core of our system is an entailment module based on a transformer network. Transformer networks BIBREF6 are deep networks applied to sequential input data, with each layer implementing multiple heads of scaled dot product attention. This attention mechanism allows deep features to be compared across positions in the input.", "Many entailment networks have two sequence inputs, but the transformer is designed with just one. A separator token divides the premise from the hypothesis.", "We use a specific transformer network released by OpenAI BIBREF5 that has been pre-trained for language modeling. The network consists of twelve blocks. Each block consists of a multi-head masked self-attention layer, layer normalization BIBREF7 , a feed forward network, and another layer normalization. After the twelfth block, two branches exist. In one branch, matrix multiplication and softmax layers are applied at the terminal sequence position to predict the entailment classification. In the other branch, a hidden state is multiplied by each token embedding and a softmax is taken to predict the next token. The language modeling branch has been pre-trained on the BookCorpus dataset BIBREF8 . We take the pre-trained model and train both branches on examples from FEVER." ], [ "The baseline FEVER system BIBREF0 ran the AllenNLP BIBREF3 implementation of Decomposable Attention BIBREF2 to classify a group of five premise statements concatenated together against the claim. These five premise statements were fixed by the retrieval module and not considered individually. 
In our system, premise statements are individually evaluated.", "We collect training data as the five sentences with the highest TFIDF score against the claim, taken from the Wikipedia pages selected by the retrieval module. If any ground truth evidence group for a claim requires more than one sentence, the claim is dropped from the training set. Otherwise, each sentence is labeled with the truth value of the claim if it is in the ground truth evidence set, and labeled as neutral if not. The resulting data forms an entailment problem that we call “FEVER One.” For comparison, we form “FEVER Five” and “FEVER Five Oracle” by concatenating all five retrieved sentences, as in the baseline. In FEVER Five Oracle, the ground truth is the claim ground truth (if verifiable), but in FEVER Five, ground truth depends on whether the retrieved evidence is in the ground truth evidence set.", "Several FEVER claims require multiple statements as evidence in order to be supported or refuted. The number of such claims is relatively small: in the first half of the development set, only 623 of 9999 claims were verifiable and had no singleton evidence groups. Furthermore, we disagreed with many of these annotations and thought that less evidence should have sufficed. Thus we chose not to develop a strategy for multiple evidence statements.", "To compare results on FEVER Five to FEVER One, we must aggregate decisions about individual sentences of possible evidence to a decision about the claim. We do this by applying the following rules:", "We resolve conflicts between supporting and refuting information in favor of the supporting information, because we observed cases in the development data where information was retrieved for different entities with the same name. For example, Ann Richards appeared both as a governor of Texas and as an Australian actress. Information that would be a contradiction regarding the actress should not stop evidence that would support a claim about the politician.", "Even if a sentence is in the evidence set, it might not be possible for the classifier to correctly determine whether it supports the claim, because the sentence could have pronouns with antecedents outside the given sentence. Ideally, a coreference resolution system could add this information to the sentence, but running one could be time consuming and introduce its own errors. As a cheap alternative, we make the classifier aware of the title of the Wikipedia page. We convert any undersores in the page title to spaces, and insert the title between brackets before the rest of each premise sentence. The dataset constructed in this way is called “FEVER Title One.”", "The FEVER baseline system works by solving FEVER Five Oracle. Using Decomposable Attention, it achieves .505 accuracy on the test half of the development set. Swapping in the Enhanced Sequential Inference Model (ESIM) BIBREF4 to solve FEVER Five Oracle results in an accuracy of .561. Because ESIM uses a single out-of-vocabulary (OOV) token for all unknown words, we expect it to confuse named entities. Thus we extend the model by allocating 10,000 indices for out-of-vocabulary words with randomly initialized embeddings, and taking a hash of each OOV word to select one of these indices. With extended ESIM, the accuracy is .586. 
Therefore, we run most later comparisons with extended ESIM or transformer networks as the entailment module, rather than Decomposable Attention.", "The FEVER One dataset is highly unbalanced in favor of neutral statements, so that the majority class baseline would achieve 93.0% on this data. In fact it makes training ESIM a challenge, as the model only learns the trivial majority class predictor if the natural training distribution is followed. We reweight the examples in FEVER One for ESIM so that each class contributes to the loss equally. Then, we use Cohen's Kappa rather than the accuracy to evaluate a model's quality, so that following the bias with purely random agreement is not rewarded in the evaluation. In Table 1 we compare FEVER One to FEVER Title One, both at the level of classifying individual support statements and of classifying the claim by aggregating these decisions as described above. On a support basis, we find a 52% increase in Kappa by adding the titles.", "When ESIM is replaced by the transformer network, class reweighting is not necessary. The network naturally learns to perform in excess of the majority class baseline. Cohen's Kappa is 68% higher than that for ESIM. The possibility of training on oracle labels for a concatenated set of evidence allows a classifier to simply guess whether the hypothesis is true and supported somewhere, rather than having to consider the relationship between hypothesis and premise. For example, it is possible to classify 67% of SNLI examples correctly without reading the premise BIBREF9 . As we show in Table 2 , for ESIM, we find that this kind of guessing makes the FEVER Title Five Oracle performance better than FEVER Title Five. The Transformer model is accurate enough that oracle guessing does not help. Both models perform best when classifying each bit of evidence separately and then aggregating." ], [ "Regardless of how strong the entailment classifier is, FEVER score is limited by whether the document and sentence retrieval modules, which produce the input to the entailment classifier, find the right evidence. In Table 3 , we examine the percentage of claims for which correct evidence is retrieved, before filtering with the entailment classifier. For this calculation, we skip any claim with an evidence group with multiple statements, and count a claim as succesfully retrieved if it is not verifiable or if the statement in one of the evidence groups is retrieved. The baseline system retrieves the five articles with the highest TFIDF score, and then extracts the five sentences from that collection with the highest TFIDF score against the claim. It achieves 66.1% evidence retrieval.", "Our first modification simply adds the title to each premise statement when computing its TFIDF against the claim, so that statements from a relevant article get credit even if the subject is not repeated. This raises evidence retrieval to 68.3%.", "A more significant boost comes from retrieving additional Wikipedia pages based on named entity recognition (NER). We start with phrases tagged as named entities by SpaCy BIBREF10 , but these tags are not very reliable, so we include various capitalized phrases. We retrieve Wikipedia pages whose title exactly matches one of these phrases.", "The named entity retrieval strategy boosts the evidence retrieval rate to 80.8%, while less than doubling the processing time. However, sometimes the named entity page thus retrieved is only a Wikipedia disambiguation page with no useful information. 
Noticing a lot of questions about films in the development set, we modify the strategy to also retrieve a page titled “X (film)” if it exists, whenever “X” is retrieved. The film retrievals raise evidence retrieval to 81.2%.", "Finally, we eliminate the TFIDF sentence ranking to expand sentence retrieval from five sentences to entire articles, up to the first fifty sentences from each. Thus we obtain 2.6 million statements to classify regarding the 19,998 claims in the shared task development set, for an average of 128 premises per claim. The evidence retrieval rate, including all these premises, increases to 90.1%. We continue to apply the entailment module trained with only five premise retrievals. Running the entailment module on this batch using a machine with three NVIDIA GeForce GTX 1080Ti GPU cards takes on the order of six hours.", "Retrieving more than five sentences means that we can no longer submit all retrieved evidence as support for the claims. Instead, we follow the aggregation strategy from Section \"Reframing entailment\" to decide the claim label, and only submit statements whose classification matches. Limiting evidence in this way when only five statements are retrieved (“narrow evidence” in Table 4 ) pushes FEVER score down very little, to .5550 from .5617 on the development set, so we have confidence that the extra retrieval will make up for the loss. Indeed, when the system reviews the extra evidence, FEVER score goes up to .5844 on the development set.", "Table 4 compares the end-to-end performance of systems that evaluate five retrieved statements together, evaluate five retrieved statements separately, and evaluate all statements from entire articles separately. Evaluating the statements separately gives better performance. We submit the systems that retrieve five statements and entire articles for evaluation on the test set, achieving preliminary FEVER scores of .5539 and .5736 respectively (label accuracy of .5754 and .6108, evidence recall of .6245 and .5002, evidence F1 of .2542 and .6485). In preliminary standings, the latter system ranks fourth in FEVER score and first in evidence F1." ], [ "Our approach to FEVER involves a minimum of heuristics and relies mainly on the strength of the Transformer Network based entailment classification. The main performance gains come from adding retrievals that resolve named entities rather than matching the claim text only, filtering fewer of the retrievals, and making the entailment classifier somewhat aware of the topic of what it is reading by including the title. If higher quality and more plentiful multi-evidence claims would be constructed, it would be nice to incorporate dynamic retrievals into the system, allowing the classifier to decide that it needs more information about keywords it encountered during reading." ] ] }
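The claim-level aggregation described in the "Reframing entailment" section above can be sketched as follows. The paper states only that conflicts between supporting and refuting evidence are resolved in favour of support; the explicit rule list is not reproduced in the text, so the handling of the remaining cases and all names here are assumptions for illustration.

```cpp
#include <vector>

enum class Label { Supports, Refutes, NotEnoughInfo };

// Aggregate per-sentence entailment decisions into a claim-level label.
// Conflicts between supporting and refuting evidence are resolved in favour
// of support, as stated in the paper; the treatment of the remaining cases
// is a guess at the unstated rules.
Label aggregate(const std::vector<Label>& sentence_decisions) {
    bool any_support = false;
    bool any_refute = false;
    for (Label d : sentence_decisions) {
        if (d == Label::Supports) any_support = true;
        if (d == Label::Refutes)  any_refute = true;
    }
    if (any_support) return Label::Supports;   // support wins over refutation
    if (any_refute)  return Label::Refutes;
    return Label::NotEnoughInfo;               // only neutral decisions seen
}
```

Under this reading, only the retrieved sentences whose individual decision matches the aggregated label would be submitted as evidence, matching the submission strategy described in the "Improving retrieval" section.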
{ "question": [ "How big is their training set?", "What baseline do they compare to?", "Which pre-trained transformer do they use?", "What is the FEVER task?" ], "question_id": [ "9efd025cfa69c6ff2777528bd158f79ead9353d1", "559c1307610a15427caeb8aff4d2c01ae5c9de20", "4ecb6674bcb4162bf71aea8d8b82759255875df3", "eacc1eb65daad055df934e0e878f417b73b2ecc1" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "transformer", "transformer", "transformer", "transformer" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "0efbcf10ffd60b7ac765e797acb4188b6fb548c7" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 ." ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our approach to FEVER is to fix the most obvious shortcomings of the baseline approaches to retrieval and entailment, and to train a sharp entailment classifier that can be used to filter a broad set of retrieved potential evidence. For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . The transformer network naturally supports out-of-vocabulary words and gives substantially higher performance than the other methods." ], "highlighted_evidence": [ "Our approach to FEVER is to fix the most obvious shortcomings of the baseline approaches to retrieval and entailment, and to train a sharp entailment classifier that can be used to filter a broad set of retrieved potential evidence. For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . The transformer network naturally supports out-of-vocabulary words and gives substantially higher performance than the other methods." ] } ], "annotation_id": [ "dca8d216296bceafacb89fa8c0e8e3404ad2f298" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "BIBREF5" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our approach to FEVER is to fix the most obvious shortcomings of the baseline approaches to retrieval and entailment, and to train a sharp entailment classifier that can be used to filter a broad set of retrieved potential evidence. For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . The transformer network naturally supports out-of-vocabulary words and gives substantially higher performance than the other methods." 
], "highlighted_evidence": [ "For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . " ] } ], "annotation_id": [ "dfd6ac4bdae8afaa4796cb91975e84117cd7f088" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The release of the FEVER fact extraction and verification dataset BIBREF0 provides a large-scale challenge that tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem. Systems are evaluated on the accuracy of the claim predictions, with credit only given when correct evidence is submitted.", "As entailment data, premises in FEVER data differ substantially from those in the image caption data used as the basis for the Stanford Natural Language Inference (SNLI) BIBREF1 dataset. Sentences are longer (31 compared to 14 words on average), vocabulary is more abstract, and the prevalence of named entities and out-of-vocabulary terms is higher.", "The retrieval aspect of FEVER is not straightforward either. A claim may have small word overlap with the relevant evidence, especially if the claim is refuted by the evidence." ], "highlighted_evidence": [ "The release of the FEVER fact extraction and verification dataset BIBREF0 provides a large-scale challenge that tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem. Systems are evaluated on the accuracy of the claim predictions, with credit only given when correct evidence is submitted.", "As entailment data, premises in FEVER data differ substantially from those in the image caption data used as the basis for the Stanford Natural Language Inference (SNLI) BIBREF1 dataset. Sentences are longer (31 compared to 14 words on average), vocabulary is more abstract, and the prevalence of named entities and out-of-vocabulary terms is higher.", "The retrieval aspect of FEVER is not straightforward either. A claim may have small word overlap with the relevant evidence, especially if the claim is refuted by the evidence." ] } ], "annotation_id": [ "723b977f0074c3cb287db7a362930b75459cfc32" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Table 1: Effect of adding titles to premises.", "Table 2: Concatenating evidence or not.", "Table 3: Percentage of evidence retrieved from first half of development set. Single-evidence claims only.", "Table 4: FEVER Score of various systems. All use NE+Film retrieval." ], "file": [ "3-Table1-1.png", "3-Table2-1.png", "3-Table3-1.png", "3-Table4-1.png" ] }
2004.04435
Automatic Differentiation in ROOT
In mathematics and computer algebra, automatic differentiation (AD) is a set of techniques to evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.), elementary functions (exp, log, sin, cos, etc.) and control flow statements. AD takes source code of a function as input and produces source code of the derived function. By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program. This paper presents AD techniques available in ROOT, supported by Cling, to produce derivatives of arbitrary C/C++ functions through implementing source code transformation and employing the chain rule of differential calculus in both forward mode and reverse mode. We explain its current integration for gradient computation in TFormula. We demonstrate the correctness and performance improvements in ROOT's fitting algorithms.
{ "section_name": [ "Introduction", "Background", "Background ::: AD and its Modes", "Background ::: AD Implementations", "Architecture and Implementation", "Results", "Results ::: Accuracy", "Results ::: Performance", "Results ::: Performance in TFormula", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Accurate and efficient computation of derivatives is vital for a wide variety of computing applications, including numerical optimization, solution of nonlinear equations, sensitivity analysis, and nonlinear inverse problems. Virtually every process could be described with a mathematical function, which can be thought of as an association between elements from different sets. Derivatives track how a varying quantity depends on another quantity, for example how the position of a planet varies as time varies.", "Derivatives and gradients (vectors of partial derivatives of multivariable functions) allow us to explore the properties of a function and thus the described process as a whole. Gradients are an essential component in gradient-based optimization methods, which have become more and more important in recent years, in particular with its application training of (deep) neural networks BIBREF0.", "Several different techniques are commonly used to compute the derivatives of a given function, either exactly or approximately BIBREF1, BIBREF0, BIBREF2. The most prevalent techniques are:", "Numerical differentiation, based on the finite difference method, provides a way to evaluate derivatives approximately. While simple, numerical differentiation can be slow (the run-time complexity grows linearly with the number of input variables) and may have problems with accuracy due to round-off and truncation errors.", "Symbolic differentiation, based on transformations of symbolic expressions of functions, provides exact closed-form expressions for the derivatives. It faces difficulties when the function to be differentiated is not available in a closed form, which is often the case for computer programs which may contain control flow. Symbolic differentiation can produce derivative expressions that are computationally expensive to evaluate due to difficulties in exploiting common subexpressions.", "Automatic differentiation (AD) computes derivatives accurately to the precision of the original function, supports control flow and uses at most a small constant factor more time and space than it takes to evaluate the original function, at the expense of increased implementation complexity and introducing more software dependencies.", "Numerical and symbolic differentiation methods are slow at computing gradients of functions with many input variables, as is often needed for gradient-based optimization algorithms. Both methods have problems calculating higher-order derivatives, where the complexity and errors due to numerical precision increase. Automatic differentiation largely avoids the problems of numerical and symbolic differentiation.", "In this paper, we describe the implementation of automatic differentiation techniques in ROOT, which is the data analysis framework broadly used High-Energy Physics BIBREF3. This implementation is based on Clad BIBREF4, BIBREF5, which is an automatic differentiation plugin for computation expressed in C/C++." ], [ "Here, we briefly discuss main algorithmic and implementation principles behind AD. An in-depth overview and more formal description can be found in BIBREF1 and BIBREF2, respectively." ], [ "AD is based on the decomposition of the procedure (e.g. 
a source code that computes the original function) into a sequence of simple mathematical operations (e.g. $+, -, *, /, \\sin , \\cos , \\exp $) that can be expressed using a series of intermediate results. Subsequently, derivatives of every intermediate result are evaluated and combined via the chain rule of calculus to obtain the derivatives of the whole sequence. The control flow (e.g. branches, loops) can be incorporated by differentiating the control flow of the original function during the derivative evaluation. Two main modes of AD, which differ in the order of application of the chain rule, are used:", "Forward mode operates in a top-down approach and computes the derivative of every intermediate result with respect to a single selected input variable of the function. As soon as a final result of the function is reached, the partial derivative with respect to the selected input is available. A single evaluation of the forward mode can only compute partial derivatives with respect to a single input variable. Thus, when the whole gradient is required, forward mode must be invoked once per every input variable, leading to $m \\cdot c_{F} \\cdot n$ runtime complexity, where $m$ is the number of input variables, $n$ is the algorithmic complexity of the original function and $c_{F} < 3 $ is a small constant factor overhead of a single invocation of the forward mode BIBREF2.", "Reverse mode operates in a bottom-up approach and computes the derivative of a function's output with respect to every intermediate result. Once every input variable of the function is reached, the whole gradient of an output is available. Note that, independently on the number of input variables $N$, a single evaluation of the reverse mode is sufficient to get the whole gradient of a function's output, leading to $c_{R} \\cdot n$ runtime complexity, where $n$ is the complexity of the original function and $c_{R} \\le 4$ is a small constant factor overhead BIBREF2. This is a huge advantage in settings with a single scalar output and many inputs, which is often the case in machine-learning problems where $N >> 10^6$ that makes the forward mode infeasible. As a disadvantage, reverse mode implementations are more complicated, and dynamic memory allocations may be required when dynamic control flow is involved. Depending on the original function, this may cause a single evaluation of the reverse mode to be somewhat slower compared to a single evaluation of the forward mode." ], [ "AD techniques have been implemented in a variety of programming languages and paradigms, ranging from classical tools for Fortran BIBREF6 and C BIBREF7, to recent active work on tools specific to machine-learning applications BIBREF8, BIBREF9, and modern general-purpose programming languages BIBREF10, BIBREF11. We refer the reader to www.autodiff.org for a comprehensive list of available AD implementations for various languages.", "In particular, several implementations exist for C++, e.g. BIBREF12, BIBREF13, BIBREF14. Majority of implementations of AD fall into one of the two categories of implementation techniques:", "Tools based on operator overloading utilize features of programming languages like C++ and Python to define custom types and overload mathematical operators (e.g. +, -, *, /) and functions (e.g. $\\exp , \\sin , \\cos $) on them. 
Such implementations are often based on custom AD-enabled types that wrap values of both the original and derivative functions and redefine operators to simultaneously act on original and derivative values. In C++, such tools are often implemented as a library that introduces templated differentiable types and corresponding mathematical operations. Then, functions called on the custom type return both original and derivative values. This is a powerful technique but has two primary limitations: legacy code and performance. Functions must be either polymorphic (templated) or explicitly defined on AD-enabled type to be differentiated. Differentiation of pre-existing source code using builtin types such as double and float is not possible. Users are required to use additional level of abstraction in the form of library-specific types instead of first-class language features. Moreover, the performance of the derivative generation can be suboptimal due to the C++ metaprogramming system which usually constructs deep template instantiation chains. Performance can be even more problematic when creating a higher order derivatives.", "Tools based on source transformation analyze the source code of the original function and build another source code for the derivative function. Such techniques typically accept and generate any code using built-in features of the original language and do not require custom libraries. On the other hand, they require an additional pass over the source file to analyze and generate derivative code. Source transformation can fully utilize source-level optimizations and has reasonably good performance. Implementation is more complicated and it is problematic to achieve full coverage of C++ language features. While full integration with a compiler can make AD a first-class language feature that is transparent for the user, most current implementations for C++ are based on custom parsers that do not have full coverage of the vast variety of C++ language constructs and require a separate step before compilation." ], [ "Automatic differentiation in ROOT is based on Clad BIBREF4, BIBREF5. Clad is a source transformation AD tool for C++. It is based on LLVM compiler infrastructure BIBREF15 and is implemented as a plugin for C++ compiler Clang, which allows Clad to be transparently integrated into the compilation phase and to utilize large parts of the compiler. Clad relies on Clang's parsing and code generation functionality and can differentiate complicated C++ constructs. Clad supports both forward and reverse mode. It is available as a standalone Clang plugin that, when attached to the compiler, produces derivatives in the compilation phase.", "On top of that, Clad is integrated directly into ROOT to provide AD functionality as an integral part of the framework. ROOT has a C++ interpreter Cling BIBREF16 which is built on the top of LLVM and Clang. This allows Clad to be attached to Cling as a plugin in a similar way as it can be attached to Clang. In this section, we discuss 1) architecture of Clad and its interaction with Cling; and 2) details of its integration into ROOT.", "Clad operates on Clang AST (abstract syntax tree) by analyzing the AST of the original function and generating the AST of the derivative. 
Clad provides two API functions: clad::differentiate for forward mode and clad::gradient for reverse mode, which can be used directly in the source code to mark a function for differentiation (see BIBREF5 for more details on usage and code examples).", "The information flow of interactions with Cling during differentiation (Figure FIGREF13) is:", "A function is marked for differentiation with the C++ construct clad::differentiate or clad::gradient (step 1).", "Cling in ROOT performs incremental compilation and receives an abstract syntax tree (AST) representation of the code (step 2).", "Cling detects the differentiation marker and sends the AST of the original function to Clad, which transforms the AST to produce the AST of the derivative (step 3).", "Clad returns the derivative AST to Cling for code generation and execution by the low-level LLVM primitives (steps 4, 5, 6, 7). Alternatively, if Clad was configured for non-interactive use, the generated AST can be converted to C++ source code and written to a text file. The generated code can then be compiled with any C++ compiler (steps 8, 9).", "Inside ROOT, the interface functions clad::differentiate and clad::gradient are accessible via #include <Math/CladDerivator.h>. Clad is also directly integrated into the TFormula class that encapsulates the concept of multidimensional mathematical functions in ROOT. TFormula is a primitive in ROOT's math package which is connected to the Cling interpreter. In the context of TFormula, Clad can differentiate functions available in the interpreter. The TFormula::GenerateGradientPar method uses Clad to differentiate the underlying code of the formula with respect to its parameters and generate the code for the gradient. The TFormula::GradientPar method then evaluates the gradient at a specified point." ], [ "In this section, we empirically compare automatic differentiation (AD, our implementation based on Clad) and numerical differentiation (ND, based on the finite difference method) in ROOT. We show that AD can drastically improve the accuracy and performance of derivative evaluation compared to ND." ], [ "As stated in Section SECREF1, numerical differentiation may give imprecise results, while AD computes the derivatives exactly. We show an example of a function where this difference is apparent: AD provides the exact result, while ND suffers from a loss of accuracy.", "The function is the PDF of the Breit-Wigner distribution (Eq. DISPLAY_FORM19), whose derivative with respect to $\\Gamma $ (Eq. DISPLAY_FORM20) has critical points at $\\Gamma =\\pm 2x$. In ROOT, the function is implemented as in Listing SECREF18.", "inline double breitwignerpdf(double x, double gamma, double x0 = 0) { double gammahalf = gamma/2.0; return gammahalf/(M_PI * ((x-x0)*(x-x0) + gammahalf*gammahalf)); }", "Listing: Breit-Wigner PDF implementation in ROOT", "When evaluating the derivative of breitwignerpdf with respect to gamma at x=1, gamma=2, ND in ROOT yields a result close to 0 with an absolute error of $10^{-13}$, despite the fact that the function is smooth and well-conditioned at this point. The approximation error becomes larger when the derivative is evaluated further from the critical point. In contrast, automatic differentiation (in both modes) yields the exact result of 0."
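A sketch of how the Clad entry points named above could be used to reproduce this comparison for breitwignerpdf. The clad::differentiate/execute call pattern and the header path follow Clad's public documentation rather than code shown in this paper, and building the example requires Clang (or ROOT's Cling) with the Clad plugin loaded, so the details should be read as assumptions.

```cpp
// Illustrative only: the clad::differentiate / execute interface follows
// Clad's documentation, not code shown in this paper.
#include <cmath>
#include <cstdio>
#include "clad/Differentiator/Differentiator.h"

// Same function as in the listing above.
inline double breitwignerpdf(double x, double gamma, double x0 = 0) {
  double gammahalf = gamma / 2.0;
  return gammahalf / (M_PI * ((x - x0) * (x - x0) + gammahalf * gammahalf));
}

int main() {
  // Forward-mode derivative with respect to gamma, generated by Clad.
  auto dpdf_dgamma = clad::differentiate(breitwignerpdf, "gamma");
  double ad = dpdf_dgamma.execute(/*x=*/1.0, /*gamma=*/2.0, /*x0=*/0.0);

  // Central finite difference for comparison.
  double eps = 1e-8;
  double nd = (breitwignerpdf(1.0, 2.0 + eps) -
               breitwignerpdf(1.0, 2.0 - eps)) / (2.0 * eps);

  std::printf("AD: %.17g  ND: %.17g\n", ad, nd);  // AD is exactly 0 here
  return 0;
}
```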
], [ "Section SECREF2 showed that reverse mode AD computes gradients in a single pass with a runtime complexity of at most $4 \\cdot n$, which depends only on the complexity $n$ and not the dimensionality $dim$ of the original function. On the other hand, numerical differentiation requires a separate evaluation of the original function for every dimension to compute the entire gradient, making the overall the run-time complexity of gradient evaluation via central finite difference method $2 \\cdot dim \\cdot n$. Hence, in theory, reverse mode achieves an asymptotic speedup of $O(dim)$ over the numerical differentiation and can be up to $dim / 2$ times faster.", "We experimentally verify this by comparing the performance of gradient evaluation produced by reverse mode AD against our an implementation of numerical differentiation via the central finite difference method. We use the two functions in Listing SECREF21: sum, which computes the sum of all values in a vector; and mvn, which implements the PDF of a multivariate normal distribution. Both functions have a parameter dim which defines the dimension, and gradients are taken with respect to dim-dimensional vector p. While closed-form expressions of these gradients are well-known, these functions make a good basis of a benchmark as they perform typical operations that are commonly found inside more complicated functions (e.g. +, *, pow, exp inside loop).", "", "linenos=false double sum(double* p, int dim) double r = 0.0; for (int i = 0; i < dim; i++) r += p[i]; return r; linenos=false double mvn(double* x, double* p /*means*/, double sigma, int dim) double t = 0; for (int i = 0; i < dim; i++) t += (x[i] - p[i])*(x[i] - p[i]); t = -t / (2*sigma*sigma); return std::pow(2*MPI, -n/2.0) * std::pow(sigma, -0.5) * std::exp(t); listingImplementations of sum and mvn functions", "Gradients of sum produced by numerical differentiation and Clad are shown in Listing SECREF21.", "", "linenos=false double* sumnumgrad(double* p, int dim, double eps = 1e-8) double result = new double[dim]; for (int i = 0; i < dim; i++) double pi = p[i]; p[i] = pi + eps; double v1 = sum(p, dim); p[i] = pi - eps; double v2 = sum(p, dim); result[i] = (v1 - v2)/(2 * eps); p[i] = pi; return result;", "linenos=false void sumadgrad(double *p, int dim, double *result) double dr = 0; unsigned long t0; int di = 0; clad::tape<int> t1 = ; double r = 0.; t0 = 0; for (int i = 0; i < dim; i++) t0++; r += p[clad::push(t1, i)]; double sumreturn = r; dr += 1; for (; t0; t0–) double rd0 = dr; dr += rd0; result[clad::pop(t1)] += rd0; dr -= rd0; listingGradient of sum: (left) using finite differences, (right) generated by Clad", "We perform the evaluation for values of dim between 5 and 20480. Figure FIGREF22 shows the comparison for (a) sum; (b) mvn and confirms the expected theoretical speedup of $O(dim)$, with AD-generated gradient being $~dim/4$ times faster for sum and $~dim/25$ times faster for mvn (slowdown is due to more expensive operations like pow, exp).", "", "" ], [ "Figure FIGREF26 shows the performance comparisons of reverse-mode AD and ND for the task of evaluating gradients of TFormula's builtin primitive probability density functions. The functions are gaus ($dim=3$), expo ($dim=2$), crystalball ($dim=5$), breitwigner ($dim=5$) and cheb2 ($dim=4$). Despite the low dimensionality ($dim \\le 5$), AD gives significant (approx. 10x) speedups. 
The speedups are even larger than the expected factor of $dim/2$ that follows from the theoretical results, apparently due to the additional overhead of the implementation of numerical differentiation in ROOT, which tries to find the optimal step size for its finite difference method to improve accuracy.", "In Figure FIGREF26, we fit a Gaussian distribution to a histogram of random samples via gradient-based optimization. In ROOT, this functionality is implemented in the TFormula-based TF1 class. We can therefore use AD due to the integration of Clad into TFormula. Figure FIGREF26 compares the performance of the AD-based TF1 fitting with the numerical fitting in the Hist package. As in previous experiments, we show that AD scales better with problem dimensionality (number of histogram bins) on this task. The integration of Clad into TFormula makes it straightforward to use AD for fitting in ROOT." ], [ "We discussed our implementation of automatic differentiation in ROOT based on Clad. We demonstrated that Clad is integrated into ROOT and can be easily used in various contexts inside ROOT (e.g. histogram fitting). Furthermore, we showed that automatic differentiation in ROOT achieves significant improvements in accuracy and performance over numerical differentiation. The performance and accuracy are promising and encourage further work in the development of Clad and its integration in ROOT.", "Possible further improvements for Clad include optimizations to the code transformation and the design of a consistent interface for derivative and gradient computation. This functionality can be further extended, including the computation of Jacobians and higher-order derivatives. In order to achieve optimal performance, the evaluation of individual derivatives could be executed in parallel. In addition, the Clad API should enable a flexible execution method based on the needs of its user." ], [ "This work has been supported by U.S. NSF grants PHY-1450377 and 1450323." ] ]
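As an illustration of the histogram-fitting experiment described above, the following is a minimal ROOT macro sketch that fits a Gaussian TF1 to a randomly filled histogram. It uses only standard ROOT classes (TH1D, TF1, TH1::Fit); whether the fit actually exercises the Clad-generated gradient is an internal detail that depends on the ROOT version and fit configuration, so this is a hedged sketch rather than a prescribed recipe.

```cpp
// fitgaus.C -- run with: root -l -q fitgaus.C
// Assumptions: a ROOT build with TFormula/Clad integration; default fitter settings.
#include "TH1D.h"
#include "TF1.h"
#include <cstdio>

void fitgaus() {
    TH1D h("h", "samples", 100, -5, 5);   // 100 bins ~ problem dimensionality in the benchmark
    h.FillRandom("gaus", 100000);         // fill with samples from a unit Gaussian

    TF1 f("f", "gaus", -5, 5);            // TFormula-based model with 3 parameters
    f.SetParameters(1000, 0, 1);          // rough initial amplitude, mean, sigma

    h.Fit(&f, "L");                       // likelihood fit driven by gradient-based minimization
    std::printf("mean = %f, sigma = %f\n", f.GetParameter(1), f.GetParameter(2));
}
```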
{ "question": [ "How is correctness of automatic derivation proved?", "Is this AD implementation used in any deep learning framework?" ], "question_id": [ "d353a6bbdc66be9298494d0c853e0d8d752dec4b", "e2cfaa2ec89b944bbc46e5edf7753b3018dbdc8f" ], "nlp_background": [ "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no" ], "search_query": [ "computer vision", "computer vision" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "empirically compare automatic differentiation (AD, our implementation based on Clad) and numerical differentiation (ND, based on finite difference method)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this section, we empirically compare automatic differentiation (AD, our implementation based on Clad) and numerical differentiation (ND, based on finite difference method) in ROOT. We show that AD can drastically improve accuracy and performance of derivative evaluation, compared to ND." ], "highlighted_evidence": [ "In this section, we empirically compare automatic differentiation (AD, our implementation based on Clad) and numerical differentiation (ND, based on finite difference method) in ROOT. We show that AD can drastically improve accuracy and performance of derivative evaluation, compared to ND." ] } ], "annotation_id": [ "4dd979c13a81b4917f659a7642001fc09afba8e2" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "0f01280a865518c283061e77aba517769dc8d464" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: Information flow of Clad in ROOT", "Figure 2: Comparison of reverse mode AD and ND with increasing dimension", "Figure 3: Performance benchmarks in ROOT" ], "file": [ "4-Figure1-1.png", "6-Figure2-1.png", "7-Figure3-1.png" ] }
1910.10408
Controlling the Output Length of Neural Machine Translation
The recent advances introduced by neural machine translation (NMT) are rapidly expanding the application fields of machine translation, as well as reshaping the quality level to be targeted. In particular, if translations have to fit some given layout, quality should not only be measured in terms of adequacy and fluency, but also length. Exemplary cases are the translation of document files, subtitles, and scripts for dubbing, where the output length should ideally be as close as possible to the length of the input text. This paper addresses for the first time, to the best of our knowledge, the problem of controlling the output length in NMT. We investigate two methods for biasing the output length with a transformer architecture: i) conditioning the output to a given target-source length-ratio class and ii) enriching the transformer positional embedding with length information. Our experiments show that both methods can induce the network to generate shorter translations, as well as acquiring interpretable linguistic skills.
{ "section_name": [ "Introduction", "Background", "Background ::: Transformer", "Background ::: Length encoding in summarization", "Methods", "Methods ::: Length Token Method", "Methods ::: Length Encoding Method", "Methods ::: Combining the two methods", "Methods ::: Fine-Tuning for length control", "Experiments ::: Data and Settings", "Experiments ::: Models", "Experiments ::: Evaluation", "Results", "Results ::: Small Data condition", "Results ::: Large data condition", "Results ::: Human Evaluation and Analysis", "Related works", "Conclusion" ], "paragraphs": [ [ "The sequence to sequence BIBREF0, BIBREF1 approach to Neural Machine Translation (NMT) has shown to improve quality in various translation tasks BIBREF2, BIBREF3, BIBREF4. While translation quality is normally measured in terms of correct transfer of meaning and of fluency, there are several applications of NMT that would benefit from optimizing the output length, such as the translation of document elements that have to fit a given layout – e.g. entries of tables or bullet points of a presentation – or subtitles, which have to fit visual constraints and readability goals, as well as speech dubbing, for which the length of the translation should be as close as possible to the length of the original sentence.", "Current NMT models do not model explicitly sentence lengths of input and output, and the decoding methods do not allow to specify desired number of tokens to be generated. Instead, they implicitly rely on the observed length of the training examples BIBREF5, BIBREF6.", "Sequence-to-sequence models have been also applied to text summarization BIBREF7 to map the relevant information found in a long text into a limited-length summary. Such models have shown promising results by directly controlling the output length BIBREF8, BIBREF9, BIBREF10, BIBREF11. However, differently from MT, text summarization (besides being a monolingual task) is characterized by target sentences that are always much shorter than the corresponding source sentences. While in MT, the distribution of the relative lengths of source and target depends on the two languages and can significantly vary from one sentence pair to another due to stylistic decisions of the translator and linguistic constraints (e.g. idiomatic expressions).", "In this work, we propose two approaches to control the output length of a transformer NMT model. In the first approach, we augment the source side with a token representing a specific length-ratio class, i.e. short, normal, and long, which at training time corresponds to the observed ratio and at inference time to the desired ratio. In the second approach, inspired by recent work in text summarization BIBREF11, we enrich the position encoding used by the transformer model with information representing the position of words with respect to the end of the target string.", "We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. 
As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01." ], [ "Our proposal is based on the transformer architecture and a recently proposed extension of its positional encoding aimed to control the length of generated sentences in text summarization." ], [ "Transformer BIBREF12 is a sequence-to-sequence architecture that processes sequences using only attention and feed forward layers. Its core component is the so-called multi-head attention, which computes attention BIBREF0, BIBREF13 between two sequences in a multi-branch fashion BIBREF14. Within the encoder or the decoder, each layer first computes attention between two copies of the same sequence (self-attention). In the decoder, this step is followed by an attention over the encoder output sequence. The last step in each layer is a two-layered time-distributed feed-forward network, with a hidden size larger than its input and output. Attention and feed-forward layers are characterized by a position-invariant processing of their input. Thus, in order to enrich input embeddings in source and target with positional information, they are summed with positional vectors of the same dimension $d$, which are computed with the following trigonometric encoding ($\\text{PE}$):", "for $i=1,\\ldots ,d/2$." ], [ "Recently, an extension of the positional encoding BIBREF11 was proposed to model the output length for text summarization. The goal is achieved by computing the distance from every position to the end of the sentence. The new length encoding is present only in the decoder network as an additional vector summed to the input embedding. The authors proposed two different variants. The first variant replaces the variable pos in equations (1-2) with the difference $len - pos$, where len is the sentence length. The second variant attempts to model the proportion of the sentence that has been covered at a given position by replacing the constant 10000 in the denominator of equations (1-2) with $len$. As decoding is performed at the character level, len and pos are given in number of characters. At training time, len is the observed length of the reference summary, while at inference time it is the desired length." ], [ "We propose two methods to control the output length in NMT. In the first method we partition the training set in three groups according to the observed length ratio of the reference over the source text. The idea is to let the model learn translation variants by observing them jointly with an extra input token. The second method extends the Transformer positional encoding to give information about the remaining sentence length. With this second method the network can leverage fine-grained information about the sentence length." ], [ "Our first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). 
Ideally, we want a group where the target is shorter than the source (short), one where they are equally sized (normal), and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\text{min}$ and $t_\text{max}$ according to the length ratio distribution. All the sentence pairs with a length ratio between $t_\text{min}$ and $t_\text{max}$ are in the normal group, the ones with a ratio below $t_\text{min}$ are in short, and the remaining ones are in long. At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group." ], [ "Inspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the length of the sequence, both expressed in number of characters. Then, the absolute approach encodes the remaining length:", "where $i=1,\ldots ,d/2$.", "Similarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers:", "where $q_N: [0, 1] \rightarrow \lbrace 0, 1, .., N\rbrace $ is simply defined as $q_N(x) = \lfloor {x \times N}\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens. We prefer a representation based on BPE, unlike BIBREF11, as it leads to better translations with less training time BIBREF18, BIBREF19. During training, len is the observed length of the target sentence, while at inference time it is the length of the source sentence, as it is the length that we aim to match. The process is exemplified in Figure FIGREF9." ], [ "We further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce a short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length." ], [ "Training an NMT model from scratch is a compute-intensive and time-consuming task. Alternatively, fine-tuning a pre-trained network has been shown to improve performance in several NMT scenarios BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24. For our length control approaches, we further propose fine-tuning an NMT model with length information, instead of training it from scratch. By adopting a fine-tuning strategy, we specifically aim: i) to decouple the performance of the baseline NMT model from that of the additional length information, ii) to control the level of aggressiveness that can come from the data (length token) and the model (length encoding), and iii) to make the approaches versatile to any pre-trained model.
More importantly, it will allow to transform any NMT model to an output length aware version, while getting better improvements on the quality of the generated sequences." ], [ "Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder.", "In all the experiments, we use the Adam BIBREF26 optimizer with an initial learning rate of $1\\times 10^{-7}$ that increases linearly up to $0.001$ for 4000 warm-up steps, and decreases afterwards with the inverse square root of the training step. The dropout is set to $0.3$ in all layers but the attention, where it is $0.1$. The models are trained with label smoothed cross-entropy with a smoothing factor of $0.1$. Training is performed on 8 Nvidia V100 GPUs, with batches of 4500 tokens per GPU. Gradients are accumulated for 16 batches in each GPU BIBREF27. We select the models for evaluation by applying early stopping based on the validation loss. All texts are tokenized with scripts from the Moses toolkit BIBREF28, and then words are segmented with BPE BIBREF17 with 32K joint merge rules.", "For evaluation we take the best performing checkpoint on the dev set according to the loss. The size of the data clusters used for the length token method and their corresponding target-source length ratios are reported in Table TABREF19. The value of $N$ of the relative encoding is set to a small value (5), as in preliminary experiments we observed that a high value (100) produces results similar to the absolute encoding." ], [ "We evaluate our Baseline Transformer using two decoding strategies: i) a standard beam search inference (standard), and ii) beam search with length penalty (penalty) set to $0.5$ to favor shorter translations BIBREF29.", "Length token models are evaluated with three strategies that correspond to the tokens prepended to the source test set at a time (short, normal, and long), and reported as Len-Tok. Length encoding (Len-Enc) models are evaluated in a length matching condition, i.e. output length has to match input length. We report the relative (Rel) and absolute (Abs) strategies of the approach as discussed in Section SECREF10. In the small data condition, we additionally evaluated how the fine-tuning strategy compares with a model trained from scratch. In the large data condition, we added a setting that combines both the length-token and length-encoding strategies." 
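Before turning to the evaluation, here is a small sketch of the two methods described above: assigning a length token from the character-length ratio, and computing the per-token remaining length that feeds the length encoding. The thresholds and the exact argument passed to the quantizer $q_N$ are assumptions made for illustration, and the names (lengthToken, remainingLengths, quantizeRelative) are invented; they do not come from the paper.

```cpp
#include <cmath>
#include <string>
#include <vector>

// Length-token assignment: bucket a sentence pair by its target/source
// character-length ratio. The thresholds 0.95/1.05 are hypothetical; the
// paper selects t_min and t_max from the observed length-ratio distribution.
std::string lengthToken(const std::string& src, const std::string& tgt,
                        double tmin = 0.95, double tmax = 1.05) {
    double ratio = static_cast<double>(tgt.size()) / static_cast<double>(src.size());
    if (ratio < tmin) return "<short>";
    if (ratio > tmax) return "<long>";
    return "<normal>";
}

// Length encoding inputs: pos is the character count of the preceding BPE
// tokens and len the total character length, so the absolute variant feeds
// (len - pos) to the trigonometric encoding, while the relative variant feeds
// the quantized value q_N((len - pos) / len); the argument given to q_N here
// is an assumption consistent with the description above.
std::vector<int> remainingLengths(const std::vector<std::string>& bpeTokens) {
    int len = 0;
    for (const auto& tok : bpeTokens) len += static_cast<int>(tok.size());
    std::vector<int> remaining;
    int pos = 0;
    for (const auto& tok : bpeTokens) {
        remaining.push_back(len - pos);
        pos += static_cast<int>(tok.size());
    }
    return remaining;
}

int quantizeRelative(int remaining, int len, int N = 5) {  // N = 5 as in the experiments
    return static_cast<int>(std::floor(N * static_cast<double>(remaining) / len));
}
```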
], [ "To evaluate all models' performance we compute BLEU BIBREF30 with the multi-bleu.perl implementation on the single-reference test sets of the En-It and En-De pairs. Given the absence of multiple references covering different length ratios, we also report n-gram precision scores (BLEU$^*$), by multiplying the BLEU score by the inverse of the brevity penalty BIBREF30. BLEU$^*$ scores is meant to measure to what extent shorter translations are subset of longer translations.", "The impact on translation lengths is evaluated with the mean sentence-level length ratios between MT output and source (LR$^{src}$) and between MT output and reference (LR$^{ref}$)." ], [ "We performed experiments in two conditions: small data and larger data. In the small data condition we only use the MuST-C training set. In the large data condition, a baseline model is first trained on large data, then it is fine-tuned on the MuST-C training set using the proposed methods. Tables TABREF23 and TABREF26 lists the results for the small and large data conditions. For the two language directions they show BLEU and BLEU* scores, as well as the average length ratios." ], [ "The baselines generate translations longer than the source sentence side, with a length ratio of 1.05 for Italian and 1.11 for German. Decoding with length penalty (penalty) slightly decreases the length ratios but they are still far from our goal of LR$^{src}$=1.00.", "Fine-tuning. A comparison of the models trained from scratch (central portion of Table TABREF23) with their counterparts fine-tuned from the baseline (last portion of Table TABREF23) shows that the models in the first group generally generate shorter translations, but of worse quality. Additionally, the results with fine-tuning are not much different from the baseline. Existing models can be enhanced to produce shorter sentences, and little variation is observed in their translation quality.", "Length tokens. Fine-tuning with Len-Tok (Fourth section in Table TABREF23) gives a coarse-grained control over the length, while keeping BLEU scores similar to the baseline or slightly better. Decoding with the token normal leads to translations slightly shorter than the baseline for En-It (LR$^{src}$=1.05 and LR$^{ref}$=1.02), while the token small strongly reduces the translation lengths up to almost the source length (LR$^{src}$=1.01). In the opposite side, the token long generates longer translations which are slightly worse than the others (32.00). A similar behavior is observed for En-De, where the LR$^{src}$ goes from 1.12 to 1.07 when changing normal with short, and to 1.15 with long. The results with the token long are not interesting for our task and are given only for the sake of completeness.", "Length Encoding. The last section of Table TABREF23 lists the results of using length encoding (Len-Enc) relative (Rel) and absolute (Abs). The two encodings lead to different generated lengths, with Abs being always shorter than Rel. Unfortunately, these improvements in the lengths correspond to a significant degradation in translation quality, mostly due to truncated sentences." ], [ "Our Baselines for the large data condition generate sentences with length ratios over the source comparable to the small data condition (LR$^\\text{src}$ and LR$^\\text{ref}$), but with better translation quality: 35.46 BLEU points for En-It and 33.96 for En-De. 
Length penalty slightly reduces the length ratios, which results in a 0.3 BLEU points improvement in Italian and -0.3 in German because of the brevity penalty. In the latter case, the BLEU* is slightly better than the standard baseline output. Also for the large data condition, while the length penalty slightly helps to shorten the translations, its effect is minimal and insufficient for our goal.", "Length tokens. In En-It there is no noticeable difference in translation quality between the tokens normal and short, while there is a degradation of $\\sim 0.7$ points when using long. This last result is consistent with the ones observed before. Also in this case the token short does not degrade the BLEU score, and obtains the highest precision BLEU* with 36.22. In En-De we obtain the best results with token normal (34.46), which matches the length distribution of the references. The token short generates much shorter outputs (LR$^\\text{src}$=1.05), which are also much shorter than the reference (LR$^\\text{ref}=0.93$). Consequently the BLEU score degrades significantly (30.61), and also the BLEU* is 1 point lower than with the token normal. Longer translations can be generated with the token long, but they always come at the expense of lower quality.", "Length encoding. For En-It, Len-Enc Rel in Table TABREF26 achieves a LR$^\\text{src}$ of 1.01 with a slight degradation of $0.3$ BLEU points over the baseline, while in the case of Abs the degradation is higher (-1.6) and LR$^\\text{src}$ is similar (1.02). Also in En-De the degradation of Rel over the baseline is only -0.3, but the reduction in terms of LR$^\\text{src}$ is very small (1.11 vs 1.13). On the other side, Abs produces much shorter translations (1.03 LR$^\\text{src}$) at the expense of a significantly lower BLEU score (30.79). When computing the BLEU* score, the absolute encoding is only 0.45 points lower than the relative encoding (33.29 vs 33.74), but -0.8 lower than the baseline.", "Token + Encoding. So far, we have observed generally good results using the token method and translating with the tokens short and normal. while the length encoding generally produces a more predictable output length, in particular for the absolute variant. In the last experiment, we combine the two methods in order to have a system that can capture different styles (short, normal, long), as well as explicitly leveraging length information. The results listed in the last portion of Table TABREF26 (Tok+Enc) show that the relative encoding Rel produces better translations than Abs, but again it has less predictability in output length. For instance, in En-It the LR$^\\text{src}$ of Rel is 0.96 with token short and 1.02 with normal, while for En-De it is 1.01 with short and 1.08 with normal. On the other side, the Abs produces LR$^\\text{src}$ of 1.01 with both tokens in En-It and also with short in En-De, and it increases to only 1.03 with normal.", "Controlling output length. In order to achieve LR$^\\text{src}$ as close as possible to 1.0, we set the target length during generation equal to the source length when using the length encoding methods. However, one advantage of length encoding is the possibility to set the target length to modify the average output length. We illustrate this option by using the Tok+Enc Rel system for En-It, and translating with the tokens normal or short and different scaling factors for the target length. 
The results, listed in Table TABREF27, show that we are able to approach an LR$^{src}$ of 1.0 with both tokens and the BLEU score is not affected with token normal (35.45) or improves with token short (35.11).", "Discussion. Length token is an effective approach to generate translations of different lengths, but it does not allow a fine-grained control of the output lengths and its results depend on the partition of the training set into groups, which is a manual process. Length encoding allows to change the output length, but the two variants have different effects. Absolute encoding is more accurate but generates sentences with missing information. The relative encoding produces better translations than the absolute encoding, but its control over the translation length is more loose. The increased length stability is captured by the standard deviation of the length ratio with the source, which is $0.14$ for length tokens, $\\sim 0.11$ for relative encoding and $\\sim 0.07$ for absolute encoding. The advantage of the combined approach is that it can generate sentences with different style to fit different length groups, and the output length can also be tuned by modifying the target length, while no important quality degradation is observed. Additionally, the standard deviation of the lengths is the same as for the length encoding used." ], [ "After manually inspecting the outputs of the best performing models under the large data condition, we decided to run a human evaluation only for the En-It Len-Tok model. As our ultimate goal is to be able to generate shorter translations and as close as possible to the length of the source sentences, we focused the manual evaluation on the Short output class and aimed to verify possible losses in quality with respect to the baseline system. We ran a head-to-head evaluation on the first 10 sentences of each test talk, for a total of 270 sentences, by asking annotators to blindly rank the two system outputs (ties were also permitted) in terms of quality with respect to a reference translation. We collected three judgments for each output, from 19 annotators, for a total of 807 scores (one sentence had to be discarded). Inter-annotator agreement measured with Fleiss' kappa was 0.35 (= fair agreement). Results reported in Table TABREF32 confirm the small differences observed in BLEU scores: there are only a 4% more wins for the Baseline and almost 60% of ties. The small degradation in quality of the shorter translations is statistically significant ($p<0.05$), as well as their difference in length ($p<0.001$).", "Notice that the evaluation was quite severe towards the shorter translations, as even small changes of the meaning could affect the ranking. After the manual evaluation, we analyzed sentences in which shorter translations were unanimously judged equal or better than the standard translations. We hence tried to identify the linguistic skills involved in the generation of shorter translations, namely: (i) use of abbreviations, (ii) preference of simple verb tenses over compound tenses, (iii) avoidance of not relevant adjective, adverbs, pronouns and articles, (iv) use of paraphrases. Table TABREF33 shows examples of the application of the above strategies as found in the test set." 
], [ "As an integration of Section 2, we try to provide a more complete picture on previous work with seq-to-seq models to control the output length for text summarization, and on the use of tokens to bias in different ways the output of NMT.", "In text summarization, BIBREF8 proposed methods to control output length either by modifying the search process or the seq-to-seq model itself, showing that the latter being more promising. BIBREF9 addressed the problem similarly to our token approach, by training the model on data bins of homogeneous output length and conditioning the output on a length token. They reported better performance than BIBREF8. Finally, BIBREF11 proposed the extension of the positional encoding of the transformer (cf. Section 2), reporting better performance than BIBREF8 and BIBREF9.", "The use of tokens to condition the output of NMT started with the multilingual models BIBREF15, BIBREF16, and was then further applied to control the use of the politeness form in English-German NMT BIBREF32, in the translation from English into different varieties of the same language BIBREF33, for personalizing NMT to user gender and vocabulary BIBREF34, and finally to perform NMT across different translation styles BIBREF35." ], [ "In this paper, we have proposed two solutions for the problem of controlling the output length of NMT. A first approach, inspired by multilingual NMT, allows a coarse-grained control over the length and no degradation in translation quality. A second approach, inspired by positional encoding, enables a fine-grained control with only a small error in the token count, but at the cost of a lower translation quality. A manual evaluation confirms the translation quality observed with BLEU score. In future work, we plan to design more flexible and context-aware evaluations which allow us to account for short translations that are not equivalent to the original but at the same time do not affect the overall meaning of the discourse." ] ] }
{ "question": [ "Do they conduct any human evaluation?", "What dataset do they use for experiments?", "How do they enrich the positional embedding with length information", "How do they condition the output to a given target-source class?", "Which languages do they focus on?", "What dataset do they use?", "Do they experiment with combining both methods?" ], "question_id": [ "22c36082b00f677e054f0f0395ed685808965a02", "85a7dbf6c2e21bfb7a3a938381890ac0ec2a19e0", "90bc60320584ebba11af980ed92a309f0c1b5507", "f52b2ca49d98a37a6949288ec5f281a3217e5ae8", "228425783a4830e576fb98696f76f4c7c0a1b906", "9d1135303212356f3420ed010dcbe58203cc7db4", "d8bf4a29c7af213a9a176eb1503ec97d01cc8f51" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity", "two", "two", "two" ], "topic_background": [ "", "", "", "", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "", "", "", "", "no", "no", "no" ], "search_query": [ "", "", "", "", "", "", "" ], "question_writer": [ "798ee385d7c8105b83b032c7acc2347588e09d61", "798ee385d7c8105b83b032c7acc2347588e09d61", "798ee385d7c8105b83b032c7acc2347588e09d61", "798ee385d7c8105b83b032c7acc2347588e09d61", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01." ], "highlighted_evidence": [ "We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01." ] } ], "annotation_id": [ "0f04331cbdb88dc33e06b6b970c11db7cc4e842d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "English$\\rightarrow $Italian/German portions of the MuST-C corpus", "As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). 
As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder." ], "highlighted_evidence": [ "Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)." ] } ], "annotation_id": [ "d897b5cc9f257c8fd1a930a6bc1b7e1d73005efb" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "They introduce new trigonometric encoding which besides information about position uses additional length information (abs or relative).", "evidence": [ "Methods ::: Length Encoding Method", "Inspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number characters. Then, the absolute approach encodes the remaining length:", "where $i=1,\\ldots ,d/2$.", "Similarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers:", "where $q_N: [0, 1] \\rightarrow \\lbrace 0, 1, .., N\\rbrace $ is simply defined as $q_N(x) = \\lfloor {x \\times N}\\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens. We prefer a representation based on BPE, unlike BIBREF11, as it leads to better translations with less training time BIBREF18, BIBREF19. During training, len is the observed length of the target sentence, while at inference time it is the length of the source sentence, as it is the length that we aim to match. The process is exemplified in Figure FIGREF9." ], "highlighted_evidence": [ "Methods ::: Length Encoding Method\nInspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. 
Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number characters. Then, the absolute approach encodes the remaining length:\n\nwhere $i=1,\\ldots ,d/2$.\n\nSimilarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers:\n\nwhere $q_N: [0, 1] \\rightarrow \\lbrace 0, 1, .., N\\rbrace $ is simply defined as $q_N(x) = \\lfloor {x \\times N}\\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens." ] } ], "annotation_id": [ "6c4be2329714531078bea6390c6892868f51944e" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "They use three groups short/normal/long translation classes to learn length token, which is in inference used to bias network to generate desired length group.", "evidence": [ "Methods ::: Length Token Method", "Our first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\\text{min}$ and $t_\\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\\text{min}$ and $t_\\text{max}$ are in the normal group, the ones with ratio below $t_\\text{min}$ in short and the remaining in long. At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network to discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group." ], "highlighted_evidence": [ "Methods ::: Length Token Method\nOur first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\\text{min}$ and $t_\\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\\text{min}$ and $t_\\text{max}$ are in the normal group, the ones with ratio below $t_\\text{min}$ in short and the remaining in long. At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network to discriminate between the groups (see Figure FIGREF2). 
At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group." ] } ], "annotation_id": [ "f51792ec82eea4ff8587745ac8140a8357572bed" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "two translation directions (En-It and En-De)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01." ], "highlighted_evidence": [ "We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source.", "En-It, En-De in both directions" ] } ], "annotation_id": [ "498073e28e7f3074adbd65f4b3680a421b721175" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "English$\\rightarrow $Italian/German portions of the MuST-C corpus", "As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder." 
], "highlighted_evidence": [ "Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)." ] } ], "annotation_id": [ "6bfc48103d84dc0223b89994e5583504b0fb8bf8" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Methods ::: Combining the two methods", "We further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length." ], "highlighted_evidence": [ "Methods ::: Combining the two methods\nWe further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length." ] } ], "annotation_id": [ "223910aa36816d4bd67012d8c487b2f175bfea2e" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: German and Italian human and machine translations (MT) are usually longer than their English source (SRC). We investigate enhanced NMT (MT*) that can also generate translations shorter than the source length. Text in red exceeds the length of the source, while underlined words point out the different translation strategy of the enhanced NMT model.", "Figure 2: Training NMT with three length ratio classes permits to get outputs of different length at inference time.", "Figure 3: Transformer architecture with decoder input enriched with (relative) length embedding computed according to the desired target string length (12 characters in the example).", "Table 1: Train, validation and test data size in number of examples.", "Table 2: Train data category after assigning the length tokens (normal, short and long).", "Table 3: Performance of the baseline and models with length information trained from scratch and or by fine-tuning, in terms of BLEU, BLEU∗, mean length ratio of the output against the source (LRsrc) and the reference (LRref ). italics shows the best performing model under each category, while bold shows the wining strategy.", "Table 4: Large scale experiments comparing the baseline, length token, length encoding and their combination.", "Table 5: Results for En-It with Tok+Enc Rel by scaling the target length with different constant factors.", "Table 6: Manual evaluation on En-It (large data) ranking translation quality of the baseline (standard) and token short translation against the reference translation.", "Table 7: Examples of shorter translation fragments obtained by paraphrasing (italics), drop of words (red), and change of verb tense (underline)." ], "file": [ "2-Figure1-1.png", "2-Figure2-1.png", "4-Figure3-1.png", "4-Table1-1.png", "4-Table2-1.png", "5-Table3-1.png", "6-Table4-1.png", "6-Table5-1.png", "7-Table6-1.png", "7-Table7-1.png" ] }
1606.05286
Spectral decomposition method of dialog state tracking via collective matrix factorization
The task of dialog management is commonly decomposed into two sequential subtasks: dialog state tracking and dialog policy learning. In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate the true dialog state from noisy observations produced by the speech recognition and the natural language understanding modules. The state tracking task is primarily meant to support a dialog policy. From a probabilistic perspective, this is achieved by maintaining a posterior distribution over hidden dialog states composed of a set of context-dependent variables. Once a dialog policy is learned, it strives to select an optimal dialog act given the estimated dialog state and a defined reward function. This paper introduces a novel method of dialog state tracking based on a bilinear algebraic decomposition model that provides an efficient inference schema through collective matrix factorization. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset and we show that the proposed tracker gives encouraging results compared to the state-of-the-art trackers that participated in this standard benchmark. Finally, we show that the prediction schema is computationally efficient in comparison to previous approaches.
{ "section_name": [ "Introduction", "Transactional dialog state tracking", "Generative Dialog State Tracking", "Discriminative Dialog State Tracking", "Spectral decomposition model for state tracking in slot-filling dialogs", "Learning method", "Prediction method", "Experimental settings and Evaluation", "Restaurant information domain", "Experimental results", "Related work", "Conclusion" ], "paragraphs": [ [ "The field of autonomous dialog systems is rapidly growing with the spread of smart mobile devices but it still faces many challenges to become the primary user interface for natural interaction through conversations. Indeed, when dialogs are conducted in noisy environments or when utterances themselves are noisy, correctly recognizing and understanding user utterances presents a real challenge. In the context of call-centers, efficient automation has the potential to boost productivity through increasing the probability of a call's success while reducing the overall cost of handling the call. One of the core components of a state-of-the-art dialog system is a dialog state tracker. Its purpose is to monitor the progress of a dialog and provide a compact representation of past user inputs and system outputs represented as a dialog state. The dialog state encapsulates the information needed to successfully finish the dialog, such as users' goals or requests. Indeed, the term “dialog state” loosely denotes an encapsulation of user needs at any point in a dialog. Obviously, the precise definition of the state depends on the associated dialog task. An effective dialog system must include a tracking mechanism which is able to accurately accumulate evidence over the sequence of turns of a dialog, and it must adjust the dialog state according to its observations. In that sense, it is an essential componant of a dialog systems. However, actual user utterances and corresponding intentions are not directly observable due to errors from Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU), making it difficult to infer the true dialog state at any time of a dialog. A common method of modeling a dialog state is through the use of a slot-filling schema, as reviewed in BIBREF0 . In slot-filling, the state is composed of a predefined set of variables with a predefined domain of expression for each of them. The goal of the dialog system is to efficiently instantiate each of these variables thereby performing an associated task and satisfying the corresponding intent of the user.", "Various approaches have been proposed to define dialog state trackers. The traditional methods used in most commercial implementations use hand-crafted rules that typically rely on the most likely result from an NLU module as described in BIBREF1 . However, these rule-based systems are prone to frequent errors as the most likely result is not always the correct one. Moreover, these systems often force the human customer to respond using simple keywords and to explicitly confirm everything they say, creating an experience that diverges considerably from the natural conversational interaction one might hope to achieve as recalled in BIBREF2 . More recent methods employ statistical approaches to estimate the posterior distribution over the dialog states allowing them to represent the uncertainty of the results of the NLU module. 
Statistical dialog state trackers are commonly categorized into one of two approaches according to how the posterior probability distribution over the state calculation is defined. In the first type, the generative approach uses a generative model of the dialog dynamic that describes how the sequence of utterances are generated by using the hidden dialog state and using Bayes' rule to calculate the posterior distribution of the state. It has been a popular approach for statistical dialog state tracking, since it naturally fits into the Partially Observable Markov Decision Process (POMDP) models as described in BIBREF3 , which is an integrated model for dialog state tracking and dialog strategy optimization. Using this generic formalism of sequential decision processes, the task of dialog state tracking is to calculate the posterior distribution over an hidden state given an history of observations. In the second type, the discriminative approach models the posterior distribution directly through a closed algebraic formulation as a loss minimization problem. Statistical dialog systems, in maintaining a distribution over multiple hypotheses of the true dialog state, are able to behave robustly even in the face of noisy conditions and ambiguity. In this paper, a statistical type of approach of state tracking is proposed by leveraging the recent progress of spectral decomposition methods formalized as bilinear algebraic decomposition and associated inference procedures. The proposed model estimates each state transition with respect to a set of observations and is able to compute the state transition through an inference procedure with a linear complexity with respect to the number of variables and observations.", "Roadmap: This paper is structured as follows, Section \"Generative Dialog State Tracking\" formally defines transactional dialogs and describes the associated problem of statistical dialog state tracking with both the generative and discriminative approaches. Section \"Spectral decomposition model for state tracking in slot-filling dialogs\" depicts the proposed decompositional model for coupled and temporal hidden variable models and the associated inference procedure based on Collective Matrix Factorization (CMF). Finally, Section \"Experimental settings and Evaluation\" illustrates the approach with experimental results obtained using a state of the art benchmark for dialog state tracking." ], [ "The dialog state tracking task we consider in this paper is formalized as follows: at each turn of a task-oriented dialog between a dialog system and a user, the dialog system chooses a dialog act $d$ to express and the user answers with an utterance $u$ . The dialog state at each turn of a given dialog is defined as a distribution over a set of predefined variables, which define the structure of the state as mentioned in BIBREF4 . This classic state structure is commonly called slot filling and the associated dialogs are commonly referred to as transactional. Indeed, in this context, the state tracking task consists of estimating the value of a set of predefined variables in order to perform a procedure or transaction which is, in fact, the purpose of the dialog. Typically, the NLU module processes the user utterance and generates an N-best list $o = \\lbrace <d_1, f_1>, \\ldots , <d_n, f_n>\\rbrace $ , where $d_i$ is the hypothesized user dialog act and $f_i$ is its confidence score. 
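To make the observation format just defined concrete, here is a minimal sketch of the N-best list data structure; the type names are invented for the illustration and do not come from the paper.

```cpp
#include <string>
#include <vector>

// One NLU hypothesis <d_i, f_i>: a dialog act and its confidence score.
struct DialogActHypothesis {
    std::string dialogAct;   // d_i, e.g. "inform(food=italian)"
    double confidence;       // f_i, typically in [0, 1]
};

// The observation o produced at each turn is an N-best list of hypotheses.
using NBestList = std::vector<DialogActHypothesis>;
```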
In the simplest case, where no ASR and NLU modules are employed, as in a text-based dialog system as proposed in BIBREF5 , the utterance is taken as the observation using a so-called bag-of-words representation. If an NLU module is available, standardized dialog act schemas can be considered as observations, as in BIBREF6 . Furthermore, if prosodic information is provided by the ASR component of the dialog system, as in BIBREF7 , it can also be considered as part of the observation definition. A statistical dialog state tracker maintains, at each discrete time step $t$ , the probability distribution over states, $b(s_t)$ , which is the system's belief over the state. The general process of slot-filling, transactional dialog management is summarized in Figure 1 . First, intent detection is typically an NLU problem consisting of identifying the task the user wants the system to accomplish. This first step determines the set of variables to instantiate during the second step, which is the slot-filling process. This type of dialog management assumes that a set of variables is required for each predefined intention. The slot-filling process is a classic task of dialog management and is composed of the cyclic tasks of information gathering and integration, in other words – dialog state tracking. Finally, once all the variables have been correctly instantiated, a common practice in dialog systems is to perform a final general confirmation of the task desired by the user before executing the requested task. As an example, used as an illustration of the proposed method in this paper, in the case of the DSTC-2 challenge presented in BIBREF8 , the context is taken from the restaurant information domain and the variables to instantiate as part of the state are {Area (5 possible values); Food (91 possible values); Name (113 possible values); Pricerange (3 possible values)}. In such a framework, the purpose is to estimate, as early as possible in the course of a given dialog, the correct instantiation of each variable. In the following, we will assume the state is represented as a concatenation of zero-one encodings of the values for each variable defining the state. Furthermore, in the context of this paper, only the bag of words has been considered as an observation at a given turn, but dialog acts or detected named entities provided by an SLU module could also have been incorporated as evidence.", "Two statistical approaches have been considered for maintaining the distribution over a state given sequential NLU output. First, the discriminative approach aims to model the posterior probability distribution of the state at time $t+1$ with regard to the state at time $t$ and observations $z_{1:t}$ . Second, the generative approach attempts to model the transition probability and the observation probability in order to exploit possible interdependencies between hidden variables that comprise the dialog state." ], [ "A generative approach to dialog state tracking computes the belief over the state using Bayes' rule, using the belief from the last turn $b(s_{t-1})$ as a prior and the likelihood given the user utterance hypotheses $p(z_t|s_t)$ , with $z_t$ the observation gathered at time $t$ . In the prior work BIBREF4 , the likelihood is factored and some independence assumptions are made: ", "$$b_t \propto \sum _{s_{t-1},z_t} p(s_t|z_t, d_{t-1}, s_{t-1}) p(z_t|s_t) b(s_{t-1})$$ (Eq. 
3) ", "Figure 2 depicts a typical generative model of a dialog state tracking process using a factorial hidden Markov model proposed by BIBREF9 . The shaded variables are the observed dialog turns and each unshaded variable represents a single variable describing the task dependent variables. In this family of approaches, scalability is considered as one of the main issues. One way to reduce the amount of computation is to group the states into partitions, as proposed in the Hidden Information State (HIS) model of BIBREF10 . Other approaches to cope with the scalability problem in dialog state tracking is to adopt a factored dynamic Bayesian network by making conditional independence assumptions among dialog state components, and then using approximate inference algorithms such as loopy belief propagation as proposed in BIBREF11 or a blocked Gibbs sampling as in BIBREF12 . To cope with such limitations, discriminative methods of state tracking presented in the next part of this section aim at directly model the posterior distribution of the tracked state using a choosen parametric form." ], [ "The discriminative approach of dialog state tracking computes the belief over a state via a trained parametric model that directly represents the belief $b(s_{t+1}) = p(s_{s+1} | s_t, z_t)$ . Maximum Entropy has been widely used in the discriminative approach as described in BIBREF13 . It formulates the belief as follows: ", "$$b(s) = P(s|x) = \\eta .e^{w^T\\phi (x,s)}$$ (Eq. 6) ", "where $\\eta $ is the normalizing constant, $x = (d^u_1, d^m_1, s_1, \\dots , d^u_t, d^m_t, s_t)$ is the history of user dialog acts, $d^u_i, i \\in \\lbrace 1,\\ldots ,t\\rbrace $ , the system dialog acts, $d^m_i, i \\in \\lbrace 1,\\ldots ,t\\rbrace $ , and the sequence of states leading to the current dialog turn at time $t$ . Then, $\\phi (.)$ is a vector of feature functions on $x$ and $s$ , and finally, $w$ is the set of model parameters to be learned from annotated dialog data. According to the formulation, the posterior computation has to be carried out for all possible state realizations in order to obtain the normalizing constant $\\eta $ . This is not feasible for real dialog domains, which can have a large number of variables and possible variable instantiations. So, it is vital to the discriminative approach to reduce the size of the state space. For example, BIBREF13 proposes to restrict the set of possible state variables to those that appeared in NLU results. More recently, BIBREF14 assumes conditional independence between dialog state variables to address scalability issues and uses a conditional random field to track each variable separately. Finally, deep neural models, performing on a sliding window of features extracted from previous user turns, have also been proposed in BIBREF15 . Of the current literature, this family of approaches have proven to be the most efficient for publicly available state tracking datasets. In the next section, we present a decompositional approach of dialog state tracking that aims at reconciling the two main approaches of the state of the art while leveraging on the current advances of low-rank bilinear decomposition models, as recalled in BIBREF16 , that seems particularly adapted to the sparse nature of dialog state tracking tasks." ], [ "In this section, the proposed model is presented and the learning and prediction procedures are detailed. 
The general idea consists in decomposing a matrix $M$ whose rows are turn transitions and whose columns are sparse encodings of the corresponding feature variables. More precisely, a row of $M$ is the concatenation of the sparse representations of (1) $s_{t}$ , the state at time $t$ , (2) $s_{t+1}$ , the state at time $t+1$ , and (3) $z_t$ , a set of features representing the observation. In the considered context, the bag of words composing the current turn is chosen as the observation. The parameter learning procedure is formalized as a matrix decomposition task solved through Alternating Least Squares ridge regression. The ridge regression formulation allows for an asymmetric penalization of the variables targeted by the state tracking task. Figure 3 illustrates the collective matrix factorization task that constitutes the learning procedure of the state tracking model. The model represents the components of the decomposed matrix in the form of latent variables $\lbrace A, B, C\rbrace $ , also called embeddings. In the following sections, the learning procedure from dialog state transition data and the tracking algorithm itself are described. In other words, each row of the matrix corresponds to the concatenation of a "one-hot" representation of a state description at time $t$ and a dialog turn at time $t$ , and each column of the overall matrix $M$ corresponds to a considered feature of the state or of the dialog turn, respectively. This type of modeling of the state tracking problem presents several advantages. First, the model is particularly flexible: the definitions of the state and observation spaces are independent of the learning and prediction models and can be adapted to the context of tracking. Second, a bias by data can be applied in order to condition the transition model w.r.t. separate matrices decomposed jointly, as often proposed in multi-task learning, as described in BIBREF17 , and in collective matrix factorization, as detailed in BIBREF18 . Finally, the decomposition method is fast and parallelizable because it mainly leverages core methods of linear algebra. To our knowledge, this proposition is the first attempt to formalize and solve the state tracking task using a matrix decomposition approach." ], [ "For the sake of simplicity, the $\lbrace B,C\rbrace $ matrices are concatenated into $E$ , and $M$ is the concatenation of the matrices $\lbrace S_t,S_{t+1},Z_t\rbrace $ depicted in Figure 3 . Equation 9 defines the optimization task, i.e. the loss function, associated with the search for the latent variables $\lbrace A,E\rbrace $ . ", "$$\min _{A,E} || (M - AE ) W||_2^2 + \lambda _a ||A||_2^2 + \lambda _b ||E||_2^2\n\hspace{5.0pt},$$ (Eq. 9) ", "where $\lbrace \lambda _a, \lambda _b\rbrace \in \mathbb {R}^2$ are regularization hyper-parameters and $W$ is a diagonal matrix that increases the weight of the state variables $s_{t+1}$ in order to bias the resulting parameters $\lbrace A,E\rbrace $ toward better predictive accuracy on these specific variables. This type of weighting approach has been shown to be efficient in comparable generative-discriminative trade-off tasks, as mentioned in BIBREF19 and BIBREF20 . An Alternating Least Squares method, i.e. a sequence of two convex optimization problems, is used to perform the minimization task. First, for known $E$ , compute: ", "$$A^* = \operatornamewithlimits{arg\,min}_{A} || (M - AE ) W ||_2^2 + \lambda _a ||A||_2^2\n\hspace{5.0pt},$$ (Eq. 
10) ", "then for a given $A$ , ", "$$E^* = \\operatornamewithlimits{arg\\,min}_{E} || (M - AE) W ||_2^2 + \\lambda _b ||E||_2^2$$ (Eq. 11) ", "By iteratively solving these two optimization problems, we obtain the following fixed-point regularized and weighted alternating least square algorithms where $t$ correspond to the current step of the overall iterative process: ", "$$A_{t+1} \\leftarrow (E_{t}^TWE_{t} + \\lambda _a\\mathbb {I})^{-1}E_{t}^TWM$$ (Eq. 12) ", "$$E_{t+1} \\leftarrow (A_{t}^TA_{t} + \\lambda _b\\mathbb {I})^{-1}A_{t}^TM$$ (Eq. 13) ", "As presented in Equation 12 , the $W$ matrix is only involved for the updating of $A$ because only the subset of the columns of $E$ , representing the features of the state to predict, are weighted differently in order to increase the importancd of the corresponding columns in the loss function. For the optimization of the latent representation composing $E$ , presented in Equation 13 , each call session's embeddings stored in $A$ hold the same weight, so in this second step of the algorithm, $W$ is actually an identity matrix and so does not appear." ], [ "The prediction process consists of (1) computing the embedding of a current transition by solving the corresponding least square problem based on the two variables $\\lbrace s_t,z_t\\rbrace $ that correspond to our current knowledge of the state at time $t$ and the set of observations extracted from the last turn that is composed with the system and user utterances, (2) estimating the missing values of interest, i.e. the likelihood of each value of each variable that constitutes the state at time $(t+1)$ , $s_{t+1}$ , by computing the cross-product between the transition embedding calculated in (1) and the corresponding column embeddings of $E$ , and of the value of each variable of $s_{t+1}$ . More precisely, we write this decomposition as ", "$$M = A.E^T$$ (Eq. 15) ", "where $M$ is the matrix of data to decompose and $.$ the matrix-matrix product operator. As in the previous section, $A$ has a row for each transition embedding, and $E$ has a column for each variable-value embedding in the form of a zero-one encoding. When a new row of observations $m_i$ for a new set of variables state $s_i$ and observations $z_i$ and $E$ is fixed, the purpose of the prediction task is to find the row $a_i$ of $A$ such that: ", "$$a_i.E^T \\approx m^T_i$$ (Eq. 16) ", "Even if it is generally difficult to require these to be equal, we can require that these last elements have the same projection into the latent space: ", "$$a_i^T.E^T.E = m_i^T.E$$ (Eq. 17) ", "Then, the classic closed form solution of a linear regression task can be derived: ", "$$a_i^T = m_i^T.E.(E^T.E)^{-1} \\\\\na_i = (E^T.E)^{-1}.E^T.m_i$$ (Eq. 18) ", "In fact, Equation 18 is the optimal value of the embedding of the transition $m_i$ , assuming a quadratic loss is used. Otherwise it is an approximation, in the case of a matrix decomposition of $M$ using a logistic loss for example. Note that, in equation 18 , $\n(E^T.E)^{-1}$ requires a matrix inversion, but for a low dimensional matrix (the size of the latent space). Several advantages can be identified in this approach. First, at learning time, alternative ridge regression is computationally efficient because a closed form solution exists at each step of the optimization process employed to infer the parameters, i.e the low rank matrices, of the model. 
Second, at decision time, the state tracking procedure consists of (1) computing the embedding $a$ of the current transition using the current state estimation $s_t$ and the current observation set $z_t$ and (2) computing the distribution over the state, defined as a vector-matrix product between $a$ and the latent matrix $E$ . Finally, this inference method can be partially associated with the general technique of matrix completion. However, a proper matrix completion task would require a matrix $M$ with missing values corresponding to the exhaustive list of possible triples $\lbrace s_t, s_{t+1}, z_t\rbrace $ , which is obviously intractable to represent and decompose." ], [ "First, the dialog domain used for the evaluation of our dialog tracker is described, together with the different probability models used for the domain. Second, we present a first set of experimental results obtained with the proposed approach and compare them to several reported results of state-of-the-art approaches." ], [ "We used the DSTC-2 dialog domain as described in BIBREF21 , in which the user queries a database of local restaurants by interacting with a dialog system. The dataset for the restaurant information domain was originally collected using Amazon Mechanical Turk. A usual dialog proceeds as follows: first, the user specifies his personal set of constraints concerning the restaurant he is looking for. Then, the system offers the name of a restaurant that satisfies the constraints. The user then accepts the offer and requests additional information about the accepted restaurant. The dialog ends when all the information requested by the user is provided. In this context, the dialog state tracker should be able to track several types of information that compose the state, such as the geographic area, the food type, the name and the price range slots. In this paper, we restrict ourselves to tracking these variables, but our tracker can easily be set up to track others as well if they are properly specified. The dialog state tracker updates its belief turn by turn, receiving evidence from the NLU module with the actual utterance produced by the user. In this experiment, we chose to restrict the output of the NLU module to the bag of words of the user utterances in order to be comparable with the most recent approaches to state tracking, like the one proposed in BIBREF5 , that only use such information as evidence. One important benefit of such an approach is that it dramatically simplifies the state tracking process by removing the separate NLU task. In fact, NLU is mainly formalized in current approaches as a supervised learning task. The task of the dialog state tracker is to generate a set of possible states and their confidence scores for each slot, with the confidence score corresponding to the posterior probability of each variable state w.r.t. the current estimation of the state and the current evidence. Finally, the dialog state tracker also maintains a special variable state, called None, which indicates that a given variable composing the state has not been observed yet. For the rest of this section, we present experimental results of state tracking obtained on this dataset and compare them with state-of-the-art generative and discriminative approaches." 
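Before turning to the results, the tracking step of the Prediction method section (Equation 18) can be illustrated by continuing the NumPy sketch given after the learning method above; it reuses np, f, and E from that sketch, and the column ranges used for the $s_t$, $s_{t+1}$, and $z_t$ blocks are assumptions of the example.

```python
# One tracking step at test time, continuing the ALS sketch above.
# Build the sparse encoding of the current transition: the s_t and bag-of-words
# blocks are filled in, while the s_{t+1} block (columns 20-39 in this sketch)
# is left at zero because it is precisely what we want to predict.
m_new = np.zeros(f)
m_new[:20] = (np.random.rand(20) < 0.1).astype(float)      # stand-in for the s_t encoding
m_new[40:] = (np.random.rand(f - 40) < 0.1).astype(float)   # stand-in for the bag of words z_t

# Eq. 18: closed-form embedding of the transition; E is k x f in this sketch,
# so it plays the role of E^T in the paper's notation.
a_new = np.linalg.inv(E @ E.T) @ (E @ m_new)

# Scores for every value of every s_{t+1} variable: a vector-matrix product
# between the transition embedding and the corresponding feature embeddings,
# to be normalized per variable to obtain the tracker's belief.
s_next_scores = a_new @ E[:, 20:40]
```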
], [ "As a comparison to the state of the art methods, Table 1 presents accuracy results of the best Collective Matrix Factorization model, with a latent space dimension of 350, which has been determined by cross-validation on a development set, where the value of each slot is instantiated as the most probable w.r.t the inference procedure presented in Section \"Spectral decomposition model for state tracking in slot-filling dialogs\" . In our experiments, the variance is estimated using standard dataset reshuffling. The same results are obtained for several state of the art methods of generative and discriminative state tracking on this dataset using the publicly available results as reported in BIBREF22 . More precisely, as provided by the state-of-the-art approaches, the accuracy scores computes $p(s^*_{t+1}|s_t,z_t)$ commonly name the joint goal. Our proposition is compared to the 4 baseline trackers provided by the DSTC organisers. They are the baseline tracker (Baseline), the focus tracker (Focus), the HWU tracker (HWU) and the HWU tracker with “original” flag set to (HWU+) respectively. Then a comparison to a maximum entropy (MaxEnt) proposed in BIBREF23 type of discriminative model and finally a deep neural network (DNN) architecture proposed in BIBREF24 as reported also in BIBREF22 is presented." ], [ "As depicted in Section \"Generative Dialog State Tracking\" , the litterature of the domain can mainly decomposed into three family of approaches, rule-based, generative and discriminative. In previous works on this topics, BIBREF25 formally used particle filters to perform inference in a Bayesian network modeling of the dialog state, BIBREF26 presented a generative tracker and showed how to train an observation model from transcribed data, BIBREF27 grouped indistinguishable dialog states into partitions and consequently performed dialog state tracking on these partitions instead of the individual states, BIBREF11 used a dynamic Bayesian network to represent the dialog model in an approximate form. So, most attention in the dialog state belief tracking literature has been given to generative Bayesian network models until recently as proposed in BIBREF28 and BIBREF11 . On the other hand, the successful use of discriminative models for belief tracking has recently been reported by BIBREF29 and BIBREF5 and was a major theme in the results of the recent edition of the Dialog State Tracking Challenge. In this paper, a latent decomposition type of approach is proposed in order to address this general problem of dialog system. Our method gives encouraging results in comparison to the state of the art dataset and also does not required complex inference at test time because, as detailed in Section \"Spectral decomposition model for state tracking in slot-filling dialogs\" , the tracking algorithm hold a linear complexity w.r.t the sum of realization of each considered variables defining the state to track which is what we believe is one of the main advantage of this method. Secondly collective matrix factorization paradigm also for data fusion and bias by data type of modeling as successfully performed in matrix factorization based recommender systems BIBREF30 ." ], [ "In this paper, a methodology and algorithm for efficient state tracking in the context of slot-filling dialogs has been presented. 
The proposed probabilistic model and inference algorithm allow efficient handling of dialog management in the context of classic dialog schemes that constitute a large part of task-oriented dialog tasks. More precisely, such a system allows efficient tracking of hidden variables defining the user goal using any kind of available evidence, from utterance bag-of-words to the output of a Natural Language Understanding module. Our current investigations on this subject benefit from distributional word representations, as proposed in BIBREF31 , to cope with the question of unknown words and unknown slots, as suggested in BIBREF32 . In summary, the proposed approach differentiates itself from the prior art by the following points: (1) it produces a joint probability model of the hidden variable transitions in a given dialog and of the observations, which allows tracking the current beliefs about the user goals while explicitly considering potential interdependencies between state variables; (2) it proposes the necessary computational framework, based on collective matrix factorization, to efficiently infer the distribution over the state variables in order to derive an adequate information-seeking dialog policy in this context. Finally, while transactional dialog tracking is mainly useful in the context of autonomous dialog management, the technology can also be used in dialog machine reading and knowledge extraction from human-to-human dialog corpora, as proposed in the fourth edition of the Dialog State Tracking Challenge." ] ] }
{ "question": [ "What state-of-the-art models are compared against?" ], "question_id": [ "73abb173a3cc973ab229511cf53b426865a2738b" ], "nlp_background": [ "infinity" ], "topic_background": [ "familiar" ], "paper_read": [ "no" ], "search_query": [ "dialog" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "a deep neural network (DNN) architecture proposed in BIBREF24 ", "maximum entropy (MaxEnt) proposed in BIBREF23 type of discriminative model" ], "yes_no": null, "free_form_answer": "", "evidence": [ "As a comparison to the state of the art methods, Table 1 presents accuracy results of the best Collective Matrix Factorization model, with a latent space dimension of 350, which has been determined by cross-validation on a development set, where the value of each slot is instantiated as the most probable w.r.t the inference procedure presented in Section \"Spectral decomposition model for state tracking in slot-filling dialogs\" . In our experiments, the variance is estimated using standard dataset reshuffling. The same results are obtained for several state of the art methods of generative and discriminative state tracking on this dataset using the publicly available results as reported in BIBREF22 . More precisely, as provided by the state-of-the-art approaches, the accuracy scores computes $p(s^*_{t+1}|s_t,z_t)$ commonly name the joint goal. Our proposition is compared to the 4 baseline trackers provided by the DSTC organisers. They are the baseline tracker (Baseline), the focus tracker (Focus), the HWU tracker (HWU) and the HWU tracker with “original” flag set to (HWU+) respectively. Then a comparison to a maximum entropy (MaxEnt) proposed in BIBREF23 type of discriminative model and finally a deep neural network (DNN) architecture proposed in BIBREF24 as reported also in BIBREF22 is presented." ], "highlighted_evidence": [ "Then a comparison to a maximum entropy (MaxEnt) proposed in BIBREF23 type of discriminative model and finally a deep neural network (DNN) architecture proposed in BIBREF24 as reported also in BIBREF22 is presented.\n\n" ] } ], "annotation_id": [ "0f1c509049f53c831e6715cbbe308ae4340e1b37" ], "worker_id": [ "f320efb1fbb744616e420aaf8da0f9622b75b2ed" ] } ] }
{ "caption": [ "Figure 1: Prototypical transactional dialog management process, also called slot-filling dialog management", "Figure 2: Generative Dialog State Tracking using a factorial HMM", "Figure 3: Spectral State Tracking, Collective Matrix Factorization model as inference procedure", "Table 1: Accuracy of the proposed model on the DSTC-2 test-set" ], "file": [ "4-Figure1-1.png", "4-Figure2-1.png", "6-Figure3-1.png", "9-Table1-1.png" ] }
2002.00876
Torch-Struct: Deep Structured Prediction Library
The literature on structured prediction for NLP describes a rich collection of distributions and algorithms over sequences, segmentations, alignments, and trees; however, these algorithms are difficult to utilize in deep learning frameworks. We introduce Torch-Struct, a library for structured prediction designed to take advantage of and integrate with vectorized, auto-differentiation based frameworks. Torch-Struct includes a broad collection of probabilistic structures accessed through a simple and flexible distribution-based API that connects to any deep learning model. The library utilizes batched, vectorized operations and exploits auto-differentiation to produce readable, fast, and testable code. Internally, we also include a number of general-purpose optimizations to provide cross-algorithm efficiency. Experiments show significant performance gains over fast baselines and case-studies demonstrate the benefits of the library. Torch-Struct is available at this https URL.
{ "section_name": [ "Introduction", "Related Work", "Motivating Case Study", "Library Design", "Technical Approach ::: Conditional Random Fields", "Technical Approach ::: Dynamic Programming and Semirings", "Optimizations", "Optimizations ::: a) Parallel Scan Inference", "Optimizations ::: b) Vectorized Parsing", "Optimizations ::: c) Semiring Matrix Operations", "Conclusion and Future Work", "Acknowledgements" ], "paragraphs": [ [ "Structured prediction is an area of machine learning focusing on representations of spaces with combinatorial structure, and algorithms for inference and parameter estimation over these structures. Core methods include both tractable exact approaches like dynamic programming and spanning tree algorithms as well as heuristic techniques such linear programming relaxations and greedy search.", "Structured prediction has played a key role in the history of natural language processing. Example methods include techniques for sequence labeling and segmentation BIBREF0, BIBREF4, discriminative dependency and constituency parsing BIBREF10, BIBREF8, unsupervised learning for labeling and alignment BIBREF11, BIBREF12, approximate translation decoding with beam search BIBREF9, among many others.", "In recent years, research into deep structured prediction has studied how these approaches can be integrated with neural networks and pretrained models. One line of work has utilized structured prediction as the final final layer for deep models BIBREF13, BIBREF14. Another has incorporated structured prediction within deep learning models, exploring novel models for latent-structure learning, unsupervised learning, or model control BIBREF15, BIBREF16, BIBREF17. We aspire to make both of these use-cases as easy to use as standard neural networks.", "The practical challenge of employing structured prediction is that many required algorithms are difficult to implement efficiently and correctly. Most projects reimplement custom versions of standard algorithms or focus particularly on a single well-defined model class. This research style makes it difficult to combine and try out new approaches, a problem that has compounded with the complexity of research in deep structured prediction.", "With this challenge in mind, we introduce Torch-Struct with three specific contributions:", "Modularity: models are represented as distributions with a standard flexible API integrated into a deep learning framework.", "Completeness: a broad array of classical algorithms are implemented and new models can easily be added in Python.", "Efficiency: implementations target computational/memory efficiency for GPUs and the backend includes extensions for optimization.", "In this system description, we first motivate the approach taken by the library, then present a technical description of the methods used, and finally present several example use cases." ], [ "Several software libraries target structured prediction. Optimization tools, such as SVM-struct BIBREF18, focus on parameter estimation. Model libraries, such as CRFSuite BIBREF19 or CRF++ BIBREF20, implement inference for a fixed set of popular models, such as linear-chain CRFs. General-purpose inference libraries, such as PyStruct BIBREF21 or TurboParser BIBREF22, utilize external solvers for (primarily MAP) inference such as integer linear programming solvers and ADMM. 
Probabilistic programming languages, for example languages that integrate with deep learning such as Pyro BIBREF23, allow for specification and inference over some discrete domains. Most ambitiously, inference libraries such as Dyna BIBREF24 allow for declarative specifications of dynamic programming algorithms to support inference for generic algorithms. Torch-Struct takes a different approach and integrates a library of optimized structured distributions into a vectorized deep learning system. We begin by motivating this approach with a case study." ], [ "While structured prediction is traditionally presented at the output layer, recent applications have deployed structured models broadly within neural networks BIBREF15, BIBREF25, BIBREF16. Torch-Struct aims to encourage this general use case.", "To illustrate, we consider a latent tree model. ListOps BIBREF26 is a dataset of mathematical functions. Each data point consists of a prefix expression $x$ and its result $y$, e.g.", "Models such as a flat RNN will fail to capture the hierarchical structure of this task. However, if a model can induce an explicit latent $z$, the parse tree of the expression, then the task is easy to learn by a tree-RNN model $p(y | x, z)$ BIBREF16, BIBREF27.", "A popular approach is a latent-tree RL model, which we briefly summarize. The objective is to maximize the probability of the correct prediction under the expectation of a prior tree model, $p(z|x ;\phi )$,", "Computing the expectation is intractable so policy gradient is used. First, a tree is sampled $\tilde{z} \sim p(z | x;\phi )$, then the gradient with respect to $\phi $ is approximated as,", "where $b$ is a variance reduction baseline. A common choice is the self-critical baseline BIBREF28,", "Finally, an entropy regularization term is added to the objective to encourage exploration of different trees, $ O + \lambda \mathbb {H}(p(z\ |\ x;\phi ))$.", "Even in this brief overview, we can see how complex a latent structured learning problem can be. To compute these terms, we need 5 different properties of the tree model $p(z\ | x; \phi )$:", "Policy gradient, $\tilde{z} \sim p(z \ |\ x ; \phi )$", "Score policy samples, $p(z \ | \ x; \phi )$", "Backpropagation, $\frac{\partial }{\partial \phi } p(z\ |\ x; \phi )$", "Self-critical, $\arg \max _z p(z \ |\ x;\phi )$", "Objective regularizer, $\mathbb {H}(p(z\ |\ x;\phi ))$", "For structured models, each of these terms is non-trivial to compute. A goal of Torch-Struct is to make it seamless to deploy structured models for these complex settings. To demonstrate this, Torch-Struct includes an implementation of this latent-tree approach. With a minimal amount of user code, the implementation achieves near perfect accuracy on the ListOps dataset." ], [ "The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29. For each structured model in the library, we define a conditional random field (CRF) distribution object. From a user's standpoint, this object provides all necessary distributional properties. Given log-potentials (scores) output from a deep network $\ell $, the user can request samples $z \sim \textsc {CRF}(\ell )$, probabilities $\textsc {CRF}(z;\ell )$, modes $\arg \max _z \textsc {CRF}(\ell )$, or other distributional properties such as $\mathbb {H}(\textsc {CRF}(\ell ))$. 
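As an illustration of this distribution-style interface, the snippet below sketches how such an object is queried for a linear-chain model. The class, attribute, and shape conventions follow our reading of the public torch_struct package and should be treated as an approximate sketch rather than authoritative documentation.

```python
import torch
from torch_struct import LinearChainCRF

batch, N, C = 8, 20, 10
# Log-potentials over adjacent label pairs, e.g. produced by any neural encoder.
log_potentials = torch.randn(batch, N - 1, C, C, requires_grad=True)

dist = LinearChainCRF(log_potentials)

marginals = dist.marginals     # edge marginals, same shape as the log-potentials
best = dist.argmax             # parts of the highest-scoring sequence
sample = dist.sample((1,))     # one sampled structure per batch element
entropy = dist.entropy         # H(CRF(l)), usable as a regularizer
log_p = dist.log_prob(best)    # score an arbitrary structure
```

Because every returned quantity is an ordinary tensor, the five properties needed for the latent-tree policy-gradient recipe above (sampling, scoring, backpropagation, a self-critical argmax baseline, and entropy regularization) can all be obtained from the same object.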
The library is agnostic to how these are utilized, and when possible, they allow for backpropagation to update the input network. The same distributional object can be used for standard output prediction as well as for more complex operations like attention or reinforcement learning.", "Figure FIGREF11 demonstrates this API for a binary tree CRF over an ordered sequence, such as $p(z \ | \ y ;\phi )$ from the previous section. The distribution takes in log-potentials $\ell $ which score each possible span in the input. The distribution converts these to probabilities of a specific tree. This distribution can be queried for predicting over the set of trees, sampling a tree for model structure, or even computing entropy over all trees.", "Table TABREF2 shows all of the structures and distributions implemented in Torch-Struct. While each is internally implemented using different specialized algorithms and optimizations, from the user's perspective they all utilize the same external distributional API, and pass a generic set of distributional tests. This approach hides the internal complexity of the inference procedure, while giving the user full access to the model." ], [ "We now describe the technical approach underlying the library. To establish notation, first consider the implementation of a categorical distribution, Cat($\ell $), with one-hot categories $z$ with $z_i = 1$ from a set $\cal Z$ and probabilities given by the softmax,", "Define the log-partition as $A(\ell ) = \mathrm {LSE}(\ell )$, i.e. the log of the denominator, where $\mathrm {LSE}$ is the log-sum-exp operator. Computing probabilities or sampling from this distribution requires enumerating $\cal Z$ to compute the log-partition $A$. A useful identity is that derivatives of $A$ yield category probabilities,", "Other distributional properties can be similarly extracted from variants of the log-partition. For instance, define $A^*(\ell ) = \log \max _{j=1}^K \exp \ell _j$ ; then $\mathbb {I}(z^*_i = 1) = \frac{\partial }{\partial \ell _i} A^*(\ell ) $.", "Conditional random fields, CRF($\ell $), extend the softmax to combinatorial spaces where ${\cal Z}$ is exponentially sized. Each $z$ is now represented as a binary vector over a polynomial-sized set of parts, $\cal P$, i.e. ${\cal Z} \subset \lbrace 0, 1\rbrace ^{|\cal P|}$. Similarly, log-potentials are now defined over parts $\ell \in \mathbb {R}^{|\cal P|}$. For instance, in Figure FIGREF11 each span is a part and the $\ell $ vector is shown in the top-left figure. Define the probability of a structure $z$ as,", "Computing probabilities or sampling from this distribution requires computing the log-partition term $A$. In general, computing this term is now intractable; however, for many core algorithms in NLP there exist efficient combinatorial algorithms for this term (as enumerated in Table TABREF2).", "Derivatives of the log-partition again provide distributional properties. For instance, the marginal probabilities of parts are given by,", "Similarly, derivatives of $A^*$ correspond to whether a part appears in the argmax structure, $\mathbb {I}(z^*_p = 1) = \frac{\partial }{\partial \ell _p} A^*(\ell ) $.", "While these gradient identities are well-known BIBREF30, they are not commonly deployed. Computing CRF properties is typically done through two-step specialized algorithms, such as forward-backward, inside-outside, or similar variants such as viterbi-backpointers BIBREF31. 
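The identity that derivatives of the log-partition recover probabilities is easy to verify numerically. The following plain-PyTorch check (not part of the library's API) does so for the categorical case described above, for both the log-sum-exp and the max variants.

```python
import torch

logits = torch.randn(5, requires_grad=True)

# A(l) = logsumexp(l): its gradient is exactly the softmax, i.e. the category probabilities.
A = torch.logsumexp(logits, dim=-1)
A.backward()
assert torch.allclose(logits.grad, torch.softmax(logits, dim=-1))

# The max variant A*(l): its gradient is a one-hot indicator of the argmax category.
logits.grad = None
A_star = logits.max()
A_star.backward()
print(logits.grad)  # one-hot vector selecting argmax_j l_j
```

The structured case works the same way, with the explicit enumeration over $\cal Z$ replaced by one of the combinatorial log-partition algorithms enumerated in Table TABREF2.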
In our experiments, we found that using these identities with auto-differentiation on GPU was often faster, and much simpler, than custom two-pass approaches. Torch-Struct is thus designed around using gradients for distributional computations." ], [ "Torch-Struct is a collection of generic algorithms for CRF inference. Each CRF distribution object, $\textsc {CRF}(\ell )$, is constructed by providing $\ell \in \mathbb {R}^{|{\cal P}|}$ where the parts $\cal P$ are specific to the type of distribution. Internally, each distribution is implemented through a single Python function for computing the log-partition function $A(\ell )$. From this function, the library uses auto-differentiation and the identities from the previous section to define a complete distribution object. The core models implemented by the library are shown in Table TABREF2.", "To make the approach concrete, we consider the example of a linear-chain CRF.", "(Diagram: a first-order linear chain over three latent variables $z_1$ – $z_2$ – $z_3$.)", "The model has $C$ labels per node and $T=2$ edges, utilizing a first-order linear-chain (Markov) model. This model has $2\times C \times C$ parts corresponding to edges in the chain, and thus requires $\ell \in \mathbb {R}^{2\times C \times C}$. The log-partition function $A(\ell )$ factors into two reduce computations,", "Computing this function left-to-right using dynamic programming yields the standard forward algorithm for sequence models. As we have seen, the gradient with respect to $\ell $ produces marginals for each part, i.e. the probability of a specific labeled edge.", "We can further extend the same function to support generic semiring dynamic programming BIBREF34. A semiring is defined by a pair $(\oplus , \otimes )$ with commutative $\oplus $, distributivity, and appropriate identities. The log-partition utilizes $\oplus , \otimes = \mathrm {LSE}, +$, but we can substitute alternatives.", "For instance, utilizing the log-max semiring $(\max , +)$ in the forward algorithm yields the max score. As we have seen, its gradient with respect to $\ell $ is the argmax sequence, negating the need for a separate argmax (Viterbi) algorithm. Some distributional properties cannot be computed directly through gradient identities but still use a forward-backward style compute structure. For instance, sampling requires first computing the log-partition term and then sampling each part (forward filtering / backward sampling). We can compute this value by overriding each backpropagation operation for the $\bigoplus $ to instead compute a sample.", "Table TABREF16 shows the set of semirings and backpropagation steps for computing different terms of interest. We note that many of the terms necessary in the case study can be computed with variant semirings, negating the need for specialized algorithms." ], [ "Torch-Struct aims for computational and memory efficiency. Implemented naively, dynamic programming algorithms in Python are prohibitively slow. As such, Torch-Struct provides key primitives to help batch and vectorize these algorithms to take advantage of GPU computation and to minimize the overhead of backpropagating through chart-based dynamic programming. Figure FIGREF17 shows the impact of these optimizations on the core algorithms." ], [ "The commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\ell )$. 
Typical implementations of dynamic programming algorithms are serial in the length of the sequence. On parallel hardware, an appealing approach is a parallel scan ordering BIBREF35, typically used for computing prefix sums. To compute $A(\ell )$ in this manner, we first pad the sequence length $T$ out to the nearest power of two, and then compute a balanced parallel tree over the parts, shown in Figure FIGREF21. Concretely, each node layer would compute a semiring matrix multiplication, e.g. $ \bigoplus _c \ell _{t, \cdot , c} \otimes \ell _{t^{\prime }, c, \cdot }$. Under this approach, we only need $O(\log N)$ steps in Python and can use parallel GPU operations for the rest. A similar parallel approach can also be used for computing sequence alignment and semi-Markov models." ], [ "Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized. The log-partition for parsing is computed with the Inside algorithm. This algorithm must compute each width from 1 through T in serial; however, it is important to parallelize each inner step. Assuming we have computed all inside spans of width less than $d$, computing the inside span of width $d$ requires computing for all $i$,", "In order to vectorize this loop over $i, j$, we reindex the chart. Instead of using a single chart $C$, we split it into two parts: one right-facing, $C_r[i, d] = C[i, i+d]$, and one left-facing, $C_l[i+d, T-d] = C[i, i+d]$. After this reindexing, the update can be written.", "Unlike the original, this formula can easily be computed as a vectorized semiring dot product. This allows us to compute $C_r[\cdot , d]$ in one operation. Variants of this same approach can be used for all the parsing models employed." ], [ "The two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\sum , \times )$ semiring these can be computed very efficiently using matrix multiplication, which is highly-tuned on GPU hardware. Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sizes $N \times M$ and $M \times O$, we can broadcast with $\otimes $ to a tensor of size $N \times M \times O$ and then reduce dim $M$ by $\bigoplus $ at a huge memory cost. To avoid this issue, we implement custom CUDA kernels targeting fast and memory efficient tensor operations. For log, this corresponds to computing,", "where $q = \max _n T_{m,n} + U_{n, o}$. To optimize this operation on GPU we utilize the TVM language BIBREF36 to lay out the CUDA loops and tune it to hardware." ], [ "We present Torch-Struct, a library for deep structured prediction. The library achieves modularity through its adoption of a generic distributional API, completeness by utilizing CRFs and semirings to make it easy to add new algorithms, and efficiency through core optimizations to vectorize important dynamic programming steps. In addition to the problems discussed so far, Torch-Struct also includes several other example implementations including supervised dependency parsing with BERT, unsupervised tagging, structured attention, and connectionist temporal classification (CTC) for speech. The full library is available at https://github.com/harvardnlp/pytorch-struct.", "In the future, we hope to support research and production applications employing structured models. 
We also believe the library provides a strong foundation for building generic tools for interpretability, control, and visualization through its probabilistic API. Finally, we hope to explore further optimizations to make core algorithms competitive with highly-optimized neural network components." ], [ "We thank Yoon Kim, Xiang Lisa Li, Sebastian Gehrmann, Yuntian Deng, and Justin Chiu for discussion and feedback on the project. The project was supported by NSF CAREER 1845664, NSF 1901030, and research awards by Sony and AWS." ] ] }
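As a supplement to the optimizations described in the paper above, the following plain-PyTorch sketch spells out what the log-semiring matrix product computes and how the linear-chain log-partition reduces to a chain of such products. The naive broadcast inside log_matmul is exactly the memory-hungry pattern that the custom TVM/CUDA kernels are designed to replace, and all sizes are illustrative only.

```python
import torch

def log_matmul(T: torch.Tensor, U: torch.Tensor) -> torch.Tensor:
    """(logsumexp, +) semiring product: out[n, o] = log sum_m exp(T[n, m] + U[m, o]).

    This naive version broadcasts to an N x M x O tensor before reducing,
    which is the memory cost the specialized kernels avoid; torch.logsumexp
    handles the numerical stabilization internally.
    """
    return torch.logsumexp(T.unsqueeze(-1) + U.unsqueeze(-3), dim=-2)

# Linear-chain log-partition as repeated log-semiring products (forward algorithm).
# Because log_matmul is associative, the T-1 products can also be combined
# pairwise in O(log T) parallel steps, which is the parallel-scan ordering.
C, T_len = 5, 8
edges = torch.randn(T_len - 1, C, C)   # edge log-potentials
alpha = torch.zeros(1, C)              # uniform start scores in log space
for t in range(T_len - 1):
    alpha = log_matmul(alpha, edges[t])
log_Z = torch.logsumexp(alpha, dim=-1)
```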
{ "question": [ "Does API provide ability to connect to models written in some other deep learning framework?", "Is this library implemented into Torch or is framework agnostic?", "What baselines are used in experiments?", "What general-purpose optimizations are included?" ], "question_id": [ "1d9b953a324fe0cfbe8e59dcff7a44a2f93c568d", "093039f974805952636c19c12af3549aa422ec43", "8df89988adff57279db10992846728ec4f500eaa", "94edac71eea1e78add678fb5ed2d08526b51016b" ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29. For each structured model in the library, we define a conditional random field (CRF) distribution object. From a user's standpoint, this object provides all necessary distributional properties. Given log-potentials (scores) output from a deep network $\\ell $, the user can request samples $z \\sim \\textsc {CRF}(\\ell )$, probabilities $\\textsc {CRF}(z;\\ell )$, modes $\\arg \\max _z \\textsc {CRF}(\\ell )$, or other distributional properties such as $\\mathbb {H}(\\textsc {CRF}(\\ell ))$. The library is agnostic to how these are utilized, and when possible, they allow for backpropagation to update the input network. The same distributional object can be used for standard output prediction as for more complex operations like attention or reinforcement learning." ], "highlighted_evidence": [ "The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29." ] } ], "annotation_id": [ "83b0d2c9df28b611f74cbc625a6fa50df1bba8ae" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "It uses deep learning framework (pytorch)", "evidence": [ "With this challenge in mind, we introduce Torch-Struct with three specific contributions:", "Modularity: models are represented as distributions with a standard flexible API integrated into a deep learning framework.", "Completeness: a broad array of classical algorithms are implemented and new models can easily be added in Python.", "Efficiency: implementations target computational/memory efficiency for GPUs and the backend includes extensions for optimization." ], "highlighted_evidence": [ "With this challenge in mind, we introduce Torch-Struct with three specific contributions:\n\nModularity: models are represented as distributions with a standard flexible API integrated into a deep learning framework.\n\nCompleteness: a broad array of classical algorithms are implemented and new models can easily be added in Python.\n\nEfficiency: implementations target computational/memory efficiency for GPUs and the backend includes extensions for optimization." 
] } ], "annotation_id": [ "363475920554b38997e8edef0aafd969ed8e7fcc" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Typical implementations of dynamic programming algorithms are serial in the length of the sequence", "Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized", "Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Optimizations ::: a) Parallel Scan Inference", "The commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence. On parallel hardware, an appealing approach is a parallel scan ordering BIBREF35, typically used for computing prefix sums. To compute, $A(\\ell )$ in this manner we first pad the sequence length $T$ out to the nearest power of two, and then compute a balanced parallel tree over the parts, shown in Figure FIGREF21. Concretely each node layer would compute a semiring matrix multiplication, e.g. $ \\bigoplus _c \\ell _{t, \\cdot , c} \\otimes \\ell _{t^{\\prime }, c, \\cdot }$. Under this approach, we only need $O(\\log N)$ steps in Python and can use parallel GPU operations for the rest. Similar parallel approach can also be used for computing sequence alignment and semi-Markov models.", "Optimizations ::: b) Vectorized Parsing", "Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized. The log-partition for parsing is computed with the Inside algorithm. This algorithm must compute each width from 1 through T in serial; however it is important to parallelize each inner step. Assuming we have computed all inside spans of width less than $d$, computing the inside span of width $d$ requires computing for all $i$,", "Optimizations ::: c) Semiring Matrix Operations", "The two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\\sum , \\times )$ semiring these can be computed very efficiently using matrix multiplication, which is highly-tuned on GPU hardware. Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sized $N \\times M$ and $M \\times O$, we can broadcast with $\\otimes $ to a tensor of size $N \\times M \\times O$ and then reduce dim $M$ by $\\bigoplus $ at a huge memory cost. To avoid this issue, we implement custom CUDA kernels targeting fast and memory efficient tensor operations. For log, this corresponds to computing," ], "highlighted_evidence": [ "Parallel Scan Inference\nThe commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence.", "Vectorized Parsing\nComputational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized.", "Semiring Matrix Operations\nThe two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\\sum , \\times )$ semiring these can be computed very efficiently using matrix multiplication, which is highly-tuned on GPU hardware. 
Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sized $N \\times M$ and $M \\times O$, we can broadcast with $\\otimes $ to a tensor of size $N \\times M \\times O$ and then reduce dim $M$ by $\\bigoplus $ at a huge memory cost." ] } ], "annotation_id": [ "41a5e7f9002bc00be615405addaa6e72f4201759" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Parallel Scan Inference", "Vectorized Parsing", "Semiring Matrix Operations" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Optimizations ::: a) Parallel Scan Inference", "Optimizations ::: b) Vectorized Parsing", "Optimizations ::: c) Semiring Matrix Operations", "Torch-Struct aims for computational and memory efficiency. Implemented naively, dynamic programming algorithms in Python are prohibitively slow. As such Torch-Struct provides key primitives to help batch and vectorize these algorithms to take advantage of GPU computation and to minimize the overhead of backpropagating through chart-based dynamic programmming. Figure FIGREF17 shows the impact of these optimizations on the core algorithms." ], "highlighted_evidence": [ "a) Parallel Scan Inference", "b) Vectorized Parsing", "c) Semiring Matrix Operations", "Torch-Struct aims for computational and memory efficiency. Implemented naively, dynamic programming algorithms in Python are prohibitively slow. As such Torch-Struct provides key primitives to help batch and vectorize these algorithms to take advantage of GPU computation and to minimize the overhead of backpropagating through chart-based dynamic programmming." ] } ], "annotation_id": [ "0f255bdea6c34801b2ab038ea6710f9481bc417a" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: Distribution of binary trees over an 1000- token sequence. Coloring shows the marginal probabilities of every span. Torch-Struct is an optimized collection of common CRF distributions used in NLP designed to integrate with deep learning frameworks.", "Table 1: Models and algorithms implemented in Torch-Struct. Notation is developed in Section 5. Parts are described in terms of sequence lengths N,M , label size C, segment length K, and layers / grammar size L,G. Lines of code (LoC) is from the log-partition (A(`)) implementation. T/S is the tokens per second of a batched computation, computed with batch 32, N = 25, C = 20,K = 5, L = 3 (K80 GPU run on Google Colab).", "Figure 2: Latent Tree CRF example. (a) Logpotentials ` for each part/span. (b) Marginals for CRF(`) computed by backpropagation. (c) Mode tree argmaxz CRF(z; `). (d) Sampled tree z ∼ CRF(`).", "Table 2: (Top) Semirings implemented in Torch-Struct. Backprop/Gradients gives overridden backpropagation computation and value computed by this combination. (Bot) Example of gradients from different semirings on sequence alignment with dynamic time warping.", "Figure 3: Speed impact of optimizations. Time is given in seconds for 10 runs with batch 16 executed on Google Colab. (a) Speed of a linear-chain forward with 20 classes for lengths up to 500. Compares left-to-right ordering to parallel scan. (b) Speed of CKY inside with lengths up to 80. Compares inner loop versus vectorization. (c) Speed of linear-chain forward of length 20 with up to 100 classes. Compares broadcast-reduction versus CUDA semiring kernel. (Baseline memory is exhausted after 100 classes.)", "Figure 4: Parallel scan implementation of the linearchain CRF inference algorithm. Here ⊕ ⊗ represents a semiring matrix operation and I is padding." ], "file": [ "1-Figure1-1.png", "2-Table1-1.png", "3-Figure2-1.png", "4-Table2-1.png", "6-Figure3-1.png", "6-Figure4-1.png" ] }
1906.10519
Embedding Projection for Targeted Cross-Lingual Sentiment: Model Comparisons and a Real-World Study
Sentiment analysis benefits from large, hand-annotated resources in order to train and test machine learning models, which are often data hungry. While some languages, e.g., English, have a vast array of these resources, most under-resourced languages do not, especially for fine-grained sentiment tasks, such as aspect-level or targeted sentiment analysis. To improve this situation, we propose a cross-lingual approach to sentiment analysis that is applicable to under-resourced languages and takes into account target-level information. This model incorporates sentiment information into bilingual distributional representations, by jointly optimizing them for semantics and sentiment, showing state-of-the-art performance at sentence-level when combined with machine translation. The adaptation to targeted sentiment analysis on multiple domains shows that our model outperforms other projection-based bilingual embedding methods on binary targeted sentiment tasks. Our analysis on ten languages demonstrates that the amount of unlabeled monolingual data has surprisingly little effect on the sentiment results. As expected, the choice of annotated source language for projection to a target leads to better results for source-target language pairs which are similar. Therefore, our results suggest that more efforts should be spent on the creation of resources for less similar languages to those which are resource-rich already. Finally, a domain mismatch leads to a decreased performance. This suggests resources in any language should ideally cover varieties of domains.
{ "section_name": [ "Targeted Sentiment Classification", "Cross-Lingual Approaches to Sentiment Analysis", "Bilingual Distributional Models and the Contributions of this Paper", "Previous Work", "Machine Translation Based Methods", "Bilingual Embedding Methods", "Sentiment Embeddings", "Targeted Sentiment Analysis", "Projecting Sentiment Across Languages", "Sentence-level Model", "Targeted Model", "Experiments", "Datasets and Resources", "Setting for Experiment 1: Sentence-level Classification", "Setting for Experiment 2: Targeted Classification", "Experiment 1: Sentence-level Classification", "Experiment 2: Targeted Classification", "Motivation", "Experimental Setup", "Results", "Discussion", "Conclusion" ], "paragraphs": [ [ "Opinions are everywhere in our lives. Every time we open a book, read the newspaper, or look at social media, we scan for opinions or form them ourselves. We are cued to the opinions of others, and often use this information to update our own opinions Asch1955,Das2014. This is true on the Internet as much as it is in our face-to-face relationships. In fact, with its wealth of opinionated material available online, it has become feasible and interesting to harness this data in order to automatically identify opinions, which had previously been far more expensive and tedious when the only access to data was offline.", "Sentiment analysis, sometimes referred to as opinion mining, seeks to create data-driven methods to classify the polarity of a text. The information obtained from sentiment classifiers can then be used for tracking user opinions in different domains Pang2002,Socher2013b,Nakov2013, predicting the outcome of political elections wang2012demo,bakliwal2013, detecting hate speech online Nahar2012,hartung-EtAl:2017:WASSA2017, as well as predicting changes in the stock market Pogolu2016.", "Sentiment analysis can be modeled as a classification task, especially at sentence- and document-level, or as a sequence-labeling task at target-level. Targeted sentiment analysis aims at predicting the polarity expressed towards a particular entity or sub-aspect of that entity. This is a more realistic view of sentiment, as polarities are directed towards targets, not spread uniformly across sentences or documents. Take the following example, where we mark the sentiment target with green, positive sentiment expressions with blue, and negative sentiment expressions with red.:", "The café near my house has great coffee but I", "never go there because the service is terrible.", "In this sentence, it is not stated what the sentiment towards the target “café” is, while the sentiment of the target “coffee” is positive and that of “service” is negative. In order to correctly classify the sentiment of each target, it is necessary to (1) detect the targets, (2) detect polarity expressions, and (3) resolve the relations between these.", "In order to model these relationships and test the accuracy of the learned models, hand-annotated resources are typically used for training machine learning algorithms. Resource-rich languages, e. g., English, have high-quality annotated data for both classification and sequence-labeling tasks, as well as for a variety of domains. However, under-resourced languages either completely lack annotated data or have only a few resources for specific domains or sentiment tasks. 
For instance, for aspect-level sentiment analysis, English has datasets available in the news domain Wiebe2005, product review domain HuandLiu2004,Ding2008,Pontiki2014,Pontiki2015, education domain Welch2016, medical domain Grasser2018, urban neighborhood domain Saeidi2016, and financial Maia2018 domain. Spanish, on the other hand, has only three datasets Agerri2013,Pontiki2016, while Basque and Catalan only have one each for a single domain Barnes2018a. The cost of annotating data can often be prohibitive as training native-speakers to annotate fine-grained sentiment is a long process. This motivates the need to develop sentiment analysis methods capable of leveraging data annotated in other languages." ], [ "Previous work on cross-lingual sentiment analysis (CLSA) offers a way to perform sentiment analysis in an under-resourced language that does not have any annotated data available. Most methods relied on the availability of large amounts of parallel data to transfer sentiment information across languages. Machine translation (MT), for example, has been the most common approach to cross-lingual sentiment analysis Banea2013,Almeida2015,Zhang2017. Machine translation, however, can be biased towards domains Hua2008,Bertoldi2009,Koehn2017, does not always preserve sentiment Mohammad2016, and requires millions of parallel sentences Gavrila2011,Vaswani2017, which places a limit on which languages can benefit from these approaches. The following example illustrates that MT does not preserve sentiment (hotel review in Basque, automatically translated via translate.google.com):", "Hotel $^{1}$ txukuna da, nahiko berria. Harreran zeuden langileen arreta $^{2}$ ez zen onena izan. Tren geltoki bat $^{3}$ du 5 minutura eta kotxez $^{4}$ berehala iristen da baina oinez $^{5}$ urruti samar dago.", "The hotel $^{1}$ is tidy, quite new. The care of the workers at reception $^{2}$ was not the best. It's 5 minutes away from a train station $^{3}$ and it's quick to reach the car $^{4}$ , but it's a short distance away.", "While the first two sentences are mostly well translated for the purposes of sentiment analysis, in the third, there are a number of reformulations and deletions that lead to a loss of information. It should read “It has a train station five minutes away and by car you can reach it quickly, but by foot it's quite a distance.” We can see that one of the targets has been deleted and the sentiment has flipped from negative to positive. Such common problems degrade the results of cross-lingual sentiment systems that use MT, especially at target-level.", "Although high quality machine translation systems exist between many languages and have been shown to enable cross-lingual sentiment analysis, for the vast majority of language pairs in the world there is not enough parallel data to create these high quality MT systems. This lack of parallel data coupled with the computational expense of MT means that approaches to cross-lingual sentiment analysis that do not require MT should be preferred. Additionally, most cross-lingual sentiment approaches using MT have concentrated on sentence- and document-level, and have not explored targeted or aspect-level sentiment tasks." ], [ "Recently, several bilingual distributional semantics models (bilingual embeddings) have been proposed and provide a useful framework for cross-lingual research without requiring machine translation. 
They are effective at generating features for bilingual dictionary induction Mikolov2013translation,Artetxe2016,Lample2017, cross-lingual text classification Prettenhofer2011b,Chandar2014, or cross-lingual dependency parsing Sogaard2015, among others. In this framework, words are represented as $n$ -dimensional vectors which are created on large monolingual corpora in order to (1) maximize the similarity of words that appear in similar contexts and use some bilingual regularization in order to (2) maximize the similarity of translation pairs. In this work, we concentrate on a subset of these bilingual embedding methods that perform a post-hoc mapping to a bilingual space, which we refer to as embedding projection methods. One of the main advantages of these methods is that they make better use of small amounts of parallel data than MT systems, even enabling unsupervised machine translation Artetxe2018,Lample2018.", "With this paper, we provide the first extensive evaluation of cross-lingual embeddings for targeted sentiment tasks. We formulate the task of targeted sentiment analysis as classification, given the targets from an oracle. The question we attempt to address is how to infer the polarity of a sentiment target in a language that does not have any annotated sentiment data or parallel corpora with a resource-rich language. In the following Catalan sentence, for example, how can we determine that the sentiment of “servei” is negative, while that of “menjar” is positive if we do not have annotated data in Catalan or parallel data for English-Catalan?", "El servei al restaurant va ser péssim. Al menys el menjar era bo.", "Specifically, we propose an approach which requires (1) minimal bilingual data and instead makes use of (2) high-quality monolingual word embeddings in the source and target language. We take an intermediate step by first testing this approach on sentence-level classification. After confirming that our approach performs well at sentence-level, we propose a targeted model with the same data requirements. The main contributions are that we", "compare projection-based cross-lingual methods to MT,", "extend previous cross-lingual approaches to enable targeted cross-lingual sentiment analysis with minimal parallel data requirements,", "compare different model architectures for cross-lingual targeted sentiment analysis,", "perform a detailed error analysis, and detailing the advantages and disadvantages of each method,", "and, finally, deploy the methods in a realistic case-study to analyze their suitability beyond applications on (naturally) limited language pairs.", "In addition, we make our code and data publicly available at https://github.com/jbarnesspain/targeted_blse to support future research. The rest of the article is organized as follows: In Section \"Previous Work\" , we detail related work and motivate the need for a different approach. In Section \"Projecting Sentiment Across Languages\" , we describe both the sentence-level and targeted projection approaches that we propose. In Section \"Experiments\" , we detail the resources and experimental setup for both sentence and targeted classification. In Section \"Results\" , we describe the results of the two experiments, as well as perform a detailed error analysis. In Section \"Case Study: Real World Deployment\" , we perform a case study whose purpose is to give a more qualitative view of the models. Finally, we discuss the implications of the results in Section \"Conclusion\" ." 
], [ "Sentiment analysis has become an enormously popular task with a focus on classification approaches on individual languages, but there has not been as much work on cross-lingual approaches. In this section, we detail the most relevant work on cross-lingual sentiment analysis and lay the basis for the bilingual embedding approach we propose later." ], [ "Early work in cross-lingual sentiment analysis found that machine translation (MT) had reached a point of maturity that enabled the transfer of sentiment across languages. Researchers translated sentiment lexicons Mihalcea2007,Meng2012 or annotated corpora and used word alignments to project sentiment annotation and create target-language annotated corpora Banea2008,Duh2011a,Demirtas2013,Balahur2014d.", "Several approaches included a multi-view representation of the data Banea2010,Xiao2012 or co-training Wan2009,Demirtas2013 to improve over a naive implementation of machine translation, where only the translated version of the data is considered. There are also approaches which only require parallel data Meng2012,Zhou2016,Rasooli2017, instead of machine translation.", "All of these approaches, however, require large amounts of parallel data or an existing high quality translation tool, which are not always available. To tackle this issue, Barnes2016 explore cross-lingual approaches for aspect-based sentiment analysis, comparing machine translation methods and those that instead rely on bilingual vector representations. They conclude that MT approaches outperform current bilingual representation methods.", "Chen2016 propose an adversarial deep averaging network, which trains a joint feature extractor for two languages. They minimize the difference between these features across languages by learning to fool a language discriminator. This requires no parallel data, but does require large amounts of unlabeled data and has not been tested on fine-grained sentiment analysis." ], [ "Recently proposed bilingual embedding methods Hermann2014,Chandar2014,Gouws2015 offer a natural way to bridge the language gap. These particular approaches to bilingual embeddings, however, also require large parallel corpora in order to build the bilingual space, which gives no advantage over machine translation. Another approach to creating bilingual word embeddings, which we refer to as Projection-based Bilingual Embeddings, has the advantage of requiring relatively little parallel training data while taking advantage of larger amounts of monolingual data. In the following, we describe the most relevant approaches.", "Mikolov2013translation find that vector spaces in different languages have similar arrangements. Therefore, they propose a linear projection which consists of learning a rotation and scaling matrix. Artetxe2016,Artetxe2017 improve upon this approach by requiring the projection to be orthogonal, thereby preserving the monolingual quality of the original word vectors.", "Given source embeddings $S$ , target embeddings $T$ , and a bilingual lexicon $L$ , Artetxe2016 learn a projection matrix $W$ by minimizing the square of Euclidean distances ", "$$\\operatornamewithlimits{arg\\,min}_W \\sum _{i} ||S^{\\prime }W-T^{\\prime }||_{F}^{2}\\,,$$ (Eq. 13) ", "where $S^{\\prime } \\in S$ and $T^{\\prime } \\in T$ are the word embedding matrices for the tokens in the bilingual lexicon $L$ . 
This is solved using the Moore-Penrose pseudoinverse $S^{\\prime +} = (S^{\\prime T}S^{\\prime })^{-1}S^{\\prime T}$ as $ W =\nS^{\\prime +}T^{\\prime }$ , which can be computed using SVD. We refer to this approach as VecMap.", "Lample2017 propose a similar refined orthogonal projection method to Artetxe2017, but include an adversarial discriminator, which seeks to discriminate samples from the projected space $WS$ , and the target $T$ , while the projection matrix $W$ attempts to prevent this making the projection from the source space $WS$ as similar to the target space $T$ as possible.", "They further refine their projection matrix by reducing the hubness problem Dinu2015, which is commonly found in high-dimensional spaces. For each projected embedding $Wx$ , they define the $k$ nearest neighbors in the target space, $\\mathcal {N}_{T}$ , suggesting $k = 10$ . They consider the mean cosine similarity $r_{T}(Wx)$ between a projected embedding $Wx$ and its $k$ nearest neighbors ", "$$r_{T}(Wx) = \\frac{1}{k} \\sum _{y \\in \\mathcal {N}_{T}(Wx) } \\cos (Wx,y)$$ (Eq. 15) ", "as well as the mean cosine of a target word $y$ to its neighborhood, which they denote by $r_{S}$ .", "In order to decrease similarity between mapped vectors lying in dense areas, they introduce a cross-domain similarity local scaling term (CSLS) ", "$$\\textrm {CSLS}(Wx,y) = 2 \\cos (Wx,y) - r_{T}(Wx) - r_{S}(y)\\,,$$ (Eq. 16) ", "which they find improves accuracy, while not requiring any parameter tuning.", "Gouws2015taskspecific propose a method to create a pseudo-bilingual corpus with a small task-specific bilingual lexicon, which can then be used to train bilingual embeddings (Barista). This approach requires a monolingual corpus in both the source and target languages and a set of translation pairs. The source and target corpora are concatenated and then every word is randomly kept or replaced by its translation with a probability of 0.5. Any kind of word embedding algorithm can be trained with this pseudo-bilingual corpus to create bilingual word embeddings." ], [ "Maas2011 first explored the idea of incorporating sentiment information into semantic word vectors. They proposed a topic modeling approach similar to latent Dirichlet allocation in order to collect the semantic information in their word vectors. To incorporate the sentiment information, they included a second objective whereby they maximize the probability of the sentiment label for each word in a labeled document.", "Tang2014 exploit distantly annotated tweets to create Twitter sentiment embeddings. To incorporate distributional information about tokens, they use a hinge loss and maximize the likelihood of a true $n$ -gram over a corrupted $n$ -gram. They include a second objective where they classify the polarity of the tweet given the true $n$ -gram. While these techniques have proven useful, they are not easily transferred to a cross-lingual setting.", "Zhou2015 create bilingual sentiment embeddings by translating all source data to the target language and vice versa. This requires the existence of a machine translation system, which is a prohibitive assumption for many under-resourced languages, especially if it must be open and freely accessible. This motivates approaches which can use smaller amounts of parallel data to achieve similar results." ], [ "The methods discussed so far focus on classifying textual phrases like documents or sentences. 
Next to these approaches, others have concentrated on classifying aspects HuandLiu2004,Liu2012,Pontiki2014 or targets Zhang2015,Zhang2016,Tang2016 to assign them with polarity values.", "A common technique when adapting neural architectures to targeted sentiment analysis is to break the text into left context, target, and right context Zhang2015,Zhang2016, alternatively keeping the target as the final/beginning token in the respective contexts Tang2016. The model then extracts a feature vector from each context and target, using some neural architecture, and concatenates the outputs for classification.", "More recent approaches attempt to augment a neural network with memory to model these interactions Chen2017,Xue2018,Wang2018,Liu2018. Wang2017 explore methods to improve classification of multiple aspects in tweets, while Akhtar2018 attempt to use cross-lingual and multilingual data to improve aspect-based sentiment analysis in under-resourced languages.", "As mentioned before, MT has traditionally been the main approach for transferring information across language barriers BIBREF0 . But this is particularly problematic for targeted sentiment analysis, as changes in word order or loss of words created during translation can directly affect the performance of a classifier Lambert2015." ], [ "In this section, we propose a novel approach to incorporate sentiment information into bilingual embeddings, which we first test on sentence-level cross-lingual sentiment classification. We then propose an extension in order to adapt this approach to targeted cross-lingual sentiment classification. Our model, Bilingual Sentiment Embeddings (Blse), are embeddings that are jointly optimized to represent both (a) semantic information in the source and target languages, which are bound to each other through a small bilingual dictionary, and (b) sentiment information, which is annotated on the source language only. We only need three resources: (1) a comparably small bilingual lexicon, (2) an annotated sentiment corpus in the resource-rich language, and (3) monolingual word embeddings for the two involved languages." ], [ "In this section, we detail the projection objective, the sentiment objective, and finally the full objective for sentence-level cross-lingual sentiment classification. A sketch of the full sentence-level model is depicted in Figure 1 .", "We assume that we have two precomputed vector spaces $S = \\mathbb {R}^{v \\times d}$ and $T = \\mathbb {R}^{v^{\\prime } \\times d^{\\prime }}$ for our source and target languages, where $v$ ( $v^{\\prime }$ ) is the length of the source vocabulary (target vocabulary) and $d$ ( $d^{\\prime }$ ) is the dimensionality of the embeddings. We also assume that we have a bilingual lexicon $L$ of length $n$ which consists of word-to-word translation pairs $L$ = $\\lbrace (s_{1},t_{1}),\n(s_{2},t_{2}),\\ldots , (s_{n}, t_{n})\\rbrace $ which map from source to target.", "In order to create a mapping from both original vector spaces $S$ and $T$ to shared sentiment-informed bilingual spaces $\\mathbf {z}$ and $\\mathbf {\\hat{z}}$ , we employ two linear projection matrices, $M$ and $M^{\\prime }$ . During training, for each translation pair in $L$ , we first look up their associated vectors, project them through their associated projection matrix and finally minimize the mean squared error of the two projected vectors. 
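A minimal PyTorch sketch may help make the two projection matrices and the associated losses concrete. It is an illustration under assumed variable names, not the authors' released implementation, and it anticipates the sentiment and joint objectives defined in the remainder of this section:

import torch
import torch.nn as nn
import torch.nn.functional as F

class BLSESketch(nn.Module):
    def __init__(self, src_vectors, trg_vectors, n_classes):
        super().__init__()
        # Pretrained embeddings (FloatTensors) stay fixed; only M, M' and P are trained.
        self.src = nn.Embedding.from_pretrained(src_vectors, freeze=True)
        self.trg = nn.Embedding.from_pretrained(trg_vectors, freeze=True)
        dim = src_vectors.size(1)
        self.M = nn.Linear(dim, dim, bias=False)                        # source projection
        self.M_prime = nn.Linear(trg_vectors.size(1), dim, bias=False)  # target projection
        self.P = nn.Linear(dim, n_classes)                              # softmax layer

    def projection_loss(self, src_ids, trg_ids):
        # Mean squared error between projected translation pairs (Eq. 26).
        return F.mse_loss(self.M(self.src(src_ids)), self.M_prime(self.trg(trg_ids)))

    def sentiment_logits(self, sent_ids, source=True):
        # Average the word vectors of a sentence, project with M (training) or
        # M' (inference), and classify with P.
        emb = self.src(sent_ids) if source else self.trg(sent_ids)
        avg = emb.mean(dim=1)
        return self.P(self.M(avg) if source else self.M_prime(avg))

def joint_loss(model, sent_ids, labels, src_ids, trg_ids, alpha=0.3):
    # Weighted combination of the cross-entropy and projection losses (Eq. 31).
    ce = F.cross_entropy(model.sentiment_logits(sent_ids, source=True), labels)
    return alpha * ce + (1.0 - alpha) * model.projection_loss(src_ids, trg_ids)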
This is similar to the approach taken by Mikolov2013translation , but includes an additional target projection matrix.", "The intuition for including this second matrix is that a single projection matrix does not support the transfer of sentiment information from the source language to the target language. Without $M^{\\prime }$ , any signal coming from the sentiment classifier (see Section UID27 ) would have no effect on the target embedding space $T$ , and optimizing $M$ for both sentiment prediction and projection would only be detrimental to classification of the target language. We analyze this further in Section UID63 . Note that in this configuration, we do not need to update the original vector spaces, which would be problematic with such small training data.", "The projection quality is ensured by minimizing the mean squared error ", "$$\\textrm {MSE} = \\dfrac{1}{n} \\sum _{i=1}^{n} (\\mathbf {z_{i}} - \\mathbf {\\hat{z}_{i}})^{2}\\,,$$ (Eq. 26) ", "where $\\mathbf {z_{i}} = S_{s_{i}} \\cdot M$ is the dot product of the embedding for source word $s_{i}$ and the source projection matrix, and $\\mathbf {\\hat{z}_{i}} = T_{t_{i}} \\cdot M^{\\prime }$ is the same for the target word $t_{i}$ .", "We add a second training objective to optimize the projected source vectors to predict the sentiment of source phrases. This inevitably changes the projection characteristics of the matrix $M$ , and consequently of $M^{\\prime }$ , and encourages $M^{\\prime }$ to learn to predict sentiment without any training examples in the target language.", "In order to train $M$ to predict sentiment, we require a source-language corpus $C_{\\textrm {source}}= \\lbrace (x_{1}, y_{1}), (x_{2}, y_{2}), \\ldots , (x_{n}, y_{n})\\rbrace $ where each sentence $x_{i}$ is associated with a label $y_{i}$ .", "For classification, we use a two-layer feed-forward averaging network, loosely following Iyyer2015 . For a sentence $x_{i}$ we take the word embeddings from the source embedding $S$ and average them to $\\mathbf {a}_{i} \\in \\mathbb {R}^{d}$ . We then project this vector to the joint bilingual space $\\mathbf {z}_{i} = \\mathbf {a}_{i} \\cdot M$ . Finally, we pass $\\mathbf {z}_{i}$ through a softmax layer $P$ to obtain the prediction $\\hat{y}_{i} = \\textrm {softmax} ( \\mathbf {z}_{i} \\cdot P)$ .", "To train our model to predict sentiment, we minimize the cross-entropy error of the predictions", "$$H = - \\sum _{i=1}^{n} \\left[ y_{i} \\log \\hat{y}_{i} + (1 - y_{i}) \\log (1 - \\hat{y}_{i}) \\right]\\,.$$ (Eq. 29) ", "In order to jointly train both the projection component and the sentiment component, we combine the two loss functions to optimize the parameter matrices $M$ , $M^{\\prime }$ , and $P$ by ", "$$J = \\sum _{(x,y) \\in C_{\\textrm {source}}} \\sum _{(s,t) \\in L} \\alpha \\cdot H(x,y) + (1 - \\alpha ) \\cdot \\textrm {MSE}(s,t)\\,,$$ (Eq. 31) ", "where $\\alpha $ is a hyperparameter that weights the sentiment loss against the projection loss.", "For inference, we classify sentences from a target-language corpus $C_{\\textrm {target}}$ . As in the training procedure, for each sentence, we take the word embeddings from the target embeddings $T$ and average them to $\\mathbf {a}_{i} \\in \\mathbb {R}^{d}$ . We then project this vector to the joint bilingual space $\\mathbf {\\hat{z}}_{i} = \\mathbf {a}_{i} \\cdot M^{\\prime }$ . 
Finally, we pass $\\mathbf {\\hat{z}}_{i}$ through a softmax layer $P$ to obtain the prediction $\\hat{y}_{i} = \\textrm {softmax} (\n\\mathbf {\\hat{z}}_{i} \\cdot P)$ ." ], [ "In our targeted model, we assume that the list of sentiment targets as they occur in the text is given. These can be extracted previously either by using domain knowledge Liu2005, by using a named entity recognizer Zhang2015 or by using a number of aspect extraction techniques Zhou2012. Given these targets, the task is reduced to classification. However, what remains is how to represent the target, to learn to subselect the information from the context which is relevant, how to represent this contextual information, and how to combine these representations in a meaningful way that enables us to classify the target reliably.", "Our approach to adapt the Blse model to targeted sentiment analysis, which we call Split (depicted in Figure 2 ), is similar to the method proposed by Zhang2016 for gated recurrent networks. For a sentence with a target $a$ , we split the sentence at $a$ in order to get a left and right context, $\\textrm {con}_\\ell (a)$ and $\\textrm {con}_r(a)$ respectively.", "Unlike the approach from Zhang2016, we do not use recurrent neural networks to create a feature vector, as Atrio2019 showed that, in cross-lingual setups, they overfit too much to word order and source-language specific information to perform well on our tasks. Therefore, we instead average each left context $\\textrm {con}_\\ell (a_i)$ , right context $\\textrm {con}_r(a_i)$ , and target $a_{i}$ separately. Although averaging is a simplified approach to create a compositional representation of a phrase, it has been shown to work well for sentiment Iyyer2015,Barnes2017. After creating a single averaged vector for the left context, right context, and target, we concatenate them and use these as input for the softmax classification layer $T \\in \\mathbb {R}^{d \\times 3}$ , where $d$ is the dimensionality of the input vectors. The model is trained on the source language sentiment data using $M$ to project, and then tested by replacing $M$ with $M^{^{\\prime }}$ , similar to the sentence-level model." ], [ "In this section, we describe the resources and datasets, as well as the experimental setups used in both the sentence-level (Experiment 1 in Subsection \"Setting for Experiment 1: Sentence-level Classification\" ) and targeted (Experiment 2 in Subsection \"Setting for Experiment 2: Targeted Classification\" ) experiments." ], [ "The number of datasets and resources for under-resourced languages are limited. Therefore, we choose a mixture of resource-rich and under-resourced languages for our experiments. We treat the resource-rich languages as if they were under-resourced by using similar amounts of parallel data.", "To evaluate our proposed model at sentence-level, we conduct experiments using four benchmark datasets and three bilingual combinations. We use the OpeNER English and Spanish datasets Agerri2013 and the MultiBooked Catalan and Basque datasets BIBREF1 . All datasets contain hotel reviews which are annotated for targeted sentiment analysis. The labels include Strong Negative ( $--$ ), Negative ( $-$ ), Positive ( $+$ ), and Strong Positive ( $++$ ). We map the aspect-level annotations to sentence level by taking the most common label and remove instances of mixed polarity. We also create a binary setup by combining the strong and weak classes. This gives us a total of six experiments. 
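A small sketch of this label-mapping step is given below; the label names and data format are assumptions, not the exact preprocessing script:

from collections import Counter

TO_BINARY = {'strong negative': 'negative', 'negative': 'negative',
             'positive': 'positive', 'strong positive': 'positive'}

def sentence_label(aspect_labels):
    # Majority aspect-level label; sentences mixing positive and negative aspects are dropped.
    if len({TO_BINARY[label] for label in aspect_labels}) > 1:
        return None
    return Counter(aspect_labels).most_common(1)[0][0]

def binarize(label):
    # Collapse the strong and weak classes for the binary setup.
    return TO_BINARY[label]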
The details of the sentence-level datasets are summarized in Table 1 .", "For each of the experiments, we take 70 percent of the data for training, 20 percent for testing and the remaining 10 percent are used as development data for tuning meta-parameters.", "We use the following corpora to set up the experiments in which we train on a source language corpus $C_{S}$ and test on a target language corpus $C_{T}$ . Statistics for all of the corpora are shown in Table 3 . We include a binary classification setup, where neutral has been removed and strong positive and strong negative have been mapped to positive and negative, as well as a multiclass setup, where the original labels are used.", "OpeNER Corpora: The OpeNER corpora Agerri2013 are composed of hotel reviews, annotated for aspect-based sentiment. Each aspect is annotated with a sentiment label (Strong Positive, Positive, Negative, Strong Negative). We perform experiments with the English and Spanish versions.", "MultiBooked Corpora: The MultiBooked corpora Barnes2018a are also hotel reviews annotated in the same way as the OpeNER corpora, but in Basque and Catalan. These corpora allow us to observe how well each approach performs on low-resource languages.", "SemEval 2016 Task 5: We take the English and Spanish restaurant review corpora made available by the organizers of the SemEval event Pontiki2016. These corpora are annotated for three levels of sentiment (positive, neutral, negative).", "USAGE Corpora: The USAGE corpora Klinger2014a are Amazon reviews taken from a number of different items, and are available in English and German. Each aspect is annotated for three levels of sentiment (positive, neutral, negative). As the corpus has two sets of annotations available, we take the annotations from annotator 1 as the gold standard.", "For Blse, VecMap, Muse, and MT, we require monolingual vector spaces for each of our languages. For English, we use the publicly available GoogleNews vectors. For Spanish, Catalan, and Basque, we train skip-gram embeddings using the Word2Vec toolkit with 300 dimensions, subsampling of $10^{-4}$ , window of 5, negative sampling of 15 based on a 2016 Wikipedia corpus (sentence-split, tokenized with IXA pipes Agerri2014 and lowercased). The statistics of the Wikipedia corpora are given in Table 2 .", "For Blse, VecMap, Muse, and Barista, we also require a bilingual lexicon. We use the sentiment lexicon from HuandLiu2004 (to which we refer in the following as Hu and Liu) and its translation into each target language. We translate the lexicon using Google Translate and exclude multi-word expressions. This leaves a dictionary of 5700 translations in Spanish, 5271 in Catalan, and 4577 in Basque. We set aside ten percent of the translation pairs as a development set in order to check that the distances between translation pairs not seen during training are also minimized during training." ], [ "We compare Blse (Sections UID23 – UID30 ) to VecMap, Muse, and Barista (Section \"Previous Work\" ) as baselines, which have similar data requirements and to machine translation (MT) and monolingual (Mono) upper bounds which request more resources. For all models (Mono, MT, VecMap, Muse, Barista), we take the average of the word embeddings in the source-language training examples and train a linear SVM. We report this instead of using the same feed-forward network as in Blse as it is the stronger upper bound. 
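A rough sketch of this averaged-embedding SVM pipeline follows; it is illustrative, and the tokenization and embedding lookup are assumptions:

import numpy as np
from sklearn.svm import LinearSVC

def featurize(sentences, emb, dim=300):
    # Average the word vectors of each tokenized sentence; out-of-vocabulary words are skipped.
    feats = []
    for tokens in sentences:
        vecs = [emb[t] for t in tokens if t in emb]
        feats.append(np.mean(vecs, axis=0) if vecs else np.zeros(dim))
    return np.vstack(feats)

def train_svm(train_sents, train_labels, dev_sents, dev_labels, emb):
    # The C parameter is chosen on the development set, as described in the text.
    X_train, X_dev = featurize(train_sents, emb), featurize(dev_sents, emb)
    best_score, best_clf = -1.0, None
    for c in (0.001, 0.01, 0.1, 1, 10):
        clf = LinearSVC(C=c).fit(X_train, train_labels)
        score = clf.score(X_dev, dev_labels)
        if score > best_score:
            best_score, best_clf = score, clf
    return best_clf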
We choose the parameter $c$ on the target language development set and evaluate on the target language test set.", "Upper Bound Mono. We set an empirical upper bound by training and testing a linear SVM on the target language data. Specifically, we train the model on the averaged embeddings from target language training data, tuning the $c$ parameter on the development data. We test on the target language test data.", "Upper Bound MT. To test the effectiveness of machine translation, we translate all of the sentiment corpora from the target language to English using the Google Translate API. Note that this approach is not considered a baseline, as we assume that we do not have access to high-quality machine translation for the low-resource languages of interest.", "Baseline Unsup. We compare with the unsupervised statistical machine translation approach proposed by artetxe2018emnlp. This approach uses a self-supervised method to create bilingual phrase embeddings, which are then used to populate a phrase table. Monolingual n-gram language models and an unsupervised variant of MERT are used to create an MT model, which is improved through iterative backtranslation. We use the Wikipedia corpora from Section UID42 to create the unsupervised SMT system between English and the target languages and run the training procedure with default parameters. Finally, we translate all test examples in the target languages to English.", "Baseline VecMap. We compare with the approach proposed by Artetxe2016 which has shown promise on other tasks, e. g., word similarity. In order to learn the projection matrix $W$ , we need translation pairs. We use the same word-to-word bilingual lexicon mentioned in Section UID23 . We then map the source vector space $S$ to the bilingual space $\\hat{S} = SW$ and use these embeddings.", "Baseline Muse. This baseline is similar to VecMap but incorporates an adversarial objective as well as a localized scaling objective, which further improve the orthogonal refinement so that the two language spaces are even more similar.", "Baseline Barista. The approach proposed by Gouws2015taskspecific is another appropriate baseline, as it fulfills the same data requirements as the projection methods. The bilingual lexicon used to create the pseudo-bilingual corpus is the same word-to-word bilingual lexicon mentioned in Section UID23 . We follow the authors' setup to create the pseudo-bilingual corpus. We create bilingual embeddings by training skip-gram embeddings using the Word2Vec toolkit on the pseudo-bilingual corpus using the same parameters from Section UID42 .", "Our method: BLSE. Our model, Blse, is implemented in Pytorch Pytorch and the word embeddings are initialized with the pretrained word embeddings $S$ and $T$ mentioned in Section UID42 . We use the word-to-word bilingual lexicon from Section UID46 , tune the hyperparameters $\\alpha $ , training epochs, and batch size on the target development set, and use the best hyperparameters achieved on the development set for testing. ADAM Kingma2014a is used in order to minimize the average loss of the training batches.", "Ensembles. In order to evaluate to what extent each projection model adds complementary information to the machine translation approach, we create an ensemble of MT and each projection method (Blse, VecMap, Muse, Barista). A random forest classifier is trained on the predictions from MT and each of these approaches."
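A minimal sketch of this ensembling step, assuming integer-encoded prediction arrays from the two systems (an illustration rather than the exact configuration used):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_ensemble(mt_preds, proj_preds, gold_labels):
    # Stack the two systems' predicted labels as features for the meta-classifier.
    X = np.column_stack([mt_preds, proj_preds])
    return RandomForestClassifier(n_estimators=100).fit(X, gold_labels)

def ensemble_predict(forest, mt_preds, proj_preds):
    return forest.predict(np.column_stack([mt_preds, proj_preds]))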
], [ "For the targeted classification experiment, we compare the same models mentioned above, but adapted to the setting using the Split method from Section \"Targeted Model\" .", "A simple majority baseline sets the lower bound, while the MT-based model serves as an upper bound. We assume our models to perform between these two, as we do not have access to the millions of parallel sentences required to perform high-quality MT and particularly aim at proposing a method which is less resource-hungry.", "We hypothesize that cross-lingual approaches are particularly error-prone when evaluative phrases and words are wrongly predicted. In such settings, it might be beneficial for a model to put emphasis on the target word itself and learn a prior distribution of sentiment for each target independent of the context. For example, if you assume that all mentions of Steven Segal are negative in movie reviews, it is possible to achieve good results Bird2009. On the other hand, it may be that there are not enough examples of target-context pairs, and that it is better to ignore the target and concentrate only on the contexts.", "To analyze this, we compare our model to two simplified versions. In addition, this approach enables us to gain insight in the source of relevant information. The first is Target-only, which means that we use the model in the same way as before but ignore the context completely. This serves as a tool to understand how much model performance originates from the target itself.", "In the same spirit, we use a Context-only model, which ignores the target by constraining the parameters of all target phrase embeddings to be the same. This approach might be beneficial over our initial model if the prior distribution between targets was similar and the context actually carries the relevant information.", "As the baseline for each projection method, we assume all targets in each sentence respectively to be of the same polarity (Sent). This is generally an erroneous assumption, but can give good results if all of the targets in a sentence have the same polarity. In addition, this baseline provides us with the information about whether the models are able to handle information from different positions in the text." ], [ "In Table 4 , we report the results of all four methods. Our method outperforms the other projection methods (the baselines VecMap, Muse, and Barista) on four of the six experiments substantially. It performs only slightly worse than the more resource-costly upper bounds (MT and Mono). This is especially noticeable for the binary classification task, where Blse performs nearly as well as machine translation and significantly better than the other methods. Unsup also performs similarly to Blse on the binary tasks, while giving stronger performance on the 4-class setup. We perform approximate randomization tests Yeh2000 with 10,000 runs and highlight the results that are statistically significant (*p $<$ 0.01) in Table 4 .", "In more detail, we see that MT generally performs better than the projection methods (79–69 $\\text{F}_1$ on binary, 52–44 on 4-class). Blse (75–69 on binary, 41–30 on 4-class) has the best performance of the projection methods and is comparable with MT on the binary setup, with no significant difference on binary Basque. VecMap (67–46 on binary, 35–21 on 4-class) and Barista (61–55 on binary, 40–34 on 4-class) are significantly worse than Blse on all experiments except Catalan and Basque 4-class. 
Muse (67–62 on binary, 45–34 on 4-class) performs better than VecMap and Barista. On the binary experiment, VecMap outperforms Barista on Spanish (67.1 vs. 61.2) and Catalan (60.7 vs. 60.1) but suffers more than the other methods on the four-class experiments, with a maximum $\\text{F}_1$ of 34.9. Barista is relatively stable across languages. Unsup performs well across experiments (76–65 on binary, 49–39 on 4-class), even performing better than MT on both Catalan tasks and Spanish 4-class.", "The Ensemble of MT and Blse performs the best, which shows that Blse adds complementary information to MT. Finally, we note that all systems perform worse on Basque. This is presumably due to the increased morphological complexity of Basque, as well as its lack of similarity to the source language English (Section UID102 ).", "We analyze three aspects of our model in further detail: 1) where most mistakes originate, 2) the effect of the bilingual lexicon, and 3) the effect and necessity of the target-language projection matrix $M^{\\prime }$ .", "In order to analyze where each model struggles, we categorize the mistakes and annotate all of the test phrases with one of the following error classes: vocabulary (voc), adverbial modifiers (mod), negation (neg), external knowledge (know) or other. Table 5 shows the results.", "Vocabulary: The most common way to express sentiment in hotel reviews is through the use of polar adjectives (as in “the room was great”) or the mention of certain nouns that are desirable (“it had a pool”). Although this phenomenon has the largest total number of mistakes (an average of 72 per model on binary and 172 on 4-class), it is mainly due to its prevalence. MT performed the best on the test examples which according to the annotation require a correct understanding of the vocabulary (81 $\\text{F}_1$ on binary /54 $\\text{F}_1$ on 4-class), with Blse (79/48) slightly worse. Muse (76/23), VecMap (70/35), and Barista (67/41) perform worse. This suggests that Blse is better than Muse, VecMap and Barista at transferring sentiment of the most important sentiment bearing words.", "Negation: Negation is a well-studied phenomenon in sentiment analysis Pang2002,Wiegand2010,Zhu2014,Reitan2015 . Therefore, we are interested in how these four models perform on phrases that include the negation of a key element, for example “In general, this hotel isn't bad\". We would like our models to recognize that the combination of two negative elements “isn't\" and “bad\" lead to a Positive label.", "Given the simple classification strategy, all models perform relatively well on phrases with negation (all reach nearly 60 $\\text{F}_1$ in the binary setting). However, while Blse performs the best on negation in the binary setting (82.9 $\\text{F}_1$ ), it has more problems with negation in the 4-class setting (36.9 $\\text{F}_1$ ).", "Adverbial Modifiers: Phrases that are modified by an adverb, e. g., the food was incredibly good, are important for the four-class setup, as they often differentiate between the base and Strong labels. In the binary case, all models reach more than 55 $\\text{F}_1$ . In the 4-class setup, Blse only achieves 27.2 $\\text{F}_1$ compared to 46.6 or 31.3 of MT and Barista, respectively. Therefore, presumably, our model does currently not capture the semantics of the target adverbs well. This is likely due to the fact that it assigns too much sentiment to functional words (see Figure 6 ). 
Muse performs poorly on modified examples (20.3 $\\text{F}_1$ ).", "External Knowledge Required: These errors are difficult for any of the models to get correct. Many of these include numbers which imply positive or negative sentiment (350 meters from the beach is Positive while 3 kilometers from the beach is Negative). Blse performs the best (63.5 $\\text{F}_1$ ) while MT performs comparably well (62.5). Barista performs the worst (43.6).", "Binary vs. 4-class: All of the models suffer when moving from the binary to 4-class setting; an average of 26.8 in macro $\\text{F}_1$ for MT, 31.4 for VecMap, 22.2 for Barista, 34.1 for Muse, and 36.6 for Blse. The vector projection methods (VecMap, Muse, and Blse) suffer the most, suggesting that they are currently more apt for the binary setting.", "We analyze how the number of translation pairs affects our model. We train on the 4-class Spanish setup using the best hyper-parameters from the previous experiment.", "Research into projection techniques for bilingual word embeddings Mikolov2013translation,Lazaridou2015,Artetxe2016 often uses a lexicon of the most frequent 8–10 thousand words in English and their translations as training data. We test this approach by taking the 10,000 word-to-word translations from the Apertium English-to-Spanish dictionary. We also use the Google Translate API to translate the NRC hashtag sentiment lexicon Mohammad2013 and keep the 22,984 word-to-word translations. We perform the same experiment as above and vary the amount of training data from 0, 100, 300, 600, 1000, 3000, 6000, 10,000 up to 20,000 training pairs. Finally, we compile a small hand translated dictionary of 200 pairs, which we then expand using target language morphological information, finally giving us 657 translation pairs. The macro $\\text{F}_1$ score for the Hu and Liu dictionary climbs constantly with the increasing translation pairs. Both the Apertium and NRC dictionaries perform worse than the translated lexicon by Hu and Liu, while the expanded hand translated dictionary is competitive, as shown in Figure 3 .", "While for some tasks, e. g., bilingual lexicon induction, using the most frequent words as translation pairs is an effective approach, for sentiment analysis, this does not seem to help. Using a translated sentiment lexicon, even if it is small, gives better results.", "The main motivation for using two projection matrices $M$ and $M^{\\prime }$ is to allow the original embeddings to remain stable, while the projection matrices have the flexibility to align translations and separate these into distinct sentiment subspaces. To justify this design decision empirically, we perform an experiment to evaluate the actual need for the target language projection matrix $M^{\\prime }$ : We create a simplified version of our model without $M^{\\prime }$ , using $M$ to project from the source to target and then $P$ to classify sentiment.", "The results of this model are shown in Figure 4 . The modified model does learn to predict in the source language, but not in the target language. 
This confirms that $M^{\\prime }$ is necessary to transfer sentiment in our model.", "Additionally, we provide an analysis of a model similar to ours, but which uses $M \\in \\mathbb {R}^{d \\times o}$ and $M^{\\prime } \\in \\mathbb {R}^{d^{\\prime } \\times o}$ , where $d$ ( $d^{\\prime }$ ) is the dimensionality of the original embeddings and $o$ is the label size, to directly model cross-lingual sentiment, such that the final objective function is ", "$$J = \\sum _{(x,y) \\in C_{\\textrm {source}}} \\sum _{(s,t) \\in L} \\alpha \\cdot H(x, y) + (1 - \\alpha ) \\cdot || M \\cdot s - M^{\\prime } \\cdot t ||\\,,$$ (Eq. 66) ", "thereby simplifying the model and removing the $P$ parameter. Table 6 shows that Blse outperforms this simplified model on all tasks.", "In order to understand how well our model transfers sentiment information to the target language, we perform two qualitative analyses. First, we collect two sets of 100 positive sentiment words and one set of 100 negative sentiment words. An effective cross-lingual sentiment classifier using embeddings should learn that two positive words should be closer in the shared bilingual space than a positive word and a negative word. We test whether Blse is able to do this by training our model and, after every epoch, observing the mean cosine similarity between the sentiment synonyms and sentiment antonyms after projecting to the joint space.", "We compare Blse with VecMap and Barista by replacing the linear SVM classifiers with the same multi-layer classifier used in Blse and observing the distances in the hidden layer. Figure 5 shows this similarity in both source and target language, along with the mean cosine similarity between a held-out set of translation pairs and the macro $\\text{F}_1$ scores on the development set for both source and target languages for Blse, Barista, and VecMap. From this plot, it is clear that Blse is able to learn that sentiment synonyms should be close to one another in vector space and antonyms should have a negative cosine similarity. While the other models also learn this to some degree, jointly optimizing both sentiment and projection gives better results.", "Secondly, we would like to know how well the projected vectors compare to the original space. Our hypothesis is that some relatedness and similarity information is lost during projection. Therefore, we visualize six categories of words with t-SNE, which projects high-dimensional representations to lower-dimensional spaces while preserving the relationships as well as possible Vandermaaten2008: positive sentiment words, negative sentiment words, functional words, verbs, animals, and transport.", "The t-SNE plots in Figure 6 show that the positive and negative sentiment words are rather clearly separated after projection in Blse. This indicates that we are able to incorporate sentiment information into our target language without any labeled data in the target language. However, the downside of this is that functional words and transportation words are highly correlated with positive sentiment.", "Finally, in order to analyze the sensitivity of the $\\alpha $ parameter, we train Blse models for 30 epochs each with $\\alpha $ between 0 and 1. Figure 7 shows the average cosine similarity for the translation pairs, as well as macro $\\text{F}_1$ for both source and target language development data.", "Values near 0 lead to poor translation and consequently poor target language transfer. 
There is a rather large “sweet spot” where all measures perform best and finally, the translation is optimized to the detriment of sentiment prediction in both source and target languages with values near 1.", "The experiments in this section have proven that it is possible to perform cross-lingual sentiment analysis without machine translation, and that jointly learning to project and predict sentiment is advantageous. This supports the growing trend of jointly training for multiple objectives Tang2014,Klinger2015,Ferreira2016.", "This approach has also been exploited within the framework of multi-task learning, where a model learns to perform multiple similar tasks in order to improve on a final task Collobert2011a. The main difference between the joint method proposed here and multi-task learning is that vector space projection and sentiment classification are not similar enough tasks to help each other. In fact, these two objectives compete against one another, as a perfect projection would not contain enough information for sentiment classification, and vice versa." ], [ "Table 7 shows the macro $\\text{F}_1$ scores for all cross-lingual approaches (Blse, VecMap, Muse, Barista, MT, Unsup) and all targeted approaches (Sent, Split, Context-only, and Target-only). The final column is the average over all corpora. The final row in each setup shows the macro $\\text{F}_1$ for a classifier that always chooses the majority class.", "Blse outperforms other projection methods on the binary setup, 63.0 macro averaged $\\text{F}_1$ across corpora versus 59.0, 57.9, and 51.4 for VecMap, Muse, and Barista, respectively. On the multiclass setup, however, Muse (32.2 $\\text{F}_1$ ) is the best, followed by VecMap (31.0), Barista (28.1) and Blse (23.7). Unsup performs well across all experiments, achieving the best results on OpeNER ES (73.2 on binary and 42.7 on multiclass) and SemEval binary (77.1). VecMap is never the best nor the worst approach. In general, Barista performs poorly on the binary setup, but slightly better on the multiclass, although the overall performance is still weak. These results are similar to those observed in Experiment 1 for sentence classification.", "The Split approach to ABSA improves over the Sent baseline on 33 of 50 experiments, especially on binary (21/25), while on multiclass it is less helpful (13/25). Both Sent and Split normally outperform Context-only or Target-only approaches. This confirms the intuition that it is important to take both context and target information for classification. Additionally, the Context-only approach always performs better than Target-only, which indicates that context is more important than the prior probability of an target being positive or negative.", "Unlike the projection methods, MT using only the Sent representation performs well on the OpeNER and MultiBooked datasets, while suffering more on the SemEval and USAGE datasets. This is explained by the percentage of sentences that contain contrasting polarities in each dataset: between 8 and 12% for the OpeNER and Multibooked datasets, compared to 29% for SemEval or 50% for USAGE. In sentences with multiple contrasting polarities, the Sent baseline performs poorly.", "Finally, the general level of performance of projection-based targeted cross-lingual sentiment classification systems shows that they still lag 10+ percentage points behind MT on binary (compare MT (72.9 $\\text{F}_1$ ) with Blse (63.0)), and 6+ percentage points on multiclass (MT (38.8) versus Muse (32.2)). 
The gap between MT and projection-based approaches is therefore larger on targeted sentiment analysis than at sentence-level.", "We perform a manual analysis of the targets misclassified by all systems on the OpeNER Spanish binary corpus (see Table 8 ), and found that the average length of misclassified targets is slightly higher than that of correctly classified targets, except for with VecMap. This indicates that averaging may have a detrimental effect as the size of the targets increases.", "With the MT upperbounds, there is a non-negligible amount of noise introduced by targets which have been incorrectly translated (0.05% OpeNER ES, 6% MultiBooked EU, 2% CA, 2.5% SemEval, 1% USAGE). We hypothesize that this is why MT with Context-only performs better than MT with Split. This motivates further research with projection-based methods, as they do not suffer from translation errors.", "The confusion matrices of the models on the SemEval task, shown in Figure 8 , show that on the multilabel task, models are not able to learn the neutral class. This derives from the large class imbalance found in the data (see Table 3 ). Similarly, models do not learn the Strong Negative class on the OpeNER and MultiBooked datasets." ], [ "The performance of machine learning models on different target languages depends on the amount of data available, the quality of the data, and characteristics of the target language, e. g., morphological complexity. In the following, we analyze these aspects. There has been previous work that has observed target-language specific differences in multilingual dependency parsing Zeljko2016, machine translation Johnson2017, and language modeling Cotterell2018,Gerz2018. We are not aware of any work in cross-lingual sentiment analysis that explores the relationship between target language and performance in such depth and aim at improving this situation in the following.", "Additionally, the effect of domain differences when performing cross-lingual tasks has not been studied in depth. Hangya2018 propose domain adaptation methods for cross-lingual sentiment classification and bilingual dictionary induction. They show that creating domain-specific cross-lingual embeddings improves the classification for English-Spanish. However, the source-language training data used to train the sentiment classifier is taken from the same domain as the target-language test data. Therefore, it is not clear what the effect of using source-language training data from different domains would be. We analyzed the model presented in Section \"Sentence-level Model\" in a domain adaptation setup, including the impact of domain differences Barnes2018c. The main result was that our model performs particularly well on more distant domains, while other approaches Chen2012,Ziser2017 performed better when the source and target domains were not too dissimilar.", "In the following, we transfer this analysis to the target-based projection model in a real-world case study which mimics a user searching for the sentiment on touristic attractions. In order to analyze how well these methods generalize to new languages and domains, we deploy the targeted Blse, Muse, VecMap and MT models on tweets in ten Western European languages with training data from three different domains. Additionally, we include experiments with the Unsup models for a subset of the languages. 
English is the source language in all experiments, and we test on each of the ten target languages and attempt to answer the following research questions:", "How much does the amount of monolingual data available to create the original embeddings affect the final results?", "How do features of the target language, i. e., similarity to the source language or morphological complexity, affect the performance?", "How do domain mismatches between source-language training and target-language test data affect the performance?", "Section \"Discussion\" addresses our findings regarding these questions and demonstrates that 1) the amount of monolingual data does not correlate with classification results, 2) language similarity between the source and target languages based on word and character n-gram distributions predicts the performance of Blse on new datasets, and 3) domain mismatch has more of an effect on the multiclass setup than on the binary setup." ], [ "We collect tweets directed at a number of tourist attractions in European cities using the Twitter API in English and ten further European languages, including several under-resourced languages (Basque, Catalan, Galician, French, Italian, Dutch, German, Danish, Swedish, and Norwegian). We detail the data collection and annotation procedures in Section UID85 . For classification, we compare MT and the best performing projection-based methods (Blse, Muse, VecMap) using the Split method, detailed further in Section UID94 . As we need monolingual embeddings for all projection-based approaches, we create skip-gram embeddings from Wikipedia dumps, detailed in Section UID91 .", "As an experimental setting to measure the effectiveness of targeted cross-lingual sentiment models on a large number of languages, we collect and annotate small datasets from Twitter for each of the target languages, as well as a larger dataset to train the models in English. While it would be possible to concentrate our efforts only on languages with existing datasets in order to enable evaluation, this could give a distorted view of how well these models generalize. In order to reduce the possible ambiguity of the tourist attractions, we do not include those that have two or more obvious senses, e. g., Barcelona could refer either to the city or to the football team.", "In order to obtain a varied sample of tweets with subjective opinions, we download tweets that contain mentions of these tourist attractions as well as one of several emoticons or keywords. This distant supervision technique has been used to create sentiment lexicons Mohammad2016, semi-supervised training data Felbo2017, and features for a classifier Turney2003. We then remove any tweets that are less than 7 words long or which contain more than 3 hashtags or mentions. This increases the probability that a tweet text contains sufficient information for our use case setting.", "We manually annotate all tweets for polarity toward the target to ensure the quality of the data. Note that we only annotate the sentiment towards the predefined list of targets, which leads to a single annotated target per tweet. Any tweets that have unclear polarity towards the target are assigned a neutral label. This produces the three-class setup that is commonly used in the SemEval tasks Nakov2013,Nakov2016. Annotators were master's and doctoral students between 27 and 35 years old. All had either native or C1-level fluency in the languages of interest. Finally, for a subset of tweets in English, Catalan, and Basque, two annotators classify each tweet. 
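Agreement on this doubly annotated subset can be computed with Cohen's $\\kappa $ (reported in Table 10 below); a minimal sketch, assuming two parallel label lists:

from sklearn.metrics import cohen_kappa_score

def pairwise_agreement(labels_annotator_1, labels_annotator_2):
    # Cohen's kappa over the tweets labeled by both annotators for one language.
    return cohen_kappa_score(labels_annotator_1, labels_annotator_2)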
Table 11 shows three example tweets from English.", "Table 10 depicts the number of annotated targets for all languages, as well as inter-annotator agreement using Cohen's $\\kappa $ . The neutral class is the largest in all languages, followed by positive, and negative. These distributions are similar to those found in other Twitter crawled datasets Nakov2013,Nakov2016. We calculate pairwise agreement on a subset of languages using Cohen's $\\kappa $ . The scores reflect a good level of agreement (0.62, 0.60, and 0.61 for English, Basque, and Catalan, respectively).", "We collect Wikipedia dumps for ten languages; namely, Basque, Catalan, Galician, French, Italian, Dutch, German, Danish, Swedish, and Norwegian. We then preprocess them using the Wikiextractor script, and sentence and word tokenize them with either IXA pipes Agerri2014 (Basque, Galician, Italian, Dutch, and French), Freeling Padro2010 (Catalan), or NLTK Loper2002 (Norwegian, Swedish, Danish).", "For each language we create Skip-gram embeddings with the word2vec toolkit following the pipeline and parameters described in Section UID42 . This process gives us 300 dimensional vectors trained on similar data for all languages. We assume that any large differences in the embedding spaces derive from the size of the data and the characteristics of the language itself. Following the same criteria laid out in Section UID46 , we create projection dictionaries by translating the Hu and Liu dictionary HuandLiu2004 to each of the target languages and keeping only translations that are single word to single word. The statistics of all Wikipedia corpora, embeddings, and projection dictionaries are shown in Table 12 .", "Since we predetermine the sentiment target for each tweet, we can perform targeted experiments without further annotation. We use the Split models described in Section \"Targeted Model\" . Our model is the targeted Blse models described in Section \"Targeted Model\" . Additionally, we compare to the targeted Muse, VecMap, and MT models, as well as an Ensemble classifier that uses the predictions from Blse and MT before taking the largest predicted class for classification (see Section \"Setting for Experiment 1: Sentence-level Classification\" for details). Finally, we set a majority baseline by assigning the most common label (neutral) to all predictions. All models are trained for 300 epochs with a learning rate of 0.001 and $\\alpha $ of 0.3.", "We train the five models on the English data compiled during this study, as well as on the USAGE, and SemEval English data (the details can be found in Table 3 ) and test the models on the target-language test set." ], [ "Table 13 shows the macro $\\text{F}_1$ scores for all cross-lingual targeted sentiment approaches (Blse, Muse, VecMap, MT) trained on English data and tested on the target-language using the Split method proposed in \"Targeted Model\" . The final column is the average over all languages. Given the results from the earlier experiments, we hypothesize that MT should outperform Muse, VecMap and Blse for most of the languages.", "On the binary setup, Blse outperforms all other cross-lingual methods including MT and Unsup, with 56.0 macro averaged $\\text{F}_1$ across languages versus 48.7, 49.4, and 48.9 for Muse, VecMap, and MT respectively (54.1 across Basque and Catalan versus 46.0 for Unsup). Blse performs particularly well on Catalan (54.5), Italian (63.4), Swedish (65.3), and Danish (68.3). 
VecMap performs poorly on Galician (33.3), Italian (38.2), and Danish (43.4), but outperforms all other methods on Basque (56.4), Dutch (55.2) and Norwegian (59.0). MT performs worse than Blse and VecMap, although it does perform best for Galician (56.5). Unlike experiments in Section \"Sentence-level Model\" , the ensemble approach does not perform better than the individual classifiers and Muse leads to the classifier with the lowest performance overall. Unsup performs better than MT on both Basque and Catalan.", "On the multiclass setup, however, MT (36.6 $\\text{F}_1$ ) is the best, followed by VecMap (34.1), Blse (32.6), and Muse (26.1). Compared to the experiments on hotel reviews, the average differences between models is small (2.5 percentage points between MT and VecMap, and 1.5 between VecMap and Blse). Unsup performs better than MT on Basque (40.1), but worse on Catalan (28.5). Again, all methods outperform the majority baseline.", "On both the binary and multiclass setups, the best overall results are obtained by testing and training on data from the same domain (56.0 $\\text{F}_1$ for Blse and 36.6 $\\text{F}_1$ for MT). Training MT, Muse, and VecMap on the SemEval data performs better than training on USAGE, however.", "An initial error analysis shows that all models suffer greatly on the negative class. This seems to suggest that negative polarity towards a target is more difficult to determine within these frameworks. A significant amount of the tweets that have negative polarity towards a target also express positive or neutral sentiment towards other targets. The averaging approach to create the context vectors does not currently allow any of the models to exclude this information, leading to poor performance on these instances.", "Finally, compared to the experiments performed on hotel and product reviews in Section \"Experiments\" , the noisy data from Twitter is more difficult to classify. Despite the rather strong majority baseline (an average of 40.5 Macro $\\text{F}_1$ on binary), no model achieves more than an average of 56 Macro $\\text{F}_1$ on the binary task. A marked difference is that Blse and VecMap outperform MT on the binary setup. Unlike the previous experiment, Muse performs the worst on the multiclass setup. The other projection methods obtain multiclass results similar to the previous experiment (32.6–34.1 $\\text{F}_1$ here compared to 23.7–31.0 $\\text{F}_1$ previously)." ], [ "In this section, we present an error analysis. Specifically, Table 14 shows examples where Blse correctly predicts the polarity of a tweet that MT and Unsup incorrectly predict, and vice versa, as well as examples where all models are incorrect.", "In general, in examples where Blse outperforms MT and Unsup, the translation-based approaches often mistranslate important sentiment words, which leads to prediction errors. In the first Basque tweet, for example, “#txindoki igo gabe ere inguruaz goza daiteke... zuek joan tontorrera eta utzi arraroei gure kasa...”, Unsup incorrectly translates the most important sentiment word in the tweet “goza” (enjoy) to “overlook” and subsequently incorrectly predicts that the polarity towards txindoki is negative.", "Tweets that contain many out-of-vocabulary words or non-standard spelling (due to dialectal differences, informal writing, etc.), such as the third tweet in Table 14 , “kanpora jun barik ehko asko: anboto, txindoki”, are challenging for all models. 
In this example “jun” is a non-standard spelling of “joan” (go), “barik” is a Bizcayan Basque variant of “gabe” (without), and “ehko” is an abbreviation of “Euskal Herriko” (Basque Country's). These lead to poor translations for MT and Unsup, but pose a similar out-of-vocabulary problem for Blse.", "In order to give a more qualitative view of the targeted model, Figure 9 shows t-SNE projections of the bilingual vector space before and after training on the Basque binary task, following the same procedure mentioned in Section UID68 . As in the sentence-level experiment, there is a separation of the positive and negative sentiment words, although it is less clear for targeted sentiment. This is not surprising, as a targeted model must learn not only the prior polarity of words, but how they interact with targets, leading to a more context-dependent representation of sentiment words.", "Finally, we further analyze the effects of three variables that are present in cross-lingual sentiment analysis: a) availability of monolingual unlabeled data, b) similarity of source and target languages, and c) domain shift between the source language training data and the target language test data.", "We pose the question of what the relationship is between the amount of available monolingual data to create the embedding spaces and the classification results of the models. If the original word embedding spaces are not of high quality, this could make it difficult for the projection-based models to create useful features. In order to test this, we perform ablation experiments by training target-language embeddings on varying amounts of data ( $1 \times 10^{4}$ to $5 \times 10^{9}$ tokens) and testing the models replacing the full target-language embeddings with these. We plot the performance of the models as a function of available monolingual data in Figure 10 .", "Figure 10 shows that nearly all models, with the exception of Norwegian, perform poorly with very limited monolingual training data ( $1\times 10^{4}$ ) and improve, although erratically, with more training data. Interestingly, the models require little data to achieve results comparable to using all tokens to train the embeddings. A statistical analysis of the amount of unlabeled data available and the performance of Blse, Muse, and VecMap (Pearson's $r$ = $-0.14$ , $-0.27$ , $0.08$ , respectively) reveals no statistically significant correlation between them. This seems to indicate that none of the models is sensitive to the amount of monolingual training data available in the target language.", "One hypothesis for the differing results across languages is that the similarity of the source and target language has an effect on the final classification of the models. In order to analyze this, we need a measure that models pairwise language similarity. Given that the features we use for classification are derived from distributional representations, we model similarity as a function of 1) universal POS-tag n-grams, which represent the contexts used during training, and 2) character n-grams, which represent differences in morphology. POS-tag n-grams have previously been used to classify genre Fang2010 and to improve statistical machine translation Lioma2005, and the combination of POS-tag and character n-grams has proven useful for identifying the native language of second language writers in English Kulmizev2017. This indicates that these are useful features for characterizing a language.
In this section we calculate the pairwise similarity between all languages and then check whether this correlates with performance.", "After POS-tagging the test sentences obtained from Twitter using the universal part-of-speech tags Petrov2012, we calculate the normalized frequency distribution $P_{l}$ for POS-tag trigrams and $C_{l}$ for character trigrams for each language $l \in L = \lbrace \textrm {Danish, Swedish, Norwegian, Italian, Basque, Catalan, French, Dutch, Galician, German, English}\rbrace $ . We then compute the pairwise cosine similarity $\cos (A, B) = \frac{A \cdot B}{||A|| \, ||B||}$ , where $A$ is the concatenation of $P_{l_{i}}$ and $C_{l_{i}}$ for language $l_{i}$ and $B$ is the concatenation of $P_{l_{j}}$ and $C_{l_{j}}$ for language $l_{j}$ .", "The pairwise similarities in Figure 11 conform to expected similarities, and language families are clearly grouped (Romance, Germanic, Scandinavian, with Basque as an outlier that has no more than 0.47 similarity with any language). This confirms the use of our similarity metric for our purposes. We plot model performance as a function of language similarity in Figure 12 . To measure the correlation between language similarity and performance, we calculate Pearson's $r$ and find that for Blse there is a strong correlation between language similarity and performance, $r = 0.76$ with significance $p < 0.01$ . Muse, VecMap and MT do not show these correlations ( $r$ = 0.41, 0.24, 0.14, respectively). For MT this may be due to robust machine translation being available for less similar languages according to our metric, e. g., German-English. For Muse and VecMap, however, it is less clear why they do not follow the same trend as Blse.", "In this section, we determine the effect of source-language domain on the cross-lingual sentiment classification task. Specifically, we use English language training data from three different domains (Twitter, restaurant reviews, and product reviews) to train the cross-lingual classifiers, and then test on the target-language Twitter data. In monolingual sentiment analysis, one would expect to see a drop when moving to more distant domains.", "In order to analyze the effect of domain similarity further, we test the similarity of the domains of the source-language training data using Jensen-Shannon Divergence, which is a smoothed, symmetric version of the Kullback-Leibler Divergence, $D_{KL}(A||B) = \sum _{i=1}^{N} a_{i} \log \frac{a_{i}}{b_{i}}$ . Kullback-Leibler Divergence measures the difference between the probability distributions $A$ and $B$ , but is undefined for any event $a_{i} \in A$ with zero probability, which is common in term distributions. Jensen-Shannon Divergence is then $D_{JS}(A,B) = \frac{1}{2} \Big [ D_{KL}(A||B) + D_{KL}(B||A) \Big ]$ .", "Our similarity features are probability distributions over terms $t \in \mathbb {R}^{|V|}$ , where $t_{i}$ is the probability of the $i$ -th word in the vocabulary $V$ . For each domain, we create frequency distributions of the most frequent 10,000 unigrams that all domains have in common and measure the divergence with $D_{JS}$ .", "The results shown in Table 15 indicate that both the SemEval and USAGE datasets are relatively distinct from the Twitter data described in Section UID85 , while they are more similar to each other.
Additionally, we plot the results of all models with respect to the training domain in Figure 13 .", "We calculate Pearson's $r$ on the correlation between domain and model performance, shown in Table 16 . On the binary setup, the results show a negligible correlation for Blse (0.32), with no significant correlation for Muse, VecMap or MT. This suggests that the models are relatively robust to domain noise, or rather that there is so much other noise found in the approaches that domain is less relevant. On the multiclass setup, however, there is a significant effect for all models. This indicates that the multiclass models presented here are less robust than the binary models.", "Both the SemEval and USAGE corpora differ equally from the Twitter data given the metric defined here. The fact that models trained on SemEval tend to perform better than those trained on USAGE, therefore, seems to be due to the differences in label distribution, rather than to differences in domain. These label distributions are radically different in the multiclass setup, as the English Twitter data has a 30/50/20 distribution over Positive, Neutral, and Negative labels (67/1/32 and 68/4/28 for USAGE and SemEval, respectively). Both undersampling and oversampling help, but the performance is still worse than training on in-domain data.", "The case study which we presented in this section showed results of deploying the models from Section \"Projecting Sentiment Across Languages\" to real world Twitter data, which we collect and annotate for targeted sentiment analysis. The analysis of different phenomena revealed that for binary targeted sentiment analysis, Blse performs better than machine translation on noisy data from social media, although it is sensitive to differences between source and target languages. Finally, there is little correlation between performance on the cross-lingual sentiment task and the amount of unlabeled monolingual data used to create the original embeddings spaces which goes against our expectations.", "Unlike the experiments in Section \"Sentence-level Model\" , the ensemble classifier employed here was not able to improve the results. We assume that the small size of the datasets in this experiment does not enable the classifier to learn which features are useful in certain contexts.", "One common problem that appears when performing targeted sentiment analysis on noisy data from Twitter is that many of the targets of interest are ambiguous, which leads to false positives. Even with relatively unambiguous targets like “Big Ben”, there are a number of entities that can be referenced; Ben Rothlisberger (an American football player), an English language school in Barcelona, and many others. In order to deploy a full sentiment analysis system on Twitter data, it will be necessary to disambiguate these mentions before classifying the tweets, either as a preprocessing step or jointly.", "In sentiment analysis, it is not yet common to test a model on multiple languages, despite the fact that current state-of-the-art models are often theoretically language-agnostic. This section shows that good performance in one language does not guarantee that a model transfers well to other languages, even given similar resources. We hope that future work in sentiment analysis will make better use of the available test datasets." ], [ "With this article, we have presented a novel projection-based approach to targeted cross-lingual sentiment analysis. 
The central unit of the proposed method is Blse which enables the transfer of annotations from a source language to a non-annotated target language. The only input it relies on are word embeddings (which can be trained without manual labeling by self-annotation) and a comparably small translation dictionary which connects the semantics of the source and the target language.", "In the binary classification setting (automatic labeling of sentences or documents), Blse constitutes a novel state of the art on several language and domain pairs. For a more fine-grained classification to four sentiment labels, Barista and Muse perform slightly better. The predictions in all settings are complementary to the strong upper bound of employing machine translations: in an ensemble, even this resource-intense approach is inferior.", "The transfer from classification to target-level analysis revealed additional challenges. The performance is lower, particularly for the 4-class setting. Our analyses show that mapping of sentence predictions to the aspects mentioned in each sentence with a machine translation model is a very challenging empirical upper bound – the difference in performance compared to projection-based methods is greater here than for the sentence-classification setting. However, we showed that in resource-scarce environments, Blse constitutes the current state of the art for binary target-level sentiment analysis when incorporated in a deep learning architecture which is informed about the aspect. Muse performs better in the same architecture for the 4-class setting.", "Our analysis further showed that the neural network needs to be informed about both the aspect and the context – limiting the information to a selection of these sentence parts strongly underperforms the combined setting. That also demonstrates that the model does not rely on prior distributions of aspect mentions.", "The final experiment in the paper is a real-world deployment of the target-level sentiment analysis system in multilingual setting with 10 languages, where the assumption is that the only supervision is available in English (which is not part of the target languages). We learned here that it is important to have access to in-domain data (even for cross-lingual projection), especially in the multiclass setting. Binary classification however, which might often be sufficient for real-world applications, is more robust to domain changes. Further, machine translation is less sensitive to language dissimilarities, unlike projection-based methods. The amount of available unlabeled data to create embeddings plays a role in the final performance of the system, although only to a minor extent.", "The current performance of the projection-based techniques still lags behind state-of-the-art MT approaches on most tasks, indicating that there is still much work to be done. While general bilingual embedding techniques do not seem to incorporate enough sentiment information, they are able to retain the semantics of their word vectors to a large degree even after projection. We hypothesize that the ability to retain the original semantics of the monolingual spaces leads to Muse performing better than MT on multiclass targeted sentiment analysis. The joint approach introduced in this work suffers from the degradation of the original semantics space, while optimizing the sentiment information. 
Moving from a similarity-based loss to a ranking loss, where the model must predict a ranked list of the most similar translations, could improve the model, but would require further resource development cross-lingually, as a simple bilingual dictionary would not provide enough information.", "One problem that arises when using bilingual embeddings instead of machine translation is that differences in word order are no longer handled BIBREF2 . Machine translation models, on the other hand, always include a reordering element. Nonetheless, there is often a mismatch between the real source language word order and the translated word order. In this work, we avoided the problem by using a bag-of-embeddings representation, but Barnes2017 found that the bag-of-embeddings approach does not perform as well as approaches that take word order into account, e. g., Lstms or Cnns. We leave the incorporation of these classifiers into our framework for future work.", "Unsupervised machine translation Artetxe2018,Lample2018,artetxe2018emnlp shows great promise for sentence-level classification. Like MT, however, it performs worse on noisy data, such as tweets. Therefore, users who want to apply targeted cross-lingual approaches to noisy data should currently consider using embedding projection methods, such as Blse. Future work on adapting unsupervised machine translation to noisy text may provide another solution for low-resource NLP.", "The authors thank Patrik Lambert, Toni Badia, Amaia Oliden, Itziar Etxeberria, Jessie Kief, Iris Hübscher, and Arne Øhm for helping with the annotation of the resources used in this research. This work has been partially supported by the DFG Collaborative Research Centre SFB 732 and a SGR-DTCL Predoctoral Scholarship." ] ] }
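The record above relies on two similarity measures: pairwise language similarity, computed as the cosine between concatenated POS-tag and character trigram frequency distributions, and domain similarity, computed as the symmetrised Kullback-Leibler divergence (denoted Jensen-Shannon Divergence in the record) over the 10,000 shared most-frequent unigrams. The sketch below illustrates both computations under stated assumptions: the function names, the sparse-dictionary representation, and the additive smoothing constant are illustrative choices, not the authors' implementation.

```python
from collections import Counter
import math

def ngram_distribution(sequence, n=3):
    """Normalized frequency distribution over n-grams of a token or character sequence."""
    grams = Counter(tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def cosine(dist_a, dist_b):
    """Cosine similarity between two sparse frequency distributions."""
    dot = sum(v * dist_b.get(k, 0.0) for k, v in dist_a.items())
    norm_a = math.sqrt(sum(v * v for v in dist_a.values()))
    norm_b = math.sqrt(sum(v * v for v in dist_b.values()))
    return dot / (norm_a * norm_b)

def language_similarity(pos_tags_a, chars_a, pos_tags_b, chars_b):
    """cos(A, B), where A and B concatenate POS-tag and character trigram distributions."""
    # Prefix the keys so the two feature blocks stay disjoint after concatenation.
    a = {("pos",) + g: v for g, v in ngram_distribution(pos_tags_a).items()}
    a.update({("chr",) + g: v for g, v in ngram_distribution(chars_a).items()})
    b = {("pos",) + g: v for g, v in ngram_distribution(pos_tags_b).items()}
    b.update({("chr",) + g: v for g, v in ngram_distribution(chars_b).items()})
    return cosine(a, b)

def kl(p, q):
    """KL divergence for dense, strictly positive distributions of equal length."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def domain_divergence(p, q, eps=1e-12):
    """0.5 * [KL(p||q) + KL(q||p)] over a shared unigram vocabulary, with smoothing."""
    p = [pi + eps for pi in p]
    q = [qi + eps for qi in q]
    zp, zq = sum(p), sum(q)
    p, q = [pi / zp for pi in p], [qi / zq for qi in q]
    return 0.5 * (kl(p, q) + kl(q, p))
```

For the domain comparison, `p` and `q` would be unigram probabilities over the common top-10,000 vocabulary; for the language comparison, `pos_tags_*` is the tagged test data and `chars_*` the raw text of each language.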
{ "question": [ "what baseline do they compare to?" ], "question_id": [ "9c4ed8ca59ba6d240f031393b01f634a9dc3615d" ], "nlp_background": [ "two" ], "topic_background": [ "unfamiliar" ], "paper_read": [ "somewhat" ], "search_query": [ "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "VecMap", "Muse", "Barista" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We compare Blse (Sections UID23 – UID30 ) to VecMap, Muse, and Barista (Section \"Previous Work\" ) as baselines, which have similar data requirements and to machine translation (MT) and monolingual (Mono) upper bounds which request more resources. For all models (Mono, MT, VecMap, Muse, Barista), we take the average of the word embeddings in the source-language training examples and train a linear SVM. We report this instead of using the same feed-forward network as in Blse as it is the stronger upper bound. We choose the parameter $c$ on the target language development set and evaluate on the target language test set." ], "highlighted_evidence": [ "We compare Blse (Sections UID23 – UID30 ) to VecMap, Muse, and Barista (Section \"Previous Work\" ) as baselines, which have similar data requirements and to machine translation (MT) and monolingual (Mono) upper bounds which request more resources." ] } ], "annotation_id": [ "0f3e76f2f87e107765340c4bffef80796aee7322" ], "worker_id": [ "2b413669fd1e681656c8d43a27df86e649065edf" ] } ] }
{ "caption": [ "Figure 1: Bilingual Sentiment Embedding Model (Blse)", "Figure 2: The Split adaptation of our Blse model to targeted sentiment analysis. At test time, we replace the matrix M with the matrix M ′ .", "Table 1: Statistics for the OpeNER English (EN) and Spanish (ES) as well as the MultiBooked Catalan (CA) and Basque (EU) datasets.", "Table 2: Statistics for the Wikipedia corpora and monolingual vector spaces.", "Table 3: Number of aspect-polarity tuples for the targeted datasets.", "Table 4: Macro F1 of four models trained on English and tested on Spanish (ES), Catalan (CA), and Basque (EU). The bold numbers show the best results for each metric per column and the highlighted numbers show where Blse is better than the other projection methods, VecMap, Muse, and Barista (* p < 0.01).", "Table 5: Error analysis for different phenomena for the binary (bi) and multi-class (4) setups. See text for explanation of error classes.", "Figure 3: Macro F1 for translation pairs in the Spanish 4-class setup. Training with the expanded hand translated lexicon and machine-translated Hu and Liu lexicon gives a macro F1 that grows constantly with the number of translation pairs. Despite having several times more training data, the Apertium and NRC translation dictionaries do not perform as well.", "Figure 4: Blse model (solid lines) compared to a variant without target language projection matrix M ′ (dashed lines). “Translation” lines show the average cosine similarity between translation pairs. The remaining lines show F1 scores for the source and target language with both variants of Blse. The modified model cannot learn to predict sentiment in the target language (red lines). This illustrates the need for the second projection matrix M ′.", "Table 6: An empirical comparison of Blse and a simplified model which directly projects the embeddings to the sentiment classes. Blse outperforms the simplified model on all tasks.", "Figure 5: Average cosine similarity between a subsample of translation pairs of same polarity (“sentiment synonyms”) and of opposing polarity (“sentiment antonyms”) in both target and source languages in each model. The x-axis shows training epochs. We see that Blse is able to learn that sentiment synonyms should be close to one another in vector space and sentiment antonyms should not.", "Figure 6: t-SNE-based visualization of the Spanish vector space before and after projection with Blse. There is a clear separation of positive and negative words after projection, despite the fact that we have used no labeled data in Spanish.", "Figure 7: An analysis of the α parameter of Blse showing cosine similarity of translation pairs and macro F1 for source and target development data. The optimal values range from 1× 10−6 to 1× 10−3.", "Table 7: Macro F1 results for all corpora and techniques. We denote the best performing", "Table 8: Average length of tokens of correctly and incorrectly classified targets on the OpeNER Spanish binary corpus.", "Figure 8: Confusion matrices for all Split models on the SemEval task.", "Table 9: Touristic targets used as tweet search criteria.", "Table 10: Statistics of Tweet corpora collected for the deployment study, as well as interannotator agreement for English, Basque, and Catalan calculated with Cohen’s κ.", "Table 11: Three example tweets in English. 
The underlined phrases are the targets.", "Table 12: Statistics of Wikipedia corpora, embeddings, and projection dictionaries (M denotes million, k denotes thousand).", "Table 13: Macro F1 of targeted cross-lingual models on Twitter data in 10 target languages. Twitter refers to models that have been trained on the English data mentioned in Table 10, while USAGE and SemEval are trained on the English data from the datasets mentioned in Section 4.1.2.", "Figure 9: t-SNE-based visualization of the Basque vector space before and after projection with the targeted Blse. The positive and negative sentiment words are separated, although it is less clearly defined at target-level.", "Table 14: Examples where Blse is better and worse than MT and Unsup. We show the original tweet in Blse, the automatic translation in MT and Unsup, and reference translations (Ref.). The label column shows the prediction of each model and the reference", "Figure 10: Performance of Blse (Macro F1) on the binary sentiment task with training and test on Twitter as a function of amount of monolingual data available to train the monolingual embeddings in each language.", "Figure 11: Cosine similarity of 3-gram POS-tag and 3-gram character frequency.", "Figure 12: Performance (Macro F1) on the binary task as a function of cosine similarity between POS-tag and character trigram distributions in the source language (EN) and the target languages.", "Figure 13: Performance of all models (Macro F1) on the binary and multiclass task when trained on different source language data. For each target language, we show a boxplot for all models trained on In-domain Twitter data (light green), USAGE product reviews (light blue), and SemEval restaurant reviews (pink). In the multiclass setup, we can see the in-domain data gives better results than the out-of-domain training data. This trend is not found in the binary setup, suggesting that binary classification is more robust to domain changes than multiclass classification.", "Table 15: Domain similarity of English training data measured as Jensen-Shannon divergence between the most common 10,000 unigrams.", "Table 16: Pearson's r and p values for correlations between domain and performance of each model. On the binary setup, there is no statistically significant effect of domain, while on the multiclass setup, all results are statistically significant (p < 0.01, with Pearson's r)." ], "file": [ "8-Figure1-1.png", "10-Figure2-1.png", "11-Table1-1.png", "12-Table2-1.png", "13-Table3-1.png", "16-Table4-1.png", "17-Table5-1.png", "18-Figure3-1.png", "20-Figure4-1.png", "20-Table6-1.png", "21-Figure5-1.png", "22-Figure6-1.png", "23-Figure7-1.png", "24-Table7-1.png", "25-Table8-1.png", "26-Figure8-1.png", "28-Table9-1.png", "29-Table10-1.png", "29-Table11-1.png", "30-Table12-1.png", "31-Table13-1.png", "33-Figure9-1.png", "34-Table14-1.png", "35-Figure10-1.png", "36-Figure11-1.png", "37-Figure12-1.png", "38-Figure13-1.png", "38-Table15-1.png", "39-Table16-1.png" ] }
1905.13413
Improving Open Information Extraction via Iterative Rank-Aware Learning
Open information extraction (IE) is the task of extracting open-domain assertions from natural language sentences. A key step in open IE is confidence modeling, ranking the extractions based on their estimated quality to adjust precision and recall of extracted assertions. We found that the extraction likelihood, a confidence measure used by current supervised open IE systems, is not well calibrated when comparing the quality of assertions extracted from different sentences. We propose an additional binary classification loss to calibrate the likelihood to make it more globally comparable, and an iterative learning process, where extractions generated by the open IE model are incrementally included as training samples to help the model learn from trial and error. Experiments on OIE2016 demonstrate the effectiveness of our method. Code and data are available at https://github.com/jzbjyb/oie_rank.
{ "section_name": [ "Introduction", "Neural Models for Open IE", "Problem Formulation", "Model Architecture and Decoding", "Iterative Rank-Aware Learning", "Binary Classification Loss", "Iterative Learning", "Experimental Settings", "Evaluation Results", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "Open information extraction (IE, sekine2006demand, Banko:2007:OIE) aims to extract open-domain assertions represented in the form of $n$ -tuples (e.g., was born in; Barack Obama; Hawaii) from natural language sentences (e.g., Barack Obama was born in Hawaii). Open IE started from rule-based BIBREF0 and syntax-driven systems BIBREF1 , BIBREF2 , and recently has used neural networks for supervised learning BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 .", "A key step in open IE is confidence modeling, which ranks a list of candidate extractions based on their estimated quality. This is important for downstream tasks, which rely on trade-offs between the precision and recall of extracted assertions. For instance, an open IE-powered medical question answering (QA) system may require its assertions in higher precision (and consequently lower recall) than QA systems for other domains. For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model BIBREF3 , BIBREF5 . However, we observe that this often yields sub-optimal ranking results, with incorrect extractions of one sentence having higher likelihood than correct extractions of another sentence. We hypothesize this is due to the issue of a disconnect between training and test-time objectives. Specifically, the system is trained solely to raise likelihood of gold-standard extractions, and during training the model is not aware of its test-time behavior of ranking a set of system-generated assertions across sentences that potentially include incorrect extractions.", "To calibrate open IE confidences and make them more globally comparable across different sentences, we propose an iterative rank-aware learning approach, as outlined in fig:arch. Given extractions generated by the model as training samples, we use a binary classification loss to explicitly increase the confidences of correct extractions and decrease those of incorrect ones. Without adding additional model components, this training paradigm naturally leads to a better open IE model, whose extractions can be further included as training samples. We further propose an iterative learning procedure that gradually improves the model by incrementally adding extractions to the training data. Experiments on the OIE2016 dataset BIBREF8 indicate that our method significantly outperforms both neural and non-neural models." ], [ "We briefly revisit the formulation of open IE and the neural network model used in our paper." ], [ "Given sentence $\\mathbf {s}=(w_1, w_2, ..., w_n)$ , the goal of open IE is to extract assertions in the form of tuples $\\mathbf {r}=(\\mathbf {p}, \\mathbf {a}_1, \\mathbf {a}_2, ..., \\mathbf {a}_m)$ , composed of a single predicate and $m$ arguments. Generally, these components in $\\mathbf {r}$ need not to be contiguous, but to simplify the problem we assume they are contiguous spans of words from $\\mathbf {s}$ and there is no overlap between them.", "Methods to solve this problem have recently been formulated as sequence-to-sequence generation BIBREF4 , BIBREF5 , BIBREF6 or sequence labeling BIBREF3 , BIBREF7 . 
We adopt the second formulation because it is simple and can take advantage of the fact that assertions only consist of words from the sentence. Within this framework, an assertion $\\mathbf {r}$ can be mapped to a unique BIO BIBREF3 label sequence $\\mathbf {y}$ by assigning $O$ to the words not contained in $\\mathbf {r}$ , $B_{p}$ / $I_{p}$ to the words in $\\mathbf {p}$ , and $B_{a_i}$ / $I_{a_i}$ to the words in $\\mathbf {a}_i$ respectively, depending on whether the word is at the beginning or inside of the span.", "The label prediction $\\hat{\\mathbf {y}}$ is made by the model given a sentence associated with a predicate of interest $(\\mathbf {s}, v)$ . At test time, we first identify verbs in the sentence as candidate predicates. Each sentence/predicate pair is fed to the model and extractions are generated from the label sequence." ], [ "Our training method in sec:ours could potentially be used with any probabilistic open IE model, since we make no assumptions about the model and only the likelihood of the extraction is required for iterative rank-aware learning. As a concrete instantiation in our experiments, we use RnnOIE BIBREF3 , BIBREF9 , a stacked BiLSTM with highway connections BIBREF10 , BIBREF11 and recurrent dropout BIBREF12 . Input of the model is the concatenation of word embedding and another embedding indicating whether this word is predicate: $\n\\mathbf {x}_t = [\\mathbf {W}_{\\text{emb}}(w_t), \\mathbf {W}_{\\text{mask}}(w_t = v)].\n$ ", "The probability of the label at each position is calculated independently using a softmax function: $\nP(y_t|\\mathbf {s}, v) \\propto \\text{exp}(\\mathbf {W}_{\\text{label}}\\mathbf {h}_t + \\mathbf {b}_{\\text{label}}),\n$ ", "where $\\mathbf {h}_t$ is the hidden state of the last layer. At decoding time, we use the Viterbi algorithm to reject invalid label transitions BIBREF9 , such as $B_{a_2}$ followed by $I_{a_1}$ .", "We use average log probability of the label sequence BIBREF5 as its confidence: ", "$$c(\\mathbf {s}, v, \\hat{\\mathbf {y}}) = \\frac{\\sum _{t=1}^{|\\mathbf {s}|}{\\log {P(\\hat{y_t}|\\mathbf {s}, v)}}}{|\\mathbf {s}|}.$$ (Eq. 7) ", "The probability is trained with maximum likelihood estimation (MLE) of the gold extractions. This formulation lacks an explicit concept of cross-sentence comparison, and thus incorrect extractions of one sentence could have higher confidence than correct extractions of another sentence." ], [ "In this section, we describe our proposed binary classification loss and iterative learning procedure." ], [ "To alleviate the problem of incomparable confidences across sentences, we propose a simple binary classification loss to calibrate confidences to be globally comparable. Given a model $\\theta ^\\prime $ trained with MLE, beam search is performed to generate assertions with the highest probabilities for each predicate. Assertions are annotated as either positive or negative with respect to the gold standard, and are used as training samples to minimize the hinge loss: ", "$$\\hspace{-2.84526pt}\\hat{\\theta } = \\underset{\\theta }{\\operatornamewithlimits{arg\\,min}}\\hspace{-8.53581pt}\\underset{\\begin{array}{c}\\mathbf {s} \\in \\mathcal {D}\\\\ v, \\hat{\\mathbf {y}} \\in g_{\\theta ^\\prime }(\\mathbf {s})\\end{array}}{\\operatorname{\\mathbb {E}}}\\hspace{-11.38109pt}\\max {(0,1-t \\cdot c_{\\theta }(\\mathbf {s}, v, \\hat{\\mathbf {y}}))},$$ (Eq. 
9) ", "where $\\mathcal {D}$ is the training sentence collection, $g_{\\theta ^\\prime }$ represents the candidate generation process, and $t \\in \\lbrace 1,-1\\rbrace $ is the binary annotation. $c_{\\theta }(\\mathbf {s}, v, \\hat{\\mathbf {y}})$ is the confidence score calculated by average log probability of the label sequence.", "The binary classification loss distinguishes positive extractions from negative ones generated across different sentences, potentially leading to a more reliable confidence measure and better ranking performance." ], [ "Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision. Ideally, the resulting model after one-round of training becomes better not only at confidence modeling, but also at assertion generation, suggesting that extractions of higher quality can be added as training samples to continue this training process iteratively. The resulting iterative learning procedure (alg:iter) incrementally includes extractions generated by the current model as training samples to optimize the binary classification loss to obtain a better model, and this procedure is continued until convergence. [t] training data $\\mathcal {D}$ , initial model $\\theta ^{(0)}$ model after convergence $\\theta $ $t \\leftarrow 0$ # iteration", " $\\mathcal {E} \\leftarrow \\emptyset $ # generated extractions", "not converge $\\mathcal {E} \\leftarrow \\mathcal {E} \\cup \\lbrace (\\mathbf {s}, v, \\hat{\\mathbf {y}})|v,\\hat{\\mathbf {y}} \\in g_{\\theta ^{(t)}}(\\mathbf {s}), \\forall \\mathbf {s} \\in \\mathcal {D}\\rbrace $ ", " $\\theta ^{(t+1)} \\leftarrow \\underset{\\theta }{\\operatornamewithlimits{arg\\,min}}\\hspace{-8.53581pt}\\underset{(\\mathbf {s}, v, \\hat{\\mathbf {y}})\\in \\mathcal {E}}{\\operatorname{\\mathbb {E}}}\\hspace{-8.53581pt}\\max {(0,1-t \\cdot c_{\\theta }(\\mathbf {s}, v, \\hat{\\mathbf {y}}))}$ ", " $t \\leftarrow t+1$ Iterative learning. " ], [ "We use the OIE2016 dataset BIBREF8 to evaluate our method, which only contains verbal predicates. OIE2016 is automatically generated from the QA-SRL dataset BIBREF13 , and to remove noise, we remove extractions without predicates, with less than two arguments, and with multiple instances of an argument. The statistics of the resulting dataset are summarized in tab:data.", "We follow the evaluation metrics described by Stanovsky:2016:OIE2016: area under the precision-recall curve (AUC) and F1 score. An extraction is judged as correct if the predicate and arguments include the syntactic head of the gold standard counterparts.", "We compare our method with both competitive neural and non-neural models, including RnnOIE BIBREF3 , OpenIE4, ClausIE BIBREF2 , and PropS BIBREF14 .", "Our implementation is based on AllenNLP BIBREF15 by adding binary classification loss function on the implementation of RnnOIE. The network consists of 4 BiLSTM layers (2 forward and 2 backward) with 64-dimensional hidden units. ELMo BIBREF16 is used to map words into contextualized embeddings, which are concatenated with a 100-dimensional predicate indicator embedding. The recurrent dropout probability is set to 0.1. Adadelta BIBREF17 with $\\epsilon =10^{-6}$ and $\\rho =0.95$ and mini-batches of size 80 are used to optimize the parameters. Beam search size is 5." ], [ "tab:expmain lists the evaluation results. 
Our base model (RnnOIE, sec:oie) performs better than non-neural systems, confirming the advantage of supervised training under the sequence labeling setting. To test if the binary classification loss (E.q. 9 , sec:ours) could yield better-calibrated confidence, we perform one round of fine-tuning of the base model with the hinge loss ( $+$ Binary loss in tab:expmain). We show both the results of using the confidence (E.q. 7 ) of the fine-tuned model to rerank the extractions of the base model (Rerank Only), and the end-to-end performance of the fine-tuned model in assertion generation (Generate). We found both settings lead to improved performance compared to the base model, which demonstrates that calibrating confidence using binary classification loss can improve the performance of both reranking and assertion generation. Finally, our proposed iterative learning approach (alg:iter, sec:ours) significantly outperforms non-iterative settings.", "We also investigate the performance of our iterative learning algorithm with respect to the number of iterations in fig:iter. The model obtained at each iteration is used to both rerank the extractions generated by the previous model and generate new extractions. We also report results of using only positive samples for optimization. We observe the AUC and F1 of both reranking and generation increases simultaneously for the first 6 iterations and converges after that, which demonstrates the effectiveness of iterative training. The best performing iteration achieves AUC of 0.125 and F1 of 0.315, outperforming all the baselines by a large margin. Meanwhile, using both positive and negative samples consistently outperforms only using positive samples, which indicates the necessity of exposure to the errors made by the system.", "tab:casererank compares extractions from RnnOIE before and after reranking. We can see the order is consistent with the annotation after reranking, showing the additional loss function's efficacy in calibrating the confidences; this is particularly common in extractions with long arguments. tab:casegen shows a positive extraction discovered after iterative training (first example), and a wrong extraction that disappears (second example), which shows that the model also becomes better at assertion generation.", "Why is the performance still relatively low? We randomly sample 50 extractions generated at the best performing iteration and conduct an error analysis to answer this question. To count as a correct extraction, the number and order of the arguments should be exactly the same as the ground truth and syntactic heads must be included, which is challenging considering that the OIE2016 dataset has complex syntactic structures and multiple arguments per predicate.", "We classify the errors into three categories and summarize their proportions in tab:err. “Overgenerated predicate” is where predicates not included in ground truth are overgenerated, because all the verbs are used as candidate predicates. An effective mechanism should be designed to reject useless candidates. “Wrong argument” is where extracted arguments do not coincide with ground truth, which is mainly caused by merging multiple arguments in ground truth into one. “Missing argument” is where the model fails to recognize arguments. These two errors usually happen when the structure of the sentence is complicated and coreference is involved. More linguistic information should be introduced to solve these problems." 
], [ "We propose a binary classification loss function to calibrate confidences in open IE. Iteratively optimizing the loss function enables the model to incrementally learn from trial and error, yielding substantial improvement. An error analysis is performed to shed light on possible future directions." ], [ "This work was supported in part by gifts from Bosch Research, and the Carnegie Bosch Institute." ] ] }
{ "question": [ "How does this compare to traditional calibration methods like Platt Scaling?", "What's the input representation of OpenIE tuples into the model?" ], "question_id": [ "ca7e71131219252d1fab69865804b8f89a2c0a8f", "d77c9ede2727c28e0b5a240b2521fd49a19442e0" ], "nlp_background": [ "two", "two" ], "topic_background": [ "familiar", "familiar" ], "paper_read": [ "no", "no" ], "search_query": [ "information extraction", "information extraction" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "No reliability diagrams are provided and no explicit comparison is made between confidence scores or methods.", "evidence": [ "Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision. Ideally, the resulting model after one-round of training becomes better not only at confidence modeling, but also at assertion generation, suggesting that extractions of higher quality can be added as training samples to continue this training process iteratively. The resulting iterative learning procedure (alg:iter) incrementally includes extractions generated by the current model as training samples to optimize the binary classification loss to obtain a better model, and this procedure is continued until convergence. [t] training data $\\mathcal {D}$ , initial model $\\theta ^{(0)}$ model after convergence $\\theta $ $t \\leftarrow 0$ # iteration", "A key step in open IE is confidence modeling, which ranks a list of candidate extractions based on their estimated quality. This is important for downstream tasks, which rely on trade-offs between the precision and recall of extracted assertions. For instance, an open IE-powered medical question answering (QA) system may require its assertions in higher precision (and consequently lower recall) than QA systems for other domains. For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model BIBREF3 , BIBREF5 . However, we observe that this often yields sub-optimal ranking results, with incorrect extractions of one sentence having higher likelihood than correct extractions of another sentence. We hypothesize this is due to the issue of a disconnect between training and test-time objectives. Specifically, the system is trained solely to raise likelihood of gold-standard extractions, and during training the model is not aware of its test-time behavior of ranking a set of system-generated assertions across sentences that potentially include incorrect extractions.", "We follow the evaluation metrics described by Stanovsky:2016:OIE2016: area under the precision-recall curve (AUC) and F1 score. An extraction is judged as correct if the predicate and arguments include the syntactic head of the gold standard counterparts." 
], "highlighted_evidence": [ "Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision.", "For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model BIBREF3 , BIBREF5 ", "We follow the evaluation metrics described by Stanovsky:2016:OIE2016: area under the precision-recall curve (AUC) and F1 score." ] } ], "annotation_id": [ "23c5a7ddd1f154488e822601198303f3e02cc4f7" ], "worker_id": [ "74eea9f3f4f790836045fcc75d0b3f5156901499" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "word embeddings", "evidence": [ "Our training method in sec:ours could potentially be used with any probabilistic open IE model, since we make no assumptions about the model and only the likelihood of the extraction is required for iterative rank-aware learning. As a concrete instantiation in our experiments, we use RnnOIE BIBREF3 , BIBREF9 , a stacked BiLSTM with highway connections BIBREF10 , BIBREF11 and recurrent dropout BIBREF12 . Input of the model is the concatenation of word embedding and another embedding indicating whether this word is predicate: $ \\mathbf {x}_t = [\\mathbf {W}_{\\text{emb}}(w_t), \\mathbf {W}_{\\text{mask}}(w_t = v)]. $" ], "highlighted_evidence": [ "Input of the model is the concatenation of word embedding and another embedding indicating whether this word is predicate: $ \\mathbf {x}_t = [\\mathbf {W}_{\\text{emb}}(w_t), \\mathbf {W}_{\\text{mask}}(w_t = v)]. $" ] } ], "annotation_id": [ "250e402e903ac21b69fd0cc88469064e3efc5d04" ], "worker_id": [ "74eea9f3f4f790836045fcc75d0b3f5156901499" ] } ] }
{ "caption": [ "Figure 1: Iterative rank-aware learning.", "Table 1: Dataset statistics.", "Table 2: Case study of reranking effectiveness. Red for predicate and blue for arguments.", "Figure 2: AUC and F1 at different iterations.", "Table 4: AUC and F1 on OIE2016.", "Table 5: Proportions of three errors." ], "file": [ "1-Figure1-1.png", "3-Table1-1.png", "4-Table2-1.png", "4-Figure2-1.png", "4-Table4-1.png", "5-Table5-1.png" ] }
1909.07863
Character-Centric Storytelling
Sequential vision-to-language, or visual storytelling, has recently been one of the areas of focus in the computer vision and language modeling domains. Though existing models generate narratives that read subjectively well, there could be cases when these models miss out on generating stories that account for and address all prospective human and animal characters in the image sequences. Considering this scenario, we propose a model that implicitly learns relationships between provided characters and thereby generates stories with respective characters in scope. We use the VIST dataset for this purpose and report numerous statistics on the dataset. Finally, we describe the model, explain the experiment, and discuss our current status and future work.
{ "section_name": [ "Introduction", "Related work", "Data", "Data ::: Character extraction", "Data ::: Character analysis", "Model", "Model ::: Character semantics", "Model ::: Encoder", "Model ::: Decoder", "Experiments", "Experiments ::: Method 1", "Experiments ::: Method 2", "Discussion", "Conclusion" ], "paragraphs": [ [ "Visual storytelling and album summarization tasks have recently been of focus in the domain of computer vision and natural language processing. With the advent of new architectures, solutions for problems like image captioning and language modeling are getting better. Therefore it is only natural to work towards storytelling; deeper visual context yielding a more expressive style language, as it could potentially improve various applications involving tasks using visual descriptions and visual question answering. BIBREF0.", "Since the release of the VIST visual storytelling dataset BIBREF1, there have been numerous approaches modeling the behavior of stories, leveraging and extending successful sequence-to-sequence based image captioning architectures. Some of them primarily addressed means of incorporating image-sequence feature information into a narrative generating network BIBREF2, BIBREF3, while others focused on model learning patterns and behavioral orientations with changes in back-propagation methods BIBREF4, BIBREF5. Motivated by these works we now want to understand the importance of characters and their relationships in visual storytelling.", "Specifically, we extract characters from the VIST dataset, analyze their influence across the dataset and exploit them for paying attention to relevant visual segments during story-generation. We report our findings, discuss the directions of our ongoing work and suggest recommendations for using characters as semantics in visual storytelling." ], [ "BIBREF1 published the VIST dataset along with a baseline sequence-to-sequence learning model that generates stories for image sequences in the dataset. Gradually, as a result of the 2018 storytelling challenge, there have been other works on VIST. Most of them extended the encoder-decoder architecture introduced in the baseline publication by adding attention mechanisms BIBREF3, learning positionally dependent parameters BIBREF2 and using reinforcement learning based methods BIBREF4, BIBREF5.", "To our best knowledge, there are no prior works making use of characters for visual storytelling. The only work that uses any additional semantics for story generation is BIBREF5. They propose a hierarchical model structure which first generates a “semantic topic\" for each image in the sequence and then uses that information during the generation phase. The core module of their hierarchical model is a Semantic Compositional Network (SCN) BIBREF6, a recurrent neural network variant generating text conditioned on the provided semantic concepts.", "Unlike traditional attention mechanisms, the SCN assembles the information on semantics directly into the neural network cell. It achieves this by extending the gate and state weight matrices to adhere to additional semantic information provided for the language generation phase. Inspired by the results SCN achieved for image and video captioning, we use it for storytelling. 
The semantic concepts we use are based on character frequencies and their co-occurrence information extracted from the stories of the VIST dataset.", "Our expectation is that the parameters of the language decoder network generating the story are dependent on the character semantics and would learn to capture linguistic patterns while simultaneously learning mappings to respective visual features of the image sequence." ], [ "We used the Visual Storytelling (VIST) dataset, comprising image sequences obtained from Flickr albums and respective annotated descriptions collected through Amazon Mechanical Turk BIBREF1. Each sequence has 5 images with corresponding descriptions that together make up a story. Furthermore, for each Flickr album there are 5 permutations of a selected set of its images. In the overall available data there are 40,071 training, 4,988 validation, and 5,050 usable testing stories." ], [ "We extracted characters out of the VIST dataset. To this end, we considered that a character is either “a person\" or “an animal\". We decided that the best way to do this would be by making use of the human-annotated text instead of images for the sake of being diverse (e.g., detection on images would yield “person\", as opposed to “father\").", "The extraction takes place as a two-step process:", "Identification of nouns: We first used a pretrained part-of-speech tagger BIBREF7 to identify all kinds of nouns in the annotations. Specifically, these noun categories are NN – noun, common, singular or mass, NNS – noun, common, plural, NNP – noun, proper, singular, and NNPS – noun, proper, plural.", "Filtering for hypernyms: WordNet BIBREF8 is a lexical database over the English language containing various semantic relations and synonym sets. Hypernym is one such semantic relation, constituting a category into which words with more specific meanings fall. From among the extracted nouns, we thereby filtered those words that have their lowest common hypernym as either “person\" or “animal\"." ], [ "We analyzed the VIST dataset from the perspective of the extracted characters and observed that 20,405 training, 2,349 validation and 2,768 testing data samples have at least one character present among their stories. This is approximately 50% of the data samples in the entire dataset. To pursue the prominence of relationships between these characters, we analyzed these extractions for both individual and co-occurrence frequencies.", "We found a total of 1,470 distinct characters, with 1,333 in training, 387 in validation and 466 in the testing splits. This can be considered an indication of the limited size of the dataset, because the number of distinct characters within each split is strongly dependent on the respective size of that split.", "Figure FIGREF3 plots the top 30 most frequent characters in the training split of the dataset. Apart from the character “friends\", there is a gradual decrease in the occurrence frequencies of the other characters from “mom\" to “grandmother\". Similarly, in Figure FIGREF4, which plots the top 30 most co-occurring character pairs, the (“dad\", “mom\") and (“friend\", “friends\") pairs occur drastically more often than other pairs in the stories.
This can lead to an inclination bias of the story generator towards these characters owing to the data size limitations we discussed.", "In the process of detecting characters, we observed also that $\\sim $5000 distinct words failed on WordNet due to their misspellings (“webxites\"), for being proper nouns (“cathrine\"), for being an abbreviation (“geez\"), and simply because they were compound words (“sing-a-long\"). Though most of the models ignore these words based on a vocabulary threshold value (typically 3), we would like to comment that language model creation without accounting for these words could adversely affect the behavior of narrative generation." ], [ "Our model in Figure FIGREF6 follows the encoder-decoder structure. The encoder module incorporates the image sequence features, obtained using a pretrained convolutional network, into a subject vector. The decoder module, a semantically compositional recurrent network (SCN) BIBREF6, uses the subject vector along with character probabilities and generates a relevant story." ], [ "The relevant characters with respect to each data-sample are obtained as a preprocessing step. We denote characters extracted from the human-annotated stories of respective image-sequences as active characters. We then use these active characters to obtain other characters which could potentially influence the narrative to be generated. We denote these as passive characters and they can be obtained using various methods. We describe some methods we tried in Section SECREF5. The individual frequencies of these relevant characters, active and passive are then normalized by the vocabulary size and constitute the character probabilities." ], [ "Images of a sequence are initially passed through a pretrained ResNet network BIBREF9, for obtaining their features. The features extracted are then provided to the encoder module, which is a simple recurrent neural network employed to learn parameters for incorporating the subjects in the individual feature sets into a subject vector." ], [ "We use the SCN-LSTM variant of the recurrent neural network for the decoder module as shown in Figure FIGREF10. The network extends each weight matrix of the conventional LSTM to be an ensemble of a set of tag-dependent weight matrices, subjective to the character probabilities. Subject vector from the encoder is fed into the LSTM to initialize the first step. The LSTM parameters utilized when decoding are weighted by the character probabilities, for generating a respective story.", "Gradients $\\nabla $, propagated back to the network, nudge the parameters $W$ to learn while adhering to respective character probabilities $\\vec{cp}$:", "Consequently, the encoder parameters move towards incorporating the image-sequence features better." ], [ "We report the current status of our work and the intended directions of progress we wish to make using the designed model. All experiments were performed on the VIST dataset.", "As mentioned in Section SECREF5, passive characters can be selected by conditioning their relationships on several factors. We explain two such methods:" ], [ "In the first method we naïvely select all the characters co-occurring with respective active characters. Subsequently, probabilities for these passive characters are co-occurrence counts normalized by the corpus vocabulary size. This method enables the model to learn parameters on the distribution of character relationships." 
], [ "In the second approach, we conditionally select a limited number of characters that collectively co-occur most with the respective active characters. This is visualized in Figure FIGREF13. The selected passive characters “girlfriend\", “father\" and “son\" collectively co-occur in the most co-occurring characters of the active characters. $K$ in this case is a tunable hyperparameter." ], [ "Both methods we are experimenting with exhibit different initial traits. We are currently working towards analyzing the character relationships learned by the models and understanding the abstract concepts that get generated as a result of such learning. We do not report any generated stories and evaluations yet as we consider that to be premature without proper examination. However, we feel the training process metrics are encouraging and provide us with enough intuition for pursuing the proposed approach to its fullest scope." ], [ "We have extracted, analyzed and exploited characters in the realm of storytelling using the VIST dataset. We have provided a model that can make use of the extracted characters to learn their relationships and thereby generate grounded and subjective narratives for respective image sequences. For future work we would like to make the encoder semantically compositional by extracting visual tags and also explore ways to improve learning of character relationships while avoiding overfitting." ] ] }
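For the second selection method described in this record (keeping only the K characters that collectively co-occur most with the active set, with K a tunable hyperparameter), the same co-occurrence counts can be reused. This is a hypothetical sketch rather than the authors' implementation.

```python
from collections import Counter

def top_k_passive(active, cooc, k=3):
    """Method 2: the K characters that collectively co-occur most with the active set."""
    scores = Counter()
    for (a, b), count in cooc.items():
        if a in active and b not in active:
            scores[b] += count
        elif b in active and a not in active:
            scores[a] += count
    return [character for character, _ in scores.most_common(k)]
```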
{ "question": [ "What statistics on the VIST dataset are reported?" ], "question_id": [ "a9610cbcca813f4376fbfbf21cc14689c7fbd677" ], "nlp_background": [ "zero" ], "topic_background": [ "familiar" ], "paper_read": [ "no" ], "search_query": [ "computer vision" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "In the overall available data there are 40,071 training, 4,988 validation, and 5,050 usable testing stories." ], "yes_no": null, "free_form_answer": "", "evidence": [ "We used the Visual storytelling (VIST) dataset comprising of image sequences obtained from Flickr albums and respective annotated descriptions collected through Amazon Mechanical Turk BIBREF1. Each sequence has 5 images with corresponding descriptions that together make up for a story. Furthermore, for each Flickr album there are 5 permutations of a selected set of its images. In the overall available data there are 40,071 training, 4,988 validation, and 5,050 usable testing stories." ], "highlighted_evidence": [ "We used the Visual storytelling (VIST) dataset comprising of image sequences obtained from Flickr albums and respective annotated descriptions collected through Amazon Mechanical Turk BIBREF1. Each sequence has 5 images with corresponding descriptions that together make up for a story. Furthermore, for each Flickr album there are 5 permutations of a selected set of its images. In the overall available data there are 40,071 training, 4,988 validation, and 5,050 usable testing stories." ] } ], "annotation_id": [ "0fb4bdc1c9e4e5c0f5f9d97660b1a8511f3bae0a" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Figure 1: Character frequencies (training split)", "Figure 2: Characters co-occurrence frequencies (training split)", "Figure 3: The model follows the encoder-decoder structure. Additional character semantics passed to the decoder module regulate its state parameters.", "Figure 4: (Gan et al., 2016), v and s denote the visual and semantic features respectively. Each triangle symbol represents an ensemble of tag dependent weight matrices", "Figure 5: Conditional on collective co-occurrences" ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "3-Figure3-1.png", "4-Figure4-1.png", "4-Figure5-1.png" ] }
1906.07234
Combining Adversarial Training and Disentangled Speech Representation for Robust Zero-Resource Subword Modeling
This study addresses the problem of unsupervised subword unit discovery from untranscribed speech. It forms the basis of the ultimate goal of ZeroSpeech 2019, building text-to-speech systems without text labels. In this work, unit discovery is formulated as a pipeline of phonetically discriminative feature learning and unit inference. One major difficulty in robust unsupervised feature learning is dealing with speaker variation. Here the robustness towards speaker variation is achieved by applying adversarial training and FHVAE based disentangled speech representation learning. A comparison of the two approaches as well as their combination is studied in a DNN-bottleneck feature (DNN-BNF) architecture. Experiments are conducted on ZeroSpeech 2019 and 2017. Experimental results on ZeroSpeech 2017 show that both approaches are effective while the latter is more prominent, and that their combination brings further marginal improvement in across-speaker condition. Results on ZeroSpeech 2019 show that in the ABX discriminability task, our approaches significantly outperform the official baseline, and are competitive to or even outperform the official topline. The proposed unit sequence smoothing algorithm improves synthesis quality, at a cost of slight decrease in ABX discriminability.
{ "section_name": [ "Introduction", "General framework", "Speaker-invariant feature learning by FHVAEs", "Speaker adversarial multi-task learning", "Subword unit inference and smoothing", "Dataset and evaluation metric", "System setup", "Experimental results", "Dataset and evaluation metrics", "Conclusions", "Acknowledgements" ], "paragraphs": [ [ "Nowadays speech processing is dominated by deep learning techniques. Deep neural network (DNN) acoustic models (AMs) for the tasks of automatic speech recognition (ASR) and speech synthesis have shown impressive performance for major languages such as English and Mandarin. Typically, training a DNN AM requires large amounts of transcribed data. For a large number of low-resource languages, for which very limited or no transcribed data are available, conventional methods of acoustic modeling are ineffective or even inapplicable.", "In recent years, there has been an increasing research interest in zero-resource speech processing, i.e., only a limited amount of raw speech data (e.g. hours or tens of hours) are given while no text transcriptions or linguistic knowledge are available. The Zero Resource Speech Challenges (ZeroSpeech) 2015 BIBREF0 , 2017 BIBREF1 and 2019 BIBREF2 precisely focus on this area. One problem tackled by ZeroSpeech 2015 and 2017 is subword modeling, learning frame-level speech representation that is discriminative to subword units and robust to linguistically-irrelevant factors such as speaker change. The latest challenge ZeroSpeech 2019 goes a step further by aiming at building text-to-speech (TTS) systems without any text labels (TTS without T) or linguistic expertise. Specifically, one is required to build an unsupervised subword modeling sub-system to automatically discover phoneme-like units in the concerned language, followed by applying the learned units altogether with speech data from which the units are inferred to train a TTS. Solving this problem may partially assist psycholinguists in understanding young children's language acquisition mechanism BIBREF2 .", "This study addresses unsupervised subword modeling in ZeroSpeech 2019, which is also referred to as acoustic unit discovery (AUD). It is an essential problem and forms the basis of TTS without T. The exact goal of this problem is to represent untranscribed speech utterances by discrete subword unit sequences, which is slightly different from subword modeling in the contexts of ZeroSpeech 2017 & 2015. In practice, it can be formulated as an extension to the previous two challenges. For instance, after learning the subword discriminative feature representation at frame-level, the discrete unit sequences can be inferred by applying vector quantization methods followed by collapsing consecutive repetitive symbolic patterns. In the previous two challenges, several unsupervised representation learning approaches were proposed for comparison, such as cluster posteriorgrams (PGs) BIBREF3 , BIBREF4 , BIBREF5 , DNN bottleneck features BIBREF6 , BIBREF7 , autoencoders (AEs) BIBREF8 , BIBREF9 , variational AEs (VAEs) BIBREF10 , BIBREF11 and siamese networks BIBREF12 , BIBREF13 , BIBREF14 .", "One major difficulty in unsupervised subword modeling is dealing with speaker variation. The huge performance degradation caused by speaker variation reported in ZeroSpeech 2017 BIBREF1 implies that speaker-invariant representation learning is crucial and remains to be solved. 
In ZeroSpeech 2019, speaker-independent subword unit inventory is highly desirable in building a TTS without T system. In the literature, many works focused on improving the robustness of unsupervised feature learning towards speaker variation. One direction is to apply linear transform methods. Heck et al. BIBREF5 estimated fMLLR features in an unsupervised manner. Works in BIBREF6 , BIBREF15 estimated fMLLR using a pre-trained out-of-domain ASR. Chen et al. BIBREF7 applied vocal tract length normalization (VTLN). Another direction is to employ DNNs. Zeghidour et al. BIBREF13 proposed to train subword and speaker same-different tasks within a triamese network and untangle linguistic and speaker information. Chorowski et al. BIBREF11 defined a speaker embedding as a condition of VAE decoder to free the encoder from capturing speaker information. Tsuchiya et al. BIBREF16 applied speaker adversarial training in a task related to the zero-resource scenario but transcription for a target language was used in model training.", "In this paper, we propose to extend our recent research findings BIBREF10 on applying disentangled speech representation learned from factorized hierarchical VAE (FHVAE) models BIBREF17 to improve speaker-invariant subword modeling. The contributions made in this study are in several aspects. First, the FHVAE based speaker-invariant learning is compared with speaker adversarial training in the strictly unsupervised scenario. Second, the combination of adversarial training and disentangled representation learning is studied. Third, our proposed approaches are evaluated on the latest challenge ZeroSpeech 2019, as well as on ZeroSpeech 2017 for completeness. To our best knowledge, direct comparison of the two approaches and their combination has not been studied before." ], [ "The general framework of our proposed approaches is illustrated in Figure FIGREF2 . Given untranscribed speech data, the first step is to learn speaker-invariant features to support frame labeling. The FHVAE model BIBREF17 is adopted for this purpose. FHVAEs disentangle linguistic content and speaker information encoded in speech into different latent representations. Compared with raw MFCC features, FHVAE reconstructed features conditioned on latent linguistic representation are expected to keep linguistic content unchanged and are more speaker-invariant. Details of the FHVAE structure and feature reconstruction methods are described in Section SECREF3 .", "The reconstructed features are fed as inputs to Dirichlet process Gaussian mixture model (DPGMM) BIBREF18 for frame clustering, as was done in BIBREF3 . The frame-level cluster labels are regarded as pseudo phone labels to support supervised DNN training. Motivated by successful applications of adversarial training BIBREF19 in a wide range of domain invariant learning tasks BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , this work proposes to add an auxiliary adversarial speaker classification task to explicitly target speaker-invariant feature learning. After speaker adversarial multi-task learning (AMTL) DNN training, softmax PG representation from pseudo phone classification task is used to infer subword unit sequences. The resultant unit sequences are regarded as pseudo transcriptions for subsequent TTS training." ], [ "The FHVAE model formulates the generation process of sequential data by imposing sequence-dependent and sequence-independent priors to different latent variables BIBREF17 . 
It consists of an inference model INLINEFORM0 and a generation model INLINEFORM1 . Let INLINEFORM2 denote a speech dataset with INLINEFORM3 sequences. Each INLINEFORM4 contains INLINEFORM5 speech segments INLINEFORM6 , where INLINEFORM7 is composed of fixed-length consecutive frames. The FHVAE model generates a sequence INLINEFORM8 from a random process as follows: (1) An s-vector INLINEFORM9 is drawn from a prior distribution INLINEFORM10 ; (2) Latent segment variables INLINEFORM11 and latent sequence variables INLINEFORM12 are drawn from INLINEFORM13 and INLINEFORM14 respectively; (3) Speech segment INLINEFORM15 is drawn from INLINEFORM16 . Here INLINEFORM17 denotes standard normal distribution, INLINEFORM18 and INLINEFORM19 are parameterized by DNNs. The joint probability for INLINEFORM20 is formulated as, DISPLAYFORM0 ", "Since the exact posterior inference is intractable, the FHVAE introduces an inference model INLINEFORM0 to approximate the true posterior, DISPLAYFORM0 ", "Here INLINEFORM0 and INLINEFORM1 are all diagonal Gaussian distributions. The mean and variance values of INLINEFORM2 and INLINEFORM3 are parameterized by two DNNs. For INLINEFORM4 , during FHVAE training, a trainable lookup table containing posterior mean of INLINEFORM5 for each sequence is updated. During testing, maximum a posteriori (MAP) estimation is used to infer INLINEFORM6 for unseen test sequences. FHVAEs optimize the discriminative segmental variational lower bound which was defined in BIBREF17 . It contains a discriminative objective to prevent INLINEFORM7 from being the same for all utterances.", "After FHVAE training, INLINEFORM0 encodes segment-level factors e.g. linguistic information, while INLINEFORM1 encodes sequence-level factors that are relatively consistent within an utterance. By concatenating training utterances of the same speaker into a single sequence for FHVAE training, the learned INLINEFORM2 is expected to be discriminative to speaker identity. This work considers applying s-vector unification BIBREF10 to generate reconstructed feature representation that keeps linguistic content unchanged and is more speaker-invariant than the original representation. Specifically, a representative speaker with his/her s-vector (denoted as INLINEFORM3 ) is chosen from the dataset. Next, for each speech segment INLINEFORM4 of an arbitrary speaker INLINEFORM5 , its corresponding latent sequence variable INLINEFORM6 inferred from INLINEFORM7 is transformed to INLINEFORM8 , where INLINEFORM9 denotes the s-vector of speaker INLINEFORM10 . Finally the FHVAE decoder reconstructs speech segment INLINEFORM11 conditioned on INLINEFORM12 and INLINEFORM13 . The features INLINEFORM14 form our desired speaker-invariant representation." ], [ "Speaker adversarial multi-task learning (AMTL) simultaneously trains a subword classification network ( INLINEFORM0 ), a speaker classification network ( INLINEFORM1 ) and a shared-hidden-layer feature extractor ( INLINEFORM2 ), where INLINEFORM3 and INLINEFORM4 are set on top of INLINEFORM5 , as illustrated in Figure FIGREF2 . In AMTL, the error is reversely propagated from INLINEFORM6 to INLINEFORM7 such that the output layer of INLINEFORM8 is forced to learn speaker-invariant features so as to confuse INLINEFORM9 , while INLINEFORM10 tries to correctly classify outputs of INLINEFORM11 into their corresponding speakers. 
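The reverse error propagation described here is what a gradient reversal layer implements (the GRL is named explicitly in the next paragraph). The paper's DNN is built with Kaldi nnet3, so the PyTorch-style sketch below only illustrates the idea and is not the authors' implementation.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambda on the way back."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=0.5):
    return GradReverse.apply(x, lam)

# Shared bottleneck features feed the subword classifier directly and the speaker
# classifier through grad_reverse(), so the shared layers are pushed to stay
# subword-discriminative while confusing the speaker classifier.
```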
At the same time, INLINEFORM12 learns to predict the correct DPGMM labels of input features, and back-propagates errors to INLINEFORM13 in the usual way.", "Let $\\theta _p$, $\\theta _s$ and $\\theta _h$ denote the network parameters of the subword classifier, the speaker classifier and the shared feature extractor, respectively. With the stochastic gradient descent (SGD) algorithm, these parameters are updated as $\\theta _p \\leftarrow \\theta _p - \\eta \\frac{\\partial L_p}{\\partial \\theta _p}$, $\\theta _s \\leftarrow \\theta _s - \\eta \\frac{\\partial L_s}{\\partial \\theta _s}$ and $\\theta _h \\leftarrow \\theta _h - \\eta \\left[ \\frac{\\partial L_p}{\\partial \\theta _h} - \\lambda \\frac{\\partial L_s}{\\partial \\theta _h} \\right]$, where $\\eta $ is the learning rate, $\\lambda $ is the adversarial weight, and $L_p$ and $L_s$ are the cross-entropy loss values of the subword and speaker classification tasks, respectively. To implement the update of $\\theta _h$, a gradient reversal layer (GRL) BIBREF19 was designed to connect INLINEFORM4 and INLINEFORM5 . The GRL acts as an identity transform during forward-propagation and changes the sign of the gradient during back-propagation. After training, the output of INLINEFORM6 is a speaker-invariant and subword-discriminative bottleneck feature (BNF) representation of input speech. Besides, the softmax output representation of INLINEFORM7 is believed to carry less speaker information than that obtained without speaker adversarial training." ], [ "Subword unit sequences for the concerned untranscribed speech utterances are inferred from the softmax PG representation of INLINEFORM0 in the speaker AMTL DNN. For each input frame to the DNN, the DPGMM label with the highest probability in the PG representation is regarded as the subword unit assigned to this frame. These frame-level unit labels are further processed by collapsing consecutive repetitive labels to form pseudo transcriptions.", "We observed non-smoothness in the unit sequences inferred with the above method, i.e., frame-level unit labels that are isolated without temporal repetition. Considering that ground-truth phonemes generally span at least several frames, these non-smooth labels are unwanted. This work proposes an empirical method to filter out part of the non-smooth unit labels, which is summarized in Algorithm SECREF7 .", "[Algorithm SECREF7: Unit sequence smoothing. Input: frame-level unit labels; output: pseudo transcription. A sketch of the procedure is given further below.]" ], [ "The ZeroSpeech 2017 development dataset consists of three languages, i.e. English, French and Mandarin. Speaker information for the training sets is given, while it is unknown for the test sets. The durations of the training sets are INLINEFORM0 and INLINEFORM1 hours respectively. Detailed information of the dataset can be found in BIBREF1 .", "The evaluation metric is ABX subword discriminability. Basically, it is to decide whether INLINEFORM0 belongs to INLINEFORM1 or INLINEFORM2 if INLINEFORM3 belongs to INLINEFORM4 and INLINEFORM5 belongs to INLINEFORM6 , where INLINEFORM7 and INLINEFORM8 are speech segments, INLINEFORM9 and INLINEFORM10 are two phonemes that differ in the central sound (e.g., “beg”-“bag”). Each pair of INLINEFORM11 and INLINEFORM12 is spoken by the same speaker. Depending on whether INLINEFORM13 and INLINEFORM14 are spoken by the same speaker, ABX error rates for the across-/within-speaker conditions are evaluated separately." ], [ "The FHVAE model is trained with the merged training sets of all three target languages. Input features are fixed-length speech segments of 10 frames. Each frame is represented by a 13-dimensional MFCC with cepstral mean normalization (CMN) at speaker level. During training, speech utterances spoken by the same speaker are concatenated to a single training sequence. 
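Returning to the unit inference step above: the pseudo-code of Algorithm SECREF7 is not recoverable from the extracted text, so the sketch below only captures its stated intent, collapsing consecutive repeated frame labels and dropping isolated labels without temporal repetition. The minimum run length is our assumption.

```python
def collapse_and_smooth(frame_labels, min_run=2):
    """Turn frame-level DPGMM labels into a pseudo transcription: collapse
    consecutive repeats, then drop runs shorter than `min_run` frames."""
    runs = []
    for label in frame_labels:
        if runs and runs[-1][0] == label:
            runs[-1][1] += 1
        else:
            runs.append([label, 1])
    return [label for label, length in runs if length >= min_run]

# collapse_and_smooth([7, 7, 7, 3, 12, 12]) -> [7, 12]
```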
During the inference of hidden variables INLINEFORM0 and INLINEFORM1 , input segments are shifted by 1 frame. To match the length of latent variables with original features, the first and last frame are padded. To generate speaker-invariant reconstructed MFCCs using the s-vector unification method, a representative speaker is selected from training sets. In this work the English speaker “s4018” is chosen. The encoder and decoder networks of the FHVAE are both 2-layer LSTM with 256 neurons per layer. Latent variable dimensions for INLINEFORM2 and INLINEFORM3 are 32. FHVAE training is implemented by using an open-source tool BIBREF17 .", "The FHVAE based speaker-invariant MFCC features with INLINEFORM0 and INLINEFORM1 are fed as inputs to DPGMM clustering. Training data for the three languages are clustered separately. The numbers of clustering iterations for English, French and Mandarin are INLINEFORM2 and 1400. After clustering, the numbers of clusters are INLINEFORM3 and 314. The obtained frame labels support multilingual DNN training. DNN input features are MFCC+CMVN. The layer-wise structure of INLINEFORM4 is INLINEFORM5 . Nonlinear function is sigmoid, except the linear BN layer. INLINEFORM6 contains 3 sub-networks, one for each language. The sub-network contains a GRL, a feed-forward layer (FFL) and a softmax layer. The GRL and FFL are 1024-dimensional. INLINEFORM7 also contains 3 sub-networks, each having a 1024-dimensional FFL and a softmax layer. During AMTL DNN training, the learning rate starts from INLINEFORM8 to INLINEFORM9 with exponential decay. The number of epochs is 5. Speaker adversarial weight INLINEFORM10 ranges from 0 to INLINEFORM11 . After training, BNFs extracted from INLINEFORM12 are evaluated by the ABX task. DNN is implemented using Kaldi BIBREF24 nnet3 recipe. DPGMM is implemented using tools developed by BIBREF18 .", "DPGMM clustering towards raw MFCC features is also implemented to generate alternative DPGMM labels for comparison. In this case, the numbers of clustering iterations for the three languages are INLINEFORM0 and 3000. The numbers of clusters are INLINEFORM1 and 596. The DNN structure and training procedure are the same as mentioned above.", "FHVAE model training and speaker-invariant MFCC reconstruction are performed following the configurations in ZeroSpeech 2017. The unit dataset is used for training. During MFCC reconstruction, a male speaker for each of the two languages is randomly selected as the representative speaker for s-vector unification. Our recent research findings BIBREF10 showed that male speakers are more suitable than females in generating speaker-invariant features. The IDs of the selected speakers are “S015” and “S002” in English and Surprise respectively. In DPGMM clustering, the numbers of clustering iterations are both 320. Input features are reconstructed MFCCs+ INLINEFORM0 + INLINEFORM1 . After clustering, the numbers of clusters are 518 and 693. The speaker AMTL DNN structure and training procedure follow configurations in ZeroSpeech 2017. One difference is the placement of adversarial sub-network INLINEFORM2 . Here INLINEFORM3 is put on top of the FFL in INLINEFORM4 instead of on top of INLINEFORM5 . Besides, the DNN is trained in a monolingual manner. After DNN training, PGs for voice and test sets are extracted. BNFs for test set are also extracted. 
Adversarial weights INLINEFORM6 ranging from 0 to INLINEFORM7 with a step size of INLINEFORM8 are evaluated on English test set.", "The TTS model is trained with voice dataset and their subword unit sequences inferred from PGs. TTS training is implemented using tools BIBREF27 in the same way as in the baseline. The trained TTS synthesizes speech waveforms according to unit sequences inferred from test speech utterances. Algorithm SECREF7 is applied to voice set and optionally applied to test set." ], [ "Average ABX error rates on BNFs over three target languages with different values of INLINEFORM0 are shown in Figure FIGREF11 .", "In this Figure, INLINEFORM0 denotes that speaker adversarial training is not applied. From the dashed (blue) lines, it can be observed that speaker adversarial training could reduce ABX error rates in both across- and within-speaker conditions, with absolute reductions of INLINEFORM1 and INLINEFORM2 respectively. The amount of improvement is in accordance with the findings reported in BIBREF16 , despite that BIBREF16 exploited English transcriptions during training. The dash-dotted (red) lines show that when DPGMM labels generated by reconstructed MFCCs are employed in DNN training, the positive impact of speaker adversarial training in across-speaker condition is relatively limited. Besides, negative impact is observed in within-speaker condition. From Figure FIGREF11 , it can be concluded that for the purpose of improving the robustness of subword modeling towards speaker variation, frame labeling based on disentangled speech representation learning is more prominent than speaker adversarial training.", "ABX error rates on subword unit sequences, PGs and BNFs with different values of INLINEFORM0 evaluated on English test set are shown in Figure FIGREF16 .", "Algorithm SECREF7 is not applied at this stage. It is observed that speaker adversarial training could achieve INLINEFORM0 and INLINEFORM1 absolute error rate reductions on PG and BNF representations. The unit sequence representation does not benefit from adversarial training. Therefore, the optimal INLINEFORM2 for unit sequences is 0. The performance gap between frame-level PGs and unit sequences measures the phoneme discriminability distortion caused by the unit inference procedure in this work.", "We fix INLINEFORM0 to train the TTS model, and synthesize test speech waveforms using the trained TTS. Experimental results of our submission systems are summarized in Table TABREF17 .", "In this Table, “+SM” denotes applying sequence smoothing towards test set unit labels. Compared with the official baseline, our proposed approaches could significantly improve unit quality in terms of ABX discriminability. Our system without applying SM achieves INLINEFORM0 and INLINEFORM1 absolute error rate reductions in English and Surprise sets. If SM is applied, while the ABX error rate increases, improvements in all the other evaluation metrics are observed. This implies that for the goal of speech synthesis, there is a trade off between quality and quantity of the learned subword units. Besides, our ABX performance is competitive to, or even better than the supervised topline.", "Our systems do not outperform baseline in terms of synthesis quality. One possible explanation is that our learned subword units are much more fine-grained than those in the baseline AUD, making the baseline TTS less suitable for our AUD system. 
In the future, we plan to investigate on alternative TTS models to take full advantage of our learned subword units." ], [ "ZeroSpeech 2019 BIBREF2 provides untranscribed speech data for two languages. English is used for development while the surprise language (Indonesian) BIBREF25 , BIBREF26 is used for test only. Each language pack consists of training and test sets. The training set consists of a unit discovery dataset for building unsupervised subword models, and a voice dataset for training the TTS system. Details of ZeroSpeech 2019 datasets are listed in Table TABREF13 .", "There are two categories of evaluation metrics in ZeroSpeech 2019. The metrics for text embeddings, e.g. subword unit sequences, BNFs and PGs, are ABX discriminability and bitrate. Bitrate is defined as the amount of information provided in the inferred unit sequences. The metrics for synthesized speech waveforms are character error rate (CER), speaker similarity (SS, 1 to 5, larger is better) and mean opinion score (MOS, 1 to 5, larger is better), all evaluated by native speakers." ], [ "This study tackles robust unsupervised subword modeling in the zero-resource scenario. The robustness towards speaker variation is achieved by combining speaker adversarial training and FHVAE based disentangled speech representation learning. Our proposed approaches are evaluated on ZeroSpeech 2019 and ZeroSpeech 2017. Experimental results on ZeroSpeech 2017 show that both approaches are effective while the latter is more prominent, and that their combination brings further marginal improvement in across-speaker condition. Results on ZeroSpeech 2019 show that our approaches achieve significant ABX error rate reduction to the baseline system. The proposed unit sequence smoothing algorithm improves synthesis quality, at a cost of slight decrease in ABX discriminability." ], [ "This research is partially supported by the Major Program of National Social Science Fund of China (Ref:13&ZD189), a GRF project grant (Ref: CUHK 14227216) from Hong Kong Research Grants Council and a direct grant from CUHK Research Committee." ] ] }
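The bitrate metric mentioned above is commonly approximated as the number of symbols per second multiplied by the empirical symbol entropy; the official ZeroSpeech evaluation script may compute it differently, so the sketch below is only indicative.

```python
import math
from collections import Counter

def approximate_bitrate(unit_sequences, total_duration_seconds):
    """Rough bitrate of discrete unit sequences: empirical symbol entropy
    (in bits) multiplied by the number of symbols per second."""
    counts = Counter(symbol for seq in unit_sequences for symbol in seq)
    n = sum(counts.values())
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return n * entropy / total_duration_seconds
```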
{ "question": [ "What is the performance difference in performance in unsupervised feature learning between adverserial training and FHVAE-based disentangled speech represenation learning?" ], "question_id": [ "64ab2b92e986e0b5058bf4f1758e849f6a41168b" ], "nlp_background": [ "infinity" ], "topic_background": [ "unfamiliar" ], "paper_read": [ "no" ], "search_query": [ "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "0fc687f6d31b9dd5828bd8b28cbef135d1dd1ea7" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: General framework of our proposed approaches", "Figure 2: Average ABX error rates on BNF over 3 languages", "Table 2: Comparison of baseline, topline and our submission", "Figure 3: ABX error rates on unit sequence, PG and BNF with different adversarial weights evaluated on English test set" ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Table2-1.png", "4-Figure3-1.png" ] }
1806.04535
Automatic Target Recovery for Hindi-English Code Mixed Puns
In order for our computer systems to be more human-like, with a higher emotional quotient, they need to be able to process and understand intrinsic human language phenomena like humour. In this paper, we consider a subtype of humour - puns, which are a common type of wordplay-based jokes. In particular, we consider code-mixed puns which have become increasingly mainstream on social media, in informal conversations and advertisements and aim to build a system which can automatically identify the pun location and recover the target of such puns. We first study and classify code-mixed puns into two categories namely intra-sentential and intra-word, and then propose a four-step algorithm to recover the pun targets for puns belonging to the intra-sentential category. Our algorithm uses language models, and phonetic similarity-based features to get the desired results. We test our approach on a small set of code-mixed punning advertisements, and observe that our system is successfully able to recover the targets for 67% of the puns.
{ "section_name": [ "Introduction", "Puns", "Code-mixing", "Methodology", "Classification", "Dataset", "Model", "Results and discussion", "Conclusion and Future work", "Acknowledgements" ], "paragraphs": [ [ "Humour is one of the most complex and intriguing phenomenon of the human language. It exists in various forms, across space and time, in literature and culture, and is a valued part of human interactions. Puns are one of the simplest and most common forms of humour in the English language. They are also one of the most widespread forms of spontaneous humour BIBREF0 and have found their place in casual conversations, literature, online comments, tweets and advertisements BIBREF1 , BIBREF2 . Puns are a hugely versatile and commonly used literary device and it is essential to include them in any comprehensive approach to computational humour.", "In this paper, we consider Hindi-English code-mixed puns and aim to automatically recover their targets. The target of a pun is its phonologically similar counterpart, the relationship to which and whose resolution (recovery) in the mind of the listener/hearer induces humour. For example, in the pun “The life of a patient of hypertension is always at steak.\" the word “steak\" is the pun with target “stake\".", "With India being a diverse linguistic region, there is an ever increasing usage of code-mixed Hindi-English language (along with various others) because bilingualism and even multilingualism are quite common. Consequently, we have also seen an increase in the usage of code-mixed language in online forums, advertisements etc. Code-mixed humour, especially puns have become increasingly popular because being able to use the same punning techniques but with two languages in play has opened up numerous avenues for new and interesting wordplays. With the increasing popularity and acceptance for the usage of code-mixed language, it has become important that computers are also able to process it and even decipher complex phenomena like humour. Traditional Word Sense Disambiguation (WSD) based methods cannot be used in target recovery of code-mixed puns, because they are no longer about multiple senses of a single word but about two words from two different languages. Code-switching comes with no markers, and the punning word may not even be a word in either of the languages being used. Sometimes words from the two languages can be combined to form a word which only a bilingual speaker would understand. Hence, this task on such data calls for a different set of strategies altogether. We approach this problem in two parts. First, we analyze the types of structures in code-mixed puns and classify them into two categories namely intra-sequential and intra-word. Second, we develop a four stage pipeline to achieve our goal - Language Identification, Pun Candidate Identification, Context Lookup and Phonetic Distance Minimization. We then test our approach on a small dataset and note that our method is successfully able to recover targets for a majority of the puns.", "To the best of our knowledge, this is a first attempt at dealing with code-mixed puns. The outline of the paper is as follows: Section 2 gives a brief description of the background and prior work on puns - both in the field of linguistics and in the field of computational humour, along with a brief introduction to the field of code-mixing. 
Section 3 defines our problem statement, our classification model on code-mixed puns, the dataset we use to test our approach, and our proposed model for the task of automatic target recovery of Hindi-English code-mixed puns. In Section 4, we analyse the performance of our model on a set of puns, and discuss the various error cases. Finally, we conclude in Section 5 with a review of our research contributions and an outline of our plans for future work." ], [ "Puns are a form of wordplay jokes in which one sign (e.g. a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect BIBREF3 . Puns where the two meanings share the same pronunciation are known as homophonic or perfect puns, while those relying on similar but non-identical sounding words are known as heterophonic BIBREF4 or imperfect puns BIBREF5 . In this paper, we study automatic target recoverability of English-Hindi code mixed puns - which are more commonly imperfect puns, but may also be perfect puns in some cases.", "Zwicky and Zwicky zwicky1986imperfect, Sobkowiak sobkowiak1991metaphonology extensively studied various phonological variations in imperfect puns such as strong asymmetry in phoneme substitution. They note that puns show more frequent changes in vowels than in consonants because of their smaller role in target recoverability.", "Puns have received attention in the field of computational humour, both in generation of puns and their understanding.", "Generation: One of the earliest attempts at generating humour was by Lessard and Levin lessard1992computational, when they built an antonym-based system to generate Tom Swifties. Since then, we have seen various other attempts at the task with different strategies. JAPE was a system which exploited framing and phonetic relationships to automatically generate funny punning riddles, or more specifically phonologically ambiguous riddles, having noun phrase punchlines BIBREF6 . Venour venour1999computational built a system which generated HCPPs (Homonym Common Phrase Pun), simple 2 sentence puns based on associations between words occurring in common phrases. WisCraic was a system built by McKay mckay2002generation, which generated simple one-sentence puns based on semantic associations of words. Valitutti et al. valitutti2008textual attempted to automatically generate advertisements by punning on familiar expressions, with an affective connotation.", "Identification and understanding: Hempelmann hempelmann2003paronomasic studied target recoverability, arguing that a good model for it provides necessary groundwork for effective automatic pun generation. He worked on a theory which models prominent factors in punning such as phonological similarity and studied how these measures could be used to evaluate possible imperfect puns given an input word and a set of target words.", "Yokogawa yokogawa2002japanese analyzed ungrammatical Japanese puns and generated target candidates by replacing ungrammatical parts of the sentence by similar expressions. Taylor and Mazlack taylor2004computationally worked on computational recognition of word-play in the restricted domain of Knock-Knock jokes. Jaech et al. jaech2016phonological developed a computational model for target recovery of puns using techniques for automatic speech recognition, and learned phone edit probabilities in puns. 
Miller and Gurevych Miller2015AutomaticDO, Miller et al.miller2017semeval describe different methods on pun identification and disambiguation. Word Sense Disambiguation (WSD) based techniques are most common among the methods used.", "To the best of our knowledge no prior work has been attempted on code-mixed puns." ], [ "Code-mixing is the mixing of two or more languages or language varieties. Code-mixing is now recognized as a natural part of bilingual and multilingual language use. Significant linguistic efforts have been made to understand the sociological and conversational necessity behind code-switching BIBREF7 ; for example, to understand whether it is an act of identity in a social group, or a consequence of a lack of competence in either of the languages. These papers distinguish between inter-sentence, intra-sentence and intra-word code mixing.", "Different types of language mixing phenomena have been discussed and defined by several linguists, with some making clear distinctions between phenomena based on certain criteria, while others use `code-mixing’ or `code-switching’ as umbrella terms to include any type of language mixing — see, e.g., Muysken muysken1995code or Gafaranga and Torras gafaranga2002interactional. In this paper, we use both these terms ‘code-mixing’ and `code-switching' interchangeably.", "Coming to the work on automatic analysis of code-mixed languages, there have been studies on detecting code mixing in spoken language as well as different types of short texts, such as information retrieval queries BIBREF8 , SMS messages BIBREF9 , BIBREF10 , social media data BIBREF11 and online conversations BIBREF12 . These scholars have carried out experiments for the task of language identification using language models, dictionaries, logistic regression classification, Conditional Random Fields, SVMs, and noted that approaches using contextual knowledge were most robust. King and Abney king2013labeling used weakly semi-supervised methods to perform word-level language identification.", "We however, use a dictionary based approach for the language identification task. While working with puns, ambiguity in language identification can be an important marker for identifying the pun, so it is more important for us to recognize all possible ambiguities rather than picking just one depending on probabilities. This ability to recognize ambiguities, and the simplicity of a dictionary-based language identification model makes it suited for this task." ], [ "We focus on the task of automatically disambiguating or recovering Hindi-English code mixed puns. For this purpose, it is first necessary to understand what these puns are." ], [ "For the purposes of this research, we only consider puns where the ambiguity or the wordplay lies in the code-switching i.e, the pun word and its target are from different languages. For example the pun \"Rivers can't hear because woh behri hoti hai.\" is a sentence with the pun being behri (meaning deaf) and its target being beh rahi (meaning flowing). Here, while the sentence is code-mixed, the pun word and the target both belong to the same language. We do not consider such puns for the present study.", "We analyze the structure of code-mixed puns with the pun word and its target belonging to different languages and propose two broad categories to classify them in - puns where the code-mixing is intra-sentential and the other where it is intra-word. 
Both these categories are explained below, while we evaluate only on the former category.", "Intra-sentential code-mixing is where code-switching occurs within a sentence. Here, the language varies at the word level. Also, each word of the sentence belongs to one or the other language. Table 1 gives examples of puns belonging to this category.", "In this category, code mixing is present within a word. New words are formed using Portmanteau or Blending where two or more syllables/phonemes from different languages are blended together to form a single word, resulting in a word which is phonetically similar to the target word. Table 2 illustrates examples of intra-word code-mixed puns." ], [ "Most puns we hear or use in everyday conversations are rarely recorded. One of the most common resources to find recorded puns are advertisements, for example the highly creative and frequently released Amul advertisements in India BIBREF1 . Most of these are contextually integrated BIBREF0 with an image. While such puns may lose their humour out of context, it is still possible to recover their targets, so using these does not affect our task in any way", "To create a dataset to test our model on, we collected 518 advertisements released by Amul in the years 2014, 2015, 2017 and 2018, from their official web page. Of these, 333 were puns, including 121 code-mixed puns as defined in Section 3.1. We extracted the text of these 121 code-mixed puns and asked 3 people to disambiguate them, given just the advertisement text. All three annotators were university students in 22-23 years age group, native Hindi speakers with bilingual fluency in English. The annotators were asked to identify the location of the pun in each of the advertisements and write down the target of the pun. Any disagreements between annotators were resolved by mutual discussion.", "In a few cases where puns were identified to have multiple targets, we kept all such possibilities in our dataset. A few puns were identified to be non-recoverable because of the lack of contextual knowledge, while a few puns had multiple pun locations. We removed both these types from our dataset, which left us with 110 puns.", "Finally, we divided these 110 annotated puns into the two categories as defined in Section 3.1 thereby getting 51 advertisements categorized as intra-sentential code-mixed puns, and the rest as intra-word code-mixed puns. We use the former as our test data." ], [ "For preprocessing the text we give as input to our system, we first tokenize the advertisement text using NLTK's BIBREF13 tokenizer and remove all punctuations. We then give the resultant tokens as input to our model, which is a 4 step process as described below:", "At this step, we aim to identify the language of each of the tokens in the input text by classifying them into one of the 5 categories: English, Hindi, Named Entity (NE), Out of Vocabulary (OOV), or Ambiguous (words that could belong to both English and Hindi).", "We use a dictionary-based lookup method to classify a word in English or Hindi. Since the input is in Roman script, to recognize Hindi words, we use a list of 30k transliterated Hindi words in Roman to their Devanagari counterparts BIBREF14 . For the English language, we collected news data from the archives of a leading Indian Newspaper, The Hindu. Data from 2012-2018 under the tags National, International, Sports, Cinema, Television was collected, amounting to 12,600 articles with 200k sentences and around 38k unique words. 
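A minimal sketch of this dictionary-based token labelling is shown below. The dictionary arguments are placeholders for the Hindi transliteration list and the English and Named Entity dictionaries (the latter two are built from the news data as described in the next paragraph), and the precedence given to Named Entities is our assumption.

```python
def classify_token(token, english_dict, hindi_dict, ne_dict):
    """Label a Romanised token as NE, English, Hindi, Ambiguous or OOV
    using simple dictionary lookups."""
    token = token.lower()
    if token in ne_dict:                      # assumed precedence for Named Entities
        return "NE"
    in_en, in_hi = token in english_dict, token in hindi_dict
    if in_en and in_hi:
        return "Ambiguous"
    if in_en:
        return "English"
    if in_hi:
        return "Hindi"
    return "OOV"

def label_utterance(tokens, english_dict, hindi_dict, ne_dict):
    return [(t, classify_token(t, english_dict, hindi_dict, ne_dict)) for t in tokens]
```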
We use this data to build an English dictionary. Also, we used NLTK's BIBREF13 Named Entity Recognition module on the same data to get a dictionary of Named Entities.", "We first try to classify all tokens as English, Hindi and NE using these dictionaries. Then, words which are found in both English and Hindi are marked as Ambiguous. The words which do not fall into any of these are classified as OOV.", "We now identify all possible punning locations in the text. For this, we consider words on the boundaries of language change as candidates for pun locations. Second, all NEs and OOV words are added to the list of pun candidates as well. Third, if any Ambiguous words exist in the text, we consider each of them once as English and once as Hindi for the next steps.", "In this step, we contextually look up all the candidate locations using the left context and right context to get a list of all words that may occur at that position. We use bi-gram language models built with Kneser-Ney smoothing BIBREF15 . We used the data mentioned in the previous step and 100k sentences of Hindi monolingual data from BIBREF16 to build the language models for English and Hindi respectively. As it is highly likely that the left and the right context at a pun location belong to different languages, we look at each of those separately instead of taking an intersection of the left and the right context.", "Lastly, at each pun location, we calculate the similarity of the word at that location with all the words that can occur at that location depending on the context, and pick the most similar words as the possible targets.", "To compare words belonging to two different languages on a phonetic basis, we convert both of them to WX notation BIBREF17 , which denotes a standard way to represent Indian languages in the Roman script. We transliterate our identified Hindi words from Devanagari to WX notation. To convert English words to the same notation, we use the CMU phonetic dictionary, which uses a 39-phoneme set to represent North American pronunciations of English words. We build a mapping between this phoneme set and WX notation. Whenever there was no exact parallel between the CMU pronouncing dictionary's notation and WX, we used the word's Indian English pronunciation to find the closest match.", "Once we converted all words to WX notation, we use a modified version of Levenshtein distance BIBREF18 to find the most similar words. In this normalized version of Levenshtein distance, we account for a few features like aspirations (for example, /p/, /ph/) which are non-phonemic in English, vowel elongations, rhyme, and same beginning or ending sounds.", "In the case of an OOV word, since it cannot be converted to WX notation due to the non-availability of a phonetic transcription, we simply find the words with the least orthographic distance when written in Roman script, using a similar measure as used for phonetic distance with a few more normalizations (for example, considering 'w' and 'v' as similar)." ], [ "We test the model explained in the previous section on our test dataset described in Section 3.2 and note that this method is correctly able to recover targets for 34 out of these 51 puns, or around 67% of the puns, which are very encouraging results for this complex task. Examples where the system performed successfully are given in Table 3 .", "We do a thorough error analysis below for the cases our method fails for." 
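For the final phonetic distance minimisation step, the sketch below ranks context-predicted candidates by a length-normalised Levenshtein distance over WX strings. The paper's modified distance additionally accounts for aspiration, vowel elongation, rhyme and shared beginnings or endings; those normalisations are not reproduced here, so this is a simplified illustration with our own function names.

```python
def edit_distance(a, b):
    """Plain Levenshtein distance between two phoneme strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def rank_targets(pun_word_wx, candidates_wx, top_n=3):
    """Rank candidate words (all in WX notation) by normalised distance to the pun word."""
    scored = [(edit_distance(pun_word_wx, c) / max(len(pun_word_wx), len(c), 1), c)
              for c in candidates_wx]
    return [c for _, c in sorted(scored)[:top_n]]
```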
], [ "To conclude, in this paper, we present a first-ever work on target recovery for code-mixed puns. We study various puns where the word-play is a result of code-switching, and classify them into 2 categories - puns with intra-sentential code mixing and those with intra-word code mixing. We then propose a methodology to recover the targets for puns belonging to the former category, using only monolingual language data. We test our proposed approach on a small manually annotated dataset, and we see that our system was able to successfully recover 67% of the puns from the set.", "In the future, we want to perform a more comprehensive evaluation of this approach on a larger, more diverse set of puns. We want to improve and extend our approach to be able to recover intra-word code-mixed puns along with the intra-sentential ones that it handles right now. After that, the system should be extended to be able to recover all kinds of puns in code-mixed language, regardless of whether the pun itself is monolingual or code-mixed." ], [ "We thank the anonymous reviewers for their comments that helped improve this paper." ] ] }
{ "question": [ "What are puns?", "What are the categories of code-mixed puns?" ], "question_id": [ "bcd6befa65cab3ffa6334c8ecedd065a4161028b", "479fc9e6d6d80e69f425d9e82e618e6b7cd12764" ], "nlp_background": [ "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no" ], "search_query": [ "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "a form of wordplay jokes in which one sign (e.g. a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Puns are a form of wordplay jokes in which one sign (e.g. a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect BIBREF3 . Puns where the two meanings share the same pronunciation are known as homophonic or perfect puns, while those relying on similar but non-identical sounding words are known as heterophonic BIBREF4 or imperfect puns BIBREF5 . In this paper, we study automatic target recoverability of English-Hindi code mixed puns - which are more commonly imperfect puns, but may also be perfect puns in some cases." ], "highlighted_evidence": [ "Puns are a form of wordplay jokes in which one sign (e.g. a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect BIBREF3 ." ] } ], "annotation_id": [ "eed1806ed0ea6052a8ea8a587cdfb94a67a97256" ], "worker_id": [ "7fa8d8b1eb8a1630feb99a8e11ebfa501ac5bc3c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "intra-sequential and intra-word" ], "yes_no": null, "free_form_answer": "", "evidence": [ "With India being a diverse linguistic region, there is an ever increasing usage of code-mixed Hindi-English language (along with various others) because bilingualism and even multilingualism are quite common. Consequently, we have also seen an increase in the usage of code-mixed language in online forums, advertisements etc. Code-mixed humour, especially puns have become increasingly popular because being able to use the same punning techniques but with two languages in play has opened up numerous avenues for new and interesting wordplays. With the increasing popularity and acceptance for the usage of code-mixed language, it has become important that computers are also able to process it and even decipher complex phenomena like humour. Traditional Word Sense Disambiguation (WSD) based methods cannot be used in target recovery of code-mixed puns, because they are no longer about multiple senses of a single word but about two words from two different languages. Code-switching comes with no markers, and the punning word may not even be a word in either of the languages being used. Sometimes words from the two languages can be combined to form a word which only a bilingual speaker would understand. Hence, this task on such data calls for a different set of strategies altogether. We approach this problem in two parts. First, we analyze the types of structures in code-mixed puns and classify them into two categories namely intra-sequential and intra-word. 
Second, we develop a four stage pipeline to achieve our goal - Language Identification, Pun Candidate Identification, Context Lookup and Phonetic Distance Minimization. We then test our approach on a small dataset and note that our method is successfully able to recover targets for a majority of the puns." ], "highlighted_evidence": [ " First, we analyze the types of structures in code-mixed puns and classify them into two categories namely intra-sequential and intra-word. " ] } ], "annotation_id": [ "0feacb5d838410ce6d4eaba2f4a93f35423da33a" ], "worker_id": [ "35491e1e579f6d147f4793edce4c1a80ab2410e7" ] } ] }
{ "caption": [ "Table 2: Examples of intra-word code-mixed puns", "Table 1: Examples of intra-sentential code-mixed puns", "Figure 1: This figure illustrates, taking Pun1 as example, our model and the 4 major steps it comprises: 1. Language Identification, 2. Identification of Candidate Pun Locations, 3. Context Lookup and 4. Phonetic Distance minimization.", "Table 3: Examples of puns successfully recovered by our system", "Table 5: Example for error case 2, where the pun is based on the pronunciation of an abbreviation.", "Table 6: Example for error case 3, where the target does not exist in the language model." ], "file": [ "3-Table2-1.png", "3-Table1-1.png", "4-Figure1-1.png", "4-Table3-1.png", "5-Table5-1.png", "5-Table6-1.png" ] }
2003.05995
CRWIZ: A Framework for Crowdsourcing Real-Time Wizard-of-Oz Dialogues
Large corpora of task-based and open-domain conversational dialogues are hugely valuable in the field of data-driven dialogue systems. Crowdsourcing platforms, such as Amazon Mechanical Turk, have been an effective method for collecting such large amounts of data. However, difficulties arise when task-based dialogues require expert domain knowledge or rapid access to domain-relevant information, such as databases for tourism. This will become even more prevalent as dialogue systems become increasingly ambitious, expanding into tasks with high levels of complexity that require collaboration and forward planning, such as in our domain of emergency response. In this paper, we propose CRWIZ: a framework for collecting real-time Wizard of Oz dialogues through crowdsourcing for collaborative, complex tasks. This framework uses semi-guided dialogue to avoid interactions that breach procedures and processes only known to experts, while enabling the capture of a wide variety of interactions. The framework is available at https://github.com/JChiyah/crwiz
{ "section_name": [ "Introduction", "Related Work", "System Overview", "Data Collection", "Data Collection ::: Implementation", "Data Collection ::: Deployment", "Data Analysis", "Data Analysis ::: Subjective Data", "Data Analysis ::: Single vs Multiple Wizards", "Data Analysis ::: Limitations", "Data Analysis ::: Future Work", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "Recent machine learning breakthroughs in dialogue systems and their respective components have been made possible by training on publicly available large scale datasets, such as ConvAI BIBREF0, bAbI BIBREF1 and MultiWoZ BIBREF2, many of which are collected on crowdsourcing services, such as Amazon Mechanical Turk and Figure-eight. These data collection methods have the benefits of being cost-effective, time-efficient to collect and scalable, enabling the collection of large numbers of dialogues.", "Where this crowdsourcing method has its limitations is when specific domain expert knowledge is required, rather than general conversation. These tasks include, for example, call centre agents BIBREF3 or clerks with access to a database, as is required for tourism information and booking BIBREF2. In the near future, there will be a demand to extend this to workplace-specific tasks and procedures. Therefore, a method of gathering crowdsourced dialogue data is needed that ensures compliance with such procedures, whilst providing coverage of a wide variety of dialogue phenomena that could be observed in deployment of a trained dialogue system.", "Wizard-of-Oz data collections in the past have provided such a mechanism. However, these have traditionally not been scalable because of the scarcity of Wizard experts or the expense to train up workers. This was the situation with an initial study reported in BIBREF4, which was conducted in a traditional lab setting and where the Wizard (an academic researcher) had to learn, through training and reading manuals, how best to perform operations in our domain of emergency response.", "We present the CRWIZ Intelligent Wizard Interface that enables a crowdsourced Wizard to make intelligent, relevant choices without such intensive training by providing a restricted list of valid and relevant dialogue task actions, which changes dynamically based on the context, as the interaction evolves.", "Prior crowdsourced wizarded data collections have divided the dialogue up into turns and each worker's job consists of one turn utterance generation given a static dialogue context, as in the MultiWoZ dataset BIBREF2. However, this can limit naturalness of the dialogues by restricting forward planning, collaboration and use of memory that humans use for complex multi-stage tasks in a shared dynamic environment/context.", "Our scenario is such a complex task. Specifically, our scenario relates to using robotics and autonomous systems on an offshore energy platform to resolve an emergency and is part of the EPSRC ORCA Hub project BIBREF5. The ORCA Hub vision is to use teams of robots and autonomous intelligent systems to work on offshore energy platforms to enable cheaper, safer and more efficient working practices. An important part of this is ensuring safety of robots in complex, dynamic and cluttered environments, co-operating with remote operators. With this data collection method reported here, we aim to automate a conversational Intelligent Assistant (Fred), who acts as an intermediary between the operator and the multiple robotic systems BIBREF6, BIBREF7. 
Emergency response is clearly a high-stakes situation, which is difficult to emulate in a lab or crowdsourced data collection environment. Therefore, in order to foster engagement and collaboration, the scenario was gamified with a monetary reward given for task success.", "In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction. We then perform a data collection and compare our dataset to a similar dataset collected in a more controlled lab setting with a single Wizard BIBREF4 and discuss the advantages/disadvantages of both approaches. Finally, we present future work. Our contributions are as follows:", "The release of a platform for the CRWIZ Intelligent Wizard Interface to allow for the collection of dialogue data for longer complex tasks, by providing a dynamic selection of relevant dialogue acts.", "A survey of existing datasets and data collection platforms, with a comparison to the CRWIZ data collection for Wizarded crowdsourced data in task-based interactions." ], [ "Table TABREF3 gives an overview of prior work and datasets. We report various factors to compare to the CRWIZ dataset corresponding to columns in Table TABREF3: whether or not the person was aware they were talking to a bot; whether each dialogue had a single or multiple participants per role; whether the data collection was crowdsourced; and the modality of the interaction and the domain. As we see from the bottom row, none of the datasets reported in the table meet all the criteria we are aiming for, exemplifying the need for a novel approach.", "Collecting large amounts of dialogue data can be very challenging as two interlocutors are required to create a conversation. If one of the partners in the conversation is a machine as in BIBREF0, the challenge becomes slightly easier since only one partner is lacking. However, in most cases these datasets are aimed at creating resources to train the conversational system itself. Self-authoring the dialogues BIBREF16 or artificially creating data BIBREF1 could be a solution to rapidly collect data, but this solution has been shown to produce low-quality, unnatural data BIBREF17.", "One way to mitigate the necessity of pairing two users simultaneously is to allow several participants to contribute to the dialogue, one turn at a time. This approach has been used both in task-oriented settings BIBREF10, BIBREF2, BIBREF9 and in chitchat BIBREF17. This means that the same dialogue can be authored by several participants. However, this raises issues in terms of coherence and forward-planning. These can be addressed by carefully designing the data collection to provide the maximum amount of information to the participants (e.g. providing the task, personality traits of the bot, goals, etc.) but then this adds to cognitive load, time, cost and participant fatigue.", "Pairing is a valid option, which has been used in a number of recent data collections in various domains, such as navigating in a city BIBREF13, playing a negotiation game BIBREF14, talking about a person BIBREF18, playing an image game BIBREF8 or having a chat about a particular image that is shown to both participants BIBREF21, BIBREF22. Pairing frameworks such as Slurk BIBREF23 exist.
Besides its pairing management feature, Slurk is designed to allow researchers to modify it and implement their own data collection rapidly.", "The scenarios for the above-mentioned data collections are mostly intuitive tasks that humans do quite regularly, unlike our use-case scenario of emergency response. Role playing is one option. For example, recent work has tried to create datasets for non-collaborative scenarios BIBREF24, BIBREF25, requesting participants to take on a particular role during the data collection. This is particularly challenging when the recruitment is done via a crowdsourcing platform. In BIBREF25, the motivation for the workers to play the role is intrinsic to the scenario. In this data collection, one of the participants tries to persuade their partner to contribute to a charity with a certain amount of money. As a result of their dialogue, the money that the persuadee committed to donate was actually donated to a charity organisation. However, for scenarios such as ours, the role playing requires a certain expertise and it is questionable whether the desired behaviour would be achieved simply by letting two non-experts converse with free text.", "Therefore, in recent data collections, there have been a number of attempts to control the data quality in order to produce a desired behaviour. For example, in BIBREF15, the data collection was done with a limited number of subjects who performed the task several days in a row, behaving both as the Wizard and the customer of a travel agency. The same idea was followed in BIBREF12, where a number of participants took part in the data collection over a period of 6 months, and in BIBREF3, BIBREF19, where a limited number of subjects were trained to be the Wizard. This quality control, however, naturally comes with the cost of recruiting and paying these subjects accordingly.", "The solution we propose in this paper tries to minimise these costs by increasing the pool of Wizards to anyone wanting to collaborate in the data collection, by providing them with the necessary guidance to generate the desired dialogue behaviour. This is a valuable solution for collecting dialogues in domains where specific expertise is required and the cost of training capable Wizards is high. We required fine-grained control over the Wizard interface so as to be able to generate more directed dialogues for specialised domains, such as emergency response for offshore facilities. By providing the Wizard with several dialogue options (aside from free text), we guided the conversation and could introduce actions that change an internal system state. This offers several advantages:", "A guided dialogue allows for set procedures to be learned and reduces the amount of data needed for a machine learning model for dialogue management to converge.", "Providing several dialogue options to the Wizard increases the pace of the interaction and allows them to understand and navigate more complex scenarios." ], [ "The CRWIZ Intelligent Wizard Interface resides on Slurk BIBREF23, an interaction server built for conducting dialogue experiments and data collections. Slurk handles the pairing of participants and provides a basic chat layout amongst other features. Refer to BIBREF23 for more information on the pairing of participants and the original chat layout. Our chat layout remains similar to Slurk with an important difference.
In our scenario, we assign each new participant a role (Operator or Wizard) and, depending on this role, the participant sees different game instructions and chat layout schemes. These are illustrated in Figures FIGREF8 and FIGREF11, for the Operator and Wizard respectively. The main components are described in turn below: 1) The Intelligent Wizard Interface; 2) dialogue structure; and 3) system-changing actions.", "Wizard interface: the interface shown to participants with the Wizard role provides possible actions on the right-hand side of the browser window. These actions could be verbal, such as sending a message, or non-verbal, such as switching on/off a button to activate a robot. Figure FIGREF11 shows this interface with several actions available to be used in our data collection.", "Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.", "The CRWIZ framework is domain-agnostic, but the data collected with it corresponds to the emergency response domain.", "System-changing actions: actions trigger transitions between the states in the FSM. We differentiate two types of actions:", "Verbal actions, such as the dialogue options available at that moment. The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions.", "Non-verbal actions, such as commands to trigger events. These can take any form, but we used buttons to control robots in our data collection.", "Submitting an action would change the dialogue state in the FSM, altering the set of actions available in the subsequent turn visible to the Wizard. Some dialogue options are only possible at certain states, in a similar way as to how non-verbal actions are enabled or disabled depending on the state. This is reflected in the Wizard interface.", "The advantage of the CRWIZ framework is that it can easily be adapted to different domains and procedures by simply modifying the dialogue states loaded at initialisation. These files are in YAML format and have a simple structure that defines their NLG templates (the FSM will pick one template at random if there is more than one) and the states that it can transition to. Note, that some further modifications may be necessary if the scenario is a slot-filling dialogue requiring specific information at various stages.", "Once the dialogue between the participants finishes, they receive a code in the chat, which can then be submitted to the crowdsourcing platform for payment. The CRWIZ framework generates a JSON file in its log folder with all the information regarding the dialogue, including messages sent, FSM transitions, world state at each action, etc. Automatic evaluation metrics and annotations are also appended such as number of turns per participant, time taken or if one of the participants disconnected. 
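To make the dialogue structure described above more concrete, the sketch below shows how a small FSM defined in a YAML-like file could drive the options offered to the Wizard. It is a minimal illustration written for this description, not the released CRWIZ code: the state names, NLG templates and field names (utterances, transitions) are assumptions.

```python
# Minimal sketch of an FSM-driven Wizard interface; NOT the released CRWIZ code.
# State names, templates and the YAML field names below are illustrative assumptions.
import random
import yaml  # PyYAML

DIALOGUE_STATES = yaml.safe_load("""
greeting:
  utterances:
    - "Hello, I am Fred, your emergency assistant."
    - "Hi, Fred here. An alarm has just gone off."
  transitions: [propose_inspection]
propose_inspection:
  utterances:
    - "Shall I send a robot to inspect the area?"
    - "Which robot do you want to send?"
  transitions: [robot_dispatched]
robot_dispatched:
  utterances:
    - "The robot is on its way to the affected area."
  transitions: []
""")


class WizardFSM:
    def __init__(self, states, start="greeting"):
        self.states = states
        self.current = start

    def available_actions(self):
        # Only transitions that are valid in the current state are shown to the Wizard.
        return list(self.states[self.current]["transitions"])

    def take_action(self, action):
        if action not in self.available_actions():
            raise ValueError(f"{action!r} is not allowed in state {self.current!r}")
        self.current = action
        # Pick one NLG template at random when several are defined for the new state.
        return random.choice(self.states[self.current]["utterances"])


fsm = WizardFSM(DIALOGUE_STATES)
print(fsm.available_actions())                 # ['propose_inspection']
print(fsm.take_action("propose_inspection"))   # one of the two templates above
```

In such a sketch, free-text messages would simply not call take_action, leaving the dialogue state unchanged, which mirrors the behaviour described above.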
Paying the crowdworkers can be done by simply checking that there is a dialogue file with the token that they entered." ], [ "We set up a crowdsourced data collection through Amazon Mechanical Turk, in which two participants chatted with each other in a setting involving an emergency at an offshore facility. As mentioned above, participants had different roles during the interaction: one of them was an Operator of the offshore facility whereas the other one acted as an Intelligent Emergency Assistant. Both of them had the same goal of resolving the emergency and avoiding evacuation at all costs, but they had different functions in the task:", "The Operator was responsible for the facility and had to give instructions to the Emergency Assistant to perform certain actions, such as deploying emergency robots. Participants in the role of Operator were able to chat freely with no restrictions and were additionally given a map of the facility and a list of available robots (see Figure FIGREF8).", "The Emergency Assistant had to help the Operator handle the emergency by providing guidance and executing actions. Participants in the role of Emergency Assistant had predefined messages depending on the task progress. They had to choose one of the available options, depending on which made sense at the time, but they also had the option to write their own message if necessary. The Emergency Assistant role mimics that of the Wizard in a Wizard-of-Oz experiment (see Figure FIGREF11).", "The participants had a limited time of 6 minutes to resolve the emergency, which consisted of the following sub-tasks: 1) identify and locate the emergency; 2) resolve the emergency; and 3) assess the damage caused. They had four robots available to use with different capabilities: two ground robots with wheels (Husky) and two Quadcopter UAVs (Unmanned Aerial Vehicles). For images of these robots, see Figure FIGREF8. Some robots could inspect areas whereas others were capable of activating hoses or sprinklers, or opening valves. Both participants, regardless of their role, had a list of the available robots and their capabilities, but only the Emergency Assistant could control them. This control was through high-level actions (e.g. moving a robot to an area, or ordering the robot to inspect it) that the Emergency Assistant had available as buttons in their interface, as shown in Figure FIGREF11. For safety reasons, mirroring real-world operating constraints, only one robot could be actively executing an action at any time. The combinations of robots and capabilities meant that no single robot could perform all three steps of the task mentioned earlier (inspect, resolve and assess damage), but the robots could be used in any order, allowing for a variety of ways to resolve the emergency.", "Participants would progress through the task when certain events were triggered by the Emergency Assistant. For instance, inspecting the area affected by an alarm would trigger the detection of the emergency. After locating the emergency, other dialogue options and commands would open up for the Emergency Assistant. In order to give importance to the milestones in the dialogue, these events were also signalled by GIFs (short animated video snippets) in the chat that both participants could see (e.g. a robot finding a fire), as in Figure FIGREF12. The GIFs were added for several reasons: to increase participant engagement and situation awareness, to aid in the game and to show progress visually.
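The robot set-up and the one-robot-at-a-time rule described above can be summarised in a short sketch. The exact capability assignment per robot below is an assumption (the text only states that no single robot covers inspecting, resolving and assessing), and the dispatch/finish_action functions are hypothetical, not part of the CRWIZ platform.

```python
# Illustrative sketch of the scenario configuration; not the CRWIZ source code.
# The capability split between robots is assumed, not taken from the paper.
ROBOTS = {
    "Husky 1":      {"type": "ground", "capabilities": {"inspect", "assess_damage"}},
    "Husky 2":      {"type": "ground", "capabilities": {"activate_hose", "open_valve"}},
    "Quadcopter 1": {"type": "uav",    "capabilities": {"inspect"}},
    "Quadcopter 2": {"type": "uav",    "capabilities": {"assess_damage"}},
}

active_robot = None  # only one robot may be executing an action at any time


def dispatch(robot, capability):
    """Start an action if the robot supports it and no other robot is busy."""
    global active_robot
    if active_robot is not None:
        return False  # another robot is still carrying out an action
    if capability not in ROBOTS[robot]["capabilities"]:
        return False  # this robot cannot perform the requested action
    active_robot = robot
    return True


def finish_action():
    global active_robot
    active_robot = None


assert dispatch("Husky 1", "inspect")
assert not dispatch("Quadcopter 1", "inspect")  # blocked: Husky 1 is still active
finish_action()
```

Checks of this kind loosely correspond to the robot buttons being enabled or disabled in the Wizard interface, as described earlier.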
Note that there were no visual stimuli in the original WoZ study BIBREF4, but they were deemed necessary here to help the remote participants contextualise the scenario. These GIFs were produced using a Digital Twin simulation of the offshore facility with the various types of robots. See BIBREF26 for details on the Digital Twin." ], [ "The dialogue structure for the Emergency Assistant (the Wizard) followed the dialogue flow previously used for the original lab-based Wizard-of-Oz study BIBREF4, slightly modified and simplified for this crowdsourced data collection. In addition to the transitions that the FSM provides, there are other fixed dialogue options always available such as “Hold on, 2 seconds”, “Okay” or “Sorry, can you repeat that?” as a shortcut for commonly used dialogue acts, as well as the option to type a message freely.", "The dialogue has several paths to reach the same states, with varying levels of Operator control or engagement, which enriched the heterogeneity of conversations. The Emergency Assistant dialogue options show various speaking styles, with a more assertive tone (“I am sending Husky 1 to east tower”) or others with more collaborative connotations (“Which robot do you want to send?” or “Husky 1 is available to send to east tower”). Refer to BIBREF4 for more details. Furthermore, neither participant was restricted in the number of messages that they could send and we did not require a balanced number of turns between them. However, there were several dialogue transitions that required an answer or authorisation from the Operator, so the FSM would lock the dialogue state until the condition was met. As mentioned earlier, the commands to control the robots are also transitions of the FSM, so they were not always available.", "The Emergency Assistant interface contains a button to request a hint if the Wizard gets stuck at any point of the conversation. This hint mechanism, when activated, highlights one of the possible dialogue options or robot buttons. The highlighted transition was based on the observed probability distribution of transitions from BIBREF4, to encourage more collaborative interaction rather than a single straightforward answer.", "As in the real world, robot actions during the task were simulated to take a certain period of time, depending on the robot executing it and the action. The Emergency Assistant had the option to give status updates and progress reports during this period. Several dialogue options were available for the Emergency Assistant whilst waiting. The time that robots would take to perform actions was based on simulations run on a Digital Twin of the offshore facility implemented in Gazebo BIBREF26. Specifically, we pre-simulated typical robot actions, with the robot's progress and position reflected in the Wizard interface with up-to-date dialogue options for the Emergency Assistant. Once the robot signals the end of its action, additional updated dialogue options and actions are available for the Emergency Assistant. This simulation allowed us to collect dialogues with a realistic embedded world state." ], [ "We used Amazon Mechanical Turk (AMT) for the data collection. We framed the task as a game to encourage engagement and interaction.
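As an aside on the hint mechanism described above, the following sketch shows one plausible way to sample the highlighted option from an observed transition distribution. The option names and probabilities are invented for illustration; the actual distribution in CRWIZ comes from the lab-based study BIBREF4, and the paper does not specify the exact selection rule.

```python
# Sketch of the hint mechanism: highlight one currently available option, sampled in
# proportion to how often that transition was observed in the lab-based WoZ data.
# The option names and probabilities below are placeholders, not the real estimates.
import random


def pick_hint(available_options, observed_probs, floor=0.01):
    """Sample one available option, weighted by its observed transition probability."""
    weights = [observed_probs.get(opt, floor) for opt in available_options]
    return random.choices(available_options, weights=weights, k=1)[0]


observed_probs = {
    "ask_which_robot": 0.45,
    "send_robot_directly": 0.25,
    "give_status_update": 0.30,
}
available = ["ask_which_robot", "send_robot_directly", "give_status_update"]
print(pick_hint(available, observed_probs))
```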
The whole task (a Human Intelligence Task (HIT) in AMT) consisted of the following:", "Reading an initial brief set of instructions for the overall task.", "Waiting for a partner for a few seconds before being able to start the dialogue.", "When a partner was found, they were shown the instructions for their assigned role. As these were different, we ensured that they both took around the same time. The instructions had both a text component and a video explaining how to play, select dialogues, robots, etc.", "Playing the game to resolve the emergency. This part was limited to 6 minutes.", "Filling in a post-task questionnaire about partner collaboration and task ease.", "The participants received a game token after finishing the game that would allow them to complete the questionnaire and submit the task. This token helped us link their dialogue to the responses from the questionnaire.", "Several initial pilots helped to define the total time required as 10 minutes for all the steps above. We set the HIT in AMT to last 20 minutes to allow additional time should any issues arise. The pilots also helped set the payment for the workers. Initially, participants were paid a flat amount of $1.4 per dialogue. However, we found that offering a tiered payment tied to the length of the dialogue and a bonus for completing the task was the most successful and cost-effective method to foster engagement and conversation:", "$0.5 as base for attempting the HIT, reading the instructions and completing the questionnaire.", "$0.15 per minute during the game, for a maximum of $0.9 for the 6 minutes.", "$0.2 additional bonus if the participants were able to successfully avoid the evacuation of the offshore facility.", "The pay per worker was therefore $1.4 for completing a whole dialogue and $1.6 for those who resolved the emergency, for a 10-minute HIT. This pay is above the Federal minimum wage in the US ($7.25/hr or $0.12/min) at the time of the experiment.", "The post-task questionnaire had four questions rated on 7-point rating scales that are loosely based on the PARADISE BIBREF27 questions for spoken dialogue systems:", "Partner collaboration: “How helpful was your partner?” on a scale of 1 (not helpful at all) to 7 (very helpful).", "Information ease: “In this conversation, was it easy to get the information that I needed?” on a scale of 1 (no, not at all) to 7 (yes, completely).", "Task ease: “How easy was the task?” on a scale of 1 (very easy) to 7 (very difficult).", "User expertise: “In this conversation, did you know what you could say or do at each point of the dialog?” on a scale of 1 (no, not at all) to 7 (yes, completely).", "At the end, there was also an optional entry to give free text feedback about the task and/or their partner." ], [ "For the initial data collection using the CRWIZ platform, 145 unique dialogues were collected (each dialogue consists of a conversation between two participants). All the dialogues were manually checked by one of the authors and those where the workers were clearly not partaking in the task or collaborating were removed from the dataset. The average time per assignment was 10 minutes 47 seconds, very close to our initial estimate of 10 minutes, and the task was available for 5 days in AMT. Out of the 145 dialogues, 14 (9.66%) obtained the bonus of $0.2 for resolving the emergency.
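The tiered payment scheme in the Deployment section above reduces to a short calculation; the function below is a worked example over the reported figures, not code from the platform.

```python
# Worked example of the tiered payment scheme described above.
def worker_pay(minutes_played, resolved_emergency):
    base = 0.50                                  # attempting the HIT, instructions, questionnaire
    game = 0.15 * min(minutes_played, 6)         # capped at $0.90 for the 6-minute game
    bonus = 0.20 if resolved_emergency else 0.0  # evacuation successfully avoided
    return round(base + game + bonus, 2)


print(f"${worker_pay(6, False):.2f}")  # $1.40 -> full-length dialogue, emergency not resolved
print(f"${worker_pay(6, True):.2f}")   # $1.60 -> full-length dialogue, emergency resolved
```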
We predicted that only a small portion of the participants would be able to resolve the emergency in less than 6 minutes, thus it was framed as a bonus challenge rather than a requirement to get paid. The fastest time recorded to resolve the emergency was 4 minutes 13 seconds, with a mean of 5 minutes 8 seconds. Table TABREF28 shows several interaction statistics for the data collected compared to the single lab-based WoZ study BIBREF4." ], [ "Table TABREF33 gives the results from the post-task survey. We observe that subjective and objective task success align, in that the dialogues that resolved the emergency were rated consistently higher than the rest.", "Mann-Whitney-U one-tailed tests show that the scores of the Emergency Resolved Dialogues for Q1 and Q2 were significantly higher than the scores of the Emergency Not Resolved Dialogues at the 95% confidence level (Q1: $U = 1654.5$, $p < 0.0001$; Q2: $U = 2195$, $p = 0.009$, both $p < 0.05$). This indicates that effective collaboration and information ease are key to task completion in this setting.", "Regarding the qualitative data, one of the objectives of the Wizard-of-Oz technique was to make the participant believe that they are interacting with an automated agent, and the qualitative feedback seemed to reflect this: “The AI in the game was not helpful at all [...]” or “I was talking to Fred a bot assistant, I had no other partner in the game”." ], [ "In Table TABREF28, we compare various metrics from the dialogues collected with crowdsourcing with the dialogues previously collected in a lab environment for a similar task. Most figures are comparable, except the number of emergency assistant turns (and consequently the total number of turns). To further understand these differences, we first grouped the dialogue acts into four broader types: Updates, Actions, Interactions and Requests, and computed the relative frequency of each of these types in both data collections. In addition, Figures FIGREF29 and FIGREF30 show the distribution of the most frequent dialogue acts in the different settings. It is evident that in the lab setting, where the interaction was face-to-face with a robot, the Wizard used more Interaction dialogue acts (Table TABREF32). These were often used in contexts where the Wizard needed to hold the turn while looking for the appropriate prompt or waiting for the robot to arrive at the specified goal in the environment. On the other hand, in the crowdsourced data collection, situation updates were a more common choice of utterance while the assistant was waiting for the robot to travel to the specified goal in the environment.", "Perhaps not surprisingly, the data shows a moderately strong positive correlation between task success and the number of Action type dialogue acts the Wizard performs, triggering events in the world leading to success ($R=0.475$). There is also a positive correlation between task success and the number of Request dialogue acts requesting confirmation before actions ($R=0.421$), e.g., “Which robot do you want to send?”. As Table 3 shows, these are relatively rare but perhaps reflect a level of collaboration needed to further the task to completion. Table TABREF40 shows one of the dialogues collected where the Emergency Assistant continuously engaged with the Operator through these types of dialogue acts.", "The task success rate was also very different between the two set-ups.
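The significance tests and correlations reported above can be reproduced with standard SciPy routines, as sketched below. The rating and count arrays are placeholders rather than the collected data, and Pearson's r is an assumption, since the excerpt does not state which correlation coefficient was used.

```python
# Sketch of the statistical analysis reported above, using SciPy.
# All arrays are toy placeholders, NOT the collected questionnaire or dialogue data.
from scipy.stats import mannwhitneyu, pearsonr

q1_resolved     = [7, 6, 7, 5, 6]   # "How helpful was your partner?" (emergency resolved)
q1_not_resolved = [4, 5, 3, 6, 4]   # same question (emergency not resolved)

# One-tailed Mann-Whitney U test: are resolved dialogues rated higher?
u_stat, p_value = mannwhitneyu(q1_resolved, q1_not_resolved, alternative="greater")
print(u_stat, p_value)

# Correlation between task success (0/1) and the number of Action-type dialogue acts.
task_success = [1, 0, 0, 1, 0]
action_acts  = [9, 3, 4, 8, 5]
r, p = pearsonr(task_success, action_acts)
print(r, p)
```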
In experiments reported in BIBREF4, 96% of the dialogues led to the extinction of the fire whereas in the crowdsourcing setting only 9.66% achieved the same goal. In the crowdsourced setting, the robots were slower moving at realistic speeds unlike the lab setting. A higher bonus and more time for the task might lead to a higher task success rate." ], [ "It is important to consider the number of available participants ready and willing to perform the task at any one time. This type of crowdsourcing requires two participants to connect within a few minutes of each other to be partnered together. As mentioned above, there were some issues with participants not collaborating and these dialogues had to be discarded as they were not of use." ], [ "In future work, we want to expand and improve the platform. Dialogue system development can greatly benefit from better ways of obtaining data for rich task-oriented domains such as ours. Part of fully exploiting the potential of crowdsourcing services lies in having readily available tools that help in the generation and gathering of data. One such tool would be a method to take a set of rules, procedures or business processes and automatically convert to a FSM, in a similar way to BIBREF28, ready to be uploaded to the Wizard interface.", "Regarding quality and coherence, dialogues are particularly challenging to automatically rate. In our data collection, there was not a correct or wrong dialogue option for the messages that the Emergency Assistant sent during the conversation, but some were better than others depending on the context with the Operator. This context is not easily measurable for complex tasks that depend on a dynamic world state. Therefore, we leave to future work automatically measuring dialogue quality through the use of context.", "The introduction of Instructional Manipulation Checks BIBREF29 before the game to filter out inattentive participants could improve the quality of the data (Crowdworkers are known for performing multiple tasks at once). Goodman2013 also recommend including screening questions that check both attention and language comprehension for AMT participants. Here, there is a balance that needs to be investigated between experience and quality of crowdworkers and the need for large numbers of participants in order to be quickly paired.", "We are currently exploring using the data collected to train dialogue models for the emergency response domain using Hybrid Code Networks BIBREF30." ], [ "In conclusion, this paper described a new, freely available tool to collect crowdsourced dialogues in rich task-oriented settings. By exploiting the advantages of both the Wizard-of-Oz technique and crowdsourcing services, we can effortlessly obtain dialogues for complex scenarios. The predefined dialogue options available to the Wizard intuitively guide the conversation and allow the domain to be deeply explored without the need for expert training. These predefined options also reinforce the feeling of a true Wizard-of-Oz experiment, where the participant who is not the Wizard thinks that they are interacting with a non-human agent.", "As the applications for task-based dialogue systems keep growing, we will see the need for systematic ways of generating dialogue corpora in varied, richer scenarios. This platform aims to be the first step towards the simplification of crowdsourcing data collections for task-oriented collaborative dialogues where the participants are working towards a shared common goal. 
The code for the platform and the data are also released with this publication." ], [ "This work was supported by the EPSRC funded ORCA Hub (EP/R026173/1, 2017-2021). Chiyah Garcia's PhD is funded under the EPSRC iCase EP/T517471/1 with Siemens." ] ] }
{ "question": [ "How is dialogue guided to avoid interactions that breach procedures and processes only known to experts?", "What is meant by semiguided dialogue, what part of dialogue is guided?", "Is CRWIZ already used for data collection, what are the results?", "How does framework made sure that dialogue will not breach procedures?" ], "question_id": [ "bc26eee4ef1c8eff2ab8114a319901695d044edb", "9c94ff8c99d3e51c256f2db78c34b2361f26b9c2", "8e9de181fa7d96df9686d0eb2a5c43841e6400fa", "ff1595a388769c6429423a75b6e1734ef88d3e46" ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction. We then perform a data collection and compare our dataset to a similar dataset collected in a more controlled lab setting with a single Wizard BIBREF4 and discuss the advantages/disadvantages of both approaches. Finally, we present future work. Our contributions are as follows:", "Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions." ], "highlighted_evidence": [ "In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction", "Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions." ] } ], "annotation_id": [ "67953a768253175e8b82edaf51cba6604a936010" ], "worker_id": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard." 
], "yes_no": null, "free_form_answer": "", "evidence": [ "The solution we propose in this paper tries to minimise these costs by increasing the pool of Wizards to anyone wanting to collaborate in the data collection, by providing them the necessary guidance to generate the desired dialogue behaviour. This is a valuable solution for collecting dialogues in domains where specific expertise is required and the cost of training capable Wizards is high. We required fine-grained control over the Wizard interface so as to be able to generate more directed dialogues for specialised domains, such as emergency response for offshore facilities. By providing the Wizard with several dialogue options (aside from free text), we guided the conversation and could introduce actions that change an internal system state. This proposes several advantages:", "A guided dialogue allows for set procedures to be learned and reduces the amount of data needed for a machine learning model for dialogue management to converge.", "Providing several dialogue options to the Wizard increases the pace of the interaction and allows them to understand and navigate more complex scenarios.", "Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.", "The dialogue structure for the Emergency Assistant (the Wizard) followed a dialogue flow previously used for the original lab-based Wizard-of-Oz study BIBREF4 but which was slightly modified and simplified for this crowdsourced data collection. In addition to the transitions that the FSM provides, there are other fixed dialogue options always available such as “Hold on, 2 seconds”, “Okay” or “Sorry, can you repeat that?” as a shortcut for commonly used dialogue acts, as well as the option to type a message freely.", "The dialogue has several paths to reach the same states with varying levels of Operator control or engagement that enriched the heterogeneity of conversations. The Emergency Assistant dialogue options show various speaking styles, with a more assertive tone (“I am sending Husky 1 to east tower”) or others with more collaborative connotations (“Which robot do you want to send?” or “Husky 1 is available to send to east tower”). Refer to BIBREF4 for more details. Furthermore, neither participants were restricted in the number of messages that they could send and we did not require a balanced number of turns between them. However, there were several dialogue transitions that required an answer or authorisation from the Operator, so the FSM would lock the dialogue state until the condition was met. As mentioned earlier, the commands to control the robots are also transitions of the FSM, so they were not always available." ], "highlighted_evidence": [ "By providing the Wizard with several dialogue options (aside from free text), we guided the conversation and could introduce actions that change an internal system state. 
This proposes several advantages:\n\nA guided dialogue allows for set procedures to be learned and reduces the amount of data needed for a machine learning model for dialogue management to converge.\n\nProviding several dialogue options to the Wizard increases the pace of the interaction and allows them to understand and navigate more complex scenarios.", "Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.", "In addition to the transitions that the FSM provides, there are other fixed dialogue options always available such as “Hold on, 2 seconds”, “Okay” or “Sorry, can you repeat that?” as a shortcut for commonly used dialogue acts, as well as the option to type a message freely.", "The dialogue has several paths to reach the same states with varying levels of Operator control or engagement that enriched the heterogeneity of conversations." ] } ], "annotation_id": [ "f0e709e5450f68728ceb216c496d69a43f916281" ], "worker_id": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Yes, CRWIZ has been used for data collection and its initial use resulted in 145 dialogues. The average time taken for the task was close to the estimate of 10 minutes, 14 dialogues (9.66%) resolved the emergency in the scenario, and these dialogues rated consistently higher in subjective and objective ratings than those which did not resolve the emergency. Qualitative results showed that participants believed that they were interacting with an automated assistant.", "evidence": [ "For the intitial data collection using the CRWIZ platform, 145 unique dialogues were collected (each dialogue consists of a conversation between two participants). All the dialogues were manually checked by one of the authors and those where the workers were clearly not partaking in the task or collaborating were removed from the dataset. The average time per assignment was 10 minutes 47 seconds, very close to our initial estimate of 10 minutes, and the task was available for 5 days in AMT. Out of the 145 dialogues, 14 (9.66%) obtained the bonus of $0.2 for resolving the emergency. We predicted that only a small portion of the participants would be able to resolve the emergency in less than 6 minutes, thus it was framed as a bonus challenge rather than a requirement to get paid. The fastest time recorded to resolve the emergency was 4 minutes 13 seconds with a mean of 5 minutes 8 seconds. Table TABREF28 shows several interaction statistics for the data collected compared to the single lab-based WoZ study BIBREF4.", "Data Analysis ::: Subjective Data", "Table TABREF33 gives the results from the post-task survey. 
We observe, that subjective and objective task success are similar in that the dialogues that resolved the emergency were rated consistently higher than the rest.", "Mann-Whitney-U one-tailed tests show that the scores of the Emergency Resolved Dialogues for Q1 and Q2 were significantly higher than the scores of the Emergency Not Resolved Dialogues at the 95% confidence level (Q1: $U = 1654.5$, $p < 0.0001$; Q2: $U = 2195$, $p = 0.009$, both $p < 0.05$). This indicates that effective collaboration and information ease are key to task completion in this setting.", "Regarding the qualitative data, one of the objectives of the Wizard-of-Oz technique was to make the participant believe that they are interacting with an automated agent and the qualitative feedback seemed to reflect this: “The AI in the game was not helpful at all [...]” or “I was talking to Fred a bot assistant, I had no other partner in the game“.", "Data Analysis ::: Single vs Multiple Wizards", "In Table TABREF28, we compare various metrics from the dialogues collected with crowdsourcing with the dialogues previously collected in a lab environment for a similar task. Most figures are comparable, except the number of emergency assistant turns (and consequently the total number of turns). To further understand these differences, we have first grouped the dialogue acts in four different broader types: Updates, Actions, Interactions and Requests, and computed the relative frequency of each of these types in both data collections. In addition, Figures FIGREF29 and FIGREF30 show the distribution of the most frequent dialogue acts in the different settings. It is visible that in the lab setting where the interaction was face-to-face with a robot, the Wizard used more Interaction dialogue acts (Table TABREF32). These were often used in context where the Wizard needed to hold the turn while looking for the appropriate prompt or waiting for the robot to arrive at the specified goal in the environment. On the other hand, in the crowdsourced data collection utterances, the situation updates were a more common choice while the assistant was waiting for the robot to travel to the specified goal in the environment.", "Perhaps not surprisingly, the data shows a medium strong positive correlation between task success and the number of Action type dialogue acts the Wizard performs, triggering events in the world leading to success ($R=0.475$). There is also a positive correlation between task success and the number of Request dialogue acts requesting confirmation before actions ($R=0.421$), e.g., “Which robot do you want to send?”. As Table 3 shows, these are relatively rare but perhaps reflect a level of collaboration needed to further the task to completion. Table TABREF40 shows one of the dialogues collected where the Emergency Assistant continuously engaged with the Operator through these types of dialogue acts.", "The task success rate was also very different between the two set-ups. In experiments reported in BIBREF4, 96% of the dialogues led to the extinction of the fire whereas in the crowdsourcing setting only 9.66% achieved the same goal. In the crowdsourced setting, the robots were slower moving at realistic speeds unlike the lab setting. A higher bonus and more time for the task might lead to a higher task success rate.", "Data Analysis ::: Limitations", "It is important to consider the number of available participants ready and willing to perform the task at any one time. 
This type of crowdsourcing requires two participants to connect within a few minutes of each other to be partnered together. As mentioned above, there were some issues with participants not collaborating and these dialogues had to be discarded as they were not of use." ], "highlighted_evidence": [ "For the intitial data collection using the CRWIZ platform, 145 unique dialogues were collected (each dialogue consists of a conversation between two participants). ", "The average time per assignment was 10 minutes 47 seconds, very close to our initial estimate of 10 minutes, and the task was available for 5 days in AMT. Out of the 145 dialogues, 14 (9.66%) obtained the bonus of $0.2 for resolving the emergency. We predicted that only a small portion of the participants would be able to resolve the emergency in less than 6 minutes, thus it was framed as a bonus challenge rather than a requirement to get paid. The fastest time recorded to resolve the emergency was 4 minutes 13 seconds with a mean of 5 minutes 8 seconds. Table TABREF28 shows several interaction statistics for the data collected compared to the single lab-based WoZ study BIBREF4.\n\nData Analysis ::: Subjective Data\nTable TABREF33 gives the results from the post-task survey. We observe, that subjective and objective task success are similar in that the dialogues that resolved the emergency were rated consistently higher than the rest.\n\nMann-Whitney-U one-tailed tests show that the scores of the Emergency Resolved Dialogues for Q1 and Q2 were significantly higher than the scores of the Emergency Not Resolved Dialogues at the 95% confidence level (Q1: $U = 1654.5$, $p < 0.0001$; Q2: $U = 2195$, $p = 0.009$, both $p < 0.05$). This indicates that effective collaboration and information ease are key to task completion in this setting.\n\nRegarding the qualitative data, one of the objectives of the Wizard-of-Oz technique was to make the participant believe that they are interacting with an automated agent and the qualitative feedback seemed to reflect this: “The AI in the game was not helpful at all [...]” or “I was talking to Fred a bot assistant, I had no other partner in the game“.\n\nData Analysis ::: Single vs Multiple Wizards\nIn Table TABREF28, we compare various metrics from the dialogues collected with crowdsourcing with the dialogues previously collected in a lab environment for a similar task. Most figures are comparable, except the number of emergency assistant turns (and consequently the total number of turns). To further understand these differences, we have first grouped the dialogue acts in four different broader types: Updates, Actions, Interactions and Requests, and computed the relative frequency of each of these types in both data collections. In addition, Figures FIGREF29 and FIGREF30 show the distribution of the most frequent dialogue acts in the different settings. It is visible that in the lab setting where the interaction was face-to-face with a robot, the Wizard used more Interaction dialogue acts (Table TABREF32). These were often used in context where the Wizard needed to hold the turn while looking for the appropriate prompt or waiting for the robot to arrive at the specified goal in the environment. 
On the other hand, in the crowdsourced data collection utterances, the situation updates were a more common choice while the assistant was waiting for the robot to travel to the specified goal in the environment.\n\nPerhaps not surprisingly, the data shows a medium strong positive correlation between task success and the number of Action type dialogue acts the Wizard performs, triggering events in the world leading to success ($R=0.475$). There is also a positive correlation between task success and the number of Request dialogue acts requesting confirmation before actions ($R=0.421$), e.g., “Which robot do you want to send?”. As Table 3 shows, these are relatively rare but perhaps reflect a level of collaboration needed to further the task to completion. Table TABREF40 shows one of the dialogues collected where the Emergency Assistant continuously engaged with the Operator through these types of dialogue acts.\n\nThe task success rate was also very different between the two set-ups. In experiments reported in BIBREF4, 96% of the dialogues led to the extinction of the fire whereas in the crowdsourcing setting only 9.66% achieved the same goal. In the crowdsourced setting, the robots were slower moving at realistic speeds unlike the lab setting. A higher bonus and more time for the task might lead to a higher task success rate.\n\nData Analysis ::: Limitations\nIt is important to consider the number of available participants ready and willing to perform the task at any one time. This type of crowdsourcing requires two participants to connect within a few minutes of each other to be partnered together. As mentioned above, there were some issues with participants not collaborating and these dialogues had to be discarded as they were not of use." ] } ], "annotation_id": [ "37067c20bb2afc29e9dbc7ddf9e82c1fb7f7f4ad" ], "worker_id": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions." ], "yes_no": null, "free_form_answer": "", "evidence": [ "Wizard interface: the interface shown to participants with the Wizard role provides possible actions on the right-hand side of the browser window. These actions could be verbal, such as sending a message, or non-verbal, such as switching on/off a button to activate a robot. Figure FIGREF11 shows this interface with several actions available to be used in our data collection.", "Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.", "System-changing actions: actions trigger transitions between the states in the FSM. We differentiate two types of actions:", "Verbal actions, such as the dialogue options available at that moment. 
The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions.", "Non-verbal actions, such as commands to trigger events. These can take any form, but we used buttons to control robots in our data collection.", "Submitting an action would change the dialogue state in the FSM, altering the set of actions available in the subsequent turn visible to the Wizard. Some dialogue options are only possible at certain states, in a similar way as to how non-verbal actions are enabled or disabled depending on the state. This is reflected in the Wizard interface." ], "highlighted_evidence": [ "Wizard interface: the interface shown to participants with the Wizard role provides possible actions on the right-hand side of the browser window. These actions could be verbal, such as sending a message, or non-verbal, such as switching on/off a button to activate a robot. ", "Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.", "System-changing actions: actions trigger transitions between the states in the FSM. We differentiate two types of actions:\n\nVerbal actions, such as the dialogue options available at that moment. The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions.\n\nNon-verbal actions, such as commands to trigger events. These can take any form, but we used buttons to control robots in our data collection.\n\nSubmitting an action would change the dialogue state in the FSM, altering the set of actions available in the subsequent turn visible to the Wizard. Some dialogue options are only possible at certain states, in a similar way as to how non-verbal actions are enabled or disabled depending on the state. " ] } ], "annotation_id": [ "41e378720c8fbac9cf7c973a8dca6c412c11d07a" ], "worker_id": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ] } ] }
{ "caption": [ "Table 1: Comparison of relevant recent works. In order, the columns refer to: the dataset and reference; if the dataset was generated using Wizard-of-Oz techniques; if there was a unique participant per role for the whole dialogue; if the dataset was crowdsourced; the type of interaction modality used; and finally, the type of task or domain that the dataset covers. † The participants were aware that the dialogue was authored by humans. ‡ The participants were volunteers without getting paid.", "Figure 1: Interface shown to those in the Operator role running on the Slurk interaction server. It has a similar layout to other chat applications with the chat window on the left and a field to send messages at the bottom. The right side is used to display additional information.", "Figure 2: Interface shown to those in the Emergency Assistant Wizard role running on the Slurk interaction server. The chat window is on the left, with the dialogue options and buttons to control the robots on the right. The chat here shows GIFs that appear to increase engagement and show game progress visually.", "Figure 3: Some of the GIFs shown during the game. A and B are Husky robots assessing damages and inspecting a fire respectively. C and D show Quadcopter UAVs moving and inspecting an area.", "Figure 4: Frequency of the top-10 Emergency Assistant dialogue acts in the data collected. There were 40 unique dialogue acts, each with two or more distinct formulations on average. Most of them also had slots to fill with contextual information, such as the name of the robot. Dialogue acts are colour-coded based on 3 main types.", "Figure 5: Frequency of the top-10 Emergency Assistant dialogue acts in (Lopes et al., 2019).", "Table 2: Interaction features of the dialogues collected. We compare it with the results of the Wizard-of-Oz experiment in a controlled setting from (Lopes et al., 2019).", "Table 3: Distribution of the types of dialogue acts in the data collected with CRWIZ, compared with (Lopes et al., 2019).", "Table 4: Subjective ratings for the post-task survey reporting Mean, Median, Mode and Standard Deviation (SD). Scales were on a 7-point rating scale. “Dialogues Collected” refers to all the dialogues collected after filtering, whereas the other columns are for the dialogues that did not resolved the emergency (“Emergency Not Resolved Dialogues”) and those that did (“Emergency Resolved Dialogues”). Higher is better (Q3 reversed for this table). Highest numbers are bold. * indicates significant differences (p < 0.05, Mann-Whitney-U) between Emergency Resolved and Emergency Not Resolved dialogues.", "Table 5: Interaction between participants from one of the dialogues collected." ], "file": [ "3-Table1-1.png", "4-Figure1-1.png", "5-Figure2-1.png", "5-Figure3-1.png", "6-Figure4-1.png", "6-Figure5-1.png", "7-Table2-1.png", "7-Table3-1.png", "8-Table4-1.png", "9-Table5-1.png" ] }
1710.07395
Detecting Online Hate Speech Using Context Aware Models
In the wake of a polarizing election, the cyber world is laden with hate speech. The context accompanying a hate speech text is useful for identifying hate speech; however, it has been largely overlooked in existing datasets and hate speech detection models. In this paper, we provide an annotated corpus of hate speech with context information preserved. We then propose two types of hate speech detection models that incorporate context information: a logistic regression model with context features and a neural network model with learning components for context. Our evaluation shows that both models outperform a strong baseline by around 3% to 4% in F1 score, and combining these two models further improves the performance by another 7% in F1 score.
{ "section_name": [ "Introduction", "Related Works", "Corpus Overview", "Annotation Guidelines", "Annotation Procedure", "Characteristics in Fox News User Comments corpus", "Logistic Regression Models", "Neural Network Models", "Ensemble Models", "Evaluation", "Experimental Results", "Conclusion" ], "paragraphs": [ [ "Following a turbulent election season, 2016's cyber world is awash with hate speech. Automatic detection of hate speech has become an urgent need since human supervision is unable to deal with large quantities of emerging texts.", "Context information, by our definition, is the text, symbols or any other kind of information related to the original text. While context accompanying hate speech is intuitively useful for detecting hate speech, context information has been overlooked in existing datasets and automatic detection models.", "Online hate speech tends to be subtle and creative, which makes context especially important for automatic hate speech detection. For instance,", "", "(1) barryswallows: Merkel would never say NO", "", "This comment is posted for the news article titled \"German lawmakers approve 'no means no' rape law after Cologne assaults\". With context, it becomes clear that this comment is a vicious insult towards a female politician. However, almost all the publicly available annotated hate speech datasets do not contain context information BIBREF0, BIBREF1, BIBREF2, BIBREF3.", "We have created a new dataset consisting of 1528 Fox News user comments, which were taken from 10 complete discussion threads for 10 widely read Fox News articles. It differs from previous datasets in two respects. First, it preserves rich context information for each comment, including its user screen name, all comments in the same thread and the news article the comment is written for. Second, there is no biased data selection and all comments in each news comment thread were annotated.", "In this paper, we explore two types of models, feature-based logistic regression models and neural network models, in order to incorporate context information in automatic hate speech detection. First, logistic regression models have been used in several prior hate speech detection studies BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF0, BIBREF2, BIBREF9 and various features have been tried, including character-level and word-level n-gram features, syntactic features, linguistic features, and comment embedding features. However, all the features were derived from the to-be-classified text itself. In contrast, we experiment with logistic regression models using features extracted from context text as well. Second, neural network models BIBREF10, BIBREF11, BIBREF12 have the potential to capture compositional meanings of text, but they have not been well explored for online hate speech detection until recently BIBREF13. We experiment with neural net models containing separate learning components that model compositional meanings of context information. Furthermore, recognizing the unique strengths of each type of model, we build ensemble models of the two. Evaluation shows that context-aware logistic regression models and neural net models outperform their counterparts that are blind to context information. In particular, the final ensemble models outperform a strong baseline system by around 10% in F1-score."
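The excerpt above does not spell out how the two model types are combined, so the sketch below shows one common ensembling strategy, averaging the predicted hate-speech probabilities of the logistic regression and neural models; treat it as an assumption rather than the authors' exact procedure.

```python
# One common way to ensemble a feature-based model and a neural model: average their
# predicted probabilities of the hateful class. This is an assumed strategy, not
# necessarily the ensembling method used in the paper.
import numpy as np


def ensemble_predict(lr_probs, nn_probs, weight=0.5, threshold=0.5):
    """lr_probs / nn_probs: P(hateful) from the two context-aware models."""
    combined = weight * np.asarray(lr_probs) + (1 - weight) * np.asarray(nn_probs)
    return (combined >= threshold).astype(int)


lr_probs = [0.62, 0.20, 0.55]   # hypothetical outputs of the logistic regression model
nn_probs = [0.70, 0.10, 0.40]   # hypothetical outputs of the neural network model
print(ensemble_predict(lr_probs, nn_probs))   # -> [1 0 0]
```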
], [ "Recently, a few datasets with human labeled hate speech have been created, however, most of existing datasets do not contain context information. Due to the sparsity of hate speech in everyday posts, researchers tend to sample candidates from bootstrapping instead of random sampling, in order to increase the chance of seeing hate speech. Therefore, the collected data instances are likely to be from distinct contexts.", "For instance, in the Primary Data Set described in BIBREF14 and later used by BIBREF9 , 10% of the dataset is randomly selected while the remaining consists of comments tagged by users and editors. BIBREF15 built a balanced data set of 24.5k tweets by selecting from Twitter accounts that claimed to be racist or were deemed racist using their followed news sources. BIBREF5 collected hateful tweets related to the murder of Drummer Lee Rigby in 2013. BIBREF0 provided a corpus of 16k annotated tweets in which 3.3k are labeled as sexist and 1.9k are labeled as racist. They created this corpus by bootstrapping from certain key words ,specific hashtags and certain prolific users. BIBREF16 created a dataset of 9000 human labeled paragraphs that were collected using regular expression matching in order to find hate speech targeting Judaism and Israel. BIBREF7 extracted data instances from instagram that were associated with certain user accounts. BIBREF2 presented a very large corpus containing over 115k Wikipedia comments that include around 37k randomly sampled comments and the remaining 78k comments were selected from Wikipedia blocked comments.", "Most of existing hate speech detection models are feature-based and use features derived from the target text itself. BIBREF5 experimented with different classification methods including Bayesian Logistic Regression, Random Forest Decision Trees and SVMs, using features such as n-grams, reduced n-grams, dependency paths, and hateful terms. BIBREF0 proposed a logistic regression model using character n-gram features. BIBREF14 used the paragraph2vec for joint modeling of comments and words, then the generated embeddings were used as feature in a logistic regression model. BIBREF9 experimented with various syntactic, linguistic and distributional semantic features including word length, sentence length, part of speech tags, and embedding features, in order to improve performance of logistic regression classifiers. Recently, BIBREF17 surveyed current approaches for hate speech detection, which interestingly also called to attention on modeling context information for resolving difficult hate speech instances." ], [ "The Fox News User Comments corpus consists of 1528 annotated comments (435 labeled as hateful) that were posted by 678 different users in 10 complete news discussion threads in the Fox News website. The 10 threads were manually selected and represent popular discussion threads during August 2016. All of the comments included in these 10 threads were annotated. The number of comments in each of the 10 threads is roughly equal. Rich context information was kept for each comment, including its user screen name, the comments and their nested structure and the original news article. The data corpus along with annotation guidelines is posted on github." ], [ "Our annotation guidelines are similar to the guidelines used by BIBREF9 . 
We define hateful speech to be the language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation. The labeling of hateful speech in our corpus is binary. A comment will be labeled as hateful or non-hateful." ], [ "We identified two native English speakers for annotating online user comments. The two annotators first discussed and practiced before they started annotation. They achieved a surprisingly high Kappa score BIBREF18 of 0.98 on 648 comments from 4 threads. We think that thorough discussions in the training stage are the key to achieving this high inter-annotator agreement. For those comments which annotators disagreed on, we label them as hateful as long as one annotator labeled them as hateful. Then one annotator continued to annotate the remaining 880 comments from the remaining six discussion threads." ], [ "Hateful comments in the Fox News User Comments Corpus are often subtle, creative and implicit. Therefore, context information is necessary in order to accurately identify such hate speech.", "The hatefulness of many comments depends on understanding their contexts. For instance,", "", "(3) mastersundholm: Just remember no trabjo no cervesa", "", "This comment is posted for the news \"States moving to restore work requirements for food stamp recipients\". This comment implies that Latino immigrants abuse the usage of food stamp policy, which is clearly a stereotype.", "Many hateful comments use implicit and subtle language, which contains no clear hate-indicating word or phrase. In order to recognize such hard cases, we hypothesize that neural net models are more suitable because they capture the overall composite meaning of a comment. For instance, the following comment is a typical implicit stereotype against women.", "", "(4) MarineAssassin: Hey Brianne - get in the kitchen and make me a samich. Chop Chop", "", "11% of our annotated comments have more than 50 words each. In such long comments, the hateful indicators usually appear in a small region of a comment while the majority of the comment is neutral. For example,", "", "(5) TMmckay: I thought ...115 words... Too many blacks winning, must be racist and needs affirmative action to make whites equally win! ", "", "Certain user screen names indicate hatefulness, which implies that comments posted by these users are likely to contain hate speech. In the following example, commie is a slur for communists.", "", "(6)nocommie11: Blah blah blah. Israel is the only civilized nation in the region to keep the unwashed masses at bay.", "" ], [ "In logistic regression models, we extract four types of features, word-level and character-level n-gram features as well as two types of lexicon derived features. We extract these four types of features from the target comment first. Then we extract these features from two sources of context texts, specifically the title of the news article that the comment was posted for and the screen name of the user who posted the comment.", "For logistic regression model implementation, we use l2 loss. We adopt the balanced class weight as described in Scikit learn. Logistic regression model with character-level n-gram features is presented as a strong baseline for comparison since it was shown very effective. BIBREF0 , BIBREF9 ", "", "", "For character level n-grams, we extract character level bigrams, tri-grams and four-grams. 
For word level n-grams, we extract unigrams and bigrams.", "Linguistic Inquiry and Word Count, also called LIWC, has been proven useful for text analysis and classification BIBREF19 . In the LIWC dictionary, each word is labeled with several semantic labels. In our experiment, we use the LIWC 2015 dictionary which contain 125 semantic categories. Each word is converted into a 125 dimension LIWC vector, one dimension per semantic category. The LIWC feature vector for a comment or its context is a 125 dimension vector as well, which is the sum of all its words' LIWC vectors.", "NRC emotion lexicon contains a list of English words that were labeled with eight basic emotions (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) and sentiment polarities (negative and positive) BIBREF20 . We use NRC emotion lexicon to capture emotion clues in text. Each word is converted into a 10 dimension emotion vector, corresponding to eight emotion types and two polarity labels. The emotion vector for a comment or its context is a 10 dimension vector as well, which is the sum of all its words' emotion vectors.", "As shown in table TABREF20 , given comment as the only input content, the combination of character n-grams, word n-grams, LIWC feature and NRC feature achieves the best performance. It shows that in addition to character level features, adding more features can improve hate speech detection performance. However, the improvement is limited. Compared with baseline model, the F1 score only improves 1.3%.", "In contrast, when context information was taken into account, the performance greatly improved. Specifically, after incorporating features extracted from the news title and username, the model performance was improved by around 4% in both F1 score and AUC score. This shows that using additional context based features in logistic regression models is useful for hate speech detection." ], [ "Our neural network model mainly consists of three parallel LSTM BIBREF21 layers. It has three different inputs, including the target comment, its news title and its username. Comment and news title are encoded into a sequence of word embeddings. We use pre-trained word embeddings in word2vec. Username is encoded into a sequence of characters. We use one-hot encoding of characters.", "Comment is sent into a bi-directional LSTM with attention mechanism. BIBREF22 . News title and username are sent into a bi-directional LSTM. Note that we did not apply attention mechanism to the neural network models for username and news title because both types of context are relatively short and attention mechanism tends to be useful when text input is long. The three LSTM output layers are concatenated, then connected to a sigmoid layer, which outputs predictions.", "The number of hidden units in each LSTM used in our model is set to be 100. The recurrent dropout rate of LSTMs is set to 0.2. In addition, we use binary cross entropy as the loss function and a batch size of 128. The neural network models are trained for 30 epochs.", "As shown in table TABREF21 , given comment as the only input content, the bi-directional LSTM model with attention mechanism achieves the best performance. Note that the attention mechanism significantly improves the hate speech detection performance of the bi-directional LSTM model. We hypothesize that this is because hate indicator phrases are often concentrated in a small region of a comment, which is especially the case for long comments." 
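To make the architecture described above concrete, the following is a minimal PyTorch sketch of the three-branch model: a bi-directional LSTM with additive attention pooling over the comment, and plain bi-directional LSTMs over the news title (word level) and the username (character level), concatenated into a sigmoid output. The class names, embedding dimensions, and the dense character embedding (standing in for one-hot characters) are illustrative assumptions rather than the authors' exact implementation; the reported recurrent dropout of 0.2 is omitted because PyTorch's nn.LSTM does not expose it directly.

```python
import torch
import torch.nn as nn


class AttentivePooling(nn.Module):
    """Additive attention over Bi-LSTM outputs, producing one vector per sequence."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.score = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                       # h: (batch, seq_len, dim)
        weights = torch.softmax(self.score(torch.tanh(self.proj(h))), dim=1)
        return (weights * h).sum(dim=1)         # (batch, dim)


class ContextAwareHateSpeechModel(nn.Module):
    """Three parallel bi-LSTM branches: comment (with attention), news title, username."""
    def __init__(self, word_vocab, char_vocab, word_dim=300, char_dim=50, hidden=100):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim)   # initialise from word2vec in practice
        self.char_emb = nn.Embedding(char_vocab, char_dim)   # dense stand-in for one-hot characters
        self.comment_lstm = nn.LSTM(word_dim, hidden, batch_first=True, bidirectional=True)
        self.title_lstm = nn.LSTM(word_dim, hidden, batch_first=True, bidirectional=True)
        self.user_lstm = nn.LSTM(char_dim, hidden, batch_first=True, bidirectional=True)
        self.attention = AttentivePooling(2 * hidden)
        self.classifier = nn.Linear(6 * hidden, 1)           # concatenation of the three branches

    def forward(self, comment, title, username):
        c, _ = self.comment_lstm(self.word_emb(comment))
        t, _ = self.title_lstm(self.word_emb(title))
        u, _ = self.user_lstm(self.char_emb(username))
        features = torch.cat([self.attention(c), t[:, -1], u[:, -1]], dim=-1)
        return torch.sigmoid(self.classifier(features))      # probability that the comment is hateful


# As reported above, such a model would be trained with binary cross-entropy
# (e.g. nn.BCELoss()), batches of 128, for 30 epochs.
```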
], [ "To study the difference of logistic regression model and neural network model and potentially get performance improvement, we will build and evaluate ensemble models.", "As shown in table TABREF24 , both ensemble models significantly improved hate speech detection performance. Figure FIGREF28 shows the system prediction results of comments that were labeled as hateful in the dataset. It can be seen that the two models perform differently. We further examined predicted comments and find that both types of models have unique strengths in identifying certain types of hateful comments.", "The feature-based logistic regression models are capable of making good use of character-level n-gram features, which are powerful in identifying hateful comments that contains OOV words, capitalized words or misspelled words. We provide two examples from the hateful comments that were only labeled by the logistic regression model:", "", "(7)kmawhmf:FBLM.", "", "Here FBLM means fuck Black Lives Matter. This hateful comment contains only character information which can exactly be made use of by our logistic regression model.", "", "(8)SFgunrmn: what a efen loon, but most femanazis are.", "", "This comment deliberately misspelled feminazi for femanazis, which is a derogatory term for feminists. It shows that logistic regression model is capable in dealing with misspelling.", "The LSTM with attention mechanism are suitable for identifying specific small regions indicating hatefulness in long comments. In addition, the neural net models are powerful in capturing implicit hateful language as well. The following are two hateful comment examples that were only identified by the neural net model:", "", "(9)freedomscout: @LarJass Many religions are poisonous to logic and truth, that much is true...and human beings still remain fallen human beings even they are Redeemed by the Sacrifice of Jesus Christ. So there's that. But the fallacies of thinking cannot be limited or attributed to religion but to error inherent in human motivation, the motivation to utter self-centeredness as fallen sinful human beings. Nearly all of the world's many religions are expressions of that utter sinful nature...Christianity and Judaism being the sole exceptions.", "", "This comment is expressing the stereotyping against religions which are not Christian or Judaism. The hatefulness is concentrated within the two bolded segments.", "", "(10)mamahattheridge: blacks Love being victims.", "In this comment, the four words themselves are not hateful at all. But when combined together, it is clearly hateful against black people." ], [ "We evaluate our model by 10 fold cross validation using our newly created Fox News User Comments Corpus. Both types of models use the exact same 10 folds of training data and test data. We report experimental results using multiple metrics, including accuracy, precision/recall/F1-score, and accuracy area under curve (AUC)." ], [ "Table TABREF20 shows the performance of logistic regression models. The first section of table TABREF20 shows the performance of logistic regression models using features extracted from a target comment only. The result shows that the logistic regression model was improved in every metric after adding both word-level n-gram features and lexicon derived features. 
However, the improvements are moderate.", "The second section shows the performance of logistic regression models using the four types of features extracted from both a target comment and its contexts. The result shows that the logistic regression model using features extracted from a comment and both types of context achieved the best performance and obtained improvements of 2.8% and 2.5% in AUC score and F1-score respectively.", "Table TABREF21 shows the performance of neural network models. The first section of table TABREF21 shows the performance of several neural network models that use comments as the only input. The model names are self-explanatory. We can see that the attention mechanism coupled with the bi-directional LSTM neural net greatly improved online hate speech detection, by 5.7% in AUC score.", "The second section of table TABREF21 shows the performance of the best neural net model (bi-directional LSTM with attention) after adding additional learning components that take context as input. The results show that adding username and news title can both improve model performance. Using the news title gives the best F1 score while using both the news title and username gives the best AUC score.", "Table TABREF24 shows performance of ensemble models by combining prediction results of the best context-aware logistic regression model and the best context-aware neural network model. We used two strategies in combining prediction results of two types of models. Specifically, the Max Score Ensemble model made the final decisions based on the maximum of two scores assigned by the two separate models; instead, the Average Score Ensemble model used the average score to make final decisions.", "We can see that both ensemble models further improved hate speech detection performance compared with using one model only and achieved the best classification performance. Compared with the logistic regression baseline, the Max Score Ensemble model improved the recall by more than 20% with a comparable precision and improved the F1 score by around 10%; in addition, the Average Score Ensemble model improved the AUC score by around 7%." ], [ "We demonstrated the importance of utilizing context information for online hate speech detection. We first presented a corpus of hateful speech consisting of full threads of online discussion posts. In addition, we presented two types of models, feature-based logistic regression models and neural network models, in order to incorporate context information for improving hate speech detection performance. Furthermore, we showed that ensemble models leveraging the strengths of both types of models achieve the best performance for automatic online hate speech detection." ] ] }
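As a rough illustration of the two ensembling strategies above, the sketch below combines the hateful-class probabilities of a character n-gram logistic regression model (balanced class weights, as in the baseline) with those of a neural model. The TF-IDF weighting, the 0.5 decision threshold, and the function names are assumptions for illustration, not details reported in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Character 2- to 4-gram logistic regression with balanced class weights,
# mirroring the baseline feature set (l2 regularisation is sklearn's default).
lr_model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(class_weight="balanced", max_iter=1000),
)


def max_score_ensemble(lr_probs, nn_probs, threshold=0.5):
    """Label a comment hateful if the larger of the two model scores passes the threshold."""
    return (np.maximum(lr_probs, nn_probs) >= threshold).astype(int)


def average_score_ensemble(lr_probs, nn_probs, threshold=0.5):
    """Label a comment hateful if the mean of the two model scores passes the threshold."""
    return ((np.asarray(lr_probs) + np.asarray(nn_probs)) / 2.0 >= threshold).astype(int)


# Usage sketch: lr_probs = lr_model.predict_proba(test_texts)[:, 1]
#               nn_probs = the neural model's sigmoid outputs for the same comments
```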
{ "question": [ "How do they combine the models?", "What is their baseline?", "What context do they use?", "What is their definition of hate speech?", "What architecture has the neural network?" ], "question_id": [ "dd2046f5481f11b7639a230e8ca92904da75feed", "47e6c3e6fcc9be8ca2437f41a4fef58ef4c02579", "569ad21441e99ae782d325d5f5e1ac19e08d5e76", "90741b227b25c42e0b81a08c279b94598a25119d", "1d739bb8e5d887fdfd1f4b6e39c57695c042fa25" ], "nlp_background": [ "five", "five", "five", "five", "five" ], "topic_background": [ "research", "research", "research", "research", "research" ], "paper_read": [ "yes", "yes", "yes", "yes", "yes" ], "search_query": [ "", "", "", "", "" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "maximum of two scores assigned by the two separate models", "average score" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Table TABREF24 shows performance of ensemble models by combining prediction results of the best context-aware logistic regression model and the best context-aware neural network model. We used two strategies in combining prediction results of two types of models. Specifically, the Max Score Ensemble model made the final decisions based on the maximum of two scores assigned by the two separate models; instead, the Average Score Ensemble model used the average score to make final decisions." ], "highlighted_evidence": [ "We used two strategies in combining prediction results of two types of models. Specifically, the Max Score Ensemble model made the final decisions based on the maximum of two scores assigned by the two separate models; instead, the Average Score Ensemble model used the average score to make final decisions." ] } ], "annotation_id": [ "7ef612cdd857005a8a83a67e33106def49ae2ae6" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Logistic regression model with character-level n-gram features" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For logistic regression model implementation, we use l2 loss. We adopt the balanced class weight as described in Scikit learn. Logistic regression model with character-level n-gram features is presented as a strong baseline for comparison since it was shown very effective. BIBREF0 , BIBREF9" ], "highlighted_evidence": [ " Logistic regression model with character-level n-gram features is presented as a strong baseline for comparison since it was shown very effective." ] } ], "annotation_id": [ "91ccfe8a7d811a711c173f065e106c757a88a3e5" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "title of the news article", "screen name of the user" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In logistic regression models, we extract four types of features, word-level and character-level n-gram features as well as two types of lexicon derived features. We extract these four types of features from the target comment first. Then we extract these features from two sources of context texts, specifically the title of the news article that the comment was posted for and the screen name of the user who posted the comment." 
], "highlighted_evidence": [ "Then we extract these features from two sources of context texts, specifically the title of the news article that the comment was posted for and the screen name of the user who posted the comment." ] } ], "annotation_id": [ "10171a4ff0ace9c172eaff1684142da661bcda82" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our annotation guidelines are similar to the guidelines used by BIBREF9 . We define hateful speech to be the language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation. The labeling of hateful speech in our corpus is binary. A comment will be labeled as hateful or non-hateful." ], "highlighted_evidence": [ "We define hateful speech to be the language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation." ] } ], "annotation_id": [ "3c8b80193d34b3bf7d7269793ac848aab86b756c" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "three parallel LSTM BIBREF21 layers" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our neural network model mainly consists of three parallel LSTM BIBREF21 layers. It has three different inputs, including the target comment, its news title and its username. Comment and news title are encoded into a sequence of word embeddings. We use pre-trained word embeddings in word2vec. Username is encoded into a sequence of characters. We use one-hot encoding of characters." ], "highlighted_evidence": [ "Our neural network model mainly consists of three parallel LSTM BIBREF21 layers." ] } ], "annotation_id": [ "aa2b02963b992088afdd800a4174a84f80716c2a" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] } ] }
{ "caption": [ "Table 1: Performance of Logistic Regression Models", "Table 2: Performance of Neural Network Models", "Table 3: Performance of Ensemble Models", "Figure 1: System Prediction Results of Comments that were Annotated as Hateful" ], "file": [ "5-Table1-1.png", "5-Table2-1.png", "6-Table3-1.png", "6-Figure1-1.png" ] }
1904.02357
Plan, Write, and Revise: an Interactive System for Open-Domain Story Generation
Story composition is a challenging problem for machines and even for humans. We present a neural narrative generation system that interacts with humans to generate stories. Our system has different levels of human interaction, which enables us to understand at what stage of story-writing human collaboration is most productive, both to improving story quality and human engagement in the writing process. We compare different varieties of interaction in story-writing, story-planning, and diversity controls under time constraints, and show that increased types of human collaboration at both planning and writing stages results in a 10-50% improvement in story quality as compared to less interactive baselines. We also show an accompanying increase in user engagement and satisfaction with stories as compared to our own less interactive systems and to previous turn-taking approaches to interaction. Finally, we find that humans tasked with collaboratively improving a particular characteristic of a story are in fact able to do so, which has implications for future uses of human-in-the-loop systems.
{ "section_name": [ "Introduction", "System Overview", "Web Interface", "Model Design", "Experiments", "Details", "Conclusions and Future Work", "Acknowledgments", "Demo Video", "Decoding", "Training", "Mechanical Turk Materials" ], "paragraphs": [ [ "Collaborative human-machine story-writing has had a recent resurgence of attention from the research community BIBREF0 , BIBREF1 . It represents a frontier for AI research; as a research community we have developed convincing NLP systems for some generative tasks like machine translation, but lag behind in creative areas like open-domain storytelling. Collaborative open-domain storytelling incorporates human interactivity for one of two aims: to improve human creativity via the aid of a machine, or to improve machine quality via the aid of a human. Previously existing approaches treat the former aim, and have shown that storytelling systems are not yet developed enough to help human writers. We attempt the latter, with the goal of investigating at what stage human collaboration is most helpful.", "gordon2009sayanything use an information retrieval based system to write by alternating turns between a human and their system. clark2018mil use a similar turn-taking approach to interactivity, but employ a neural model for generation and allow the user to edit the generated sentence before accepting it. They find that users prefer a full-sentence collaborative setup (vs. shorter fragments) but are mixed with regard to the system-driven approach to interaction. roemmele2017eval experiment with a user-driven setup, where the machine doesn't generate until the user requests it to, and then the user can edit or delete at will. They leverage user-acceptance or rejection of suggestions as a tool for understanding the characteristics of a helpful generation. All of these systems involve the user in the story-writing process, but lack user involvement in the story-planning process, and so they lean on the user's ability to knit a coherent overall story together out of locally related sentences. They also do not allow a user to control the novelty or “unexpectedness” of the generations, which clark2018mil find to be a weakness. Nor do they enable iteration; a user cannot revise earlier sentences and have the system update later generations. We develop a system that allows a user to interact in all of these ways that were limitations in previous systems; it enables involvement in planning, editing, iterative revising, and control of novelty. We conduct experiments to understand which types of interaction are most effective for improving stories and for making users satisfied and engaged. We have two main interfaces that enable human interaction with the computer. There is cross-model interaction, where the machine does all the composition work, and displays three different versions of a story written by three distinct models for a human to compare. The user guides generation by providing a topic for story-writing and by tweaking decoding parameters to control novelty, or diversity. The second interface is intra-model interaction, where a human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages to jointly create better stories. The full range of interactions available to a user is: select a model, provide a topic, change diversity of content, collaborate on the planning for the story, and collaborate on the story sentences. 
It is entirely user-driven, as the users control how much is their own work and how much is the machine's at every stage. It supports revision; a user can modify an earlier part of a written story or of the story plan at any point, and observe how this affects later generations." ], [ "Figure FIGREF3 shows a diagram of the interaction system. The dotted arrows represent optional user interactions.", "requires the user to enter a topic, such as “the not so haunted house”, and can optionally vary the diversity used in the Storyline Planner or the Story Writer. Diversity numbers correspond directly to softmax temperatures, which we restrict to a reasonable range, determined empirically. The settings are sent to the Storyline Planner module, which generates a storyline for the story in the form of a sequence of phrases as per the method of yao2018plan. Everything is then sent to the Story Writer, which will return three stories.", "enables advanced interactions with one story system of the user's choice. The Storyline Planner returns either one storyline phrase or many, and composes the final storyline out of the combination of phrases the system generated, the user has written, and edits the user has made. These are sent to the Story Writer, which returns either a single sentence or a full story as per user's request. The process is flexible and iterative. The user can choose how much or little content they want to provide, edit, or re-generate, and they can return to any step at any time until they decide they are done.", "To enable interactive flexibility, the system must handle open-domain user input. User input is lower-cased and tokenized to match the model training data via spaCy. Model output is naively detokenized via Moses BIBREF2 based on feedback from users that this was more natural. User input OOV handling is done via WordNet BIBREF3 by recursively searching for hypernyms and hyponyms (in that order) until either an in-vocabulary word is found or until a maximum distance from the initial word is reached. We additionally experimented with using cosine similarity to GloVe vectors BIBREF4 , but found that to be slower and not qualitatively better for this domain." ], [ "Figure FIGREF10 shows screenshots for both the cross-model and intra-model modes of interaction. Figure FIGREF10 shows that the cross-model mode makes clear the differences between different model generations for the same topic. Figure FIGREF10 shows the variety of interactions a user can take in intra-model interaction, and is annotated with an example-in-action. User inserted text is underlined in blue, generated text that has been removed by the user is in grey strike-through. The refresh symbol marks areas that the user re-generated to get a different sentence (presumably after being unhappy with the first result). As can be seen in this example, minor user involvement can result in a significantly better story." ], [ "All models for both the Storyline Planner and Story Writer modules are conditional language models implemented with LSTMs based on merity2018regularizing. These are 3-stacked LSTMs that include weight-dropping, weight-tying, variable length back propagation with learning rate adjustment, and Averaged Stochastic Gradient Descent (ASGD). They are trained on the ROC dataset BIBREF5 , which after lowercasing and tokenization has a vocabulary of 38k. Storyline Phrases are extracted as in yao2018plan via the RAKE algorithm BIBREF6 which results in a slightly smaller Storyline vocabulary of 31k. 
The Storyline Planner does decoding via sampling to encourage creative exploration. The Story Writer has an option to use one or all three systems, all of which decode via beamsearch and are detailed below.", "The Title-to-Story system is a baseline, which generates directly from topic.", "The Plan-and-Write system adopts the static model in yao2018plan to use the storyline to supervise story-writing.", "Plan-and-Revise is a new system that combines the strengths of yao2018plan and holtzman2018learning. It supplements the Plan-and-Write model by training two discriminators on the ROC data and using them to re-rank the LSTM generations to prefer increased creativity and relevance. Thus the decoding objective of this system becomes INLINEFORM0 where INLINEFORM1 is the conditional language model probability of the LSTM, INLINEFORM2 is the discriminator scoring function, and INLINEFORM3 is the learned weight of that discriminator. At each timestep all live beam hypotheses are scored and re-ranked. Discriminator weights are learnt by minimizing Mean Squared Error on the difference between the scores of gold standard and generated story sentences." ], [ "We experiment with six types of interaction: five variations created by restricting different capabilities of our system, and a sixth turn-taking baseline that mimics the interaction of the previous work BIBREF1 , BIBREF7 . We choose our experiments to address the research questions: What type of interaction is most engaging? Which type results in the best stories? Can a human tasked with correcting for certain weaknesses of a model successfully do so? The variations on interactions that we tested are:", "We expand experiment 5 to answer the question of whether a human-in-the-loop interactive system can address specific shortcomings of generated stories. We identify three types of weaknesses common to generation systems – Creativity, Relevance, and Causal & Temporal Coherence, and conduct experiments where the human is instructed to focus on improving specifically one of them. The targeted human improvement areas intentionally match the Plan-and-Revise discriminators, so that, if successful, the \"human discriminator\" data can assist in training the machine discriminators. All experiments (save experiment 2, which lets the user pick between models) use the Plan-and-Revise system." ], [ "We recruit 30 Mechanical Turk workers per experiment (270 unique workers total) to complete story writing tasks with the system. We constrain them to ten minutes of work (five for writing and five for a survey) and provide them with a fixed topic to control this factor across experiments. They co-create a story and complete a questionnaire which asks them to self-report on their engagement, satisfaction, and perception of story quality. For the additional focused error-correction experiments, we instruct Turkers to try to improve the machine-generated stories with regard to the given aspect, under the same time constraints. As an incentive, they are given a small bonus if they are later judged to have succeeded.", "We then ask a separate set of Turkers to rate the stories for overall quality and the three improvement areas. All ratings are on a five-point scale. We collect two ratings per story, and throw out ratings that disagree by more than 2 points. A total of 11% of ratings were thrown out, leaving four metrics across 241 stories for analysis." 
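Referring back to the Plan-and-Revise decoding objective described in the Model Design section, the sketch below shows how live beam hypotheses could be re-scored with the conditional language-model log-probability plus weighted discriminator scores. The function and argument names are hypothetical, and the discriminators are assumed to be callables that map a hypothesis to a scalar score.

```python
def rerank_hypotheses(hypotheses, lm_log_probs, discriminators, weights):
    """Re-rank beam hypotheses by LM log-probability plus weighted discriminator scores.

    hypotheses     : list of candidate continuations (token sequences or strings)
    lm_log_probs   : list of conditional LM log-probabilities, one per hypothesis
    discriminators : list of callables, each mapping a hypothesis to a scalar score
    weights        : list of learned mixing weights, one per discriminator
    """
    scored = []
    for hyp, lm_score in zip(hypotheses, lm_log_probs):
        total = lm_score + sum(w * d(hyp) for d, w in zip(discriminators, weights))
        scored.append((total, hyp))
    scored.sort(key=lambda pair: pair[0], reverse=True)   # best-scoring hypothesis first
    return [hyp for _, hyp in scored]
```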
], [ "We have shown that all levels of human-computer collaboration improve story quality across all metrics, compared to a baseline computer-only story generation system. We have also shown that flexible interaction, which allows the user to return to edit earlier text, improves the specific metrics of creativity and causal-temporal coherence above previous rigid turn-taking approaches. We find that, as well as improving story quality, more interaction makes users more engaged and likely to use the system again. Users tasked with collaborating to improve a specific story quality were able to do so, as judged by independent readers.", "As the demo system has successfully used an ensemble of collaborative discriminators to improve the same qualities that untrained human users were able to improve even further, this suggests promising future research into human-collaborative stories as training data for new discriminators. It could be used both to strengthen existing discriminators and to develop novel ones, since discriminators are extensible to arbitrarily many story aspects." ], [ "We thank the anonymous reviewers for their feedback, as well as the members of the PLUS lab for their thoughts and iterative testing. This work is supported by Contract W911NF-15- 1-0543 with the US Defense Advanced Research Projects Agency (DARPA)." ], [ "The three-minute video demonstrating the interaction capabilities of the system can be viewed at https://youtu.be/-hGd2399dnA. (Same video as linked in the paper footnote)." ], [ "Default diversity (Softmax Temperature) for Storyline Planner is 0.5, for Story Writer it is None (as beamsearch is used an thus can have but does not require a temperature). Beam size for all Story Writer models is 5. Additionally, Storyline Phrases are constrained to be unique (unless a user duplicates them), and Beamsearch is not normalized by length (both choices determined empirically)." ], [ "We follow the parameters used in yao2018plan and merity2018regularizing." ], [ "Following are examples of the materials used in doing Mechanical Turk User Studies. Figure FIGREF37 is an example of the All + Creative focused experiment for story-writing. The instructions per experiment differ across all, but the template is the same. Figure FIGREF38 is the survey for ranking stories across various metrics. This remains constant save that story order was shuffled every time to control for any effects of the order a story was read in." ] ] }
{ "question": [ "How is human interaction consumed by the model?", "How do they evaluate generated stories?", "Do they evaluate in other language appart from English?", "What are the baselines?" ], "question_id": [ "5c70fdd3d6b67031768d3e28336942e49bf9a500", "f27502c3ece9ade265389d5ace90ca9ca42b46f3", "ffb7a12dfe069ab7263bb7dd366817a9d22b8ef2", "aa4b38f601cc87bf93849245d5f65124da3dc112" ], "nlp_background": [ "five", "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "displays three different versions of a story written by three distinct models for a human to compare", "human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages" ], "yes_no": null, "free_form_answer": "", "evidence": [ "gordon2009sayanything use an information retrieval based system to write by alternating turns between a human and their system. clark2018mil use a similar turn-taking approach to interactivity, but employ a neural model for generation and allow the user to edit the generated sentence before accepting it. They find that users prefer a full-sentence collaborative setup (vs. shorter fragments) but are mixed with regard to the system-driven approach to interaction. roemmele2017eval experiment with a user-driven setup, where the machine doesn't generate until the user requests it to, and then the user can edit or delete at will. They leverage user-acceptance or rejection of suggestions as a tool for understanding the characteristics of a helpful generation. All of these systems involve the user in the story-writing process, but lack user involvement in the story-planning process, and so they lean on the user's ability to knit a coherent overall story together out of locally related sentences. They also do not allow a user to control the novelty or “unexpectedness” of the generations, which clark2018mil find to be a weakness. Nor do they enable iteration; a user cannot revise earlier sentences and have the system update later generations. We develop a system that allows a user to interact in all of these ways that were limitations in previous systems; it enables involvement in planning, editing, iterative revising, and control of novelty. We conduct experiments to understand which types of interaction are most effective for improving stories and for making users satisfied and engaged. We have two main interfaces that enable human interaction with the computer. There is cross-model interaction, where the machine does all the composition work, and displays three different versions of a story written by three distinct models for a human to compare. The user guides generation by providing a topic for story-writing and by tweaking decoding parameters to control novelty, or diversity. The second interface is intra-model interaction, where a human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages to jointly create better stories. 
The full range of interactions available to a user is: select a model, provide a topic, change diversity of content, collaborate on the planning for the story, and collaborate on the story sentences. It is entirely user-driven, as the users control how much is their own work and how much is the machine's at every stage. It supports revision; a user can modify an earlier part of a written story or of the story plan at any point, and observe how this affects later generations." ], "highlighted_evidence": [ "We have two main interfaces that enable human interaction with the computer. There is cross-model interaction, where the machine does all the composition work, and displays three different versions of a story written by three distinct models for a human to compare. The user guides generation by providing a topic for story-writing and by tweaking decoding parameters to control novelty, or diversity. The second interface is intra-model interaction, where a human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages to jointly create better stories." ] } ], "annotation_id": [ "81242d85e0fa65a4c36b58e9c50450e5e104b588" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "separate set of Turkers to rate the stories for overall quality and the three improvement areas" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We then ask a separate set of Turkers to rate the stories for overall quality and the three improvement areas. All ratings are on a five-point scale. We collect two ratings per story, and throw out ratings that disagree by more than 2 points. A total of 11% of ratings were thrown out, leaving four metrics across 241 stories for analysis." ], "highlighted_evidence": [ "We then ask a separate set of Turkers to rate the stories for overall quality and the three improvement areas. All ratings are on a five-point scale. We collect two ratings per story, and throw out ratings that disagree by more than 2 points. A total of 11% of ratings were thrown out, leaving four metrics across 241 stories for analysis." ] } ], "annotation_id": [ "11492d733ff04445f586acee9dc35a41feee950e" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "101e19761be8a2b0d37a67a43cde3ca40941e245" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Title-to-Story system" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The Title-to-Story system is a baseline, which generates directly from topic." ], "highlighted_evidence": [ "The Title-to-Story system is a baseline, which generates directly from topic." ] } ], "annotation_id": [ "947a6075aae9a508486f9b3a215f8cdceb02472c" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: Diagram of human-computer interaction mediated by the the demo system. The dotted arrows represent optional interactions that the user can take. Depending on the set-up, the user may choose to interact with one or all story models.", "Figure 2: Screenshots of the demo user interface", "Table 1: User self-reported scores, from 1-5. E: Entertainment value, Q: Quality of Story, S: Satisfaction with Story. Note that the final column Use Again is based on converting “no” to 0, “conditional” to 1, and “yes” to 2.", "Table 2: Results for all experiments, from 1-5. Best scores per metric are bolded, scores not significantly different (α = 0.1, per Wilcoxon Signed-Rank Test) are starred. C-T stands for Causal-Temporal Coherence, the + experiments are the extensions where the user focuses on improving a particular quality.", "Table 3: Training parameters for models used in demo.", "Table 4: Questionnaire for user self-reporting, range 1 to 5 (1 low).", "Figure 3: Template & Instructions for Writing Stories in the All + Creative experiment." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png", "4-Table2-1.png", "7-Table3-1.png", "7-Table4-1.png", "8-Figure3-1.png" ] }
1907.02636
Collecting Indicators of Compromise from Unstructured Text of Cybersecurity Articles using Neural-Based Sequence Labelling
Indicators of Compromise (IOCs) are artifacts observed on a network or in an operating system that can be utilized to indicate a computer intrusion and detect cyber-attacks at an early stage. Thus, they play an important role in the field of cybersecurity. However, state-of-the-art IOC detection systems rely heavily on hand-crafted features with expert knowledge of cybersecurity, and require large-scale manually annotated corpora to train an IOC classifier. In this paper, we propose using an end-to-end neural-based sequence labelling model to identify IOCs automatically from cybersecurity articles without expert knowledge of cybersecurity. By using a multi-head self-attention module and contextual features, we find that the proposed model is capable of gathering contextual information from the texts of cybersecurity articles and performs better in the task of IOC identification. Experiments show that the proposed model outperforms other sequence labelling models, achieving an average F1-score of 89.0% on the English cybersecurity article test set, and an average F1-score of approximately 81.8% on the Chinese test set.
{ "section_name": [ "Introduction", "Model", "Token Embedding Layer", "Sequence Representation Layer", "CRF Layer", "Features", "Spelling Features", "Contextual Features", "Usage of Features", "Datasets", "Training Details", "Results", "Analysis of Contextual Features", "Training the Proposed Model with Bilingual Data", "Conclusions" ], "paragraphs": [ [ "Indicators of Compromise (IOCs) are forensic artifacts that are used as signs when a system has been compromised by an attacker or infected with a particular piece of malware. To be specific, IOCs are composed of some combinations of virus signatures, IPs, URLs or domain names of botnets, MD5 hashes of attack files, etc. They are frequently described in cybersecurity articles, many of which are written in unstructured text, describing attack tactics, technique and procedures. For example, a snippet from a cybersecurity article is shown in Fig. FIGREF1 . From the text , token “INST.exe” is the name of an executable file of a malicious software, and the file “ntdll.exe” downloaded by “INST.exe” is a malicious file as well. Obviously, these kinds of IOCs can be then utilized for early detection of future attack attempts by using intrusion detection systems and antivirus software, and thus, they exert an important role in the field of cybersecurity. However, with the rapid evolvement of cyber threats, the IOC data are produced at a high volume and velocity every day, which makes it increasingly hard for human to gather and manage them.", "A number of systems are proposed to help discover and gather malicious information and IOCs from various types of data sources BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . However, most of those systems consist of several components that identify IOCs by using human-crafted features that heavily rely on specific language knowledge such as dependency structure, and they often have to be pre-defined by experts in the field of the cybersecurity. Furthermore, they need a large amount of annotated data used as the training data to train an IOC classifier. Those training data are frequently difficult to be crowed-sourced, because non-experts can hardly distinguish IOCs from those non-malicious IPs or URLs. Thus, it is a time-consuming and laborious task to construct such systems for different languages.", "In this work, we consider the task of collecting IOCs from cybersecurity articles as a task of sequence labelling of natural language processing (NLP). By applying a sequence labelling model, each token in an unstructured input text is assigned with a label, and tokens assigned with IOC labels are then collected as IOCs. Recently, sequence labelling models have been utilized in many NLP tasks. Huang et al. BIBREF6 proposed using a sequence labelling model based on the bidirectional long short-term memory (LSTM) BIBREF7 for the task of named entity recognition (NER). Chiu et al. BIBREF8 and Lample et al. BIBREF9 proposed integrating LSTM encoders with character embedding and the neural sequence labelling model to achieve a remarkable performance on the task of NER as well as part-of-speech (POS) tagging. Besides, Dernoncourt et al. BIBREF10 and Jiang et al. BIBREF11 proposed applying the neural sequence labelling model to the task of de-identification of medical records.", "Among the previous studies of the neural sequence labelling task, Zhou el al. BIBREF12 firstly propose using an end-to-end neural sequence labelling model to fully automate the process of IOCs identification. 
Their model is based on an artificial neural network (ANN) with a bidirectional LSTM and a CRF. However, their newly introduced spelling features lead to the extraction of more false positives, i.e., tokens that are similar to IOCs but not malicious. In this paper, we further introduce a multi-head self-attention module and contextual features to the ANN model so that the proposed model can perform better in gathering contextual information from the unstructured text for the task of IOC identification. Based on the results of our experiments, our proposed approach achieves an average precision of 93.1% and recall of 85.2% on the English cybersecurity article test set, and an average precision of 82.9% and recall of 80.7% on the Chinese test set. We further evaluate the proposed model by training it using both the English and Chinese datasets, which achieves even better performance." ], [ "Fig. FIGREF2 shows the 3 components (layers) of the proposed neural network architecture." ], [ "The token embedding layer takes a token as input and outputs its vector representation. As shown in Fig. FIGREF2 , given an input sequence of tokens INLINEFORM0 , the output vector INLINEFORM1 ( INLINEFORM2 ) of each token INLINEFORM3 results from the concatenation of two different types of embeddings: the token embedding INLINEFORM4 and the character-based token embeddings INLINEFORM5 , INLINEFORM6 that come from the output of a character-level bi-LSTM encoder." ], [ "The Sequence Representation Layer takes the sequence of embeddings INLINEFORM0 ( INLINEFORM1 ) as input, and outputs a sequence INLINEFORM2 , where the INLINEFORM3 element of INLINEFORM4 represents the probability that the INLINEFORM5 token has the label INLINEFORM6 .", "Different from the previous work on sequence labelling in news articles or patient notes BIBREF9 , BIBREF10 , sentences from a cybersecurity report often contain a large number of tokens as well as lists of IOCs with little context, making it much more difficult for an LSTM to encode the input sentence correctly. Therefore, instead of the token LSTM layer in BIBREF12 , we propose a sequence representation layer that consists of 3 modules, i.e., an attention-based Bi-LSTM module, a multi-head self-attention module and a token feature module.", "Considering that tokens cannot contribute equally to the representation of the input sequence, we introduce an attention mechanism to the Bi-LSTM to extract the tokens that are crucial to the meaning of the sentence. Then, we aggregate the representations of those informative words to form the vector of the input sequence. The attention mechanism is similar to the one proposed by Yang et al. BIBREF13 , which is defined as follows: DISPLAYFORM0 ", "That is to say, we first compute INLINEFORM0 as a hidden representation of the hidden states of the Bi-LSTM INLINEFORM1 for the INLINEFORM2 input token, where INLINEFORM3 is obtained by concatenating the INLINEFORM4 hidden states of the forward and backward LSTM, i.e., INLINEFORM5 . Then, we measure the importance of the INLINEFORM6 token with a trainable vector INLINEFORM7 and get a normalized importance weight INLINEFORM8 through a softmax function. After that, the sentence vector INLINEFORM9 is computed as a weighted sum of INLINEFORM10 ( INLINEFORM11 ). Here, the weight matrix INLINEFORM12 , bias INLINEFORM13 and vector INLINEFORM14 are randomly initialized and jointly learned during the training process. 
Note that each input sentence merely has one sentence vector INLINEFORM15 as its weighted representation, and INLINEFORM16 is then used as a part of the INLINEFORM17 output of attention-based Bi-LSTM module, where INLINEFORM18 ( INLINEFORM19 ).", "Motivated by the successful application of self-attention in many NLP tasks BIBREF14 , BIBREF15 , we add a multi-head self-attention module to enhance the embedding of each word with the information of other words in a text adaptively. By means of this, the local text regions where convolution performs carry the global information of text. Following the encoder part of Vaswani et al. BIBREF14 , multi-head self-attention module is composed of a stack of several identical layers, each of which consists of a multi-head self-attention mechanism and two convolutions with kernel size 1. Given the sequence of embeddings INLINEFORM0 as input, and the output is defined as follows: DISPLAYFORM0 ", "where, INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are parameter matrices for the projections of queries INLINEFORM3 , keys INLINEFORM4 and values INLINEFORM5 in the INLINEFORM6 head, respectively. Here, INLINEFORM7 , INLINEFORM8 and INLINEFORM9 are set as the input sequence INLINEFORM10 ( INLINEFORM11 ). The INLINEFORM12 is then given to the two convolutions and the output of multi-head self-attention INLINEFORM13 ( INLINEFORM14 ) is obtained.", "Furthermore, we introduce some features to defined IOCs to improve the performance of the proposed model on a very small amount of training data. Here, we define two types of features, i.e., spelling features and contextual features, and map each token INLINEFORM0 ( INLINEFORM1 ) to a feature vector INLINEFORM2 , where INLINEFORM3 is the spelling feature vector and INLINEFORM4 is the contextual feature vector. Note that the values of features are then jointly learned during the process of training. In Section SECREF3 , we will explain the features in more detail.", "As shown in Fig. FIGREF2 , the vector INLINEFORM0 ( INLINEFORM1 ) is a concatenation of the INLINEFORM2 , INLINEFORM3 and INLINEFORM4 . Each vector INLINEFORM5 is then given to a feed-forward neural network with one hidden layer, which outputs the corresponding probability vector INLINEFORM6 ." ], [ "We also introduce a CRF layer to output the most likely sequence of predicted labels. The score of a label sequence INLINEFORM0 is defined as the sum of the probabilities of unigram labels and the bigram label transition probabilities: DISPLAYFORM0 ", "where INLINEFORM0 is a matrix that contains the transition probabilities of two subsequent labels. Vector INLINEFORM1 is the output of the token LSTM layer, and INLINEFORM2 is the probability of label INLINEFORM3 in INLINEFORM4 . INLINEFORM5 is the probability that a token with label INLINEFORM6 is followed by a token with the label INLINEFORM7 . Subsequently, these scores are turned into probabilities of the label sequence by taking a softmax function over all possible label sequences." ], [ "We extract a vector of features for each tokens of input sequences. In this section, we present each feature category in detail." ], [ "Since the IOCs tend to follow fixed patterns, we predefined several regular expressions and spelling rules to identify IOCs. For example, to identify a URL, we defined a regular expression INLINEFORM0 and set the value of the URL feature to 1 when the input token matches the regular expression. 
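The exact regular expressions used in the paper are not reproduced in this text, so the snippet below is only an illustrative sketch of how such spelling features can be computed: a small set of assumed patterns, each contributing one binary dimension that is set to 1 when the token matches.

```python
import re

# The paper's actual patterns are not given here; these are illustrative stand-ins.
SPELLING_PATTERNS = {
    "url":  re.compile(r"^(?:hxxps?|https?)://\S+$", re.IGNORECASE),
    "ipv4": re.compile(r"^(?:\d{1,3}\.){3}\d{1,3}$"),
    "md5":  re.compile(r"^[0-9a-f]{32}$", re.IGNORECASE),
}


def spelling_feature_vector(token):
    """Binary vector: one dimension per pattern, set to 1 when the token matches it."""
    return [1 if pattern.match(token) else 0 for pattern in SPELLING_PATTERNS.values()]
```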
However, such expressions and spelling rules could introduce false positives, i.e., tokens that have the same spelling patterns as IOCs but are not malicious. In this work, we further introduce the contextual features as described next." ], [ "IOCs in cybersecurity articles are often described in a predictable way: being connected to a set of contextual keywords BIBREF16 , BIBREF1 . For example, a human user can infer that the word “ntdll.exe” is the name of a malicious file on the basis of the words “download” and “compromised” from the text shown in Fig. FIGREF1 . By analyzing the whole corpus, it is interesting that malicious file names tend to co-occur with words such as \"download\", \"malware\", \"malicious\", etc. In this work, we consider words that can indicate the characteristics of neighboring words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords.", "Taking the above into account, we introduce the contextual feature vector INLINEFORM0 for a given input token INLINEFORM1 , where the INLINEFORM2 element of INLINEFORM3 is defined as follows: DISPLAYFORM0 ", " INLINEFORM0 is the frequency of token INLINEFORM1 in the whole corpus, while INLINEFORM2 is the frequency of contextual keyword INLINEFORM3 in the windowed portions of the texts centering on the token INLINEFORM4 in the whole corpus, and INLINEFORM5 is the size of the window. The set of contextual keywords INLINEFORM6 is automatically extracted from the annotated texts, where each contextual keyword INLINEFORM7 ( INLINEFORM8 ) satisfies the following conditions:", " INLINEFORM0 , where INLINEFORM1 is the set of manually annotated IOCs and INLINEFORM2 is the lower bound of the frequency.", " INLINEFORM0 is not a punctuation mark or stopword.", "Note that we extract contextual keywords only from manually annotated data (e.g., the training set), while we compute the contextual feature vector on all of the unlabeled data. According to this definition, it is obvious that the dimension of the contextual feature vector is the same as the number of extracted contextual keywords. The size of the window INLINEFORM0 and the lower bound of the frequency INLINEFORM1 are then tuned on the validation set." ], [ "The feature vector for an input token is the concatenation of the token spelling feature vector and the contextual feature vector. Here, to elucidate the best usage of the feature vector, we evaluate the feature vector by concatenating it at different locations in the proposed model, i.e., the input of the token LSTM layer ( INLINEFORM0 ), the hidden state of the token LSTM ( INLINEFORM1 ), and the output of the token LSTM ( INLINEFORM2 ). Among them, concatenating the feature vector with the LSTM hidden state vector and the sentence vector of attention in the token LSTM layer, as shown in Section SECREF4 , achieved the best performance. We speculate that the features played an important role in the task of IOC identification and that feature vectors near the output layer were able to improve the performance more significantly than those at other locations." ], [ "For the English dataset, we crawl 687 cybersecurity articles from a collection of advanced persistent threat (APT) reports which were published from 2008 to 2018. All of these cybersecurity articles are used to train the English word embedding. Afterwards, we randomly select 370 articles, and manually annotate the IOCs contained in the articles. 
Among the selected articles, we randomly select 70 articles as the validation set and 70 articles as the test set; the remaining articles are used for training.", "For Chinese dataset, we crawl 5,427 cybersecurity articles online from 35 cybersecurity blogs which are published from 2001 to 2018. All of these cybersecurity articles are used to train the Chinese word embedding. Afterwards, we randomly select 607 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 122 articles as the validation set and 122 articles as the test set; the remaining articles are used for training.", "TABLE TABREF20 shows statistics of the datasets. The output labels are annotated with the BIO (which stands for “Begin”, “Inside” and “Outside”) scheme." ], [ "For pre-trained token embedding, we apply word2vec BIBREF17 to all crawled 687 English APT reports and 5,427 Chinese cybersecurity articles described in Section SECREF21 respectively. The word2vec models are trained with a window size of 8, a minimum vocabulary count of 1, and 15 iterations. The negative sampling number of word2vec is set to 8 and the model type is skip-gram. The dimension of the output token embedding is set to 100.", "The ANN model is trained with the stochastic gradient descent to update all parameters, i.e., token embedding, character embedding, parameters of Bi-LSTM, weights of sentence attention, weights of multi-head self-attention, token features, and transition probabilities of CRF layers at each gradient step. For regularization, the dropout is applied to the output of each sub layer of the ANN model. Further training details are given below: (a) For attention-based Bi-LSTM module, dimensions of character embedding, hidden states of character-based token embedding LSTM, hidden states of Bi-LSTM, and sentence attention are set to 25, 25, 100 and 100, respectively. For multi-head self-attention module, we employ a stack of 6 multi-head self attention layer, each of which has 4 head and dimension of each head is set to 64. (b) All of the ANN’s parameters are initialized with a uniform distribution ranging from -1 to 1. (c) We train our model with a fixed learning rate of 0.005. The minimum number of epochs for training is set as 30. After the first 30 epochs had been trained, we compute the average F1-score of the validation set by the use of the currently produced model after every epoch had been trained, and stop the training process when the average F1-score of validation set fails to increase during the last ten epochs. We train our model for, if we do not early stop the training process, 100 epochs as the maximum number. (d) We rescale the normalized gradient to ensure that its norm does not exceed 5. (e) The dropout probability is set to 0.5." ], [ "As shown in TABLE TABREF24 , we report the micro average of precision, recall and F1-score for all 11 types of labels for a baseline as well as the proposed model. As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12 . As presented in TABLE TABREF24 , the score obtained by the proposed model is clearly higher than the baseline. Here, as described in Section SECREF14 , the sizes of window and lower bounds of frequency for selecting contextual keywords are tuned as 4 and 7 throughout the evaluation of English dataset, and tuned as 3 and 4 throughout the evaluation of Chinese dataset. 
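Under these definitions, keyword selection and contextual feature computation can be sketched as follows. Because the selection condition and the feature formula appear above only through placeholders, the exact reading below (co-occurrence counts within a window, normalised by the token's corpus frequency) is an assumption, and all identifiers are illustrative. With the tuned values mentioned above, `window=4, min_freq=7` would be used for the English corpus and `window=3, min_freq=4` for the Chinese one.

```python
from collections import Counter

def extract_contextual_keywords(annotated_docs, window, min_freq, stopwords):
    """annotated_docs: list of (tokens, ioc_spans) pairs from the annotated data.
    A token becomes a contextual keyword if it occurs at least `min_freq` times
    within `window` tokens of an annotated IOC and is neither a stopword nor
    punctuation. The precise condition in the paper is given only via
    placeholders, so this is one assumed reading of it."""
    near_ioc = Counter()
    for tokens, spans in annotated_docs:
        for start, end in spans:
            lo, hi = max(0, start - window), min(len(tokens), end + window)
            near_ioc.update(t.lower() for t in tokens[lo:start] + tokens[end:hi])
    return sorted(w for w, c in near_ioc.items()
                  if c >= min_freq and w.isalpha() and w not in stopwords)

def contextual_features(token, keywords, corpus_tokens, window):
    """k-th feature of `token`: how often keyword k appears within +/- `window`
    positions of an occurrence of `token` in the whole (unlabelled) corpus,
    normalised by the corpus frequency of `token` (assumed form of the formula)."""
    positions = [i for i, t in enumerate(corpus_tokens) if t == token]
    frequency = max(len(positions), 1)
    counts = Counter()
    for i in positions:
        counts.update(corpus_tokens[max(0, i - window): i + window + 1])
    return [counts[k] / frequency for k in keywords]
```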
The number of extracted contextual keywords from the English dataset is 1,328, and from the Chinese dataset is 331.", "Furthermore, we quantitatively compare our study with other typical works of sequence labelling, i.e., the work of Huang et al. BIBREF6 , the work of Lample et al. BIBREF9 and the work of Rei et al. BIBREF18 . Huang et al. BIBREF6 proposed a bidirectional LSTM model with a CRF layer, including hand-crafted features specialized for the task of sequence labelling. Lample et al. BIBREF9 described a model where the character-level representation was concatenated with word embedding and Rei et al. BIBREF18 improved the model by introducing an attention mechanism to the character-level representations. We train these models by employing the same training set and training parameters as the proposed model. As shown in TABLE TABREF24 , the proposed model obtains the highest precision, recall and F1-score than other models in the task of IOCs extraction. Compared with the second-best model of Lample et al. BIBREF9 , the performance gain of the proposed model on the English dataset is approximately 10.1% of precision and 10.0% of recall. The performance gain of the proposed model on the Chinese dataset is approximately 4.2% of precision and 9.0% of recall.", "We also quantitatively compare our study with the work of Zhou et al. BIBREF12 , which proposed a bidirectional LSTM model with a CRF layer, including hand-crafted spelling features for the task of IOC identification. As shown in TABLE TABREF24 , the proposed model obtains a slightly higher F1-score on the English dataset and significantly higher F1-score on the Chinese dataset.", "TABLE TABREF26 compares several examples of correct IOC extraction produced by the proposed model with one by the work of Lample et al. BIBREF9 . In the first example, the model of Lample et al. BIBREF9 fails to identify the malicious URL “http://www7.chrome-up.date/0m5EE”, because the token only appears in the test set and consists of several parts that are uncommon for URLs, such as “www7” and “date”, and thus both the token embedding and the character embedding lack proper information to represent the token as a malicious URL. The proposed model correctly identifies the URL, where the token is defined as a URL by spelling features and is then identified as a malicious URL by the use of the context information. In the second example, the model of Lample et al. BIBREF9 fails to identify token “cr.sh” of the input Chinese text as a malicious file name, while the token is assigned with a correct label by the proposed model. It is mainly because that the token “cr.sh” is defined as a token of file information by spelling features and tends to co-occur with words, “”(download) and “”(mining software). These two words often appear nearby malicious file information and are then extracted as contextual keywords in Section SECREF14 . The token “cr.sh” is then correctly identified as a token of malicious file information by the use of the contextual features." ], [ "The proposed model provides an intuitive way to inspect the contextual information of each given token. As described in Section SECREF14 , we initialize the contextual features of each given token using the automatically extracted contextual keywords and jointly learn them during the process of training with the whole ANN model. 
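The paper does not spell out how the jointly learned feature values are parameterised; one way to realise "initialise from the extracted keyword statistics and update during training" is a trainable token-by-keyword matrix, sketched below under that assumption (PyTorch is used only for illustration; the framework actually used is not stated).

```python
import torch
import torch.nn as nn

class LearnedContextualFeatures(nn.Module):
    """Per-token contextual feature vectors, initialised from the co-occurrence
    statistics (e.g. the output of a contextual_features() routine) and then
    updated jointly with the rest of the model. The resulting vocabulary-by-keyword
    weight matrix is what can later be inspected as a heatmap."""

    def __init__(self, init_matrix: torch.Tensor):
        super().__init__()
        # init_matrix: (vocab_size, num_keywords), one row per token type
        self.weights = nn.Parameter(init_matrix.clone())

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> (batch, seq_len, num_keywords)
        return self.weights[token_ids]
```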
To demonstrate the effectiveness of the contextual features, we visualize the learned weight matrix of the contextual features and show several examples in Fig. FIGREF28 . Each row of the matrix in each plot indicates the weights of contextual keywords for the given token. From this we see which contextual keywords are considered more important for representing the contextual information of the given token. We can see from the matrix in Fig. FIGREF28 that, for the token “spearphshing”, which is an email-spoofing attack method, the contextual keyword “email” has the largest weight. For the malware “SunOrcal”, which drops several malicious executable files, the contextual keywords “droppper” and “dropper” have larger weights than other contextual keywords such as “ascii”, “port” and “type”. For the non-IOC token “socket”, the contextual keywords “gateway” and “port” yield larger weights than other keywords, because “socket” tends to co-occur with “gateway” and “port”.", "We further calculate the average weight of each contextual keyword and show the top 10 and bottom 10 largest-weighted contextual keywords in TABLE TABREF29 . From this we see that contextual keywords such as “hash” and “filename”, which tend to co-occur with malicious filenames, have the largest weights for IOCs, while contextual keywords such as “ascii” and “password” have the largest weights for non-IOCs. It is interesting to find that the contextual keywords “dropped” and “droppper”, which tend to co-occur with malicious file information and malware, yield large weights for IOCs but small weights for non-IOCs. The proposed ANN model benefits from the differences in contextual information between IOCs and non-IOCs captured by the contextual features, and thus achieves better performance than previous works.", "Even though security articles are written in different languages, most IOCs are written in English and described in similar patterns. Therefore, using multilingual corpora could be a solution to the lack of annotated data, and the performance of the proposed model is expected to improve when the training set is extended. To examine this hypothesis, we ran a number of additional experiments using both the English dataset and the Chinese dataset, both of which are described in Section SECREF21 and are neither parallel nor comparable data.", "As pre-trained word embeddings for the bilingual training dataset, we applied a cross-lingual word embedding obtained with the method of Duong et al. BIBREF19 , where the English-Chinese cross-lingual dictionary is obtained by simply translating all the English words from the English dataset to Chinese and the Chinese words from the Chinese dataset to English using Google Translate. As the contextual feature vector, we concatenate the contextual feature vector obtained from the English dataset with the one obtained from the Chinese dataset. Then we merge the English training set and the Chinese training set into one set and train the proposed model with the merged bilingual training set.
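For this bilingual setting, the feature construction reduces to concatenating the two monolingual contextual feature vectors and pooling the training documents; a brief sketch is given below. The helper `contextual_features` is the routine sketched earlier, and reusing the monolingually tuned window sizes is an assumption.

```python
def bilingual_contextual_features(token, en_keywords, zh_keywords,
                                  en_corpus, zh_corpus,
                                  en_window=4, zh_window=3):
    """One entry per contextual keyword of either language; for a token that does
    not occur in the other language's corpus, those entries are simply zero."""
    return (contextual_features(token, en_keywords, en_corpus, en_window)
            + contextual_features(token, zh_keywords, zh_corpus, zh_window))

# Merging the training sets is a plain union of the two annotated collections,
# with tokens embedded in the shared cross-lingual space, e.g.:
#   merged_training_set = english_training_docs + chinese_training_docs
```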
TABLE TABREF31 shows that the proposed model trained with both the English and the Chinese training sets achieves a small improvement of F1-score on the English test set compared with the model trained with only the English training set, and a large improvement of F1-score on the Chinese test set compared with the model trained with only the Chinese training set.", "TABLE TABREF32 compares the scores of each label when the proposed model is trained with different training sets. On the English test set, the F1-scores of the labels “attack method”, “attack target” and “malware” obtained by the model trained with both the English and the Chinese training sets are lower than those obtained by the model trained with only the English training set. This is mainly because tokens of these labels can be written in different languages, which harms the model trained with the bilingual training set. In contrast, benefiting from the extended training set, for the types of labels that are usually written in English, e.g., “domain”, “file information”, “IPv4” and “vulnerability”, the proposed model trained with both training sets achieves higher scores than the model trained with only the English training set. On the Chinese test set, the proposed model trained with both the English and the Chinese training sets obtains noticeably higher F1-scores than the model trained with only the Chinese training set for almost all types of labels. It is interesting to find that the labels “e-mail address”, “attack method” and “attacker”, which lack instances in the Chinese training set, show the biggest improvement when the model is trained with the bilingual training set." ], [ "To conclude, in this paper we introduce a multi-head self-attention module and contextual features into a neural sequence labelling model, which significantly improves performance in the task of IOC identification. Based on the results of our experiments, the proposed model proves effective on both the English and the Chinese test sets. We further evaluated the proposed model by training it on both the English and the Chinese training sets and comparing it with models trained on only one training set; the model trained with the merged bilingual training set performs better.", "One direction for future work is to integrate contextual embeddings from a bidirectional language model into the proposed model. Pretrained neural language models have proved effective in sequence labelling models BIBREF26 , BIBREF27 , BIBREF28 . Integrating both the contextual features and contextual embeddings into the neural sequence labelling model is expected to further improve its performance." ] ] }
{ "question": [ "What is used a baseline?", "What contextual features are used?", "Where are the cybersecurity articles used in the model sourced from?", "What type of hand-crafted features are used in state of the art IOC detection systems?" ], "question_id": [ "08b87a90139968095433f27fc88f571d939cd433", "ef872807cb0c9974d18bbb886a7836e793727c3d", "4db3c2ca6ddc87209c31b20763b7a3c1c33387bc", "63337fd803f6fdd060ebd0f53f9de79d451810cd" ], "nlp_background": [ "five", "five", "five", "five" ], "topic_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "search_query": [ "", "", "", "" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12" ], "yes_no": null, "free_form_answer": "", "evidence": [ "As shown in TABLE TABREF24 , we report the micro average of precision, recall and F1-score for all 11 types of labels for a baseline as well as the proposed model. As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12 . As presented in TABLE TABREF24 , the score obtained by the proposed model is clearly higher than the baseline. Here, as described in Section SECREF14 , the sizes of window and lower bounds of frequency for selecting contextual keywords are tuned as 4 and 7 throughout the evaluation of English dataset, and tuned as 3 and 4 throughout the evaluation of Chinese dataset. The number of extracted contextual keywords from the English dataset is 1,328, and from the Chinese dataset is 331." ], "highlighted_evidence": [ "As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12 ." ] } ], "annotation_id": [ "102b5f1010602ad1ea20ccdc52d330557bfc7433" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "The words that can indicate the characteristics of the neighbor words as contextual keywords and generate it from the automatically extracted contextual keywords.", "evidence": [ "IOCs in cybersecurity articles are often described in a predictable way: being connected to a set of contextual keywords BIBREF16 , BIBREF1 . For example, a human user can infer that the word “ntdll.exe” is the name of a malicious file on the basis of the words “download” and “compromised” from the text shown in Fig. FIGREF1 . By analyzing the whole corpus, it is interesting that malicious file names tends to co-occur with words such as \"download\", \"malware\", \"malicious\", etc. In this work, we consider words that can indicate the characteristics of the neighbor words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords." ], "highlighted_evidence": [ " In this work, we consider words that can indicate the characteristics of the neighbor words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords." 
] } ], "annotation_id": [ "aef28565f179d4c9f16d43c8a36ed736718157fc" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For English dataset, we crawl 687 cybersecurity articles from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018. All of these cybersecurity articles are used to train the English word embedding. Afterwards, we randomly select 370 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 70 articles as the validation set and 70 articles as the test set; the remaining articles are used for training." ], "highlighted_evidence": [ "For English dataset, we crawl 687 cybersecurity articles from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018. " ] } ], "annotation_id": [ "e9486a8eb7bfa181261aef55adfe2acf4a011664" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "c3aaa905861aab52233d0a80bb71b8c517cc2e94" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ] }
{ "caption": [ "Fig. 2. ANN model of sequence labeling for IOCs automatic identification", "TABLE I STATISTICS OF DATASETS (NUMBERS OF TRAINING / VALIDATION / TEST SET)", "TABLE II EVALUATION RESULTS (MICRO AVERAGE FOR 11 LABELS)", "TABLE III EXAMPLES OF CORRECT IDENTIFICATION BY THE PROPOSED MODEL", "Fig. 3. Heatmap of part of contextual features martix in the English dataset", "TABLE IV TOP 10 AND BOTTOM 10 LARGEST WEIGHTED CONTEXTUAL KEYWORDS OF CONTEXTUAL FEATURE IN THE ENGLISH DATASET", "TABLE V COMPARISON OF EVALUATION RESULTS WHEN TRAINING THE PROPOSED MODEL WITH DIFFERENT TRAINING SETS (MICRO AVERAGE PRECISION / RECALL / F1-SCORE FOR 11 LABELS)", "TABLE VI EVALUATION RESULTS FOR EACH LABEL WHEN TRAINING THE PROPOSED MODEL WITH DIFFERENT TRAINING SETS (PRECISION / RECALL / F1-SCORE)" ], "file": [ "3-Figure2-1.png", "4-TableI-1.png", "5-TableII-1.png", "6-TableIII-1.png", "6-Figure3-1.png", "6-TableIV-1.png", "7-TableV-1.png", "8-TableVI-1.png" ] }
1605.08675
Boosting Question Answering by Deep Entity Recognition
In this paper an open-domain factoid question answering system for Polish, RAFAEL, is presented. The system goes beyond finding an answering sentence; it also extracts a single string corresponding to the required entity. Herein the focus is placed on different approaches to entity recognition, essential for retrieving information matching question constraints. Apart from the traditional approach, including named entity recognition (NER) solutions, a novel technique, called Deep Entity Recognition (DeepER), is introduced and implemented. It allows a comprehensive search for all forms of entity references matching a given WordNet synset (e.g. an impressionist), based on a previously assembled entity library, created by analysing the first sentences of encyclopaedia entries as well as disambiguation and redirect pages. DeepER also provides automatic evaluation, which makes numerous experiments possible, including over a thousand questions from a quiz TV show answered on the basis of the Polish Wikipedia. The final results of a manual evaluation on a separate question set show that the strength of the DeepER approach lies in its ability to answer questions that demand answers beyond the traditional categories of named entities.
{ "section_name": [ "Introduction", "RAFAEL", "Related work", "System Architecture", "Knowledge Base Processing", "Question Analysis", "Document Retrieval", "Entity Recognition", "Mention selection", "Deep Entity Recognition", "Entity Library", "Evaluation", "Data", "Automatic Evaluation", "Results", "Experiments", "Final System Evaluation", "Discussion", "Conclusions", "Appendix A: Named Entity Recognition in RAFAEL", "Acknowledgments" ], "paragraphs": [ [ "A Question Answering (QA) system is a computer program capable of understanding questions in a natural language, finding answers to them in a knowledge base and providing answers in the same language. So broadly defined task seems very hard; BIBREF0 describes it as AI-Complete, i.e. equivalent to building a general artificial intelligence. Nonetheless, the field has attracted a lot of attention in Natural Language Processing (NLP) community as it provides a way to employ numerous NLP tools in an exploitable end-user system. It has resulted in valuable contributions within TREC competitions BIBREF1 and, quite recently, in a system called IBM Watson BIBREF2 , successfully competing with humans in the task.", "However, the problem remains far from solved. Firstly, solutions designed for English are not always easily transferable to other languages with more complex syntax rules and less resources available, such as Slavonic. Secondly, vast complexity and formidable hardware requirements of IBM Watson suggest that there is still a room for improvements, making QA systems smaller and smarter.", "This work attempts to contribute in both of the above areas. It introduces RAFAEL (RApid Factoid Answer Extraction aLgorithm), a complete QA system for Polish language. It is the first QA system designed to use an open-domain plain-text knowledge base in Polish to address factoid questions not only by providing the most relevant sentence, but also an entity, representing the answer itself. The Polish language, as other Slavonic, features complex inflection and relatively free word order, which poses additional challenges in QA. Chapter SECREF2 contains a detailed description of the system architecture and its constituents.", "In the majority of such systems, designers' attention focus on different aspects of a sentence selection procedure. Herein, a different idea is incorporated, concentrating on an entity picking procedure. It allows to compare fewer sentences, likely to contain an answer. To do that, classical Named Entity Recognition (NER) gets replaced by Deep Entity Recognition. DeepER, introduced in this work, is a generalisation of NER which, instead of assigning each entity to one of several predefined NE categories, assigns it to a WordNet synset.", "For example, let us consider a question: Which exiled European monarch returned to his country as a prime minister of a republic?. In the classical approach, we recognise the question as concerning a person and treat all persons found in texts as potential answers. Using DeepER, it is possible to limit the search to persons being monarchs, which results in more accurate answers. In particular, we could utilise information that Simeon II (our answer) is a tsar; thanks to WordNet relations we know that it implies being a monarch. DeepER is a generalisation of NER also from another point of view – it goes beyond the classical named entity categories and treats all entities equally. 
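The tsar-implies-monarch step is a transitive hypernymy test over WordNet. A minimal sketch of such a test is given below; it uses Princeton WordNet through NLTK purely for illustration, whereas the system itself relies on plWordNet, and the lemma names are only examples.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def satisfies_focus(candidate_lemma: str, focus_synset) -> bool:
    """True iff some noun synset of the candidate has `focus_synset` among its
    transitive hypernyms, the kind of check that lets a tsar count as a monarch
    or an arctic tern as a bird."""
    for synset in wn.synsets(candidate_lemma, pos=wn.NOUN):
        if focus_synset == synset:
            return True
        if focus_synset in synset.closure(lambda s: s.hypernyms()):
            return True
    return False

monarch = wn.synsets("monarch", pos=wn.NOUN)[0]  # question focus, e.g. <monarch>
print(satisfies_focus("tsar", monarch))          # expected: True
```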
For example, we could answer a question Which bird migrates from the Arctic to the Antarctic and back every year?, although arctic tern is not recognized as NE by NER systems. Using DeepER, we may mark it as a seabird (hence a bird) and include among possible answers. Chapter SECREF3 outlines this approach.", "The entity recognition process requires an entities library, containing known entities, their text representations (different ways of textual notation) and WordNet synsets, to which they belong. To obtain this information, the program analyses definitions of entries found in encyclopaedia (in this case the Polish Wikipedia). In previous example, it would use a Wikipedia definition: The Arctic Tern (Sterna paradisaea) is a seabird of the tern family Sternidae. This process, involving also redirect and disambiguation pages, is described in section SECREF40 . Next, having all the entities and their names, it suffices to locate their mentions in a text. The task (section SECREF73 ) is far from trivial because of a complicated named entity inflection in Polish (typical for Slavonic languages, see BIBREF3 ).", "DeepER framework provides also another useful service, i.e. automatic evaluation. Usually QA systems are evaluated by verifying accordance between obtained and actual answer based on a human judgement. Plain string-to-string equality is not enough, as many entities have different text representations, e.g. John F. Kennedy is as good as John Fitzgerald Kennedy and John Kennedy, or JFK (again, the nominal inflection in Polish complicates the problem even more). However, with DeepER, a candidate answer can undergo the same recognition process and be compared to the actual expected entity, not string.", "Thanks to automatic evaluation vast experiments requiring numerous evaluations may be performed swiftly; saving massive amount of time and human resources. As a test set, authentic questions from a popular Polish quiz TV show are used. Results of experiments, testing (among others) the optimal context length, a number of retrieved documents, a type of entity recognition solution, appear in section SECREF88 .", "To avoid overfitting, the final system evaluation is executed on a separate test set, previously unused in development, and is checked manually. The results are shown in section SECREF93 and discussed in chapter SECREF6 . Finally, chapter SECREF7 concludes the paper." ], [ "As stated in previous chapter, RAFAEL is a computer system solving a task of Polish text-based, open-domain, factoid question answering. It means that provided questions, knowledge base and returned answers are expressed in Polish and may belong to any domain. The system analyses the knowledge base, consisting of a set of plain text documents, and returns answers (as concise as possible, e.g. a person name), supplied with information about supporting sentences and documents.", "What are the kinds of requests that fall into the category of factoid questions? For the purpose of this study, it is understood to include the following types:", "Although the above list rules out many challenging types of questions, demanding more elaborate answers (e.g. Why was JFK killed?, What is a global warming?, How to build a fence?), it still involves very distinct problems. Although RAFAEL can recognize factoid questions from any of these types and find documents relevant to them (see more in section SECREF18 and BIBREF4 ), its answering capabilities are limited to those requesting single unnamed entities and named entities. 
In this document, they are called entity questions.", "The task description here is similar to the TREC competitions and, completed with test data described in section SECREF80 , could play a similar role for Polish QA, i.e. provide a possibility to compare different solutions of the same problem. More information about the task, including its motivation, difficulties and a feasibility study for Polish could be found in BIBREF5 ." ], [ "The problem of Question Answering is not new to the Polish NLP community (nor working on other morphologically rich languages), but none of studies presented so far coincides with the notion of plain text-based QA presented above.", "First Polish QA attempts date back to 1985, when BIBREF6 presented a Polish interface to ORBIS database, containing information about the solar system. The database consisted of a set of PROLOG rules and the role of the system (called POLINT) was to translate Polish questions to appropriate queries. Another early solution, presented by BIBREF7 , could only work in a restricted domain (business information).", "A system dealing with a subset of the TREC tasks was created for Bulgarian by BIBREF8 . His solution answers only three types of questions: Definition, Where-Is and Temporal. He was able to achieve good results with 100 translated TREC questions, using several manually created answer patterns, without NER or any semantic information. Another system for Bulgarian BIBREF9 participated in the CLEF 2005 competition. Its answer extraction module bases on partial grammars, playing a role of patterns for different types of questions. They could answer correctly 37 of 200 questions, of which only 16 belong to the factoid type. Previously the same team BIBREF10 took part in a Bulgarian-English track of the CLEF 2004, in which Bulgarian questions were answered using English texts.", "A QA solution was also created for Slovene BIBREF11 . The task there is to answer students' questions using databases, spreadsheet files and a web service. Therefore, it differs from the problem discussed above by limited domain (issues related to a particular faculty) and the non-textual knowledge base. Unfortunately, no quantitative results are provided in this work.", "More recently, several elements of a Polish QA system called Hipisek were presented by BIBREF12 . It bases on a fairly common scheme of transforming a question into a search query and finding the most appropriate sentence, satisfying question constrains. Unfortunately, a very small evaluation set (65 question) and an unspecified knowledge base (gathered by a web crawler) make it difficult to compare the results. In their later works BIBREF13 , BIBREF14 , the team concentrated on spatial reasoning using a knowledge base encoded as a set of predicates.", "The approach presented by BIBREF15 is the closest to the scope of this work, as it includes analysis of Polish Wikipedia content and evaluation is based on questions translated from a TREC competition. Unfortunately, it heavily relies on a structure of Wikipedia entries, making it impossible to use with an arbitrary textual corpus.", "A non-standard approach to answer patterns has been proposed by BIBREF16 . In their Czech open-domain QA system they used a set of templates associated with question types, but also presented a method to learn them semi-automatically from search results. 
BIBREF17 in their Bulgarian QA system concentrated on semantic matching between between a question and a possible answer checked using dependency parsing. However, they provide no data regarding an answering precision of the whole system.", "The last Polish system worth mentioning has been created by BIBREF18 . Generally, their task, called Open Domain Question Answering (ODQA), resembles what is treated here, but with one major difference. A document is considered an answer; therefore they focus on improving ranking in a document retrieval stage. They have found out that it could benefit from taking nearness of query terms occurrences into account.", "As some of Slavonic languages lack necessary linguistic tools and resources, only partial solutions of QA problems exist for them, e.g. document retrieval for Macedonian BIBREF19 , question classification for Croatian BIBREF20 or answer validation for Russian BIBREF21 .", "The idea of DeepER in a nutshell is to improve QA by annotating a text with WordNet synsets using an entity base created by understanding definitions found in encyclopaedia. Parts of this concept have already appeared in the NLP community.", "A technique of coordinating synsets assigned to a question and a possible answer emerged in a study by BIBREF45 . While a question analysis there seems very similar to this work, entity library (called proper noun ontology) generation differs a lot. The author analysed 1 GB of newswire text and extracted certain expressions, e.g. \"X, such as Y\" implies that Y is an instance of X. Albeit precision of resulting base was not very good (47 per cent for non-people proper names), it led to a substantial improvement of QA performance.", "The idea of analysing encyclopaedic definitions to obtain this type of information already appeared, but was employed for different applications. For example, BIBREF46 described a method of building a gazetteer by analysing hyperonymy branches of nouns of first sentences in Wikipedia definitions. Unlike in this work, an original synset was replaced by a coarse-grained NER category. Another example of application is a NE recognizer BIBREF47 using words from a definition as additional features for a standard CRF classifier. In their definition analysis only the last word of the first nominal group was used.", "Other researchers dealt with a task explicitly defined as classifying Wikipedia entries to NER categories. For example BIBREF48 addressed the problem by combining traditional text classification techniques (bag of words) with contexts of entity mentions. Others BIBREF49 thoroughly examined article categories as a potential source of is-a relations in a taxonomy (99 per cent of entries have at least one category). Inhomogeneity of categories turned out as the main problem, dealt with by a heuristic classifier, assigning is-a and not-is-a labels. Categories were also used as features in a NER task BIBREF50 , but it required a set of manually designed patterns to differentiate between categories of different nature.", "Exploring a correspondence between Wikipedia entries and WordNet synsets found an application in automatic enriching ontologies with encyclopaedic descriptions BIBREF51 . However, only NEs already appearing in the WordNet were considered. The task (solved by bag-of-words similarity) is non-trivial only in case of polysemous words, e.g. which of the meanings of Jupiter corresponds to which Wikipedia article? Others BIBREF52 concentrated on the opposite, i.e. 
extending the WordNet by NEs that are not there yet by adding titles of entries as instances of synsets corresponding to their common category.", "Also, some see Wikipedia as an excellent source of high-quality NER training data. Again, it requires to project entries to NE categories. A thorough study of this problem, presented by BIBREF53 , utilizes features extracted from article content (bag of words), categories, keywords, inter-article and inter-language links. A final annotated corpus turns out as good for NER training as a manually annotated gold standard.", "Finally, some researchers try to generalise NER to other categories, but keep the same machine-learning-based approach. For example, BIBREF54 developed a tagger, assigning words in a text to one of 41 supersenses. Supersenses include NE categories, but also other labels, such as plant, animal or shape. The authors projected word-sense annotations of publicly available corpora to supersenses and applied perceptron-trained Hidden Markov Model for sequence classification, obtaining precision and recall around 77 per cent." ], [ "A general architectural scheme of RAFAEL (figure FIGREF11 ) has been inspired by similar systems developed for English; for examples see works by BIBREF22 and BIBREF23 .", "Two of the steps in the diagram concern offline processing of a knowledge base. Firstly, it is indexed by a search engine to ensure efficient searching in further stages (INDEXING). Secondly, it may be annotated using a set of tools (NLP), but this could also happen at an answering stage for selected documents only.", "After the system receives a question, it gets analysed (QUESTION ANALYSIS) and transformed into a data structure, called question model. One of its constituents, a search query, is used to find a set of documents, which are probably appropriate for the current problem (SEARCH). For each of the documents, all entity mentions compatible with an obtained question type (e.g. monarchs), are extracted (ENTITY RECOGNITION). For each of them, a context is generated (CONTEXT GENERATION). Finally, a distance between a question content and the entity context is computed to asses its relevance (DISTANCE MEASURE). All the mentions and their distance scores are stored and, after no more documents are left, used to select the best match (BEST ENTITY SELECTION). The system returns the entity, supplied with information about a supporting sentence and a document, as an answer." ], [ "Knowledge base (KB) processing consists of two elements: indexing and annotating. The objective of the first is to create an index for efficient searching using a search engine. 
In the system, Lucene 3.6 is used to build two separate full-text indices: regular and stemmed using a built-in stemmer for Polish, Stempel BIBREF24 .", "Secondly, texts go through a cascade of annotation tools, enriching it with the following information:", "Morphosyntactic interpretations (sets of tags), using Morfeusz 0.82 BIBREF25 ,", "Tagging (selection of the most probable interpretation), using a transformation-based learning tagger, PANTERA 0.9.1 BIBREF26 ,", "Syntactic groups (possibly nested) with syntactic and semantic heads, using a rule-based shallow parser Spejd 1.3.7 BIBREF27 with a Polish grammar, including improved version of modifications by BIBREF28 , enabling lemmatisation of nominal syntactic groups,", "Named entities, using two available tools: NERF 0.1 BIBREF29 and Liner2 2.3 BIBREF30 .", "All the annotations are stored in a variant of TEI P5 standard, designed for the National Corpus of Polish BIBREF31 . As noted previously, the process of annotating is not indispensable at the stage of offline KB processing; it could be as well executed only on documents returned from the search engine (for example see Webclopedia by BIBREF22 or LASSO by BIBREF23 ). However, since during evaluation experiments the same documents undergo the process hundreds of times, it seems reasonable to process the whole KB only once." ], [ "The goal of question analysis is to examine a question and extract all the information that suffices for answer finding. A resulting data structure, called question model, contains the following elements:", "Question type – a description of expected answer type, instructing the system, what type of data could be returned as an answer. It has three levels of specificity:", "General question type – one of the types of factoid questions, enumerated at the beginning of this chapter,", "Named entity type – applicable only in case general type equals named entity. Possible values are the following: place, continent, river, lake, mountain, mountain range, island, archipelago, sea, celestial body, country, state, city, nationality, person, first name, last name, band, dynasty, organisation, company, event, date, century, year, period, number, quantity, vehicle, animal, title.", "Focus synset – applicable in case of entity questions; a WordNet synset, to which a question focus belongs; necessary for DeepER.", "Search query – used to find possibly relevant documents,", "Question content – the words from question which are supposed to appear also in context of an answer.", "The task presented above, called question classification, is an example of text classification with very short texts. It could be tackled by a general-purpose classifier; for example, BIBREF11 used SVMs (Support Vector Machines) for closed-domain Slovene QA system; BIBREF32 employed SNoW (Sparse Network of Winnows) for hierarchical classification of TREC questions. For Polish results are not satisfactory BIBREF4 because of data sparsity.", "However, sometimes a solution seems quite evident, as part of the question types enforce its structure. For example, when it begins with Who or When, it belongs to person and date question types, respectively. That is why a set of 176 regular expressions (in case of RAFAEL) suffices to deal with them. They match only a subset of questions (36.15 per cent of the training set), but are highly unambiguous (precision of classification equals 95.37 per cent). 
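Two illustrative rules of this kind are sketched below; they are written in the spirit of the 176 expressions mentioned above, not copied from the system.

```python
import re

# Illustrative pattern-based question classification rules (not the actual
# RAFAEL patterns). Each rule maps an unambiguous question prefix to a
# (general type, named entity type) pair from the question model.
PATTERNS = [
    (re.compile(r"^Kto\b", re.IGNORECASE), ("NAMED_ENTITY", "person")),  # "Who ..."
    (re.compile(r"^Kiedy\b", re.IGNORECASE), ("NAMED_ENTITY", "date")),  # "When ..."
]

def classify_by_pattern(question: str):
    """Return the question type when an unambiguous pattern matches, or None so
    that the WordNet-based focus analysis can take over."""
    for pattern, question_type in PATTERNS:
        if pattern.search(question):
            return question_type
    return None

print(classify_by_pattern("Kto jest obecnie prezydentem Polski?"))  # ('NAMED_ENTITY', 'person')
```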
Nevertheless, some BIBREF33 use solely such patterns, but need a great number of them (1,273).", "Unfortunately, most of entity questions are ambiguous, i.e. it is not enough to inspect an interrogative pronoun to find an answer type. They may begin with what or which, followed by a question focus. For example, let us consider a question Which russian submarine sank in 2000 with its whole crew?. Its focus (russian submarine) carries information that the question could be answered by a named entity of type vehicle. The whole process of focus analysis is shown in figure FIGREF25 . The first nominal group after a pronoun serves as a possible lexeme name in plWordNet 2.1 BIBREF34 . As long as there are no results, it gets replaced by its semantic head. When a matching lexeme exists in WordNet, a set of all its hypernyms is extracted. If any of the elements in the set correspond to one of the named entity types, this type is recorded in the question model. Otherwise the general question type takes the value unnamed entity. A WordNet-assisted focus analysis was also implemented in one of solutions participating in a TREC competition BIBREF35 .", "Search query generation is described in the next chapter. The last element of a question model, called question content, contains segments, which are to be compared with texts to find the best answer. It includes all the words of the interrogative sentence except those included in the matched pattern (Which, ?) and the focus (submarine). In our example the following are left: russian, sank, in, 2000, with, its, whole, crew. An entity mention, which context resembles this set, will be selected as an answer (see details in section SECREF33 ).", "The question analysis stage explained above follows a design presented in previous works BIBREF4 , BIBREF36 , where more details could be found. The major difference lies in result processing – an original synset is not only projected to one of the named entity types, but also recorded as a focus synset in question type, utilised in DeepER to match entity types. In our example, it would only consider submarines as candidate answers." ], [ "The use of search engines in QA systems is motivated mainly by performance reasons. Theoretically, we could analyse every document in a text base and find the most relevant to our query. However, it would take excessive amount of time to process the documents, majority of which belong to irrelevant domains (839,269 articles in the test set). A search engine is used to speed up the process by selecting a set of documents and limiting any further analysis to them.", "As described in section SECREF12 , a knowledge base is indexed by Lucene offline. Given a question, we need to create a search query. The problem is that an answer in the knowledge base is probably expressed differently than the question. Hence, a query created directly from words of the question would not yield results, unless using a highly-redundant KB, such as the WWW (for this type of solution see BIBREF37 ). Therefore, some of the query terms should be dropped – based on their low IDF BIBREF38 or more complex heuristics BIBREF23 . On the other hand, the query may be expanded with synonyms BIBREF22 or derived morphological forms BIBREF38 .", "Finally, we need to address term matching issue – how to compare a query keyword and a text word in a morphologically-rich language, such as Polish? 
Apart from exact match, it also is possible to use a stemmer or fuzzy queries, available in Lucene (accepting a predefined Levenshtein distance between matching strings).", "Previous experiments BIBREF36 led to the following query generation procedure:", "Remove all words matched by a regular expression at the classification stage (What, Which, etc.),", "Keep a question focus,", "Connect all the remaining words by OR operator,", "Use fuzzy term matching strategy with absolute distance equal 3 characters and fixed prefix.", "Lucene handles a query and yields a ranked document list, of which N first get transferred to further analysis. The influence of value of N on answering performance is evaluated in section SECREF88 ." ], [ "Having a set of proposed documents and a question type, the next step is to scan them and find all mentions of entities with appropriate types. RAFAEL includes two approaches to the problem: classical Named Entity Recognition (NER) and novel Deep Entity Recognition.", "Three NERs for Polish are employed: NERF, Liner2 and Quant. NERF BIBREF29 is a tool designed within the project of the National Corpus of Polish and bases on linear-chain conditional random fields (CRF). It recognizes 13 types of NEs, possibly nested (e.g. Warsaw in University of Warsaw). Liner2 BIBREF30 also employs CRFs, but differentiates NEs of 56 types (which could be reduced to 5 for higher precision). Annotation using both of the tools happens offline within the KB preprocessing, so in the currently described stage it suffices to browse the annotations and find matching entities. As the above tools lack recognition of quantitative expressions, a new one has been developed especially for RAFAEL and called Quant. It is able to handle both numbers and quantities (using WordNet) in a variety of notations.", "Appendix A contains details of implementation of named entity recognition in RAFAEL, including a description of Quant and a mapping between question types and named entity types available in NERF and Liner2. An alternative being in focus of this work, i.e. DeepER approach, is thorougly discussed in chapter SECREF3 .", "RAFAEL may use any of the two approaches to entity recognition: NER (via NERF, Liner2 and Quant) or novel DeepER; this choice affects its overall performance. Experiments showing precision and recall of the whole system with respect to applied entity recognition technique are demonstrated in section SECREF88 .", "An entity recognition step is performed within the question answering process and aims at selecting all entity mentions in a given annotated document. Before it begins, the entity library is read into a PATRICIA trie, a very efficient prefix tree. In this structure, every entity name becomes a key for storing a corresponding list of entities.", "When a document is ready for analysis, it is searched for strings that match any of the keys in the trie. The candidate chunks (sequences of segments) come from three sources:", "lemmata of words and syntactic groups,", "sequences of words in surface forms (as they appear in text),", "sequences of words in base forms (lemmata).", "The last two techniques are necessary, because a nominal group lemmatisation often fails, especially in case of proper names. Their rich inflection in Polish BIBREF3 means that a nominal suffix of an entity may be hard to predict. 
Therefore, a chunk is considered to match an entity name if:", "they share a common prefix,", "an unmatched suffix in neither of them is longer than 3 characters,", "the common prefix is longer than the unmatched chunk suffix.", "Given a list of entity mentions, RAFAEL checks their compatibility with a question model. Two of its constituents are taken into account: a general question type and a synset. An entity mention agrees with NAMED_ENTITY type if its first segment starts with a capital letter and always agrees with UNNAMED_ENTITY. To pass a semantic agreement test, the synset of the question model needs to be a (direct or indirect) hypernym of one of the synsets assigned to the entity. For example, list of synsets assigned to entity Jan III Sobieski contains <król.1> (king), so it matches a question focus <władca.1, panujący.1, hierarcha.2, pan.1> (ruler) through a hypernymy path <władca.1, panujący.1, hierarcha.2, pan.1> INLINEFORM0 <monarcha.1, koronowana głowa.1> (monarch) INLINEFORM1 <król.1>. All the mentions of entities satisfying these conditions are returned for further processing." ], [ "When a list of entity mentions in a given document is available, we need to decide which of them most likely answers the question. The obvious way to do that is to compare surroundings of every mention with the content of the question. The procedure consists of two steps: context generation and similarity measurement.", "The aim of a context generation step is to create a set of segments surrounding an entity, to which they are assigned. Without capabilities of full text understanding, two approximate approaches seem legitimate:", "Sentence-based – for a given entity mention, a sentence in which it appears, serves as a context,", "Segment-based – for a given entity mention, every segment sequence of length M, containing the entity, is a context.", "Both of them have some advantages: relying on a single sentence ensures relation between an entity and a context, whereas the latter provides possibility of modifying context length. Obviously, the value of M should be proportional to question (precisely, its content) length.", "The method of treating sentences as a context has gained most popularity (see work of BIBREF39 ), but a window of fixed size also appears in the literature; for example BIBREF38 used one with M=140 bytes.", "The context generation is also related to another issue, i.e. anaphoric expressions. Some segments (e.g. this, him, they) may refer to entities that occurred earlier in a text and therefore harm a similarity estimation. It could be tackled by applying anaphora resolution, but a solution for Polish BIBREF40 remains in an early stage. Observations show that the majority of anaphora refer to an entity in a document title, so the problem is partially bypassed by adding a title to a context.", "An influence of the context generation techniques on final results is shown in section SECREF88 .", "To measure a similarity between a question content (explained in section SECREF18 ) and an entity context (generated by the procedures in previous section), a Jaccard similarity index BIBREF41 is computed. However, not all word co-occurrences matter equally (e.g. 
compare this and Honolulu), so word weights are used: INLINEFORM0 ", "The sets INLINEFORM0 and INLINEFORM1 contain segments in base forms, whereas INLINEFORM2 denotes a weight of an INLINEFORM3 -th base form, equal to its scaled IDF computed on a document set INLINEFORM4 : INLINEFORM5 ", "The Jaccard index is a popular solution for sentence similarity measurement in QA (for example see a system by BIBREF42 ). In case of selecting relevant documents, cosine measure is also applied. BIBREF18 compared it to Minimal Span Weighting (MSW) and observed that the latter performs better, as it takes into account a distance between matched words. A study of different techniques for sentence similarity assessment could be found in BIBREF39 .", "At this stage, a large set of pairs of entity mention and its contexts with scores assigned, is available. Which of them answers the question? Choosing the one with the highest score seems an obvious solution, but we could also aggregate scores of different mentions corresponding to the same answer (entity), e.g. compute their sum or mean. However, such experiments did not yield improvement, so RAFAEL returns only a single answer with the highest score.", "An answer consists of the following elements: an answer string, a supporting sentence, a supporting document and a confidence value (the score). A sentence and a document, in which the best mention appeared, are assumed to support the answer. Thanks to properties of Jaccard similarity, the mention score ranges between 0 for completely unrelated sentences to 1 for practically (ignoring inflection and a word order) the same. Therefore, it may serve as an answer confidence.", "When no entity mentions satisfying constraints of a question are found, no answer is returned. This type of result could also be used when the best confidence score is below a predefined value; performance of such technique are shown in section SECREF88 . The refusal to answer in case of insufficient confidence plays an important role in Jeopardy!, hence in IBM Watson BIBREF2 , but it was also used to improve precision in other QA systems BIBREF43 ." ], [ "Deep Entity Recognition procedure is an alternative to applying Named Entity Recognition in QA to find entities matching question constraints. It scans a text and finds words and multi-word expressions, corresponding to entities. However, it does not assign them to one of several NE categories; instead, WordNet synsets are used. Therefore, named entities are differentiated more precisely (e.g. monarchs and athletes) and entities beyond the classical NE categories (e.g. species, events, devices) could also be recognised in a text.", "It does not seem possible to perform this task relying solely on features extracted from words and surrounding text (as in NER), so it is essential to build an entity library. Such libraries already exist (Freebase, BabelNet, DBpedia or YAGO) and could provide an alternative for DeepER, but they concentrate on English. The task of adaptation of such a base to another language is far from trivial, especially for Slavonic languages with complex NE inflection BIBREF3 . An ontology taking into account Polish inflection (Prolexbase) has been created by BIBREF44 , but it contains only 40,000 names, grouped into 34 types." ], [ "An entity library for DeepER contains knowledge about entities that is necessary for deep entity recognition. 
Each of them consists of the following elements (entity #9751, describing the Polish president, Bronisław Komorowski):", "Main name: Bronisław Komorowski,", "Other names (aliases): Bronisław Maria Komorowski, Komorowski,", "Description URL: http://pl.wikipedia.org/wiki/?curid=121267,", "plWordNet synsets:", "<podsekretarz1, podsekretarz stanu1, wiceminister1> (vice-minister, undersecretary),", "<wicemarszałek1> (vice-speaker of the Sejm, the Polish parliament),", "<polityk1> (politician),", "<wysłannik1, poseł1, posłaniec2, wysłaniec1, posłannik1> (member of a parliament),", "<marszałek1> (speaker of the Sejm),", "<historyk1> (historian),", "<minister1> (minister),", "<prezydent1, prezydent miasta1> (president of a city, mayor).", "A process of entity library extraction is performed offline, before question answering. The library built for deep entity recognition in RAFAEL, based on the Polish Wikipedia (857,952 articles, 51,866 disambiguation pages and 304,823 redirections), contains 809,786 entities with 1,169,452 names (972,592 unique). The algorithm does not depend on any particular feature of Wikipedia, so any corpus containing entity definitions could be used.", "Figure FIGREF54 shows an exemplary process of converting the first paragraph of a Polish Wikipedia entry, describing former Polish president Lech Wałęsa, into a list of WordNet synsets. First, we omit all unessential parts of the paragraph (1). This includes text in brackets or quotes, but also introductory expressions like jeden z (one of) or typ (type of). Then, an entity name is detached from the text by matching one of definition patterns (2). In the example we can see the most common one, a dash (–). Next, all occurrences of separators (full stops, commas and semicolons) are used to divide the text into separate chunks (3). The following step employs shallow parsing annotation – only nominal groups that appear at the beginning of the chunks are passed on (4). The first chunk that does not fulfil this requirement and all its successors get excluded from further analysis (4.1). Finally, we split the coordination groups and check, whether their lemmas correspond to any lexemes in WordNet (5). If not, the process repeats with the group replaced by its semantic head. In case of polysemous words, only the first word sense (usually the most common) is taken into account.", "The whole process is more complicated than the simple example shows. Generally, it consists of the following steps:", "Prepare a corpus – data format and annotation process is the same as for a knowledge base, used in question answering, see section SECREF12 . It differs in scope of page categories, including not only articles, but also disambiguation and redirection pages.", "For each of article pages, extract the first paragraph and apply readDefinition function. If a resulting entity has a non-empty synset list, add it to the library. If some of the redirection pages point to the entity name, add their names as entity aliases.", "For each of disambiguation pages, extract all items and apply readDefinition function. If an item refers to an existing entity, extend it with extracted synsets and disambiguation page name. Create a new entity otherwise. Add redirection names as previously.", "Save the obtained base for future use.", "Function readDefinition( INLINEFORM0 ) – interprets a definition to assign synsets to an entity. 
INLINEFORM1 - annotated first paragraph of an encyclopaedic entry INLINEFORM2 - synsets describing an entity INLINEFORM3 := {} INLINEFORM4 := removeInBrackets( INLINEFORM5 ) INLINEFORM6 := removeInQuotes( INLINEFORM7 ) INLINEFORM8 in INLINEFORM9 INLINEFORM10 matches INLINEFORM11 INLINEFORM12 := match( INLINEFORM13 , INLINEFORM14 ).group(2) break INLINEFORM15 := removeDefinitionPrefixes( INLINEFORM16 ) INLINEFORM17 := split( INLINEFORM18 , INLINEFORM19 ) INLINEFORM20 in INLINEFORM21 INLINEFORM22 := firstGroupOrWord( INLINEFORM23 ) isNominal( INLINEFORM24 ) INLINEFORM25 := INLINEFORM26 INLINEFORM27 extractSynsets( INLINEFORM28 ) break INLINEFORM29 ", "The readDefinition function (shown as algorithm SECREF40 ) analyses a given paragraph of text and extracts a set of synsets, describing an entity, to which it corresponds, as exemplified by figure FIGREF54 . Simplifying, it is done by removing all unnecessary text (in brackets or quotes), splitting it on predefined separators (commas, full stops, semicolons) and applying extractSynsets function with an appropriate stop criterion. The readDefinition makes use of the following elements:", "removes everything that is between brackets ([], () or {}) from the text (step (1) in figure FIGREF54 ).", "removes everything between single or double quotes from the text (step (1) in the example).", "contains patterns of strings separating a defined concept from a definition, e.g. hyphens or dashes (used in step (2) of the example) or jest to (is a).", "removes expressions commonly prefixing a nominal group, such as jeden z (one of), typ (a type of) or klasa (a class of), not present in the example.", "a set of three characters that separate parts of a definition: \".\", \",\" and \";\".", "returns the longest syntactic element (syntactic group or word) starting at the beginning of a chunk (step (4) in the example).", "decides, whether a chunk is a noun in nominative, a nominal group or a coordination of nominal groups.", "Function extractSynsets( INLINEFORM0 ) – recursively extracts synsets from a nominal chunk. INLINEFORM1 - a nominal chunk (a syntactic group or a single noun) INLINEFORM2 - WordNet synsets corresponding to INLINEFORM3 INLINEFORM4 := lemmatise( INLINEFORM5 ) inWordNet( INLINEFORM6 ) getLexemes( INLINEFORM7 ).synset(0) isCoordination( INLINEFORM8 ) INLINEFORM9 := {} INLINEFORM10 in INLINEFORM11 INLINEFORM12 := INLINEFORM13 INLINEFORM14 extractSynsets( INLINEFORM15 ) INLINEFORM16 isGroup( INLINEFORM17 ) extractSynsets( INLINEFORM18 .semanticHead) {}", "The extractSynsets function (shown as algorithm SECREF40 ) accepts a nominal chunk and extracts WordNet synsets, corresponding to it. It operates recursively to dispose any unnecessary chunk elements and find the longest subgroup, having a counterpart in WordNet. It corresponds to step (5) in figure FIGREF54 and uses the following elements:", "returns a lemma of a nominal group.", "checks whether a given text corresponds to a lexeme in WordNet.", "return a list of WordNet lexemes corresponding to a given text.", "return a synset including a lexeme in a given word sense number.", "return TRUE iff a given chunk is a coordination group.", "return TRUE iff a given chunk is a group.", "is an element of a syntactic group, denoted as a semantic head.", "A few of design decisions reflected in these procedures require further comment. First of all, they differ a lot from the studies that involve a definition represented with a bag of words BIBREF48 , BIBREF51 , BIBREF53 . 
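Since the pseudocode above survives only with placeholder symbols, the following sketch restates the two procedures as they are described in the text. All identifiers are illustrative, and `parse` and `wordnet` stand in for the shallow-parsing and plWordNet interfaces, which are not reproduced here.

```python
import re

SEPARATORS = r"[.,;]"                                    # full stop, comma, semicolon
DEFINITION_PATTERNS = [r"\s[–-]\s", r"\bjest to\b"]      # e.g. a dash or "jest to" (is a)
PREFIXES = [r"^jeden z\s", r"^typ\s", r"^klasa\s"]       # "one of", "a type of", "a class of"

def read_definition(paragraph, parse, wordnet):
    """Sketch of readDefinition: strip bracketed and quoted text, cut the entity
    name off at the first definition pattern, split the rest on separators and
    keep only the leading nominal groups, collecting their synsets."""
    text = re.sub(r"\([^)]*\)|\[[^\]]*\]|\{[^}]*\}", "", paragraph)  # removeInBrackets
    text = re.sub(r"\"[^\"]*\"|'[^']*'", "", text)                   # removeInQuotes
    for pattern in DEFINITION_PATTERNS:
        match = re.search(pattern, text)
        if match:
            text = text[match.end():]                                # keep the definition part
            break
    for prefix in PREFIXES:
        text = re.sub(prefix, "", text)                              # removeDefinitionPrefixes
    synsets = set()
    for chunk in re.split(SEPARATORS, text):
        group = parse.first_group_or_word(chunk.strip())
        if group is None or not parse.is_nominal(group):
            break                                # first non-nominal chunk ends the analysis
        synsets |= extract_synsets(group, parse, wordnet)
    return synsets

def extract_synsets(chunk, parse, wordnet):
    """Sketch of extractSynsets: try the whole group's lemma in WordNet (first
    word sense only), descend into coordinated groups, otherwise retry with the
    semantic head of the group."""
    lemma = parse.lemmatise(chunk)
    if wordnet.has_lexeme(lemma):
        return {wordnet.get_lexemes(lemma).synset(0)}   # first (most common) word sense
    if parse.is_coordination(chunk):
        result = set()
        for part in parse.parts(chunk):
            result |= extract_synsets(part, parse, wordnet)
        return result
    if parse.is_group(chunk):
        return extract_synsets(parse.semantic_head(chunk), parse, wordnet)
    return set()
```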
Here, a certain definition structure is assumed, i.e. a series of nominal groups divided by separators. What is more, as the full stop belongs to them, the series may continue beyond a single sentence, which has improved recall in preliminary experiments. The availability of a shallow parsing layer and group lemmatisation allows querying WordNet by syntactic groups instead of single nouns, as in the work of BIBREF46 . As word order is relatively free in Polish, a nominal group cannot be assumed to end with a noun, like BIBREF47 did. Instead, a semantic head of a group is used.", "Finally, the problem of the lack of word sense disambiguation remains – the line getLexemes( lemma ).synset(0) means that the synset connected to the first meaning of a lexeme is always selected. We assume that it corresponds to the most common meaning, but that is not always the case – in our example in figure FIGREF54 <prezydent.1, prezydent miasta.1> (president of a city, i.e. mayor) precedes <prezydent.2> (president of a country, the obvious meaning). However, it does not have to harm QA performance as long as the question analysis module (section SECREF18 ) functions analogously, e.g. in case of a question beginning with który prezydent... (which president...). Therefore, the decision has been motivated by the relatively good performance of this solution in previously performed experiments on question analysis BIBREF36 . It also works in other applications, e.g. gazetteer generation BIBREF46 .", "To assess the quality of the entity library, its content has been compared with synsets manually extracted from 100 randomly selected Wikipedia articles. 95 of them contain a description of an entity in the first paragraph. Among those, the DeepER entity library includes 88 (per-entity recall 92.63 per cent). 135 synsets have been manually assigned to those entities, while the corresponding set in the library contains 133 items. 106 of them are equal (per-synset precision 79.70 per cent), while 13 differ only by word sense. 16 of the manually extracted synsets have no counterpart in the entity library (per-synset recall 88.15 per cent), which instead includes 14 false synsets." ], [ "Evaluation of RAFAEL is typical for factoid QA systems: given a knowledge base and questions, its responses are compared to the expected ones, prepared in advance. Section SECREF80 describes the data used in this procedure, whereas section SECREF87 explains how an automatic evaluation is possible without human labour." ], [ "The Polish Wikipedia serves as a knowledge base. It has been downloaded from a project site as a single database dump on 03.03.2013, from which plain text files have been extracted using the Wikipedia Extractor 2.2 script. It means that only plain text is taken into account – without lists, infoboxes, tables, etc. This procedure leads to a corpus with 895,486 documents, containing 168,982,550 segments, which undergo the annotation process described in section SECREF12 .", "The questions that are to be answered with the knowledge base come from two separate sets:", "The development set is based on 1500 (1130 after filtering) questions from a Polish quiz TV show, called Jeden z dziesięciu BIBREF55 . It was involved in previous experiments BIBREF4 , BIBREF36 .", "The evaluation set is based on an open dataset for Polish QA systems, published by BIBREF56 . It has been gathered from the Did you know... column, appearing on the main page of the Polish Wikipedia.
It contains 4721 questions, of which 1000 have been analysed, which resulted in 576 satisfying the task constraints given in chapter SECREF2 .", "Table TABREF85 shows a distribution of different question types and named entity types in the sets.", "To each of the questions from both sets some information has been assigned manually. It includes an identification number, an expected answer string, a general question type, a named entity type (if applicable) and an expected source document. Table TABREF86 contains several exemplary questions from the development set.", "The additional information (question types and expected documents) makes it possible to evaluate only selected modules of the whole QA system. For example, we could test question classification by comparing results against given question types, or entity selection by analysing only the relevant document." ], [ "Thanks to the availability of the DeepER entity library, it is possible to automatically perform answer evaluation for all the question types that are recognised by this technique (UNNAMED_ENTITY and NAMED_ENTITY excluding dates, numbers and quantities).", "Both the expected and the obtained answer are represented as short strings, e.g. Bronisław Komorowski. However, it does not suffice to check their exact equality. That is caused by the existence of different names for one entity (Bronisław Maria Komorowski or Komorowski), but also by rich nominal inflection (Komorowskiego, Komorowskiemu, ...).", "In fact, we want to compare entities, not names. Hence, deep entity recognition is a natural solution here. To check the correctness of an answer, we use it as input for the recognition process described in section SECREF73 . Then, it is enough to check whether the expected answer appears in any of the lists of names assigned to the recognized entities. For example, let us consider the question: Kto jest obecnie prezydentem Polski? (Who is the current president of Poland?) with the expected answer Bronisław Komorowski and a system answer Komorowski. The DeepER process finds many entities in the string (all the persons bearing this popular surname). One of them is the question goal and hence has Bronisław Komorowski in its list of names.", "As the process of entity recognition is imperfect, so is the automatic evaluation. However, it still lets us notice general trends in answering performance with respect to several factors. Of course, the final evaluation needs to be checked manually." ], [ "As mentioned in the previous section, the results consist of two groups: experiments, showing the influence of some aspects of the algorithm on performance, and a final assessment. Both use the Polish Wikipedia as a knowledge base, whereas the questions asked belong to the development and evaluation sets, respectively. In this section, recall measures the percentage of questions to which RAFAEL gave any answer, whereas precision denotes the percentage of questions answered correctly.", "When analysing results of different entity recognition techniques, we need to remember that they strongly rely on the output of the question analysis, which is not perfect. In particular, tests show that 15.65 per cent of questions are assigned to a wrong type and 17.81 per cent of search results do not include the expected document BIBREF36 . The entity recognition (ER) stage, a focus of this work, is very unlikely to deliver valid answers in these cases.
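As an aside, the automatic evaluation described above can be sketched in a few lines of Python. The entity library is mocked by a small dictionary, the handling of nominal inflection is omitted, and the names ENTITY_LIBRARY, recognize_entities and auto_evaluate are ours rather than RAFAEL's.

```python
# Toy entity library: entity id -> list of known names (aliases).
ENTITY_LIBRARY = {
    9751: ["Bronisław Komorowski", "Bronisław Maria Komorowski", "Komorowski"],
    1234: ["Jan Komorowski", "Komorowski"],
}


def recognize_entities(text):
    """Ids of all library entities one of whose names occurs in the text (case-insensitive)."""
    lowered = text.lower()
    return [eid for eid, names in ENTITY_LIBRARY.items()
            if any(name.lower() in lowered for name in names)]


def auto_evaluate(system_answer, expected_answer):
    """Accept the answer iff a recognised entity lists the expected answer among its names."""
    expected = expected_answer.lower()
    return any(expected == name.lower()
               for eid in recognize_entities(system_answer)
               for name in ENTITY_LIBRARY[eid])


# "Komorowski" is accepted: one of the matching entities also bears the expected name.
print(auto_evaluate("Komorowski", "Bronisław Komorowski"))   # True
print(auto_evaluate("Wałęsa", "Bronisław Komorowski"))       # False
```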
However, as the expected question type and source document are available in the question metadata, it is possible to correct the results of question analysis by artificially replacing a wrong type and/or adding the expected document to the retrieved set. In that way the ER modules can be evaluated as if question analysis worked perfectly. Note that this approach slightly favours NER-based solutions, as the question metadata contains general types and named entity types but lacks the focus synsets used by DeepER." ], [ "The goal of the first experiment is to test how the number of documents retrieved from the search engine and analysed by the entity recognition techniques influences the performance. Question classification errors have been bypassed as described in the previous paragraph. Additionally, two versions have been evaluated: with and without corrections of the retrieved set of documents. Figure FIGREF89 demonstrates results for different entity recognition techniques.", "As we can see, if a retrieved set contains the desired article, adding new documents slightly increases recall, while precision drops noticeably. That is because additional irrelevant documents usually introduce noise. However, in some cases they are useful, as the increasing recall indicates. On the other hand, if we have no guarantee of the presence of the expected document in the list, it seems more desirable to extend it, especially for small sizes. For sets bigger than 50 elements, the noise factor again dominates our results. Judging by the F1 measure, the optimal value is 20 documents.", "When it comes to the comparison, it should be noted that DeepER performs noticeably better than traditional NER. The gain in precision is small, but recall is almost twice as big. This is easily explained by the fact that the NER solutions are unable to handle the UNNAMED_ENTITY type, which accounts for 36 per cent of the entity questions.", "It is also worthwhile to check how the system performs while using different values of the minimal confidence rate (Jaccard similarity), as described in section UID38 . This could become useful when we demand higher precision and accept a lower recall. The plot in figure FIGREF90 shows answering performance using DeepER with corrected question analysis with respect to the minimal confidence rate. Generally, the system behaves as expected, but the exact values disappoint. The precision remains at a level of 25-40 per cent up to a confidence of 0.75, where in turn recall drops to only 0.35 per cent. Values of the F1 measure suggest that 0.2 is the highest sensible confidence rate.", "One more parameter worth testing, explained in section UID34 , is the context generation strategy. To find the entity with a context most similar to the question content, we could analyse a single sentence in which it appears, or a sequence of words of a predefined length. For both of these solutions, we could also add a document title, as it is likely to be referred to by anaphoric expressions. Figure FIGREF91 shows the value of precision (recall does not depend on context) for these four solutions.", "We can see that the inclusion of a title in a context helps to achieve better precision. The impact of anaphoric references to the title emerges clearly in the case of the flexible context – the difference grows with context size. Quite surprisingly, for the optimal context length (1.5 * question size), the opposite holds.
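For reference, the minimal confidence rate examined earlier in this section is a Jaccard similarity between the question content and an entity's context. A minimal sketch follows (our own simplification; lemmatisation of the token sets is omitted):

```python
def jaccard(tokens_a, tokens_b):
    """Jaccard similarity of two token sets: |A & B| / |A | B|."""
    a, b = set(tokens_a), set(tokens_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0


def confident_mentions(question_tokens, mentions, min_confidence=0.2):
    """Keep only entity mentions whose context is similar enough to the question."""
    return [m for m in mentions
            if jaccard(question_tokens, m["context_tokens"]) >= min_confidence]


question = ["który", "prezydent", "podpisał", "ustawę"]
mentions = [
    {"entity": "Bronisław Komorowski",
     "context_tokens": ["prezydent", "podpisał", "ustawę", "w", "środę"]},
    {"entity": "Warszawa",
     "context_tokens": ["stolica", "Polski"]},
]
print([m["entity"] for m in confident_mentions(question, mentions)])
```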
However, because of the small difference between the techniques including the title, for the sake of simplicity, the single-sentence context is used in the final evaluation." ], [ "To pose a realistic challenge to the system, the evaluation set used at this stage substantially differs from the one used during development (see section SECREF80 ). A configuration for the final evaluation has been prepared based on the results of the experiments. All of the tested versions share the following features:", "no question analysis corrections,", "question classification and query generation solutions which proved best in the previous experiments (see section SECREF18 ),", "a retrieved set of documents including 20 articles,", "no minimal confidence,", "single sentence context with title.", "Tested solutions differ with respect to entity recognition only; RAFAEL variants based on the following options are considered:", "quantities recognizer (Quant),", "traditional NER solutions: Nerf and Liner2,", "deep entity recognition (DeepER),", "hybrid approach, where entity mentions were gathered from all the above sources.", "Table TABREF103 shows the results of the final evaluation, expressed by recall, precision, F1 measure and Mean Reciprocal Rank (MRR). Standard deviations of these values have been obtained by bootstrap resampling of the test set. Additionally, precision obtained by automatic evaluation has been added, where applicable. As we can see, only a small percentage of questions is handled by the quantitative entity recognition. NER-based solutions deal with slightly more (Nerf) or less (Liner2) than half of the questions. When using DeepER, the recall ratio rises to 73 per cent while the precision does not differ significantly. That is because UNNAMED_ENTITY questions (unreachable for traditional NER) account for a substantial part of the test set. The maximum recall is obtained by the hybrid solution (90 per cent) but it comes at the cost of lower precision (33 per cent). On the other hand, when we take the whole ranking lists into account, traditional NERs seem to perform better (in terms of MRR).", "As expected, the automatic evaluation underestimates precision, but the difference remains below 5 per cent. Judging by the F1 measure, the hybrid solution seems to beat the others." ], [ "The main strength of DeepER compared to NER, according to the results shown in table TABREF103 , is much higher recall. Table TABREF106 shows examples of questions to which only DeepER provides a correct answer. As we can see (notice the question foci in the table), they could not be assigned to any of the traditional NE categories.", "The other striking fact in the results is the low precision. A part of the wrong answers was inspected and most of the errors seem to result from the following phenomena:", "The entity recognizers also introduce errors typical for them:", "The last remark applies also to other techniques. For example, consider the word kot, which means a cat. However, it is also the name of a journal, a lake, a village, a badge (KOT), a surname of 10 persons in the Polish Wikipedia and much more. A human would usually assume the most common meaning (a cat), but the system treats them as equally probable. This introduces noise into the process, as such an entity matches many types of questions.", "Another thing that demands explanation is the difference in precision of answers found using Liner2 and DeepER: in the evaluation set the latter does not maintain its advantage from the development set.
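Returning briefly to the measures in table TABREF103: the standard deviations come from bootstrap resampling of the per-question outcomes. The sketch below is our own illustration of such a procedure (with recall and precision defined as in this section), not the evaluation code used for RAFAEL.

```python
import random


def recall_precision(outcomes):
    """outcomes: one (answered, correct) pair of flags per question."""
    answered = sum(1 for a, _ in outcomes if a)
    correct = sum(1 for a, c in outcomes if a and c)
    recall = answered / len(outcomes)
    precision = correct / answered if answered else 0.0
    return recall, precision


def bootstrap_std(outcomes, n_samples=1000, seed=0):
    """Standard deviations of recall and precision via bootstrap resampling."""
    rng = random.Random(seed)
    stats = [recall_precision([rng.choice(outcomes) for _ in outcomes])
             for _ in range(n_samples)]

    def std(values):
        mean = sum(values) / len(values)
        return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

    recalls, precisions = zip(*stats)
    return std(recalls), std(precisions)


# 576 simulated questions: roughly 73% answered, about half of those correctly.
rng = random.Random(1)
outcomes = [(rng.random() < 0.73, rng.random() < 0.5) for _ in range(576)]
print(bootstrap_std(outcomes))
```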
This difference could be explained by the different compositions of the question sets (table TABREF85 ) – the development one contains many more questions beginning with ambiguous pronouns, followed by a question focus, e.g. Który poeta... (which poet), thus providing a precise synset (a poet) for deep entity recognition. Members of the evaluation set much more frequently begin with pronouns like Kto ...(who), where a synset corresponds to a general NE type (a person).", "As RAFAEL is the first Polish QA system able to answer by entities instead of documents, we cannot compare it directly to any other solution. However, the evaluation set has been created based on questions published by BIBREF56 and used for the evaluation of a document retrieval system BIBREF18 . Their baseline configuration achieved a@1 (percentage of questions answered by the first document, corresponding to precision in table TABREF103 ) equal to 26.09 per cent. By taking into account the proximity of keyword matches (MCSW method), they improved the result to 38.63 per cent. We can see that RAFAEL, despite solving a much more challenging problem, obtains better precision than the baseline in all configurations; using Liner2 it beats even the best method tested on this set (MCSW).", "The results suggest two possible directions of future work to improve the performance of RAFAEL. Firstly, involving semantics in sentence matching could solve some of the problems mentioned above. There are a lot of techniques in that area, also in QA systems (see a variety of them used by BIBREF39 ), but their implementation in a morphologically rich language would require a thorough study. For example, there exist techniques computing a semantic similarity based on a WordNet graph BIBREF57 , which is available for Polish and proved very useful in this study. Secondly, the relatively good performance of hybrid ER indicates that it may be good to apply different entity recognizers to different questions. For example, we could evaluate them for each question type separately and select the one that performs best for a given one. However, it would require much more training data to have a substantial number of questions of each type, including the scarce ones (observe the sparsity of table TABREF85 ).", "When it comes to DeepER, word ambiguity seems to be the main issue for future efforts. Of course, a full-lexicon precise word-sense disambiguation tool would solve the problem, but we cannot expect one in the near future. Instead, we could select a synset somewhere on the path between a focus synset and a named entity type. In the example from figure FIGREF54 , rather than choosing between <prezydent.1, prezydent miasta.1> (president of a city) and <prezydent.2> (president of a country), we could use <urzędnik.1, biuralista.1> (official), which covers both meanings." ], [ "This paper introduces RAFAEL, a complete open-domain question answering system for Polish. It is capable of analysing a given question, scanning a large corpus and extracting an answer, represented as a short string of text.", "In its design, the focus has been on entity recognition techniques, used to extract all the entities compatible with a question from a given text. Apart from the traditional named entity recognition, differentiating between several broad categories of NEs, a novel technique, called Deep Entity Recognition (DeepER), has been proposed and implemented.
It is able to find entities belonging to a given WordNet synset, using an entity library gathered by interpreting definitions from an encyclopaedia.", "Automatic evaluation, made possible by the DeepER approach, has allowed us to perform several experiments, showing answering accuracy with respect to different parameters. Their conclusions have been used to prepare the final evaluation, whose results have been checked manually. They suggest that the DeepER-based solution yields similar precision to NER, but is able to answer many more questions, including those beyond the traditional categories of named entities." ], [ "As mentioned in section SECREF32 , apart from DeepER, RAFAEL also employs traditional NER-based solutions for entity recognition: NERF, Liner2 and Quant. Each of them uses its own typology of named entities, which covers only a part of the types enumerated in section SECREF18 . Table TABREF118 shows a correspondence between these types. As we can see, there are a few problems:", "Problems 3 and 4 are solved by additional postprocessing code, extracting CENTURY from date and NAME and SURNAME from person_nam entities. In the case of multi-segment person entities, it assumes that the first and last words correspond to the first and last name, respectively.", "While NERF and Liner2 are standalone NER tools and details of their design are available in the previously mentioned publications, Quant has been created specifically for RAFAEL. To find numbers, it annotates all chains of segments according to a predefined pattern, which accepts the following types of segments:", "The pattern is matched in greedy mode, i.e. it adds as many new segments as possible. It can recognise expressions like 10 tysięcy (10 thousand), kilka milionów (several million), 10 000 or 1.698,88 (1,698.88).", "A quantity is a sequence of segments recognised as a number, followed by a unit of measurement. To check whether a word denotes a unit of measurement, plWordNet is searched for lexemes equal to its base form. Then it suffices to check whether the lexeme belongs to a synset having <jednostka miary 1> (unit of measurement) as one of its (direct or indirect) hypernyms, e.g. piętnaście kilogramów (fifteen kilograms) or 5 000 watów (5 000 watts)." ], [ "This study was supported by a research fellowship within \"Information technologies: research and their interdisciplinary applications\", agreement number POKL.04.01.01-00-051/10-00. Critical reading of the manuscript by Agnieszka Mykowiecka and Aleksandra Brzezińska is gratefully acknowledged." ] ] }
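The unit-of-measurement check used by Quant, described just above, amounts to a walk up the hypernymy graph. In the sketch below the plWordNet interface is mocked by small dictionaries, and all names and example entries are illustrative only.

```python
# Toy hypernymy graph: synset -> direct hypernyms (a plWordNet-like resource is assumed).
HYPERNYMS = {
    "kilogram.1": ["jednostka masy.1"],
    "jednostka masy.1": ["jednostka miary.1"],
    "wat.1": ["jednostka mocy.1"],
    "jednostka mocy.1": ["jednostka miary.1"],
    "kot.1": ["zwierzę.1"],
}
# Toy lexicon: base form -> synsets of its lexemes.
LEXEMES = {"kilogram": ["kilogram.1"], "wat": ["wat.1"], "kot": ["kot.1"]}


def has_hypernym(synset, target, graph=HYPERNYMS):
    """True iff `target` is a direct or indirect hypernym of `synset`."""
    stack, seen = list(graph.get(synset, [])), set()
    while stack:
        current = stack.pop()
        if current == target:
            return True
        if current not in seen:
            seen.add(current)
            stack.extend(graph.get(current, []))
    return False


def is_unit_of_measurement(base_form):
    """A word denotes a unit iff one of its synsets has <jednostka miary 1> as a hypernym."""
    return any(has_hypernym(s, "jednostka miary.1") for s in LEXEMES.get(base_form, []))


print(is_unit_of_measurement("kilogram"))   # True
print(is_unit_of_measurement("kot"))        # False
```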
{ "question": [ "Do they compare DeepER against other approaches?", "How is the data in RAFAEL labelled?", "How do they handle polysemous words in their entity library?" ], "question_id": [ "63496705fff20c55d4b3d8cdf4786f93e742dd3d", "7b44bee49b7cb39cb7d5eec79af5773178c27d4d", "6d54bad91b6ccd1108d1ddbff1d217c6806e0842" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "FLOAT SELECTED: Table 3. Question answering accuracy of RAFAEL with different entity recognition strategies: quantities only (Quant), traditional NER (Nerf, Liner2 ), deep entity recognition (DeepER) and their combination (Hybrid)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3. Question answering accuracy of RAFAEL with different entity recognition strategies: quantities only (Quant), traditional NER (Nerf, Liner2 ), deep entity recognition (DeepER) and their combination (Hybrid)." ] } ], "annotation_id": [ "3dd14ec7c6c2a4fa560f7cff98479063dda0e1c9" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Using a set of annotation tools such as Morfeusz, PANTERA, Spejd, NERF and Liner", "evidence": [ "Secondly, texts go through a cascade of annotation tools, enriching it with the following information:", "Morphosyntactic interpretations (sets of tags), using Morfeusz 0.82 BIBREF25 ,", "Tagging (selection of the most probable interpretation), using a transformation-based learning tagger, PANTERA 0.9.1 BIBREF26 ,", "Syntactic groups (possibly nested) with syntactic and semantic heads, using a rule-based shallow parser Spejd 1.3.7 BIBREF27 with a Polish grammar, including improved version of modifications by BIBREF28 , enabling lemmatisation of nominal syntactic groups,", "Named entities, using two available tools: NERF 0.1 BIBREF29 and Liner2 2.3 BIBREF30 ." ], "highlighted_evidence": [ "Secondly, texts go through a cascade of annotation tools, enriching it with the following information:\n\nMorphosyntactic interpretations (sets of tags), using Morfeusz 0.82 BIBREF25 ,\n\nTagging (selection of the most probable interpretation), using a transformation-based learning tagger, PANTERA 0.9.1 BIBREF26 ,\n\nSyntactic groups (possibly nested) with syntactic and semantic heads, using a rule-based shallow parser Spejd 1.3.7 BIBREF27 with a Polish grammar, including improved version of modifications by BIBREF28 , enabling lemmatisation of nominal syntactic groups,\n\nNamed entities, using two available tools: NERF 0.1 BIBREF29 and Liner2 2.3 BIBREF30 .\n\n" ] } ], "annotation_id": [ "1075c87b188f9958978397a9f9589fc0136d8fca" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "only the first word sense (usually the most common) is taken into account" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Figure FIGREF54 shows an exemplary process of converting the first paragraph of a Polish Wikipedia entry, describing former Polish president Lech Wałęsa, into a list of WordNet synsets. 
First, we omit all unessential parts of the paragraph (1). This includes text in brackets or quotes, but also introductory expressions like jeden z (one of) or typ (type of). Then, an entity name is detached from the text by matching one of definition patterns (2). In the example we can see the most common one, a dash (–). Next, all occurrences of separators (full stops, commas and semicolons) are used to divide the text into separate chunks (3). The following step employs shallow parsing annotation – only nominal groups that appear at the beginning of the chunks are passed on (4). The first chunk that does not fulfil this requirement and all its successors get excluded from further analysis (4.1). Finally, we split the coordination groups and check, whether their lemmas correspond to any lexemes in WordNet (5). If not, the process repeats with the group replaced by its semantic head. In case of polysemous words, only the first word sense (usually the most common) is taken into account." ], "highlighted_evidence": [ "In case of polysemous words, only the first word sense (usually the most common) is taken into account." ] } ], "annotation_id": [ "3ed4ab7fb1ef561174c750eaf67ea3cc23b8d73b" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Fig. 1. Overall architecture of the QA system – RAFAEL. See descriptions of elements in text.", "Fig. 2. Outline of a question focus analysis procedure used to determine an entity type in case of ambiguous interrogative pronouns.", "Fig. 3. Example of the entity extraction process in DeepER, transforming a Wikipedia entry of Lech Wałęsa into a list of synsets.", "Table 1. A distribution of different general types and named entity types in development (1130 questions) and final evaluation (576 questions) sets.", "Table 2. Exemplary questions with their types (general and named entity), expected source articles and answers.", "Fig. 4. Question answering performance with respect to size of a retrieved set of documents, undergoing a full analysis. Two versions are considered – with and without guaranteed presence of an article, containing the desired information, in a set. The results for different entity recognition techniques– traditional NER (Nerf, Liner2) and DeepER.", "Fig. 5. RAFAEL performance with respect to minimal confidence rate. Results computed using DeepER with corrected question type and corrected list of 50 documents.", "Fig. 6. Question answering performance for different context generation strategies: single sentence and sequence of segments of certain length. Both types considered with and without an article title added.", "Table 3. Question answering accuracy of RAFAEL with different entity recognition strategies: quantities only (Quant), traditional NER (Nerf, Liner2 ), deep entity recognition (DeepER) and their combination (Hybrid).", "Table 4. Examples of questions which have been handled and answered correctly only with the DeepER approach. Their foci lie beyond areas covered by the NE categories.", "Table 5. Correspondence between named entity types from question analysis and supported by different NER solutions." ], "file": [ "4-Figure1-1.png", "6-Figure2-1.png", "12-Figure3-1.png", "15-Table1-1.png", "16-Table2-1.png", "18-Figure4-1.png", "19-Figure5-1.png", "19-Figure6-1.png", "20-Table3-1.png", "21-Table4-1.png", "23-Table5-1.png" ] }
1709.08858
Polysemy Detection in Distributed Representation of Word Sense
In this paper, we propose a statistical test to determine whether a given word is used as a polysemic word or not. The statistic of the word in this test roughly corresponds to the fluctuation in the senses of the neighboring words and the word itself. Even though the sense of a word corresponds to a single vector, we discuss how polysemy of the words affects the position of vectors. Finally, we also explain the method to detect this effect.
{ "section_name": [ "Introduction", "Related Work", "Senses and Contexts", "Proposed Method", "Experimental Settings and Examples of Calculation", "Evaluation", "Error analysis", "Discussion", "Conclusion" ], "paragraphs": [ [ "Distributed representation of word sense provides us with the ability to perform several operations on the word. One of the most important operations on a word is to obtain the set of words whose meaning is similar to the word, or whose usage in text is similar to the word. We call this set the neighbor of the word. When a word has several senses, it is called a polysemic word. When a word has only one sense, it is called a monosemic word. We have observed that the neighbor of a polysemic word consists of words that resemble the primary sense of the polysemic word. We can explain this fact as follows. Even though a word may be a polysemic, it usually corresponds to a single vector in distributed representation. This vector is primarily determined by the major sense, which is most frequently used. The information about a word's minor sense is subtle, and the effect of a minor sense is difficult to distinguish from statistical fluctuation.", "To measure the effect of a minor sense, this paper proposes to use the concept of surrounding uniformity. The surrounding uniformity roughly corresponds to statistical fluctuation in the vectors that correspond to the words in the neighbor. We have found that there is a difference in the surrounding uniformity between a monosemic word and a polysemic word. This paper describes how to compute surrounding uniformity for a given word, and discuss the relationship between surrounding uniformity and polysemy." ], [ "The distributed word representation can be computed as weight vectors of neurons, which learn language modeling BIBREF0 . We can obtain a distributed representation of a word using the Word2Vec software BIBREF1 which enable us to perform vector addition/subtraction on a word's meaning. The theoretical background is analyzed by BIBREF2 , where the operation is to factorize a word-context matrix, where the elements in the matrix are some function of the given word and its context pairs. This analysis gives us insight into how the vector is affected by multiple senses or multiple context sets. If a word has two senses, the obtained representation for the word will be a linearly interpolated point between the two points of their senses.", "The importance of multiple senses is well recognized in word sense detection in distributed representation. The usual approach is to compute the corresponding vectors for each sense of a word BIBREF3 , BIBREF4 . In this approach, first, the context is clustered. Then, the vector for each cluster is computed. However, the major problem faced by this approach is that all target words need to be assumed as polysemic words first, and their contexts are always required to be clustered. Another approach is to use external language resources for word sense, and to classify the context BIBREF5 . The problem with this approach is that it requires language resources of meanings to obtain the meaning of a polysemic word. If we know whether a given word is polysemic or monosemic thorough a relatively simple method, we can concentrate our attention on polysemic words." ], [ "In this paper, we assume that the sense of a word is determined by the distribution of contexts in which the word appears in a given corpus. If a word comes to be used in new contexts, the word comes to have a new sense. 
If we could have an infinitely sized corpus, this sense might converge to the sense in the dictionary. In reality, the size of the corpus in hand is limited, and some senses indicated in a dictionary may not appear in the corpus. The distinction between the senses in a dictionary and the senses in the corpus is important in this paper, because it is crucial for discussing polysemy. All discussions in this paper depend on the corpus in hand. We now use the FIL9 corpus (http://mattmahoney.net/dc/textdata), which primarily consists of a description of believed facts, rather than conversations. We can expect that the senses that are mainly used in conversation would not appear in this corpus.", "In this paper, we analyze auxiliary verbs, which are polysemic words from a dictionary. If the corpus is limited to a description of believed facts, we may regard auxiliary verbs as monosemic words, since their contexts are limited. In addition, we particularly analyze the relationship between the auxiliary verb “may” and the name of the month “May”. In the dictionary, these two are regarded as two different words, rather than as two different senses of one word. By ignoring the upper/lower case characters, these two words have the same character sequence, and the word “may” becomes a polysemic word, which has two types of context in the given corpus." ], [ "Our proposed method is based on the following measures. Let $\vec{w}$ be the vector corresponding to the given word. Let $N$ be the size of the neighbor, such as 4. First, we choose the $N$ neighboring words whose angle with the given word is the smallest. This operation is already implemented in the Word2Vec software. Let $\vec{a_i}(\vec{w})$ be the $i$ th vector of the neighbor of the word.", "We choose the uniformity of vectors, which can be regarded as a general case of the triangle inequality. The uniformity of a set of vectors is a ratio, i.e., the size of the vector obtained by adding the vectors, divided by the scalar sum of the sizes of the vectors. If and only if all directions of the vectors are the same, the uniformity becomes 1.0. We compute this uniformity for the neighbors, including the word itself. Surrounding Uniformity (SU) can be expressed as follows: $SU(\vec{w}) = \frac{|\vec{s}(\vec{w})|}{|\vec{w}| + \sum _{i}^{N}|\vec{a_i}(\vec{w})|}$ ", "where $\vec{s}(\vec{w}) = \vec{w} + \sum _{i}^{N} \vec{a_i}(\vec{w}).$ ", "When computing SU, we consider the set of words whose vectors are reliable. We choose these words as the most frequently appearing words in the corpus. The number of these words is denoted as $limit$ . If a word is not in this set, or the word does not have a sufficient number of neighbors in this set, we consider that the value of SU is undefined, and that the word does not have this value.", "Our method performs a statistical test to determine whether a given word is used polysemously in the text, according to the following steps:", "Setting $N$ , the size of the neighbor.", "Choosing $N$ neighboring words $a_i$ in the order whose angle with the vector of the given word $w$ is the smallest.", "Computing the surrounding uniformity for $a_i$ ( $0 < i \le N$ ) and $w$ .", "Computing the mean $m$ and the sample variance $\sigma $ for the uniformities of $a_i$ .", "Checking whether the uniformity of $w$ is less than $m - 3\sigma $ . If the value is less than $m - 3\sigma $ , we may regard $w$ as a polysemic word.", "This is a basic statistical test BIBREF6 to detect outliers.", "Note that we cannot compute the variance if some $a_i$ does not have the value of SU. Further, it is also possible that all $a_i$ have the same SU, sharing identical neighbors. In this case, the variance becomes an extreme value, that is, 0. In these cases, we consider that we cannot perform the statistical test." ], [ "We used FIL9, which is freely available as the test corpus for Word2Vec and is derived from Wikipedia.
We compute 200-dimensional distributed vector representations with default parameter. In this situation, all-uppercase are converted into lower case. This is why all proper nouns are in lower case in this example. First we selected stable words as the 1000 words that appear most frequently in the text. We compute surrounding uniformity of these words. We define the given word $w$ and its neighboring word $a_i$ are limited to stable words. We then determine the search scope for stable neighboring words and set $N$ , which is the number of neighbors used to compute the surrounding uniformity, to 4. For example, if there are 7 stable words in the search scope, we use only the top 4 words to compute the surrounding uniformity.", "Table 1 shows the uniformity of auxiliary verbs in this setting. We were able to compute the surrounding uniformity for 160 words; for the remaining 840 words, there were fewer than the required 4 stable neighboring words in the search scope and the surrounding uniformity could not be determined.", "For the case of the word “may”, neighbor words are “can”, “should”, “might”, and “will”. Their surrounding uniformities are, 0.9252 (“can”), 0.9232 (“should”), 0.9179 (“might”), and 0.9266 (“will”). Then $m$ is equal to 0.9232, and $\\sigma $ is equal to 0.0038. Therefore, $m-3\\sigma $ is 0.9118, which is greater than 0.8917 (“may”). Since the surrounding uniformity of the word “may” is regarded as an outlier, we think of “may” as polysemic. In this setting, the word “may” is polysemic because the program works in a case-insensitive mode, and the word “may” could be both an auxiliary verb and the name of a month.", "The next example is the word “might”, whose surrounding uniformity is smaller than every neighbor word. For the word “might”, neighbor words are “would”, “could”, “should”, and “cannot”. Their surrounding uniformities are 0.9266 (“would”), 0.9290 (“could”), 0.9232 (“should”), and 0.9224 (“cannot”). Hence, $m$ is equal to 0.9253, and $\\sigma $ is equal to 0.0032. Therefore, $m-3\\sigma $ is 0.9157, which is less than 0.9179 (“might”). We cannot say 0.9179 is an outlier, and thus we cannot say the word “might” is polysemic.", "Figure 1 shows the distribution of vectors.", "The vector of “may” is placed in the interpolated position between “may” as an auxiliary verb and “may” as the name of a month. Since the word “may” is more frequently used as auxiliary verb, the vector is placed near other auxiliary verbs. However, the position of “may” could be an outlier for other auxiliary verbs.", "In addition, we should show the results of names of months because these names will have the same contexts when the word is used as the name of a month. The word “may” has other contexts as auxiliary verbs. The word “august” has the sense of an adjective in the dictionary. The word “march” has a sense of a verb. Other names are monosemic words in the dictionary. Table 2 shows the surrounding uniformity for all the names of the months.", "If we apply the test, only the word “may” passes the test. The example that fails the test is the word “august”, whose surrounding uniformity is also smaller than every neighbor word. For the case of the word “august”, $m$ is equal to 0.9808, and $\\sigma $ is equal to 0.0005. Therefore, $m-3\\sigma $ becomes 0.9793, which is less than 0.9802 (“august”). We cannot say the word “august” is polysemic, but the value of uniformity is very close to the lower bound. Other names have a greater uniformity than the corresponding lower bound. 
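To make the surrounding uniformity and the m - 3σ test concrete, here is a small numpy sketch. It is our own illustration: the function names are ours, and a tiny synthetic vocabulary stands in for the 200-dimensional FIL9 vectors and the stable-word restriction.

```python
import numpy as np


def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def nearest(word, vectors, n):
    """The n words whose vectors have the smallest angle with `word`'s vector."""
    sims = {k: cosine(vectors[word], v) for k, v in vectors.items() if k != word}
    return sorted(sims, key=sims.get, reverse=True)[:n]


def surrounding_uniformity(word, vectors, n=4):
    """SU(w) = |w + sum a_i| / (|w| + sum |a_i|) over the n nearest neighbours."""
    w = vectors[word]
    neigh = [vectors[k] for k in nearest(word, vectors, n)]
    numerator = np.linalg.norm(w + sum(neigh))
    denominator = np.linalg.norm(w) + sum(np.linalg.norm(a) for a in neigh)
    return numerator / denominator


def is_polysemous(word, vectors, n=4):
    """Outlier test: SU(word) < mean - 3 * std over the neighbours' own SU values."""
    su_w = surrounding_uniformity(word, vectors, n)
    su_n = [surrounding_uniformity(k, vectors, n) for k in nearest(word, vectors, n)]
    m, sigma = float(np.mean(su_n)), float(np.std(su_n))
    if sigma == 0.0:
        return False          # test not applicable: neighbours share identical SU
    return su_w < m - 3 * sigma


# Tiny synthetic vocabulary; the paper uses Word2Vec vectors of the 1000 most
# frequent ("stable") FIL9 words, which is simplified away here.
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=20) for w in
           ["may", "can", "should", "might", "will", "june", "july"]}
print(is_polysemous("may", vectors))
```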
In summary, the proposed method can detect the polysemic “may”, but cannot detect the polysemicity of “august” and “march”.", "Although we can claim nothing if the statistical test fails, even the negatives have a practical value for this test. For the case of the word “august”, it can be used as an adjective. Although we cannot say the word “august” is polysemic from the proposed procedure, we cannot claim that the word “august” is monosemic. We think this failure is caused by there being few, if any, contexts of “august” as an adjective. In that case, clustering the contexts will be difficult in practice. Therefore, the proposed test will be meaningful even for a negative result, when the result is used to judge whether further analysis of the context is worthwhile. This discussion should also hold for the word “march”, which may be used as a verb.", "There are other interesting words for which the proposed method detects polysemicity. These words are “james”, “mark”, and “bill”. The neighboring words are names of persons, such as “john”, “richard”, “robert”, “william”, “david”, “charles”, “henry”, “thomas”, “michael”, and “edward”. “mark” and “bill” have the same spelling as regular nouns. The word “james” has no such counterpart and is subject to error analysis." ], [ "First, we set the value of $limit$ to 1000, and $N$ to 4. We then performed the statistical test on these 1000 words. From these, 33 words passed the test, and we assume that these words belong to the set POLY. Further, we were unable to perform the statistical test for 127 words. We say that the remaining 840 words belong to the set MONO.", "As evaluation, we attempted to measure the agreement of human judgment for all words of POLY and MONO. However, during the evaluation, we found that many of the errors come from the problem of Word2Vec. For example, the vector of “sir” and the vector of “william” are very close because “sir william” should be very close to “william”. This is similar for “w” and “george”.", "Therefore, we first selected words whose 10 neighboring words seem reasonable neighbors for human judgments, and performed human judgments of polysemicity. We also focused on the words that have an SU bigger than 0.75. This is because the statistical test will be reliable when SU is large. Table 3 shows the list of words that passed the test and have an SU higher than 0.75.", "Table 3 shows all the words in POLY that were judged by humans. Similarly, Table 4 shows all the words in MONO that were judged by humans.", "We have sampled words from MONO because there are many words in MONO. In these tables, the SU values of the surrounding words are also presented.", "Table 5 shows the confusion matrix for computer and human judgments.", "As there exists a case for which the number is less than or equal to 5, we need Yates's continuity correction. It achieves statistical significance at the level of $\alpha =0.05$ . The disagreement in POLY in Table 5 for the word “james” attracted our attention." ], [ "The disagreement in MONO could be because we chose $3\sigma $ , which can detect polysemicity only in extremely apparent cases. Even so, the word “james” passes the proposed statistical test. Therefore, the word “james” is worth investigating.", "After examining the context of “james”, we found that it can be used as the name of a river and of a person. Table 6 shows the various names and how many times each name is used with the word “river”.", "The word “james” is most frequently used with “river”. This may make the word pass the statistical test."
], [ "The majority of the polysemicity presented in this paper exists due to the Word2Vec compute the distributed representation after ignoring cases. This polysemicity might not be regarded as polysemicity with more careful preprocessing.", "The behavior of proposed method depends on the Word2Vec options and the size of the corpus. If Word2Vec does not have a reasonable neighbor that consists of words of similar usage, the proposed method cannot work effectively. In addition, a problem arising due the use of Word2Vec for our application is the placement of the vector “sir” and the vector “william” in similar position. Therefore, we may need to utilize another method to compute the distributed representation of words. We use the FIL9 corpus for the experiment. Though this corpus is freely available to everyone, the size may not be sufficient. Although we can detect the polysemicity of “may”, we cannot detect the polysemicity of “august” and “march”. The statistical test cannot detect the right answer if we do not have sufficient data; therefore, this failure may be interpreted as insufficient usage of “march” as verb, and “august” as adverb, owing to its origin from Wikipedia, which is in essence a description of facts.", "We believe we need to find a way to select the number of neighbors to improve the accuracy of the test. To make the statistical test more accurate, we need more samples from the neighbors. At the same time, since we assume that we can measure the statistical fluctuation from the neighbors, we need to exclude words of a different nature from the neighbors. It is natural that the right number for a neighbor may be different according to the word. The number that we choose is the minimum value for the statistical test, and has room to adjust for improvement.", "We computed the neighbor and surrounding uniformity of the 1000 most frequently used words in FIL9. We observed that proper nouns tend to have a large surrounding uniformity, whereas prepositions tend to have a small surrounding uniformity. It is an interesting observation that the surrounding uniformity reflects the part of speech information, although it is difficult to determine the class of a word from the value of the surrounding uniformity alone. For the ease of confirming this observation, the obtained table can be downloaded from the reference (http://www.ss.cs.tut.ac.jp/FIL9SU/)." ], [ "In this paper, we proposed a method to detect polysemy based on the distributed representation by Word2Vec. We computed the surrounding uniformity of word vector and formed a statistical test. We illustrated several examples to this measure, and explained the statistical test for detecting polysemy. In addition, we have also discussed the feasibility of this test." ] ] }
{ "question": [ "How is the fluctuation in the sense of the word and its neighbors measured?" ], "question_id": [ "238ec3c1e1093ce2f5122ee60209b969f7669fae" ], "nlp_background": [ "" ], "topic_background": [ "familiar" ], "paper_read": [ "no" ], "search_query": [ "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Our method performs a statistical test to determine whether a given word is used polysemously in the text, according to the following steps:\n1) Setting N, the size of the neighbor.\n2) Choosing N neighboring words ai in the order whose angle with the vector of the given word w is the smallest.\n3) Computing the surrounding uniformity for ai(0 < i ≤ N) and w.\n4) Computing the mean m and the sample variance σ for the uniformities of ai .\n5) Checking whether the uniformity of w is less than m − 3σ. If the value is less than m − 3σ, we may regard w as a polysemic word.", "evidence": [ "Distributed representation of word sense provides us with the ability to perform several operations on the word. One of the most important operations on a word is to obtain the set of words whose meaning is similar to the word, or whose usage in text is similar to the word. We call this set the neighbor of the word. When a word has several senses, it is called a polysemic word. When a word has only one sense, it is called a monosemic word. We have observed that the neighbor of a polysemic word consists of words that resemble the primary sense of the polysemic word. We can explain this fact as follows. Even though a word may be a polysemic, it usually corresponds to a single vector in distributed representation. This vector is primarily determined by the major sense, which is most frequently used. The information about a word's minor sense is subtle, and the effect of a minor sense is difficult to distinguish from statistical fluctuation.", "To measure the effect of a minor sense, this paper proposes to use the concept of surrounding uniformity. The surrounding uniformity roughly corresponds to statistical fluctuation in the vectors that correspond to the words in the neighbor. We have found that there is a difference in the surrounding uniformity between a monosemic word and a polysemic word. This paper describes how to compute surrounding uniformity for a given word, and discuss the relationship between surrounding uniformity and polysemy.", "We choose the uniformity of vectors, which can be regarded as general case of triangle inequality. The uniformity of a set of vectors is a ratio, i.e., the size of the vector of the vector addition of the vectors divided by the scalar sum of the sizes of the vectors. If and only if all directions of the vectors are the same, the uniformity becomes 1.0. We compute this uniformity for the neighbors, including the word itself. Surrounding Uniformity (SU) can be expressed as follows: $SU(\\vec{w}) = \\frac{|\\vec{s}(\\vec{w})|}{|\\vec{w}| + \\sum _{i}^{N}|\\vec{a_i}(\\vec{w})|}$", "where $\\vec{s}(\\vec{w}) = \\vec{w} + \\sum _{i}^{N} \\vec{a_i}(\\vec{w}).$" ], "highlighted_evidence": [ "One of the most important operations on a word is to obtain the set of words whose meaning is similar to the word, or whose usage in text is similar to the word. 
We call this set the neighbor of the word.", "We have observed that the neighbor of a polysemic word consists of words that resemble the primary sense of the polysemic word.", " Even though a word may be a polysemic, it usually corresponds to a single vector in distributed representation. This vector is primarily determined by the major sense, which is most frequently used. The information about a word's minor sense is subtle, and the effect of a minor sense is difficult to distinguish from statistical fluctuation.", "To measure the effect of a minor sense, this paper proposes to use the concept of surrounding uniformity.", "The surrounding uniformity roughly corresponds to statistical fluctuation in the vectors that correspond to the words in the neighbor", "Surrounding Uniformity (SU) can be expressed as follows: $SU(\\vec{w}) = \\frac{|\\vec{s}(\\vec{w})|}{|\\vec{w}| + \\sum _{i}^{N}|\\vec{a_i}(\\vec{w})|}$\n\nwhere $\\vec{s}(\\vec{w}) = \\vec{w} + \\sum _{i}^{N} \\vec{a_i}(\\vec{w}).$" ] } ], "annotation_id": [ "107800957bb3f9cc126bc15bd4413355fdfe15dc" ], "worker_id": [ "74eea9f3f4f790836045fcc75d0b3f5156901499" ] } ] }
{ "caption": [ "TABLE I AUXILIARY VERBS, THEIR NEIGHBORING WORDS, AND SURROUNDING UNIFORMITIES. THE NEIGHBORING WORDS OF AN AUXILIARY VERB CONSIST OF OTHER AUXILIARY VERBS. THE WORD “MAY” HAS A SMALL SURROUNDING UNIFORMITY, ALTHOUGH ITS NEIGHBORING WORDS CONSIST OF AUXILIARY VERBS.", "TABLE II NAMES OF THE MONTHS, THEIR NEIGHBORING WORDS, AND SURROUNDING UNIFORMITIES. ONLY “MAY”, WHICH HAS THE SMALLEST SURROUNDING UNIFORMITY, PASS THE STATISTICAL TEST. ALTHOUGH THE WORD “MAY” MIGHT BE USED AS THE NAME OF A MONTH, THE CORRESPONDING VECTOR IS NEAR THE AUXILIARY VERBS.", "TABLE III EVALUATED WORDS AND ITS NEIGHBOR THAT PASSES THE STATISTICAL TEST.", "TABLE IV EVALUATED WORDS THAT DOES NOT PASS THE STATISTICAL TEST.", "TABLE V CONFUSION MATRIX OF THE AGREEMENT BETWEEN COMPUTER AND HUMAN JUDGMENTS. IT SHOWS STATISTICAL SIGNIFICANCE BY USING X2 TEST.", "TABLE VI FREQUENCIES OF A PERSON’S NAME AND THE NAME FOLLOWED BY THE WORD “RIVER”. THE NAME“JAMES” IS THE MOST FREQUENTLY USED NAME WITH THE WORD “RIVER”." ], "file": [ "3-TableI-1.png", "4-TableII-1.png", "5-TableIII-1.png", "5-TableIV-1.png", "5-TableV-1.png", "5-TableVI-1.png" ] }
1706.03610
Neural Domain Adaptation for Biomedical Question Answering
Factoid question answering (QA) has recently benefited from the development of deep learning (DL) systems. Neural network models outperform traditional approaches in domains where large datasets exist, such as SQuAD (ca. 100,000 questions) for Wikipedia articles. However, these systems have not yet been applied to QA in more specific domains, such as biomedicine, because datasets are generally too small to train a DL system from scratch. For example, the BioASQ dataset for biomedical QA comprises fewer than 900 factoid (single answer) and list (multiple answers) QA instances. In this work, we adapt a neural QA system trained on a large open-domain dataset (SQuAD, source) to a biomedical dataset (BioASQ, target) by employing various transfer learning techniques. Our network architecture is based on a state-of-the-art QA system, extended with biomedical word embeddings and a novel mechanism to answer list questions. In contrast to existing biomedical QA systems, our system does not rely on domain-specific ontologies, parsers or entity taggers, which are expensive to create. Despite this fact, our systems achieve state-of-the-art results on factoid questions and competitive results on list questions.
{ "section_name": [ "Introduction", "Model", "Input Layer", "Output Layer", "Decoding", "Domain Adaptation", "Datasets", "Training", "Evaluation", "Ensemble", "Comparison to competing BioASQ systems", "Qualitative Analysis", "Discussion and future work", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Question answering (QA) is the task of retrieving answers to a question given one or more contexts. It has been explored both in the open-domain setting BIBREF0 as well as domain-specific settings, such as BioASQ for the biomedical domain BIBREF1 . The BioASQ challenge provides $\\approx 900$ factoid and list questions, i.e., questions with one and several answers, respectively. This work focuses on answering these questions, for example: Which drugs are included in the FEC-75 regimen? $\\rightarrow $ fluorouracil, epirubicin, and cyclophosphamide.", "We further restrict our focus to extractive QA, i.e., QA instances where the correct answers can be represented as spans in the contexts. Contexts are relevant documents which are provided by an information retrieval (IR) system.", "Traditionally, a QA pipeline consists of named-entity recognition, question classification, and answer processing steps BIBREF2 . These methods have been applied to biomedical datasets, with moderate success BIBREF3 . The creation of large-scale, open-domain datasets such as SQuAD BIBREF4 have recently enabled the development of neural QA systems, e.g., wang2016machine, dcn, seo2016bidirectional, weissenborn2017fastqa, leading to impressive performance gains over more traditional systems.", "However, creating large-scale QA datasets for more specific domains, such as the biomedical, would be very expensive because of the need for domain experts, and therefore not desirable. The recent success of deep learning based methods on open-domain QA datasets raises the question whether the capabilities of trained models are transferable to another domain via domain adaptation techniques. Although domain adaptation has been studied for traditional QA systems BIBREF5 and deep learning systems BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , it has to our knowledge not yet been applied for end-to-end neural QA systems.", "To bridge this gap we employ various domain adaptation techniques to transfer knowledge from a trained, state-of-the-art neural QA system (FastQA, weissenborn2017fastqa) to the biomedical domain using the much smaller BioASQ dataset. In order to answer list questions in addition to factoid questions, we extend FastQA with a novel answering mechanism. We evaluate various transfer learning techniques comprehensively. For factoid questions, we show that mere fine-tuning reaches state-of-the-art results, which can further be improved by a forgetting cost regularization BIBREF9 . On list questions, the results are competitive to existing systems. Our manual analysis of a subset of the factoid questions suggests that the results are even better than the automatic evaluation states, revealing that many of the \"incorrect\" answers are in fact synonyms to the gold-standard answer." ], [ "Our network architecture is based on FastQA BIBREF15 , a state-of-the-art neural QA system. Because the network architecture itself is exchangeable, we treat it as a black box, with subtle changes at the input and output layer as well as to the decoding and training procedure. These changes are described in the following. See Figure 1 for an overview of the system." 
], [ "In a first step, words are embedded into a high-dimensional vector space. We use three sources of embeddings, which are concatenated to form a single embedding vector:", "GloVe embeddings: 300-dimensional GloVe vectors BIBREF14 . These are open-domain word vectors trained on 840 billion tokens from web documents. The vectors are not updated during training.", "Character embeddings: As used in FastQA BIBREF15 and proposed originally by seo2016bidirectional, we employ a 1-dimensional convolutional neural network which computes word embeddings from the characters of the word.", "Biomedical Word2Vec embeddings: 200-dimensional vectors trained using Word2Vec BIBREF18 on about 10 million PubMed abstracts BIBREF19 . These vectors are specific to the biomedical domain and we expect them to help on biomedical QA.", "As an optional step, we add entity tag features to the token embeddings via concatenation. Entity tags are provided by a dictionary-based entity tagger based on the UMLS Metathesaurus. The entity tag feature vector is a 127-dimensional bit vector that for each of the UMLS semantic types states whether the current token is part of an entity of that type. This step is only applied if explicitly noted.", "Finally, a one-hot encoding of the question type (factoid or list) is appended to all the input vectors. With these embedding vectors as input, we invoke FastQA to produce start and end scores for each of the $n$ context tokens. We denote start scores by $y_{start}^{i}$ and end scores conditioned on a predicted start at position $i$ by $y_{end}^{i, j}$ , with start index $i \\in [1, n]$ and end index $j \\in [i, n]$ ." ], [ "In our adapted output layer, we convert the start and end scores to span probabilities. The computation of these probabilities is independent of the question type. The interpretation, however, depends on the question type: While for factoid questions, the list of answer spans is interpreted as a ranked list of answer candidates, for list questions, answers above a certain probability threshold are interpreted as the set of answers to the question.", "Given the start scores $y_{start}^1, ..., y_{start}^n$ and end scores $y_{end}^{i, 1}, ..., y_{end}^{i, n}$ , we compute the start and end probabilities as follows: ", "$$p_{start}^i = \\sigma (y_{start}^i)$$ (Eq. 16) ", "$$p_{end}^{i, \\cdot } = \\operatorname{softmax}(y_{end}^{i, \\cdot })$$ (Eq. 17) ", "where $\\sigma (x)$ is the sigmoid function. As a consequence, multiple tokens can be chosen as likely start tokens, but the network is expected to select a single end token for a given start token, hence the $\\operatorname{softmax}$ function. Finally, the probability that a given span $(i, j)$ answers the question is $p_{span}^{i, j} = p_{start}^{i} \\cdot p_{end}^{i, j}$ . This extension generalizes the FastQA output layer such that multiple answer spans with different start positions can have a high probability, allowing us to retrieve multiple answers for list questions." ], [ "Given a trained model, start probabilities can be obtained by running a forward pass and computing the start probability as in Equation 16 . For the top 20 starts, we compute the end probabilities as given by Eq. 17 . From the start and end probabilities, we extract the top 20 answer spans ranked by $p_{span}^{i, j}$ . As a simple post-processing step, we remove duplicate strings and retain only those with the highest probability.", "For factoid questions, we output the 5 most likely answer spans as our ranked list of answers. 
For list questions, we learn a probability cutoff threshold $t$ that defines the set of list answers $A = \\lbrace (i, j) | p_{span}^{i, j} \\ge t\\rbrace $ . We choose $t$ to be the threshold that optimizes the list F1 score on the respective development set." ], [ "Our training procedure consists of two phases: In the pre-training phase, we train the model on SQuAD, using a token F1 score as the training objective as by weissenborn2017fastqa. We will refer to the resulting parameters as the base model. In the fine-tuning phase, we initialize the model parameters with the base model and then continue our optimization on the BioASQ dataset with a smaller learning rate.", "To avoid catastrophic forgetting during fine-tuning as a means to regularize our model, we optionally add an additional forgetting cost term $L_{fc}$ , as proposed by riemer2017forgettingcost. It is defined as the cross-entropy loss between the current predictions and the base model's predictions.", "We also add an L2 loss term $L_{l2}$ which penalizes deviations from the base model's parameters. Note that a more advanced approach would be to apply this loss selectively on weights which are particularly important in the source domain BIBREF10 . The final loss is computed as $L_{final} = L_{original} + C_{fc} \\cdot L_{fc} + C_{l2} \\cdot L_{l2}$ where $C_{fc}$ and $C_{l2}$ are hyperparameters which are set to 0 unless otherwise noted.", "In this section, we evaluate various domain adaptation techniques. The results of the experiments are summarized in Table 1 .", "As a baseline without transfer learning, Experiment 1 trains the model on BioASQ only. Because the BioASQ dataset by itself is very small, a dropout rate of $0.7$ was used, because it worked best in preliminary experiments. We observe a rather low performance, which is expected when applying deep learning to such a small dataset.", "Experiments 2 and 3 evaluate the pure fine-tuning approach: Our base model is a system trained on SQuAD only and tested on BioASQ (Experiment 2). For Experiment 3, we fine-tuned the base model on the BioASQ4B training set. We observe that performance increases significantly, especially on list questions. This increase is expected, because the network is trained on biomedical- and list questions, which are not part of the SQuAD dataset, for the first time. Overall, the performance of the fine-tuned model on both question types is much higher than the baseline system without transfer learning.", "In order to evaluate the impact of using biomedical word embeddings, we repeat Experiment 3 without them (Experiment 4). We see a factoid and list performance drop of $3.3$ and $1.2$ percentage points, respectively, showing that biomedical word embeddings help increase performance.", "In Experiment 5, we append entity features to the word vector, as described in Section \"Input Layer\" . Even though these features provide the network with domain-specific knowledge, we found that it actually harms performance on factoid questions. Because most of the entity features are only active during fine-tuning with the small dataset, we conjecture that the performance decrease is due to over-fitting.", "We continue our study with techniques to combat catastrophic forgetting as a means to regularize training during fine-tuning. In Experiment 6 of Table 1 we fine-tune the base model on a half-half mixture of BioASQ and SQuAD questions (BioASQ questions have been upsampled accordingly). This form of joint training yielded no significant performance gains. 
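As a concrete illustration of the regularized fine-tuning objective defined in the "Domain Adaptation" section above, here is a minimal NumPy sketch of L_final = L_original + C_fc * L_fc + C_l2 * L_l2. It only evaluates the loss value; it is not training code, and the tensor shapes, variable names and toy values are assumptions for the example.

```python
import numpy as np

def cross_entropy(p_target, p_model, eps=1e-12):
    """Mean cross-entropy between a reference distribution and model predictions."""
    return float(-np.mean(np.sum(p_target * np.log(p_model + eps), axis=-1)))

def fine_tuning_loss(l_original, p_current, p_base, params, base_params,
                     c_fc=0.0, c_l2=0.0):
    """L_final = L_original + C_fc * L_fc + C_l2 * L_l2.
    l_original: task loss on a BioASQ batch; p_current / p_base: prediction
    distributions of the current and the base (SQuAD-trained) model;
    params / base_params: flattened parameter vectors."""
    l_fc = cross_entropy(p_base, p_current)            # forgetting cost
    l_l2 = float(np.sum((params - base_params) ** 2))  # deviation from base model
    return l_original + c_fc * l_fc + c_l2 * l_l2

# toy usage with hyperparameter values of the kind explored in the experiments
rng = np.random.default_rng(1)
logits = rng.normal(size=(4, 10))
p_cur = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
p_base = np.full((4, 10), 0.1)
theta, theta0 = rng.normal(size=50), rng.normal(size=50)
loss = fine_tuning_loss(0.7, p_cur, p_base, theta, theta0, c_fc=100.0, c_l2=0.3)
```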
Experiment 7 regularizes the model via an additional forgetting cost term, as proposed by riemer2017forgettingcost and explained in Section \"Domain Adaptation\" . We generally found that this technique only increases performance for factoid questions where the performance boost was largest for $C_{fc} = 100.0$ . The fact that the forgetting loss decreases performance on list questions is not surprising, as predictions are pushed more towards the predictions of the base model, which has very poor performance on list questions.", "Experiment 8 adds an L2 loss which penalizes deviations from the base model's parameters. We found that performance decreases as we increase the value of $C_{l2}$ which shows that this technique does not help at all. For the sake of completeness we report results for $C_{l2} = 0.3$ , the lowest value that yielded a significant drop in performance." ], [ "SQuAD BIBREF4 is a dataset of $\\approx 100,000$ questions with relevant contexts and answers that sparked research interest into the development of neural QA systems recently. The contexts are excerpts of Wikipedia articles for which crowd-source workers generated questions-answer pairs. Because of the large amount of training examples in SQuAD, it lends itself perfectly as our source dataset.", "The BioASQ challenge provides a biomedical QA dataset BIBREF1 consisting of questions, relevant contexts (called snippets) from PubMed abstracts and possible answers to the question. It was carefully created with the help of biomedical experts.", "In this work, we focus on Task B, Phase B of the BioASQ challenge, in which systems must answer questions from gold-standard snippets. These questions can be either yes/no questions, summary questions, factoid questions, or list questions. Because we employ an extractive QA system, we restrict this study to answering factoid and list questions by extracting answer spans from the provided contexts.", "The 2017 BioASQ training dataset contains $1,799$ questions, of which 413 are factoid and 486 are list questions. The questions have $\\approx 20$ snippets on average, each of which are on average $\\approx 34$ tokens long. We found that around $65\\%$ of the factoid questions and around $92\\%$ of the list questions have at least one extractable answer. For questions with extractable answers, answers spans are computed via a simple substring search in the provided snippets. All other questions are ignored during training and treated as answered incorrectly during evaluation." ], [ "We minimize the cross-entropy loss for the gold standard answer spans. However, for multiple answer spans that refer to the same answer (e.g. synonyms), we only minimize the loss for the span of the lowest loss. We use the ADAM BIBREF20 for optimization on SQuAD with a learning rate starting at $10^{-3}$ which is halved whenever performance drops between checkpoints. During the fine-tuning phase, we continue optimization on the BioASQ dataset with a smaller learning rate starting at $10^{-4}$ . During both phases, the model is regularized by variational dropout of rate $0.5$ BIBREF21 ." ], [ "The official evaluation measures from BioASQ are mean reciprocal rank (MRR) for factoid questions and F1 score for list questions . For factoid questions, the list of ranked answers can be at most five entries long. The F1 score is measured on the gold standard list elements. For both measures, case-insensitive string matches are used to check the correctness of a given answer. 
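To illustrate the official measures just described, here is a minimal sketch of MRR for factoid questions (ranked lists of at most five answers) and F1 for a list question, using case-insensitive string matching against the set of acceptable gold strings. This is an illustrative re-implementation, not the official BioASQ evaluation tool, and the per-element matching details of the real tool may differ.

```python
def mrr_factoid(predictions, gold):
    """predictions: per question, a ranked list of up to 5 answer strings.
    gold: per question, the set of acceptable gold-standard strings."""
    total = 0.0
    for ranked, acceptable in zip(predictions, gold):
        acceptable = {a.lower() for a in acceptable}
        for rank, answer in enumerate(ranked[:5], start=1):
            if answer.lower() in acceptable:
                total += 1.0 / rank
                break
    return total / len(predictions)

def f1_list(predicted, gold):
    """Single list question: predicted and gold are sets of answer strings."""
    pred = {a.lower() for a in predicted}
    ref = {a.lower() for a in gold}
    tp = len(pred & ref)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(ref)
    return 2 * precision * recall / (precision + recall)

print(mrr_factoid([["tafazzin", "taz"]], [{"Tafazzin (TAZ) gene", "tafazzin"}]))  # 1.0
print(f1_list({"fluorouracil", "epirubicin"},
              {"fluorouracil", "epirubicin", "cyclophosphamide"}))                # 0.8
```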
A list of synonyms is provided for all gold-standard answers. If the system's response matches one of them, the answer counts as correct.", "For evaluation, we use two different fine-tuning datasets, depending on the experiment: BioASQ3B, which contains all questions of the first three BioASQ challenges, and BioASQ4B which additionally contains the test questions of the fourth challenge. BioASQ4B is used as the training dataset for the fifth BioASQ challenge whereas BioASQ3B was used for training during the fourth challenge.", "Because the datasets are small, we perform 5-fold cross-validation and report the average performance across the five folds. We use the larger BioASQ4B dataset except when evaluating the ensemble and when comparing to participating systems of previous BioASQ challenges.", "All models were implemented using TensorFlow BIBREF22 with a hidden size of 100. Because the context in BioASQ usually comprises multiple snippets, they are processed independently in parallel for each question. Answers from all snippets belonging to a question are merged and ranked according to their individual probabilities." ], [ "Model ensembles are a common method to tweak the performance of a machine learning system. Ensembles combine multiple model predictions, for example by averaging, in order to improve generalization and prevent over-fitting. We evaluate the utility of an ensemble by training five models on the BioASQ3B dataset using 5-fold cross-validation. Each of the models is evaluated on the 4B test data, i.e., data which is not included in BioASQ3B.", "During application, we run an ensemble by averaging the start and end scores of individual models before they are passed to the sigmoid / softmax functions as defined in Eq. 16 and 17 . In Table 2 we summarize the average performance of the five models, the best performance across the five models, and the performance of the ensemble. We observe performance gains of 3 percentage points on factoid questions and a less than 1 percentage point on list questions, relative to the best single model. This demonstrates a small performance gain that is consistent with the literature." ], [ "Because the final results of the fifth BioASQ challenge are not available at the time of writing, we compare our system to the best systems in last year's challenge . For comparison, we use the best single model and the model ensemble trained on BioASQ3B (see Section \"Ensemble\" ). We then evaluate the model on the 5 batches of last year's challenge using the official BioASQ evaluation tool. Each batch contains 100 questions of which only some are factoid and list questions. Note that the results underestimate our system's performance, because our competing system's responses have been manually evaluated by humans while our system's responses are evaluated automatically using string matching against a potentially incomplete list of synonyms. In fact, our qualitative analysis in Section \"Qualitative Analysis\" shows that many answers are counted as incorrect, but are synonyms of the gold-standard answer. The results are summarized in Table 3 and compared to the best systems in the challenge in each of the batches and question type categories.", "With our system winning four out of five batches on factoid questions, we consider it state-of-the-art in biomedical factoid question answering, especially when considering that our results might be higher on manual evaluation. The results on list questions are slightly worse, but still very competitive. 
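A minimal sketch of the ensembling step described above: the start and end scores of the individual models are averaged before the sigmoid/softmax of Eq. (16)-(17) are applied. Shapes and names are illustrative assumptions.

```python
import numpy as np

def ensemble_scores(start_scores, end_scores):
    """start_scores: list of (n,) arrays, one per model; end_scores: list of (n, n)
    arrays. Returns averaged scores to feed into the sigmoid/softmax of Eq. 16-17."""
    y_start = np.mean(np.stack(start_scores), axis=0)
    y_end = np.mean(np.stack(end_scores), axis=0)
    return y_start, y_end

# five cross-validation models, as in the ensemble experiment
rng = np.random.default_rng(2)
n = 30
starts = [rng.normal(size=n) for _ in range(5)]
ends = [rng.normal(size=(n, n)) for _ in range(5)]
y_start, y_end = ensemble_scores(starts, ends)
```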
This is surprising, given that the network never saw a list question prior to the fine-tuning phase. Due to small test set sizes, the sampling error in each batch is large, causing the single model to outperform the model ensemble on some batches." ], [ "In order to get a better insight into the quality of the predictions, we manually validated the predictions for the factoid questions of batch 5 of the fourth BioASQ challenge as given by the best single model (see Table 3 ). There are in total 33 factoid questions, of which 23 have as the gold standard answer a span in one of the contexts. According to the official BioASQ evaluation, only 4 questions are predicted correctly (i.e., the gold standard answer is ranked highest). However, we identified 10 rank-1 answers which are not counted as correct but are synonyms to the gold standard answer. Examples include \"CMT4D disease\" instead of \"Charcot-Marie-Tooth (CMT) 4D disease\", \"tafazzin\" instead of \"Tafazzin (TAZ) gene\", and \" $\\beta $ -glucocerebrosidase\" instead of \"Beta glucocerebrosidase\". In total, we labeled 14 questions as correct and 24 questions as having their correct answer in the top 5 predictions.", "In the following, we give examples of mistakes made by the system. Questions are presented in italics. In the context, we underline predicted answers and present correct answers in boldface.", "We identified eight questions for which the semantic type of the top answer differs from the question answer type. Some of these cases are completely wrong predictions. However, this category also includes subtle mistakes like the following:", "In which yeast chromosome does the rDNA cluster reside?", "The rDNA cluster in Saccharomyces cerevisiae is located 450 kb from the left end and 610 kb from the right end of chromosome XII...", "Here, it predicted a yeast species the rDNA cluster is located in, but ignored that the question is asking for a chromosome.", "Another type of mistakes is that the top answer is somewhat correct, but is missing essential information. We labeled four predictions with this category, like the following example:", "How early during pregnancy does non-invasive cffDNA testing allow sex determination of the fetus?", "Gold Standard Answer: \"6th to 10th week of gestation\" or \"first trimester of pregnancy\"", "Given Top Answer: \"6th-10th\"", "In summary, to our judgment, 14 of 33 questions ( $42.4\\%$ ) are answered correctly, and 24 of 33 questions ( $72.7\\%$ ) are answered correctly in one of the top 5 answers. These are surprisingly high numbers considering low MRR score of $23.7\\%$ of the automatic evaluation (Table 3 )." ], [ "The most significant result of this work is that state-of-the-art results in biomedical question answering can be achieved even in the absence of domain-specific feature engineering. Most competing systems require structured domain-specific resources, such as biomedical ontologies, parsers, and entity taggers. While these resources are available in the biomedical domain, they are not available in most domains.", "Our system, on the other hand, requires a large open-domain QA dataset, biomedical word embeddings (which are trained in an unsupervised fashion), and a small biomedical QA dataset. This suggests that our methodology is easily transferable to other domains as well.", "Furthermore, we explored several supervised domain adaptation techniques. In particular, we demonstrated the usefulness of forgetting cost for factoid questions. 
The decreased performance on list questions is not surprising, because the model's performance on those questions is very poor prior to fine-tuning which is due to the lack of list questions in SQuAD. We believe that large scale open-domain corpora for list questions would enhance performance further.", "Unsupervised domain adaptation could be an interesting direction for future work, because the biomedical domain offers large amounts of textual data, some of which might even contain questions and their corresponding answers. We believe that leveraging these resources holds potential to further improve biomedical QA." ], [ "In this paper, we described a deep learning approach to address the task of biomedical question answering by using domain adaptation techniques. Our experiments reveal that mere fine-tuning in combination with biomedical word embeddings yield state-of-the-art performance on biomedical QA, despite the small amount of in-domain training data and the lack of domain-dependent feature engineering. Techniques to overcome catastrophic forgetting, such as a forgetting cost, can further boost performance for factoid questions. Overall, we show that employing domain adaptation on neural QA systems trained on large-scale, open-domain datasets can yield good performance in domains where large datasets are not available." ], [ "This research was supported by the German Federal Ministry of Education and Research (BMBF) through Software Campus project GeNIE (01IS12050)." ] ] }
{ "question": [ "Among various transfer learning techniques, which technique yields to the best performance?" ], "question_id": [ "f704d182c9e01a2002381b76bf21e4bb3c0d3efc" ], "nlp_background": [ "five" ], "topic_background": [ "unfamiliar" ], "paper_read": [ "no" ], "search_query": [ "question" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "108dd4f0f2f41d11b3e029d7a8a22d83896cb812" ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] } ] }
{ "caption": [ "Figure 1: Network architecture of our system for biomedical question answering. At its core, it uses an extractive neural QA system as a black box (we use FastQA (Weissenborn et al., 2017)). The embedding layer is modified in order to include biomedical word embeddings and question type features. The output layer is adjusted to add the ability to answer list questions in addition to factoid questions.", "Table 1: Comparison of various transfer learning techniques. In Experiment 1, the model was trained on BioASQ only. In Experiment 2, the model was trained on SQuAD and tested on BioASQ. We refer to it as the base model. In Experiment 3, the base model parameters were fine-tuned on the BioASQ training set. Experiments 4-5 evaluate the utility of domain dependent word vectors and features. Experiments 6-8 address the problem of catastrophic forgetting. All experiments have been conducted with the BioASQ4B dataset and 5-fold cross-validation.", "Table 2: Performance of a model ensemble. Five models have been trained on the BioASQ3B dataset and tested on the 4B test questions. We report the average and best single model performances, as well as the ensemble performance.", "Table 3: Comparison to systems on last year’s (fourth) BioASQ challenge for factoid and list questions. For each batch and question type, we list the performance of the best competing system, our single model and ensemble. Note that our qualitative analysis (Section 5.4) suggests that our factoid performance on batch 5 would be about twice as high if all synonyms were contained in the gold standard answers." ], "file": [ "3-Figure1-1.png", "6-Table1-1.png", "7-Table2-1.png", "8-Table3-1.png" ] }
1908.11425
Classifying topics in speech when all you have is crummy translations.
Given a large amount of unannotated speech in a language with few resources, can we classify the speech utterances by topic? We show that this is possible if text translations are available for just a small amount of speech (less than 20 hours), using a recent model for direct speech-to-text translation. While the translations are poor, they are still good enough to correctly classify 1-minute speech segments over 70% of the time - a 20% improvement over a majority-class baseline. Such a system might be useful for humanitarian applications like crisis response, where incoming speech must be quickly assessed for further action.
{ "section_name": [ "Introduction", "Methods ::: Speech-to-text translation.", "Methods ::: Topic modeling and classification.", "Experimental Setup ::: Data.", "Experimental Setup ::: Fine-grained topic analysis.", "Results ::: Spanish-English ST.", "Results ::: Topic Modeling on training data.", "Results ::: Topic classification on test data", "Related work", "Conclusions and future work", "Acknowledgments", "Using NMF for topic modeling", "Using NMF for topic modeling ::: Text processing", "Using NMF for topic modeling ::: Learning topics", "Using NMF for topic modeling ::: Making topic predictions", "Using NMF for topic modeling ::: Silver labels and evaluation", "Fisher corpus: assigned topics", "Tracking topic drift over conversations" ], "paragraphs": [ [ "Quickly making sense of large amounts of linguistic data is an important application of language technology. For example, after the 2011 Japanese tsunami, natural language processing was used to quickly filter social media streams for messages about the safety of individuals, and to populate a person finder database BIBREF0. Japanese text is high-resource, but there are many cases where it would be useful to make sense of speech in low-resource languages. For example, in Uganda, as in many parts of the world, the primary source of news is local radio stations, which broadcast in many languages. A pilot study from the United Nations Global Pulse Lab identified these radio stations as a potentially useful source of information about a variety of urgent topics related to refugees, small-scale disasters, disease outbreaks, and healthcare BIBREF1. With many radio broadcasts coming in simultaneously, even simple classification of speech for known topics would be helpful to decision-makers working on humanitarian projects.", "Recent research has shown that it is possible train direct Speech-to-text Translation (ST) systems from speech paired only with translations BIBREF2, BIBREF3, BIBREF4. Since no transcription is required, this could be useful in very low-resource settings, even for languages with no writing systems. In realistic low-resource settings where only a few hours of training data is available, these systems produce poor translations BIBREF5, but it has long been recognized that there are good uses for bad translations BIBREF6. Could classifying the original speech be one of those uses?", "We answer this question affirmatively: using ST to translate speech to text, we then classify by topic using supervised models (Figure FIGREF1). We test our method on a corpus of conversational Spanish speech paired with English text translations. Using an ST model trained on 20 hours of Spanish-English data, we are able to predict topics correctly 71% of the time. With even worse ST, we can still predict topics with an accuracy of 61%." ], [ "We use the method of BIBREF5 to train neural sequence-to-sequence Spanish-English ST models. As in that study, before training ST, we pre-train the models using English ASR data from the Switchboard Telephone speech corpus BIBREF7, which consists of around 300 hours of English speech and transcripts. This was reported to substantially improve translation quality when the training set for ST was only tens of hours." ], [ "To classify the translated documents, we first need a set of topic labels, which were not already available for our dataset. So, we initially discover a set of topics from the target-language training text using a topic model. 
To classify the translations of the test data, we choose the most probable topic according to the learned topic model. To train our topic model, we use Nonnegative Matrix Factorization BIBREF8, BIBREF9." ], [ "We use the Fisher Spanish speech corpus BIBREF11, which consists of 819 phone calls, with an average duration of 12 minutes, amounting to a total of 160 hours of data. We discard the associated transcripts and pair the speech with English translations BIBREF12, BIBREF13. To simulate a low-resource scenario, we sampled 90 calls (20h) of data (train20h) to train both ST and topic models, reserving 450 calls (100h) to evaluate topic models (eval100h). Our experiments required ST models of varying quality, so we also trained models with decreasing amounts of data: ST-10h, ST-5h, and ST-2.5h are trained on 10, 5, and 2.5 hours of data respectively, sampled from train20h. To evaluate ST only, we use the designated Fisher test set, as in previous work." ], [ "In the Fisher protocol, callers were prompted with one of 25 possible topics. It would seem appealing to use the prompts as topic labels, but we observed that many conversations quickly departed from the initial prompt and meandered from topic to topic. For example, one call starts: “Ok today's topic is marriage or we can talk about anything else...”. Within minutes, the topic shifts to jobs: “I'm working oh I do tattoos.” To isolate different topics within a single call, we split each call into 1 minute long segments to use as `documents'. This gives us 1K training and 5.5K test segments, but leaves us with no human-annotated topic labels for them.", "Obtaining gold topic labels for our data would require substantial manual annotation, so we instead use the human translations from the 1K (train20h) training set utterances to train the NMF topic model with scikit-learn BIBREF14, and then use this model to infer topics on the evaluation set. These silver topics act as an oracle: they tell us what a topic model would infer if it had perfect translations. NMF and model hyperparameters are described in Appendix SECREF7.", "To evaluate our ST models, we apply our ST model to test audio, and then predict topics from the translations using the NMF model trained on the human translations of the training data (Figure FIGREF1). To report accuracy we compare the predicted labels and silver labels, i.e., we ask whether the topic inferred from our predicted translation (ST) agrees with one inferred from a gold translation (human)." ], [ "To put our topic modeling results in context, we first report ST results. Figure FIGREF9 plots the BLEU scores on the Fisher test set and on eval100h for Spanish-English ST models. The scores are very similar for both sets when computed using a single human reference; scores are 8 points higher on the Fisher test set if all 4 of its available references are used. The state-of-the-art BLEU score on the Fisher test set is 47.3 (using 4 references), reported by BIBREF3, who trained an ST model on the entire 160 hours of data in the Fisher training corpus. By contrast, 20 hour model (ST-20h) achieves a BLEU score of 18.1. Examining the translations (Table TABREF10), we see that while they are mediocre, they contain words that might enable correct topic classification." ], [ "Turning to our main task of classification, we first review the set of topics discovered from the human translations of train20h (Table TABREF13). We explored different numbers of topics, and chose 10 after reviewing the results. 
We assigned a name to each topic after manually reviewing the most informative terms; for topics with less coherent sets of informative terms, we include misc in their names.", "We argued above that the silver labels are sensible for evaluation despite not always matching the assigned call topic prompts, since they indicate what an automatic topic classifier would predict given correct translations and they capture finer-grained changes in topic. Table TABREF14 shows a few examples where the silver labels differ from the assigned call topic prompts. In the first example, the topic model was arguably incorrect, failing to pick up the prompt juries, and instead focusing on the other words, predicting intro-misc. But in the other examples, the topic model is reasonable, in fact correctly identifying the topic in the third example where the transcripts indicate that the annotation was wrong (specifying the topic prompt as music). The topic model also classifies a large proportion of discussions as intro-misc (typically at the start of the call) and family-misc (often where the callers stray from their assigned topic).", "Our analysis also supports our observation that discussed topics stray from the prompted topic in most speech segments. For example, among segments in the 17 training data calls with the prompt religion, only 36% have the silver label religion, and the most frequently assigned label is family-misc with 46%. Further details are in Appendix SECREF9." ], [ "Now we turn to our main experiment. For each of the audio utterances in eval100h, we have four ST model translations: ST-2.5h, 5h, 10h, 20h (in increasing order of quality). We feed each of these into the topic model from Table TABREF13 to get the topic distribution and use the highest scoring topic as the predicted label.", "Figure FIGREF16 compares the frequencies of the silver labels with the predictions from the ST-20h model. The family-misc topic is predicted most often—almost 50% of the time. This is reasonable since this topic includes words associated with small talk. Other topics such as music, religion and welfare also occur with a high enough frequency to allow for a reasonable evaluation.", "Figure FIGREF17 shows the accuracy for all ST models, treating the silver topic labels as the correct topics. We use the family-misc topic as a majority class naive baseline, giving an accuracy of 49.6%. We observe that ST models trained on 10 hours or more of data outperform the naive-baseline by more than 10% absolute, with ST-20h scoring 71.8% and ST-10h scoring 61.6%. Those trained on less than 5 hours of data score close to or below that of the naive baseline: 51% for ST-5h and 48% for ST-2.5h.", "Since topics vary in frequency, we look at label-specific accuracy to see if the ST models are simply predicting frequent topics correctly. Figure FIGREF18 shows a normalized confusion matrix for the ST-20h model. Each row sums to 100%, representing the distribution of predicted topics for any given silver topic, so the numbers on the diagonal can be interpreted as the topic-wise recall. For example, a prediction of music recalls 88% of the relevant speech segments. We see that the model has an recall of more than 50% for all 10 topics, making it quite effective for our motivating task. The family-misc topic (capturing small-talk) is often predicted when other silver topics are present, with e.g. 23% of the silver dating topics predicted as family-misc." 
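As a concrete illustration of the topic-modelling pipeline used above (and spelled out in the appendix), here is a minimal scikit-learn sketch: tf-idf features over the English training translations with the stated filters (lower-casing, stop-word removal, max_df of 10%, min_df of 2, 1000 terms), NMF with 10 topics, and argmax topic assignment. It assumes a recent scikit-learn; the function and variable names are illustrative, and the exact preprocessing follows the appendix description rather than released code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

def learn_topics(train_docs, n_topics=10):
    """train_docs: one English training translation (1-minute segment) per element,
    roughly 1080 strings for train20h. Returns the fitted vectorizer, the NMF model
    and the most likely topic index for each training document."""
    vectorizer = TfidfVectorizer(lowercase=True, stop_words="english",
                                 max_df=0.1, min_df=2, max_features=1000)
    V = vectorizer.fit_transform(train_docs)      # ~1080 x 1000 tf-idf matrix
    nmf = NMF(n_components=n_topics, random_state=0)
    W = nmf.fit_transform(V)                      # document-topic distributions
    return vectorizer, nmf, W.argmax(axis=1)

def top_terms(vectorizer, nmf, k=10):
    """Most informative terms per topic, used to assign names such as 'family-misc'."""
    terms = vectorizer.get_feature_names_out()    # needs scikit-learn >= 1.0
    return [[terms[i] for i in row.argsort()[::-1][:k]] for row in nmf.components_]
```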
], [ "We have shown that even low-quality ST can be useful for speech classification. Previous work has also looked at speech analysis without high-quality ASR. In a task quite related to ours, BIBREF15 showed how to cluster speech segments in a completely unsupervised way. In contrast, we learn to classify speech using supervision, but what is important about our result is it shows that a small amount of supervision goes a long way. A slightly different approach to quickly analysing speech is the established task of Keyword spotting BIBREF16, BIBREF17, which simply asks whether any of a specific set of keywords appears in each segment. Recent studies have extended the early work to end-to-end keyword spotting BIBREF18, BIBREF19 and to semantic keyword retrieval, where non-exact but relevant keyword matches are retrieved BIBREF20, BIBREF21, BIBREF22. In all these studies, the query and search languages are the same, while we consider the cross-lingual case.", "There has been some limited work on cross-lingual keyword spotting BIBREF23, where ASR is cascaded with text-based cross-lingual retrieval. Some recent studies have attempted to use vision as a complementary modality to do cross-lingual retrieval BIBREF24, BIBREF25. But cross-lingual topic classification for speech has not been considered elsewhere, as far as we know." ], [ "Our results show that poor speech translation can still be useful for speech classification in low-resource settings. By varying the amount of training data, we found that translations with a BLEU score as low as 13 are still able to correctly classify 61% of the speech segments.", "Cross-lingual topic modeling may be useful when the target language is high-resource. Here, we learned target topics just from the 20 hours of translations, but in future work, we could use a larger text corpus in the high-resource language to learn a more general topic model covering a wider set of topics, and/or combine it with keyword lists curated for specific scenarios like disaster recovery BIBREF26." ], [ "This work was supported in part by a James S McDonnell Foundation Scholar Award and a Google faculty research award. We thank Ida Szubert, Marco Damonte, and Clara Vania for helpful comments on previous drafts of this paper.", "" ], [ "We now describe how we learn topics using NMF. Given a set of text documents as input, the model will output (1) for each document, a distribution over the selected number of topics (henceforth, the document-topic distribution), and (2) for each topic, a distribution over the set of unique terms in the text (henceforth, the topic-term distribution)." ], [ "Our training set (train20h) has 1080 English sentences. We start by generating a tf-idf representation for each of these. The English text contains 170K tokens and 6K terms (vocabulary size). As we are looking for topics which are coarse-level categories, we do not use the entire vocabulary, but instead focus only on the high importance terms. We lowercase the English translations and remove all punctuation, and stopwords. We further remove the terms occurring in more than 10% of the documents and those which occur in less than 2 documents, keeping only the 1000 most frequent out of the remaining.", "After preprocessing the training set, we have a feature matrix $V$ with dimensions $1080\\times 1000$, where each row is a document, and each column represents the tf-idf scores over the 1000 selected terms. 
The feature matrix will be sparse as only a few terms would occur in a document, and will also be non-negative as tf-idf values are greater than or equal to 0." ], [ "NMF is a matrix factorization method, which given the matrix $V$, factorizes it into two matrices: $W$ with dimensions $1080\\times t$ (long-narrow), and $H$ with dimensions $t\\times 1000$ (short-wide), where $t$ is a hyper-parameter. Figure FIGREF21 shows this decomposition when $t$ is set to 10.", "In the context of topic modeling, $t$ is the number of topics we want to learn; $W$ is the document-topic distribution, where for each document (row) the column with the highest value is the most-likely topic; and $H$ is the topic-term distribution, where each row is a topic, and the columns with the highest values are terms most relevant to it.", "The values for $W$ and $H$ are numerically approximated using a multiplicative update rule BIBREF27, with the Frobenius norm of the reconstruction error as the objective function. In this work, we use the machine-learning toolkit scikit-learn BIBREF14 for feature extraction, and to perform NMF, using default values as described at scikit-learn.org." ], [ "Using our topic-term distribution matrix $H$, we can now make topic predictions for new text input. Our evaluation set (eval100h) has 5376 English sentences. For each of these, we have the gold text, and also the ST model output. We preprocess and represent these using the same procedure as before (SECREF19) giving us the feature matrix $V^{^{\\prime }}_{gold}$ for gold, and $V^{^{\\prime }}_{ST}$ for ST output, each with dimensions $5376\\times 1000$. Our goal is to learn the document-topic distributions $W^{^{\\prime }}_{gold}$ and $W^{^{\\prime }}_{ST}$, where:", "The values for each $W^{^{\\prime }}$ matrix are again numerically approximated using the same objective function as before, but keeping $H$ fixed." ], [ "We use the highest scoring topic for each document as the prediction. The silver labels are therefore computed as $argmax(W^{^{\\prime }}_{gold})$, and for ST as $argmax(W^{^{\\prime }}_{ST})$. We can now compute the accuracy over these two sets of predictions." ], [ "Figure FIGREF24 shows the topics assigned to callers in the Fisher speech corpus. Some topic prompts overlap, for example, music-preference asks callers to discuss what kind of music they like to listen to, and music-social-message asks them to discuss the social impact of music. For both these topics, we would expect the text to contain similar terms. Similarly the topics cellphones-usage, tech-devices and telemarketing-spam also overlap. Such differences might be difficult for an unsupervised topic modeling algorithm to pick up.", "Table TABREF25 shows the topics learned by NMF by using human English translations from the entire 160 hours of training data as input, when the number of topics is set to 25. We observe that some new topics are found that were not discovered by the 20hr/10-topic model and that match the assigned topic prompts, such as juries and housing. However, there are also several incoherent topics, and we don't find a major improvement over the topics learned by just using 20 hours of training data, with the number of topics set to 10." ], [ "To measure how often speakers stray from assigned topic prompts, we take a closer look at the calls in train20h with the assigned prompt of religion. This is the most frequently assigned prompt in the Fisher dataset (17 calls in train20h). 
We also select this topic for further analysis as it contains terms which are strongly indicative, such as god, bible, etc. and should be relatively easier for our topic model to detect.", "Figure FIGREF26 shows the trend of discussion topics over time. Overall, only 36% of the total dialog segments in these calls have the silver label religion, and the most frequently assigned label is family-misc with 46%. We observe that the first segment is often labeled as intro-misc, around 70% of the time, which is expected as speakers begin by introducing themselves. Figure FIGREF26 shows that a similar trend emerges for calls assigned the prompt music (14 calls in train20h). Silver labels for music account for 45% of the call segments and family-misc for around 38%." ] ] }
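Tying the appendix together, the sketch below shows how topics for new documents can be inferred with the learned topic-term matrix H held fixed (scikit-learn's NMF.transform solves for W' with components_ fixed), and how the silver-label accuracy used in the evaluation is then computed. It assumes a vectorizer and NMF model fitted as in the earlier sketch; names are illustrative.

```python
import numpy as np

def predict_topics(vectorizer, nmf, docs):
    """docs: English text (gold translations or ST output), one segment per element.
    Re-uses the fitted tf-idf vocabulary and keeps H = nmf.components_ fixed."""
    V_new = vectorizer.transform(docs)       # same 1000-term feature space
    W_new = nmf.transform(V_new)             # approximate W' with H fixed
    return W_new.argmax(axis=1)              # highest-scoring topic per segment

def silver_accuracy(vectorizer, nmf, gold_translations, st_translations):
    """Agreement between topics inferred from gold translations (silver labels)
    and topics inferred from the ST model's translations."""
    silver = predict_topics(vectorizer, nmf, gold_translations)
    predicted = predict_topics(vectorizer, nmf, st_translations)
    return float(np.mean(silver == predicted))
```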
{ "question": [ "What is the architecture of the model?", "What language do they look at?" ], "question_id": [ "da544015511e535503dee2eaf4912a5e36c806cd", "7bc993b32484d6ae3c86d0b351a68e59fd2757a5" ], "nlp_background": [ "", "" ], "topic_background": [ "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no" ], "search_query": [ "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "BIBREF5 to train neural sequence-to-sequence", "NMF topic model with scikit-learn BIBREF14" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use the method of BIBREF5 to train neural sequence-to-sequence Spanish-English ST models. As in that study, before training ST, we pre-train the models using English ASR data from the Switchboard Telephone speech corpus BIBREF7, which consists of around 300 hours of English speech and transcripts. This was reported to substantially improve translation quality when the training set for ST was only tens of hours.", "Obtaining gold topic labels for our data would require substantial manual annotation, so we instead use the human translations from the 1K (train20h) training set utterances to train the NMF topic model with scikit-learn BIBREF14, and then use this model to infer topics on the evaluation set. These silver topics act as an oracle: they tell us what a topic model would infer if it had perfect translations. NMF and model hyperparameters are described in Appendix SECREF7." ], "highlighted_evidence": [ "We use the method of BIBREF5 to train neural sequence-to-sequence Spanish-English ST models.", "Obtaining gold topic labels for our data would require substantial manual annotation, so we instead use the human translations from the 1K (train20h) training set utterances to train the NMF topic model with scikit-learn BIBREF14" ] } ], "annotation_id": [ "efa0c448e59f1d6ea924445e98dde8cb52e9079d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Spanish" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use the Fisher Spanish speech corpus BIBREF11, which consists of 819 phone calls, with an average duration of 12 minutes, amounting to a total of 160 hours of data. We discard the associated transcripts and pair the speech with English translations BIBREF12, BIBREF13. To simulate a low-resource scenario, we sampled 90 calls (20h) of data (train20h) to train both ST and topic models, reserving 450 calls (100h) to evaluate topic models (eval100h). Our experiments required ST models of varying quality, so we also trained models with decreasing amounts of data: ST-10h, ST-5h, and ST-2.5h are trained on 10, 5, and 2.5 hours of data respectively, sampled from train20h. To evaluate ST only, we use the designated Fisher test set, as in previous work." ], "highlighted_evidence": [ "We use the Fisher Spanish speech corpus BIBREF11, which consists of 819 phone calls, with an average duration of 12 minutes, amounting to a total of 160 hours of data." ] } ], "annotation_id": [ "10d24790c198f005fc03b620b2f5a825d1268226" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: Spanish speech is translated to English text, and a classifier then predicts its topic.", "Figure 2: BLEU scores for Spanish-English ST models computed on Fisher test set, using all 4 human references available, and using only 1 reference, and on eval100h, for which we have only 1 human reference.", "Table 1: Examples of Spanish audio shown as Spanish text. An ST system translates the audio into English text, and we give the human reference. Our task is to predict the topic of discussion in the audio, which are potentially signaled by the underlined words.", "Table 3: Example audio utterances from eval100h. We show a part of the human translation here. Assigned is the topic assigned to speakers in the current call to prompt discussion. Silver is topic inferred by feeding the human translation through the topic model.", "Table 2: Topics discovered using human translated text from train20h, with manually-assigned topic names.", "Figure 3: Distribution of topics predicted for the 5K audio utterances in eval100h. silver labels are predicted using human translations. The ST model has been trained on 20 hours of Spanish-English data.", "Figure 4: Accuracy of topic prediction using ST model output. The naive baseline is calculated using majority class prediction, which is the topic family-misc.", "Figure 5: Confusion matrix for ST model trained on 20 hours of Spanish-English data. Each cell represents the percentage of the silver topic labels predicted as the x-axis label, with each row summing to 100%.", "Figure 6: Nonnegative Matrix Factorization. V is the document-term matrix, where d is each document; N is the number of documents; w1 to w1000 are the terms selected as features; and t1 to t10 are the topics.", "Table 4: Topics discovered using human translated text from the full 160hr Fisher training set. We set the number of topics to 25. We assign the topic names manually, and use — where the topic clustering is not very clear.", "Figure 7: Topics assigned to callers in the Fisher dataset, as a percentage of the 819 calls.", "Figure 8: Tracking silver labels over time for calls where the assigned prompt is religion. Total of 17 calls in train20h.", "Figure 9: Tracking silver labels over time for calls where the assigned prompt is music. Total of 14 calls in train20h." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "3-Table1-1.png", "3-Table3-1.png", "3-Table2-1.png", "3-Figure3-1.png", "4-Figure4-1.png", "4-Figure5-1.png", "7-Figure6-1.png", "8-Table4-1.png", "8-Figure7-1.png", "9-Figure8-1.png", "9-Figure9-1.png" ] }
1711.04457
Word, Subword or Character? An Empirical Study of Granularity in Chinese-English NMT
Neural machine translation (NMT), a new approach to machine translation, has been shown to outperform conventional statistical machine translation (SMT) across a variety of language pairs. Translation is an open-vocabulary problem, but most existing NMT systems operate with a fixed vocabulary, which makes them unable to translate rare words. This problem can be alleviated by using different translation granularities, such as character, subword and hybrid word-character. Translation involving Chinese is one of the most difficult tasks in machine translation; however, to the best of our knowledge, no previous work has explored which translation granularity is most suitable for Chinese in NMT. In this paper, we conduct an extensive comparison using Chinese-English NMT as a case study. Furthermore, we discuss the advantages and disadvantages of the various translation granularities in detail. Our experiments show that the subword model performs best for Chinese-to-English translation when the vocabulary is relatively small, while the hybrid word-character model is most suitable for English-to-Chinese translation. Moreover, experiments with different source and target granularities show that the Hybrid_BPE method achieves the best results on the Chinese-to-English translation task.
{ "section_name": [ "Introduction", "Neural Machine Translation", "Description of Different Translation Granularities", "Character Level", "Hybrid Word-Characters Level", "Subword Level", "Dataset", "Training Details", "Data Segmentation", "Results on Chinese-to-English Translation", "Results on English-to-Chinese Translation", "Related Work", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Neural machine translation (NMT) proposed by Kalchbrenner and Blunsom BIBREF0 and Sutskever et al. BIBREF1 has achieved significant progress in recent years. Unlike traditional statistical machine translation(SMT) BIBREF2 , BIBREF3 , BIBREF4 which contains multiple separately tuned components, NMT builds an end-to-end framework to model the entire translation process. For several language pairs, NMT has already achieved better translation performance than SMT BIBREF5 , BIBREF6 .", "Conventional NMT system limits the vocabulary to a modest-sized vocabulary in both sides and words out of vocabulary are replaced by a special UNK symbol. However, the process of training and decoding is often conducted on an open vocabulary, in which an obvious problem is that NMT model is incapable of translating rare words. In particular, if a source word is outside the source vocabulary or its translation is outside the target vocabulary, the model is unable to generate proper translation for this word during decoding. Both Sutskever et al. BIBREF1 and Bahdanau et al. BIBREF7 have observed that sentences with many out-of-vocabulary words tend to be translated much more poorly than sentences mainly containing frequent words.", "To address this problem, many researchers propose a broad category of approaches by employing different translation granularities. Most of these are below the word level, e.g. characters BIBREF8 , hybrid word-characters BIBREF9 , BIBREF5 , and more intelligent subwords BIBREF10 , BIBREF5 . Besides, pioneering studies BIBREF5 , BIBREF6 demonstrate that translation tasks involving Chinese are some of the most difficult problems in NMT systems. However, there is no study that shows which translation granularity is suitable for Chinese-to-English and English-to-Chinese translation tasks.", "In this work, we make an empirical comparison of different translation granularities for bidirectional English-Chinese translation tasks. In addition, we analyze the impact of these strategies on the translation results in detail. We demonstrate that Chinese-to-English NMT of 15k and 30k vocabulary size can acquire best results using subword model and with 60k vocabulary size hybrid word-character model obtains the highest performance, while hybrid word-character model is most suitable for English-to-Chinese translation. Our experiment shows that all subword methods are not bounded by the vocabulary size. Furthermore, we carry out the experiments that employ different translation granularities of source side and target side. The translation result shows that when the source granularity is hybrid word-character level and the target sentences are split into subword level by BPE method, it can achieve the best translation performance for Chinese-to-English translation task. As for English-to-Chinese translation task, Hybrid word-character model is most suitable. To the best of our knowledge, this is the first work on an empirical comparison of various translation granularities for bidirectional Chinese-English translations." 
], [ "Our models are based on an encoder-decoder architecture with attention mechanism proposed by Luong et al. BIBREF11 , which utilizes stacked LSTM layers for both encoder and decoder as illustrated in Figure FIGREF1 . In this section, we make a review of NMT framework.", "First, the NMT encodes the source sentence INLINEFORM0 into a sequence of context vector representation INLINEFORM1 . Then, the NMT decodes from the context vector representation INLINEFORM2 and generates target translation INLINEFORM3 one word each time by maximizing the probability of INLINEFORM4 . Next, We review the encoder and decoder frameworks briefly.", "Encoder: The context vector representation INLINEFORM0 is generated by the encoder using INLINEFORM1 stacked LSTM layers. Bi-directional connections are used for the bottom encoder layer, and INLINEFORM2 is a concatenation vector as shown in Eq. (1): DISPLAYFORM0 ", "All other encoder layers are unidirectional, and INLINEFORM0 is calculated as follows: DISPLAYFORM0 ", "Decoder: The conditional probability INLINEFORM0 is formulated as DISPLAYFORM0 ", "Specifically, we employ a simple concatenation layer to produce an attentional hidden state INLINEFORM0 : DISPLAYFORM0 ", "where INLINEFORM0 denotes the target hidden state at the top layer of a stacking LSTM. The attention model calculates INLINEFORM1 as the weighted sum of the source-side context vector representation, just as illustrated in the upper left corner of Figure FIGREF1 . DISPLAYFORM0 ", "where INLINEFORM0 is a normalized item calculated as follows: DISPLAYFORM0 ", " INLINEFORM0 is computed by using the following formula: DISPLAYFORM0 ", "If INLINEFORM0 , INLINEFORM1 will be calculated by combining INLINEFORM2 as feed input BIBREF11 : DISPLAYFORM0 ", "Given the bilingual training data INLINEFORM0 , all parameters of the attention-based NMT are optimized to maximize the following conditional log-likelihood: DISPLAYFORM0 " ], [ "We revisit how the source and target sentences ( INLINEFORM0 and INLINEFORM1 ) are represented in NMT. For the source side of any given training corpus, we scan through the whole corpus to build a vocabulary INLINEFORM2 of unique tokens. A source sentence INLINEFORM3 is then built as a sequence of the integer indices. The target sentence is similarly transformed into a target sequence of integer indices.", "The property of NMT allows us great freedom in the choice of token units, and we can segment sentences in different ways. In this section, we will elaborate on four proposed approaches about the choice of translation granularities." ], [ "This translation granularity is easy to implement. For this granularity, what we have to do is split the sentence into a sequence of characters. However, the character-level modeling on the English side is more challenging, as the network has to be able to deal with long and coherent sequence of characters. In this case, the number of characters is often 300 INLINEFORM0 1000 symbols long, where the size of the state space grows exponentially. Therefore, this is a great challenge for us to handle.", "Besides, the alphabet of English is only consist of 26 letters, in which the vocabulary of English side is too small. Considering these facts, we only separate the Chinese side sentences into characters rather than both sides. Figure FIGREF11 shows an example of this translation granularity for character level." 
], [ "In regular word-based NMT, for all words outside the source vocabulary, one feeds the universal embedding representing UNK as input to the encoder. This is problematic because it discards valuable information about the source word. To address that, hybrid word-character approach will be adopted. In this part, we will introduce this granularity in detail.", "Unlike in the conventional word model where out-of-vocabulary words are collapsed into a single UNK symbol, we convert these words into the sequence of constituent characters. Special prefixes are prepended to the characters. The purpose of the prefixes is to show the location of the characters in a word, and to distinguish them from normal in-vocabulary characters. There are three prefixes: INLINEFORM0 B INLINEFORM1 , INLINEFORM2 M INLINEFORM3 , and INLINEFORM4 E INLINEFORM5 , indicating beginning of the word, middle of the word and end of the word, respectively. During decoding, the output may also contain sequences of special tokens. With the prefixes, it is trivial to reverse the tokenization to the original words as part of a post-processing step. Using this approach, in Figure FIGREF11 , we can see the word “龙年” is segmented into “ INLINEFORM6 B INLINEFORM7 龙 INLINEFORM8 E INLINEFORM9 年”, and the word “繁花似锦” is segmented into “ INLINEFORM10 B INLINEFORM11 繁 INLINEFORM12 M INLINEFORM13 花 INLINEFORM14 M INLINEFORM15 似 INLINEFORM16 E INLINEFORM17 锦”." ], [ "Considering languages with productive word formation processes such as agglutination and compounding, translation models require mechanisms that segment the sentence below the word level (In this paper, we call this level of symbols as subword units). In this part, we will introduce the two different methods of translation granularity on subword level.", "Byte pair encoding (BPE) BIBREF12 is a compression algorithm. This simple data compression technique iteratively replaces the most frequent pair of bytes in a sequence with a single, unused byte. This compression method is first introduced into translation granularity by Sennrich et al. BIBREF10 . In this approach, instead of merging frequent pairs of bytes, characters or character sequences will be merged.", "A detailed introduction of algorithm in learning BPE operations is showed in Sennrich et al. BIBREF10 . During decoding time, each word first split into sequences of characters, then learned operation will be applied to merge the characters into larger, known symbols. For BPE method, a special symbol is also needed to indicate the merging position. In Figure FIGREF11 , the word “繁花似锦” is segmented into three subword units, and the first three units are appended a special suffix “@@”. In decoding step, the translation results contain the special tokens as well. With these suffixes, we can recover the output easily.", "The wordpiece model (WPM) implementation is initially developed to solve a Japanese/Korean segmentation problem for the speech recognition system BIBREF13 . This approach is completely data-driven and guaranteed to generate a deterministic segmentation for any possible sequence of characters, which is similar to the above method.", "The wordpiece model is generated using a data-driven approach to maximize the language-model likelihood of the training data, given an evolving word definition. The training method of WPM is described in more detail in Schuster and Nakajima BIBREF13 . As shown in Figure FIGREF11 , a special symbol is only prepended at the beginning of the words. 
In this case, the words “龙年”, “繁花似锦”, “洋溢” and “祥和” are split into subwords, and the rest words remain the same except for a special prefix “_”." ], [ "We perform all these translation granularities on the NIST bidirectional Chinese-English translation tasks. The evaluation metric is BLEU BIBREF14 as calculated by the multi-bleu.perl script.", "Our training data consists of 2.09M sentence pairs extracted from LDC corpus. Table 1 shows the detailed statistics of our training data. To test different approaches on Chinese-to-English translation task, we use NIST 2003(MT03) dataset as the validation set, and NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06) datasets as our test sets. For English-to-Chinese translation task, we also use NIST 2003(MT03) dataset as the validation set, and NIST 2008(MT08) will be used as test set." ], [ "We build the described models modified from the Zoph_RNN toolkit which is written in C++/CUDA and provides efficient training across multiple GPUs. Our training procedure and hyper parameter choices are similar to those used by Luong et al. BIBREF11 . In the NMT architecture as illustrated in Figure FIGREF1 , the encoder has three stacked LSTM layers including a bidirectional layer, followed by a global attention layer, and the decoder contains two stacked LSTM layers followed by the softmax layer.", "The word embedding dimension and the size of hidden layers are all set to 1000. We limit the maximum length in training corpus to 120. Parameter optimization is performed using both stochastic gradient descent(SGD) method and Adam method BIBREF15 . For the first three epoches, We train using the Adam optimizer and a fixed learning rate of 0.001 without decay. For the remaining six epoches, we train using SGD, and we set learning rate to 0.1 at the beginning and halve the threshold while the perplexity go up on the development set. We set minibatch size to 128. Dropout was also applied on each layer to avoid over-fitting, and the dropout rate is set to 0.2. At test time, we employ beam search with beam size b = 12." ], [ "For Chinese word segmentation, we use our in-house segmentation tools. For English corpus, the training data is tokenized with the Moses tokenizer. We carry out Chinese-to-English translation experiment on 30k vocabulary and 15k vocabulary for both sides respectively, and we also conduct English-to-Chinese translation experiment on 30k vocabulary size. The word level translation granularity is set to our baseline method.", "For character level, we only segment the Chinese sentences into characters and the English sentences remain the same. For hybrid word-characters level, we segment training corpus for both sides. We rank the word frequency from greatest to least in training corpus, and in order to prevent the pollution from the very rare word, we have to set a segmentation point relatively higher. For 30k vocabulary, the word frequency below 64 is segmented into characters on Chinese side, and the segmentation point is set to 22 on English side. For 15k vocabulary, we set the segmentation point to 350 and 96 on Chinese side and English side respectively. For 60k vocabulary, the frequency of Chinese words below 14 and that of English words below 6 are split into characters.", "For subword level, two different approaches are used. In BPE method, the number of merge operations is set to 30000 on 30k vocabulary size, 15000 on 15k vocabulary size and 60000 on 60k vocabulary size. 
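For readers unfamiliar with BPE, the following compact sketch shows the merge-learning loop in the style of the reference code published with Sennrich et al.'s paper: the most frequent adjacent symbol pair is repeatedly merged until the chosen number of merge operations (e.g. 30000 for the 30k vocabulary) is reached. The word-frequency dictionary and the end-of-word marker handling are simplified assumptions; in practice a released BPE toolkit would be used rather than this sketch.

```python
import re
from collections import Counter

def get_pair_stats(vocab):
    """Count adjacent symbol pairs over a {word-as-space-joined-symbols: freq} dict."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the pair with its concatenation."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

def learn_bpe(word_freqs, num_merges):
    """word_freqs: {word: count}; returns the ordered list of learned merges."""
    vocab = {" ".join(word) + " </w>": f for word, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_stats(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_pair(best, vocab)
        merges.append(best)
    return merges

print(learn_bpe({"lower": 5, "low": 7, "newest": 3, "widest": 2}, 10))
```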
For Chinese sentences, we first segment the training corpus using our in-house segmentation tools and then apply the BPE method in the same way as for English sentences. Given the nature of the WPM method, we do not have to segment words for Chinese or tokenize sentences for English; that is, we can train the WPM without any pre-processing step. Hence, for the WPM method, we conduct our experiments both on the raw corpus and on the segmented corpus." ], [ "We list the BLEU scores of the different translation granularities with a 30k vocabulary in Table TABREF27 .", "Row 1 is the translation result of the state-of-the-art NMT system at the word level. For the character-level granularity (Row 2), the translation quality is higher than the word level by only 0.38 BLEU points. The last three rows in Table TABREF27 correspond to subword-level translation granularities, covering the BPE and WPM methods. The BPE method (Row 4) achieves the best translation performance, with an improvement of 1.64 BLEU points over the word level. As for the WPM method (Row 6), the gap between this method and the BPE method is narrow. Moreover, the hybrid word-character model (Row 3) outperforms the word level by 1.46 BLEU points, and its translation quality is very close to that of the BPE method. These experiments show that the hybrid word-character granularity and the BPE method of the subword-level granularity are our preferred choices of translation granularity for the Chinese-to-English translation task.", "We apply the different translation granularities to the training corpus. To make a comparison, we randomly choose 10000 sentences. Table TABREF29 shows the average sentence length of the different methods for all granularities.", "A well-known flaw of NMT models is their inability to properly translate long sentences. Most of the translation granularities considered here segment below the word level, so, as shown in Table TABREF29 , they produce longer sequences than the word level. We therefore examine how translation performance varies with sentence length for all translation granularities. We follow Bahdanau et al. BIBREF7 and group sentences of similar lengths together, computing a BLEU score per group, as shown in Figure FIGREF30 .", "In order to make the comparison fair, length refers to the number of tokens at the word level. As mentioned above, the hybrid word-character model is one of the suitable granularity choices for Chinese-to-English translation. We find that when sentence length is below 20, this model outperforms the other models by a large margin, but as length increases, this advantage diminishes. The character-level granularity performs poorly on sentences shorter than 20 words. We suspect the reason is that for short sentences the character-level representation cannot express the sentence meaning well. For the BPE method, we observe a curious pattern: when the source sentence contains 60 to 80 words, its performance is relatively weak, yet it scores almost 3 BLEU points higher than the next-best approach when the source sentence is longer than 80 words. As shown in Figure FIGREF30 , the WPM method does not perform well on source sentences shorter than 60 words, but when sentence length is between 60 and 80, this method even outperforms the BPE method by up to 5.51 BLEU points.
In this experiment, we conclude that the subword models are more effective than the other models in handling long sentences.", "We are also interested in how the different translation granularities behave with a smaller vocabulary, so we also carry out the Chinese-to-English experiment with a 15k vocabulary.", "Compared to the 30k vocabulary, the translation performance of the word level (Row 1) with a 15k vocabulary is reduced by 2.14 BLEU points. However, the character level (Row 2) and the hybrid word-character level (Row 3) achieve 42.09 and 43.12 BLEU points respectively, which is on par with the translation quality obtained with the 30k vocabulary. Both models exceed the word level by a large margin. We infer that this is because both the character level and the hybrid word-character level can represent source-side and target-side sentences better than the word level even when the vocabulary is small. For the subword models, translation performance remains almost the same as with the 30k vocabulary, which exceeded our expectations. As Table TABREF32 shows, the WPM method (Row 6) outperforms the other models and, to our surprise, the translation results of both the WPM method and the WPM method on the raw corpus (Row 5) obtain higher BLEU scores than with the 30k vocabulary. We attribute this to the fact that the subword models are not constrained by the vocabulary size. The WPM method, which achieves the best results for the 15k vocabulary, is itself a subword-level granularity, so we can conclude that the subword translation granularity is well suited to the Chinese-to-English translation task.", "In order to compare these translation granularities with a larger vocabulary, we also run the experiment with a 60k vocabulary on the Chinese-to-English translation task.", "As shown in Table TABREF34 , the word and character levels (Row 1 and Row 2) with a 60k vocabulary improve by 1.15 and 1.11 BLEU points respectively compared to the 30k vocabulary. However, to our surprise, all the translation results of the subword-level granularities with a 60k vocabulary fall below those obtained with the 30k vocabulary. As the vocabulary size increases, more fine-grained subword segmentation units are added to the vocabulary. We infer that such a large number of subword units does not benefit the translation results. As for the hybrid word-character level, this method achieves 43.97 BLEU points, the highest among all translation granularities with a 60k vocabulary. Compared with Table TABREF27 , the hybrid word-character level outperforms the best translation result with the 30k vocabulary (the BPE method) by 0.22 BLEU points.", "We also conduct experiments that use different translation granularities on the source and target sides. To keep the experiments manageable, we only compare several granularity pairs.", "In Table TABREF36 , we find that when the source-side granularity is the word level (Row 2 and Row 3), translation performance is relatively poor, even worse than using the word level on both sides in Table TABREF27 . With the BPE method on the source side, the hybrid word-character level on the target side obtains 43.73 BLEU points (Row 6), which is close to the best translation result in Table TABREF27 . The Hybrid_BPE combination achieves 44.26 BLEU points (Row 4), which is 0.51 BLEU points higher than the BPE method and yields the best translation result for the Chinese-to-English translation task."
], [ "We evaluate different translation granularities on the English-to-Chinese translation tasks, whose results are presented in Table TABREF39 .", "We find that hybrid word-character level (Row 3) granularity obtains significant accuracy improvements over word level and this granularity is also superior to other granularities on large-scale English-to-Chinese translation. BPE method (Row 4) in this task does not perform well as Chinese-to-English task, the translation quality of it is lower than hybrid word-character model by up to 0.97 BLEU points. However, another subword level translation granularity WPM method (Row 6) achieves 22.14 BLEU points, which is near the hybrid word-character level. Although the vocabulary of character level on Chinese side is only 7.2k, it can also obtain 19.64 BLEU points (Row 2), which is on par with translation performance of word level.", "As Chinese-to-English translation task, we carry out experiments on English-to-Chinese translation for different granularities. According to Table TABREF36 , Hybrid_BPE and BPE_Hybrid methods acquire relative higher translation quality than other methods. Therefore, in this section we only use these two methods to test which is most suitable for English-to-Chinese translation task.", "Table TABREF41 shows that translation performances of both two methods are below to the Hybrid word-character granularity in Table TABREF39 . BPE_Hybrid method (Row 2) achieves 22.12 BLEU points, which is higher than Hybrid_BPE method by 0.39 BLEU points and is near the translation quality of WPM method in Table TABREF39 ." ], [ "The recently proposed neural machine translation has drawn more and more attention. Most of existing work in neural machine translation focus on handling rare words BIBREF16 , BIBREF10 , BIBREF17 , integrating SMT strategies BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , designing the better framework BIBREF22 , BIBREF11 , BIBREF23 and addressing the low resource scenario BIBREF24 , BIBREF25 , BIBREF26 .", "As for strategies for dealing with rare and unknown words, a number of authors have endeavored to explore methods for addressing them. Luong et al. BIBREF11 and Li et al. BIBREF16 propose simple alignment-based technique that can replace out-of-vocabulary words with similar words. Jean et al. BIBREF27 use a large vocabulary with a method based on importance sampling.", "In addition, another direction to achieve rare words problem in NMT is changing the granularity of segmentation. Chung et al. BIBREF8 focus on handling translation at the level of characters without any word segmentation only on target side. Luong et al. BIBREF9 propose a novel hybrid architecture that combines the strength of both word and character-based models. Sennrich et al. BIBREF10 use BPE method to encode rare and unknown words as sequences of subword units. Wu et al. BIBREF5 use both WPM method and hybrid word-character model in their online translation system. However, there is no study that shows which translation granularity is suitable for translation tasks involving Chinese language. Our goal in this work is to make an empirical comparison of different translation granularities for bidirectional Chinese-English translation tasks." ], [ "In this work, we provide an extensive comparison for translation granularities in Chinese-English NMT, such as word, character, subword and hybrid word-character. We have also discussed the advantages and disadvantages of various translation granularities in detail. 
For the same granularity on both sides, the experiments demonstrate that the subword model best fits Chinese-to-English translation when the vocabulary is not too large, while the hybrid word-character approach obtains the highest performance on English-to-Chinese translation. In addition, the experiments with different granularities on the two sides show that the Hybrid_BPE combination achieves the best result for the Chinese-to-English translation task." ], [ "The research work has been funded by the Natural Science Foundation of China under Grant No. 61333018 and No. 61402478, and it is also supported by the Strategic Priority Research Program of the CAS under Grant No. XDB02070007." ] ] }
{ "question": [ "Where does the vocabulary come from?", "What is the worst performing translation granularity?", "What dataset did they use?" ], "question_id": [ "da495e2f99ee2d5db9cc17eca5517ddaa5ea8e42", "e44a5514d7464993997212341606c2c0f3a72eb4", "310e61b9dd4d75bc1bebbcb1dae578f55807cd04" ], "nlp_background": [ "", "", "" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "LDC corpus" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our training data consists of 2.09M sentence pairs extracted from LDC corpus. Table 1 shows the detailed statistics of our training data. To test different approaches on Chinese-to-English translation task, we use NIST 2003(MT03) dataset as the validation set, and NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06) datasets as our test sets. For English-to-Chinese translation task, we also use NIST 2003(MT03) dataset as the validation set, and NIST 2008(MT08) will be used as test set." ], "highlighted_evidence": [ "Our training data consists of 2.09M sentence pairs extracted from LDC corpus." ] } ], "annotation_id": [ "10de01cb0a016dbba7f443855672264162d2d3f1" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "ea952900020e5644a760ca77dca5760227ba16ad" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "LDC corpus", "NIST 2003(MT03)", "NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06)", "NIST 2008(MT08)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our training data consists of 2.09M sentence pairs extracted from LDC corpus. Table 1 shows the detailed statistics of our training data. To test different approaches on Chinese-to-English translation task, we use NIST 2003(MT03) dataset as the validation set, and NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06) datasets as our test sets. For English-to-Chinese translation task, we also use NIST 2003(MT03) dataset as the validation set, and NIST 2008(MT08) will be used as test set." ], "highlighted_evidence": [ "Our training data consists of 2.09M sentence pairs extracted from LDC corpus.", "To test different approaches on Chinese-to-English translation task, we use NIST 2003(MT03) dataset as the validation set, and NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06) datasets as our test sets. For English-to-Chinese translation task, we also use NIST 2003(MT03) dataset as the validation set, and NIST 2008(MT08) will be used as test set." ] } ], "annotation_id": [ "3cbe430ce10309d266ee031fc8e4e4665a7bccfe" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Fig. 1. The architecture of neural machine translation model.", "Fig. 2. An example of different translation granularities", "Table 1. The characteristics of our training dataset on the LDC corpus.", "Table 2. Translation results (BLEU score) of 30k vocabulary for Chinese-to-English translation.", "Table 3. Sentence length of different translation granularities.", "Fig. 3. Length Analysis - translation qualities(BLEU score) of different lengths.", "Table 4. Translation results (BLEU score) of 15k vocabulary for Chinese-to-English translation.", "Table 5. Translation results (BLEU score) of 60k vocabulary for Chinese-to-English translation.", "Table 6. Translation results (BLEU score) of 30k vocabulary for different granularities on Chinese-to-English translation.", "Table 7. Translation results (BLEU score) for English-to-Chinese translation." ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "6-Table1-1.png", "8-Table2-1.png", "9-Table3-1.png", "9-Figure3-1.png", "10-Table4-1.png", "11-Table5-1.png", "11-Table6-1.png", "12-Table7-1.png" ] }
1907.08501
A Comparative Evaluation of Visual and Natural Language Question Answering Over Linked Data
With the growing number and size of Linked Data datasets, it is crucial to make the data accessible and useful for users without knowledge of formal query languages. Two approaches towards this goal are knowledge graph visualization and natural language interfaces. Here, we investigate specifically question answering (QA) over Linked Data by comparing a diagrammatic visual approach with existing natural language-based systems. Given a QA benchmark (QALD7), we evaluate a visual method which is based on iteratively creating diagrams until the answer is found, against four QA systems that have natural language queries as input. Besides other benefits, the visual approach provides higher performance, but also requires more manual input. The results indicate that the methods can be used complementary, and that such a combination has a large positive impact on QA performance, and also facilitates additional features such as data exploration.
{ "section_name": [ "INTRODUCTION", "RELATED WORK", "SYSTEM DESCRIPTION", "EVALUATION", "Evaluation Setup", "Evaluation Results and Discussion", "CONCLUSIONS", "ACKNOWLEDGEMENTS" ], "paragraphs": [ [ "The Semantic Web provides a large number of structured datasets in form of Linked Data. One central obstacle is to make this data available and consumable to lay users without knowledge of formal query languages such as SPARQL. In order to satisfy specific information needs of users, a typical approach are natural language interfaces to allow question answering over the Linked Data (QALD) by translating user queries into SPARQL BIBREF0 , BIBREF1 . As an alternative method, BIBREF2 propose a visual method of QA using an iterative diagrammatic approach. The diagrammatic approach relies on the visual means only, it requires more user interaction than natural language QA, but also provides additional benefits like intuitive insights into dataset characteristics, or a broader understanding of the answer and the potential to further explore the answer context, and finally allows for knowledge sharing by storing and sharing resulting diagrams.", "In contrast to BIBREF2 , who present the basic method and tool for diagrammatic question answering (DQA), here we evaluate DQA in comparison to natural language QALD systems. Both approaches have different characteristics, therefore we see them as complementary rather than in competition.", "The basic research goals are: i) Given a dataset extracted from the QALD7 benchmark, we evaluate DQA versus state-of-the-art QALD systems. ii) More specifically, we investigate if and to what extent DQA can be complementary to QALD systems, especially in cases where those systems do not find a correct answer. iii) Finally, we want to present the basic outline for the integration of the two methods.", "In a nutshell, users that applied DQA found the correct answer with an F1-score of 79.5%, compared to a maximum of 59.2% for the best performing QALD system. Furthermore, for the subset of questions where the QALD system could not provide a correct answer, users found the answer with 70% F1-score with DQA. We further analyze the characteristics of questions where the QALD or DQA, respectively, approach is better suited.", "The results indicate, that aside from the other benefits of DQA, it can be a valuable component for integration into larger QALD systems, in cases where those systems cannot find an answer, or when the user wants to explore the answer context in detail by visualizing the relevant nodes and relations. Moreover, users can verify answers given by a QALD system using DQA in case of doubt.", "This publication is organized as follows: After the presentation of related work in Section SECREF2 , and a brief system description of the DQA tool in Section SECREF3 , the main focus of the paper is on evaluation setup and results of the comparison of DQA and QALD, including a discussion, in Section SECREF4 . The paper concludes with Section SECREF5 ." ], [ "As introduced in BIBREF2 we understand diagrammatic question answering (DQA) as the process of QA relying solely on visual exploration using diagrams as a representation of the underlying knowledge source. 
The process includes (i) a model for diagrammatic representation of semantic data which supports data interaction using embedded queries, (ii) a simple method for step-by-step construction of diagrams with respect to cognitive boundaries and a layout that boosts understandability of diagrams, (iii) a library for visual data exploration and sharing based on its internal data model, and (iv) an evaluation of DQA as knowledge understanding and knowledge sharing tool. BIBREF3 propose a framework of five perspectives of knowledge visualization, which can be used to describe certain aspects of the DQA use cases, such as its goal to provide an iterative exploration method, which is accessible to any user, the possibility of knowledge sharing (via saved diagrams), or the general purpose of knowledge understanding and abstraction from technical details.", "Many tools exist for visual consumption and interaction with RDF knowledge bases, however, they are not designed specifically towards the question answering use case. BIBREF4 give an overview of ontology and Linked Data visualization tools, and categorize them based on the used visualization methods, interaction techniques and supported ontology constructs.", "Regarding language-based QA over Linked Data, BIBREF5 discuss and study the usefulness of natural language interfaces to ontology-based knowledge bases in a general way. They focus on usability of such systems for the end user, and conclude that users prefer full sentences for query formulation and that natural language interfaces are indeed useful.", " BIBREF0 describe the challenges of QA over knowledge bases using natural languages, and elaborate the various techniques used by existing QALD systems to overcome those challenges. In the present work, we compare DQA with four of those systems using a subset of questions of the QALD7 benchmark. Those systems are: gAnswer BIBREF6 is an approach for RDF QA that has a “graph-driven” perspective. In contrast to traditional approaches, which first try to understand the question, and then evaluate the query, in gAnswer the intention of the query is modeled in a structured way, which leads to a subgraph matching problem. Secondly, QAKiS BIBREF7 is QA system over structured knowledge bases such as DBpedia that makes use of relational patterns which capture different ways to express a certain relation in a natural language in order to construct a target-language (SPARQL) query. Further, Platypus BIBREF8 is a QA system on Wikidata. It represents questions in an internal format related to dependency-based compositional semantics which allows for question decomposition and language independence. The platform can answer complex questions in several languages by using hybrid grammatical and template-based techniques. And finally, also the WDAqua BIBREF0 system aims for language-independence and for being agnostic of the underlying knowledge base. WDAqua puts more importance on word semantics than on the syntax of the user query, and follows a processes of query expansion, SPARQL construction, query ranking and then making an answer decision.", "For the evaluation of QA systems, several benchmarks have been proposed such as WebQuestions BIBREF9 or SimpleQuestions BIBREF10 . However, the most popular benchmarks in the Semantic Web field arise from the QALD evaluation campaign BIBREF1 . The recent QALD7 evaluation campaign includes task 4: “English question answering over Wikidata” which serves as basis to compile our evaluation dataset." 
], [ "The DQA functionality is part of the Ontodia tool. The initial idea of Ontodia was to enable the exploration of semantic graphs for ordinary users. Data exploration is about efficiently extracting knowledge from data even in situations where it is unclear what is being looked for exactly BIBREF11 .", "The DQA tool uses an incremental approach to exploration typically starting from a very small number of nodes. With the context menu of a particular node, relations and related nodes can be added until the diagram fulfills the information need of the user. Figure FIGREF1 gives an example of a start node, where a user wants to learn more about the painting style of Van Gogh.", "To illustrate the process, we give a brief example here. More details about the DQA tool, the motivation for DQA and diagram-based visualizations are found in previous work BIBREF2 , BIBREF12 .", "As for the example, when attempting to answer a question such as “Who is the mayor of Paris?” the first step for a DQA user is finding a suitable starting point, in our case the entity Paris. The user enters “Paris” into the search box, and can then investigate the entity on the tool canvas. The information about the entity stems from the underlying dataset, for example Wikidata. The user can – in an incremental process – search in the properties of the given entity (or entities) and add relevant entities onto the canvas. In the given example, the property “head of government” connects the mayor to the city of Paris, Anne Hidalgo. The final diagram which answers the given question is presented in Figure FIGREF3 ." ], [ "Here we present the evaluation of DQA in comparison to four QALD systems." ], [ "As evaluation dataset, we reuse questions from the QALD7 benchmark task 4 “QA over Wikidata”. Question selection from QALD7 is based on the principles of question classification in QA BIBREF13 . Firstly, it is necessary to define question types which correspond to different scenarios of data exploration in DQA, as well as the type of expected answers and the question focus. The question focus refers to the main information in the question which help a user find the answer. We follow the model of BIBREF14 who categorize questions by their question word into WHO, WHICH, WHAT, NAME, and HOW questions. Given the question and answer type categories, we created four questionnaires with nine questions each resulting in 36 questions from the QALD dataset. The questions were picked in equal number for five basic question categories.", "20 persons participated in the DQA evaluation – 14 male and six female from eight different countries. The majority of respondents work within academia, however seven users were employed in industry. 131 diagrams (of 140 expected) were returned by the users.", "The same 36 questions were answered using four QALD tools: WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8 .", "For the QALD tools, a human evaluator pasted the questions as is into the natural language Web interfaces, and submitted them to the systems. Typically QALD tools provide a distinct answer, which may be a simple literal, or a set of entities which represent the answer, and which can be compared to the gold standard result. However, the WDAqua system, sometimes, additionally to the direct answer to the question, provides links to documents related to the question. 
In these cases we always chose the direct answer.", "To assess the correctness of the answers given both by the participants in the DQA experiments and by the QALD system, we use the classic information retrieval metrics of precision (P), recall (R), and F1. INLINEFORM0 measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given. INLINEFORM1 is the fraction of correct answer (parts) given divided by all correct ones in the gold answer, and INLINEFORM2 is the harmonic mean of INLINEFORM3 and INLINEFORM4 . As an example, if the question is “Where was Albert Einstein born?” (gold answer: “Ulm”), and the system gives two answers “Ulm” and “Bern”, then INLINEFORM5 , INLINEFORM6 and INLINEFORM7 .", "For DQA, four participants answered each question; therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers given by the participants are available online." ], [ "Table TABREF8 presents the overall evaluation metrics of DQA and the four QALD tools studied. With the given dataset, WDAqua (56.1% F1) and gAnswer (59.2% F1) clearly outperform askplatyp.us (8.6% F1) and QAKiS (27.5% F1). Detailed results per question, including the calculation of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 scores, are available online. DQA led to 79.5% F1 (80.1% precision and 78.5% recall).", "In further evaluations, we compare the DQA results to WDAqua in order to study the differences and potential complementary aspects of the approaches. We selected WDAqua as a representative of QALD tools, as it provides state-of-the-art results and is well grounded in the Semantic Web community.", "Comparing DQA and WDAqua, the first interesting question is: to what extent is DQA helpful on questions that could not be answered by the QALD system? For WDAqua the overall F1 score on our test dataset is INLINEFORM0 . For the subset of questions where WDAqua had no, or only a partial, answer, DQA users found the correct answer in INLINEFORM1 of cases. On the other hand, the subset of questions that DQA users (partially) failed to answer was answered correctly by WDAqua with an F1 of INLINEFORM2 . If DQA is used as a backup method for questions not correctly answered with WDAqua, then the overall F1 can be raised to INLINEFORM3 . The increase from INLINEFORM4 to INLINEFORM5 demonstrates the potential of DQA as a complementary component in QALD systems.", "As expected, questions that are difficult to answer with one approach are also harder for the other approach, as some questions in the dataset are simply more complex to process and understand than others. However, almost 70% of the questions not answered by WDAqua could still be answered by DQA. As an example of a case that is easier for one approach than the other, a question that DQA users could answer, but where WDAqua failed, is: “What is the name of the school where Obama's wife studied?”. This complex question formulation is hard for a machine to interpret correctly. In contrast to DQA, the QALD systems also struggled with “Who is the son of Sonny and Cher?”. This question needs a lot of real-world knowledge to map the names Sonny and Cher to their corresponding entities. The QALD system needs to select the correct Cher entity from multiple options in Wikidata, and also to understand that “Sonny” refers to the entity Sonny Bono. The resulting answer diagram is given in Figure FIGREF17 .
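As an aside, the precision, recall and F1 computation defined earlier in this section can be sketched as follows, treating both the system answer and the gold answer as sets of answer items. The function name and the set-based treatment are our own choices, not part of the original evaluation scripts.

```python
def prf(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    correct = len(predicted & gold)
    p = correct / len(predicted) if predicted else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# The Einstein example from above: gold answer "Ulm", system answers "Ulm" and "Bern".
# prf(["Ulm", "Bern"], ["Ulm"])  ->  (0.5, 1.0, 0.666...), i.e. P = 1/2, R = 1, F1 = 2/3
```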
More simple questions, like “Who is the mayor of Paris?” were correctly answered by WDAqua, but not by all DQA users. DQA participants in this case struggled to make the leap from the noun “mayor” to the head-of-government property in Wikidata.", "Regarding the limits of DQA, this method has difficulties when the answer can be obtained only with joins of queries, or when it is hard to find the initial starting entities related to question focus. For example, a question like “Show me the list of African birds that are extinct.” typically requires an intersection of two (large) sets of candidates entities, ie. all African birds and extinct birds. Such a task can easily be represented in a SPARQL query, but is hard to address with diagrams, because it would require placing, and interacting with, a huge amount of nodes on the exploration canvas.", "Overall, the experiments indicate, that additionally to the use cases where QALD and DQA are useful on their own, there is a lot of potential in combining the two approaches, especially by providing a user the opportunity to explore the dataset with DQA if QALD did not find a correct answer, or when a user wants to confirm the QALD answer by checking in the underlying knowledge base. Furthermore, visually exploring the dataset provides added benefits, like understanding the dataset characteristics, sharing of resulting diagrams (if supported by the tool), and finding more information related to the original information need.", "For the integration of QALD and DQA, we envision two scenarios. The first scenario addresses plain question answering, and here DQA can be added to a QALD system for cases where a user is not satisfied with a given answer. The QALD Web interface can for example have a Explore visually with diagrams button, which brings the user to a canvas on which the entities detected by the QALD system within the question and results (if any) are displayed on the canvas as starting nodes. The user will then explore the knowledge graph and find the answers in the same way as the participants in our experiments. The first scenario can lead to a large improvement in answer F1 (see above).", "The second scenario of integration of QALD and DQA focuses on the exploration aspect. Even if the QALD system provides the correct answer, a user might be interested to explore the knowledge graph to validate the result and to discover more interesting information about the target entities. From an implementation and UI point of view, the same Explore visually with diagrams button and pre-population of the canvas can be used. Both scenarios also provide the additional benefits of potentially saving and sharing the created diagrams, which elaborate the relation between question and answer." ], [ "In this work, we compare two approaches to answer questions over Linked Data datasets: a visual diagrammatic approach (DQA) which involves iterative exploration of the graph, and a natural language-based (QALD). The evaluations show, that DQA can be a helpful addition to pure QALD systems, both regarding evaluation metrics (precision, recall, and F1), and also for dataset understanding and further exploration. 
The contributions include: i) a comparative evaluation of four QALD tools and DQA on a dataset extracted from the QALD7 benchmark, ii) an investigation into the differences and potential complementary aspects of the two approaches, and iii) the proposition of integration scenarios for QALD and DQA.", "In future work we plan to study the integration of DQA and QALD, especially the aspect of automatically creating an initial diagram from a user query, in order to leverage the potential discussed above. We envision an integrated tool that uses QALD as the basic method to find an answer to a question quickly, but also allows the user to explore the knowledge graph visually to raise answer quality and support exploration with all its discussed benefits." ], [ "This work was supported by the Government of the Russian Federation (Grant 074-U01) through the ITMO Fellowship and Professorship Program." ] ] }
{ "question": [ "How do they measure performance?", "Do they measure the performance of a combined approach?", "Which four QA systems do they use?", "How many iterations of visual search are done on average until an answer is found?", "Do they test performance of their approaches using human judgements?" ], "question_id": [ "bdc6664cec2b94b0b3769bc70a60914795f39574", "e40df8c685a28b98006c47808f506def68f30e26", "9653c89a93ac5c717a0a26cf80e9aa98a5ccf910", "b921a1771ed0ba9dbeff9da000336ecf2bb38322", "412aff0b2113b7d61c914edf90b90f2994390088" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers by the participants and available online." ], "highlighted_evidence": [ "For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question." ] } ], "annotation_id": [ "112536f1599e1ce56c95a34ed0380a178c943b84" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "c58bd28ec93c12b1a7284f2cce1c2141a455b58c" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The same 36 questions were answered using four QALD tools: WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8 ." ], "highlighted_evidence": [ "The same 36 questions were answered using four QALD tools: WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8 ." ] } ], "annotation_id": [ "6b7fcc35cd56421312b9f13d1fe2bf835abe365a" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "9b3774718e9daf7fee2754aafe18f5145f17fd31" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. 
The detailed answers by the participants and available online.", "To assess the correctness of the answers given both by participants in the DQA experiments, and by the QALD system, we use the classic information retrieval metrics of precision (P), recall (R), and F1. INLINEFORM0 measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given. INLINEFORM1 is the faction of correct answer (parts) given divided by all correct ones in the gold answer, and INLINEFORM2 is the harmonic mean of INLINEFORM3 and INLINEFORM4 . As an example, if the question is “Where was Albert Einstein born?” (gold answer: “Ulm”), and the system gives two answers “Ulm” and “Bern”, then INLINEFORM5 , INLINEFORM6 and INLINEFORM7 ." ], "highlighted_evidence": [ "For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers by the participants and available online.", "To assess the correctness of the answers given both by participants in the DQA experiments, and by the QALD system, we use the classic information retrieval metrics of precision (P), recall (R), and F1. INLINEFORM0 measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given." ] } ], "annotation_id": [ "f0e134d719b049ee7e02c8f228f20b0de622292e" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: After placing the Wikidata entity Van Gogh onto the canvas, searching properties related to his “style” with Ontodia DQA tool.", "Figure 2: Answering the question: Who is the mayor of Paris?", "Table 1: Overall performance of DQA and the four QALD tools – measured with precision, recall and F1 score.", "Figure 3: Answering the question: Who is the son of Sonny and Cher? with DQA." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png", "5-Figure3-1.png" ] }
2001.02943
Binary and Multitask Classification Model for Dutch Anaphora Resolution: Die/Dat Prediction
The correct use of Dutch pronouns 'die' and 'dat' is a stumbling block for both native and non-native speakers of Dutch due to the multiplicity of syntactic functions and the dependency on the antecedent's gender and number. Drawing on previous research conducted on neural context-dependent dt-mistake correction models (Heyman et al. 2018), this study constructs the first neural network model for Dutch demonstrative and relative pronoun resolution that specifically focuses on the correction and part-of-speech prediction of these two pronouns. Two separate datasets are built with sentences obtained from, respectively, the Dutch Europarl corpus (Koehn 2015) - which contains the proceedings of the European Parliament from 1996 to the present - and the SoNaR corpus (Oostdijk et al. 2013) - which contains Dutch texts from a variety of domains such as newspapers, blogs and legal texts. Firstly, a binary classification model solely predicts the correct 'die' or 'dat'. The classifier with a bidirectional long short-term memory architecture achieves 84.56% accuracy. Secondly, a multitask classification model simultaneously predicts the correct 'die' or 'dat' and its part-of-speech tag. The model containing a combination of a sentence and context encoder with both a bidirectional long short-term memory architecture results in 88.63% accuracy for die/dat prediction and 87.73% accuracy for part-of-speech prediction. More evenly-balanced data, larger word embeddings, an extra bidirectional long short-term memory layer and integrated part-of-speech knowledge positively affects die/dat prediction performance, while a context encoder architecture raises part-of-speech prediction performance. This study shows promising results and can serve as a starting point for future research on machine learning models for Dutch anaphora resolution.
{ "section_name": [ "Introduction", "Related Work", "Dataset", "Preprocessing", "Binary Classification Model ::: Model Architecture", "Binary Classification Model ::: Experimental Set-Up", "Binary Classification Model ::: Results", "Multitask Classification Model ::: Model Architecture", "Multitask Classification Model ::: Experimental Set-up", "Multitask Classification Model ::: Results", "Discussion", "Conclusion" ], "paragraphs": [ [ "Following previous research on automatic detection and correction of dt-mistakes in Dutch BIBREF0, this paper investigates another stumbling block for both native and non-native speakers of Dutch: the correct use of die and dat. The multiplicity of syntactic functions and the dependency on the antecedent's gender and number make this a challenging task for both human and computer. The grammar concerning die and dat is threefold. Firstly, they can be used as dependent or independent demonstrative pronouns (aanwijzend voornaamwoord), with the first replacing the article before the noun it modifies and the latter being a noun phrase that refers to a preceding/following noun phrase or sentence. The choice between the two pronouns depends on the gender and number of the antecedent: dat refers to neuter, singular nouns and sentences, while die refers to masculine, singular nouns and plural nouns independent of their gender. Secondly, die and dat can be used as relative pronouns introducing relative clauses (betrekkelijk voornaamwoord), which provide additional information about the directly preceding antecedent it modifies. Similar rules as for demonstrative pronouns apply: masculine, singular nouns and plural nouns are followed by relative pronoun die, neuter singular nouns by dat. Lastly, dat can be used as a subordinating conjunction (onderschikkend voegwoord) introducing a subordinating clause. An brief overview of the grammar is given in Table TABREF1.", "The aim is to develop (1) a binary classification model that automatically detects, predicts and corrects die and dat instances in texts. Furthermore, the correct die/dat instance and the syntactic function of the predicted die and dat are jointly predicted in (2) a multitask classification model. Whereas research on neural-based, machine learning approaches for Dutch demonstrative and relative pronoun resolution - especially for die and dat - is to our knowledge non-existing, this project can serve as a starting point for further research on machine learning applications concerning Dutch subordinating conjunctions, demonstrative pronouns and relative pronouns." ], [ "The incentive for this research project is the detection and correction system for dt-mistakes in Dutch BIBREF0. For that task, a system with a context encoder - a bidirectional LSTM with attention mechanism - and verb encoder - of which the outputs are then fed to a feedforward neural network - has been developed to predict different verb suffixes. As mentioned above, this project explores the possibility of constructing a neural network system for correcting Dutch demonstrative and relative pronouns die and dat. The task is also called pronoun resolution or anaphora resolution. Anaphora resolution and pronoun prediction has been major research subjects in machine translation research. In BIBREF3, for example, the effect of multiple English coreference resolvers on the pronoun translation in English-Dutch machine translation system with deep transfer has been investigated. 
Niton, Morawiecki and Ogrodnizuk (2018) developed a fully connected network with three layers in combination with a sieve-based architecture for Polish coreference resolution BIBREF4. Not only in machine translation, but also in general much research has been conducted on machine learning approaches towards coreference resolution BIBREF5BIBREF6BIBREF7 and pronoun resolution BIBREF8, BIBREF9. However, little to no research has been conducted specifically on die/dat correction." ], [ "The datasets used for training, validation and testing contain sentences extracted from the Europarl corpus BIBREF1 and SoNaR corpus BIBREF2. The Europarl corpus is an open-source parallel corpus containing proceedings of the European Parliament. The Dutch section consists of 2,333,816 sentences and 53,487,257 words. The SoNaR corpus comprises two corpora: SONAR500 and SONAR1. The SONAR500 corpus consists of more than 500 million words obtained from different domains. Examples of text types are newsletters, newspaper articles, legal texts, subtitles and blog posts. All texts except for texts from social media have been automatically tokenized, POS tagged and lemmatized. It contains significantly more data and more varied data than the Europarl corpus. Due to the high amount of data in the corpus, only three subparts are used: Wikipedia texts, reports and newspaper articles. These subparts are chosen because the number of wrongly used die and dat is expected to be low." ], [ "The sentences in the Europarl corpus are tokenized and parsed using the Dutch version of TreeTagger BIBREF10. Only sentences which contain at least one die or dat are extracted from the corpora. Subsequently, each single occurrence of die and dat is detected and replaced by a unique token ('PREDICT'). When there are multiple occurrences in one sentence, only one occurrence is replaced at a time. Consequently, a sentence can appear multiple times in the training and test dataset with the unique token for die and dat at a different place in the sentence. Each sentence is paired with its automatically assigned ground truth label for die and dat. The Europarl dataset, on the one hand, contains 70,057 dat-labeled and 33,814 die-labeled sentences. The resulting train and test sets consist of 103,871 (Europarl) and 1,269,091 (SoNaR) sentences. The SoNaR dataset, on the other hand, has more than ten times the number of labeled sentences with 736,987 dat-labeled and 532,104 die-labeled. Considering the imbalance in both datasets, it may be argued that dat occurs more frequently than die due to its syntactic function as subordinating conjunction and not to its use as demonstrative pronoun whereas it can only refer to singular, neutral nouns. As for the multitask classification model, the POS tags for die and dat present in the SoNaR corpus are extracted and stored as ground truth labels: 407,848 subordinating conjunction, 387,292 relative pronoun and 473,951 demonstrative pronoun. From a brief qualitative assessment on the POS tags for die and dat in both corpora, the POS tags in the SoNaR corpus appear to be more reliable than the POS tags generated by TreeTagger in the Europarl corpus. Therefore, only the SoNaR dataset is used for the multitask classification. An overview of the datasets after preprocessing is given in Table TABREF2." ], [ "For the binary classification model that predicts the correct die or dat for each sentence, a Bidirectional Long-Short Term Memory (BiLSTM) neural network is computed. 
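A minimal sketch of such a binary classifier is given below; it follows the description in this section (trainable embeddings initialized with the 100-dimensional Word2Vec vectors, one bidirectional LSTM, max-pooling, a linear layer with softmax, dropout of 0.5 in the embedding and linear layers, and SGD training with learning rate 0.01 and momentum 0.9). The use of PyTorch and the LSTM hidden size are our assumptions, as neither is specified in the text.

```python
import torch
import torch.nn as nn

class DieDatClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)   # trainable, initialized from Word2Vec
        self.emb_dropout = nn.Dropout(0.5)                   # dropout in the embedding layer
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.lin_dropout = nn.Dropout(0.5)                   # dropout before the linear layer
        self.linear = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                            # token_ids: (batch, seq_len)
        x = self.emb_dropout(self.embedding(token_ids))
        h, _ = self.bilstm(x)                                # (batch, seq_len, 2 * hidden_dim)
        pooled, _ = torch.max(h, dim=1)                      # max-pooling over time
        logits = self.linear(self.lin_dropout(pooled))
        return torch.log_softmax(logits, dim=-1)             # distribution over {dat, die}

# Training as described: batches of 128, 24 epochs, two-class cross-entropy,
# SGD with lr = 0.01 and momentum = 0.9.
# model = DieDatClassifier(vocab_size=50_000)
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# loss_fn = nn.NLLLoss()   # with log-softmax outputs, this equals binary cross-entropy over two classes
```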
Whereas the antecedent can be rather distant from the relative or demonstrative pronoun due to adjectives and sentence boundaries, an LSTM architecture is chosen over a regular Recurrent Neural Network while the latter does not cope well with learning non-trivial long-distance dependencies BIBREF11. Furthermore, a bidirectional LSTM is chosen over a single left-to-right LSTM, whereas the antecedent can be either before or after the die or dat. The architecture of the binary classification model is provided in Fig. FIGREF7. The input sentence is first sent through an embedding layer where each token is transformed to a 100-dimensional word embedding which have been initially trained on the dataset of sentences containing at least one die or dat using the Word2Vec Skip-gram model BIBREF12. The weights of the embedding layer are trainable. The word embeddings are then sent through a BiLSTM layer. The bidirectional LSTM concatenates the outputs of two LSTMs: the left-to-right $LSTM_{forward}$ computes the states $\\overrightarrow{h_1}..\\overrightarrow{h_N}$ and the right-to-left $LSTM_{backward}$ computes the states $\\overleftarrow{h_N}..\\overleftarrow{h_1}$. This means that at time $t$ for input $x$, represented by its word embedding $E(x)$, the bidirectional LSTM outputs the following:", "The concatenated output is then sent through a maxpooling layer, linear layer and, eventually, a softmax layer to get a probability distribution over the two classes. In order to prevent the model from overfitting and co-adapting too much, dropout regularization is implemented in the embedding layer and the linear layer. In both layers, dropout is set to $p = 0.5$ which randomly zeroes out nodes in the layer using samples from a Bernoulli distribution." ], [ "Each dataset is randomly divided into a training (70%), validation (15%) and test set (15%). The data is fed to the model in batches of 128 samples and reshuffled at every epoch. The objective function that is minimized is Binary Cross-Entropy:", "where $y_i$ is the ground truth label (0 for dat and 1 for die) and $p(\\hat{y}_i)$ is the probability of the predicted label for all $N$ input sentences of the train set. The weights are optimized by Stochastic Gradient Descent with learning rate = 0.01 and momentum = 0.9. The data is fed to the model in 24 epochs." ], [ "An overview of the performance results is given in Table TABREF11. We compare model performance when trained and tested on the two corpora individually and experiment with different settings of the two corpora in order to investigate the effect of dataset changes on model performance. There are three settings: full in which the datasets contain full sentences, windowed in which sentences are windowed around the unique prediction token without exceeding sentence boundaries (five tokens before and after the token, including token), and windowed no_boundaries in which the windows can exceed sentence boundaries. When limiting the input sentences to windowed sentences in the Europarl corpus(2), model performance increases significantly on all metrics, especially for die prediction performance. The difference in model performance when trained and tested on the Europarl (2) and SoNaR (3) windowed datasets is particularly noticeable in the precision, recall and F1 scores. Model performance for dat prediction is better for the Europarl dataset than for the SoNaR dataset, while model performance for die prediction is notably better for the SoNaR dataset than for the Europarl dataset. 
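For clarity, the three dataset settings compared here (full, windowed and windowed no_boundaries) can be illustrated with a small hypothetical helper. The exact interpretation of the five-token window and the sentence-boundary test below are our own reading of the description, not the authors' preprocessing code.

```python
def make_instance(tokens, predict_index, setting="windowed", size=5, end_marks=(".", "!", "?")):
    """Return the input around the 'PREDICT' placeholder for one of the three settings."""
    if setting == "full":
        return tokens
    left = max(predict_index - size, 0)
    right = min(predict_index + size + 1, len(tokens))
    if setting == "windowed":                       # clip the window at sentence boundaries
        for i in range(predict_index - 1, left - 1, -1):
            if tokens[i] in end_marks:              # previous sentence ends here
                left = i + 1
                break
        for i in range(predict_index + 1, right):
            if tokens[i] in end_marks:              # current sentence ends here
                right = i + 1
                break
    return tokens[left:right]                       # 'windowed no_boundaries' keeps the raw window
```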
Lastly, a change in windowing seems to have a positive impact on the overall model performance: the model trained and tested on the SoNaR dataset with windows exceeding sentence boundaries (3) outperforms that on the SoNaR dataset with windows within sentence boundaries (4) on every metric." ], [ "The second model performs two prediction tasks. The first prediction task remains the binary classification of die and dat. The second prediction task concerns the prediction of three parts-of-speech (POS) or word classes, namely subordinating conjunction, relative pronoun and demonstrative pronoun. An overview of the model architectures is given in Fig. FIGREF13. For the BiLSTM model, the first layer is the embedding layer where the weights are initialized by means of the 200-dimensional pre-trained embedding matrix. The weights are updated after every epoch. The second layer consists of two bidirectional LSTMs where the output of the first bidirectional LSTM serves as input to the second bidirectional LSTM. The layer has dropout regularization equal to 0.2. The two-layer bidirectional LSTM concatenates the outputs at time $t$ into a 64-dimensional vector and sends it through a maxpooling layer. Until this point, the two task share the same parameters. The model than splits into two separate linear layers. The left linear layer transforms the 64-dimensional vector to a two-dimensional vector on which the softmax is computed. The softmax outputs the probability distribution over the dat and die labels. The right linear layer transforms the 64-dimensional vector to a three-dimensional vector on which the softmax is computed as well. The softmax outputs the probability distribution over the subordinating conjunction, relative pronoun and demonstrative pronoun labels. The second multitask classification model takes the immediate context around the 'PREDICT' token as additional input. Both the windowed sentence and context are first transformed into their word embedding representations. They are, then, separately sent through a sentence encoder and context encoder, respectively. The sentence encoder has the same architecture as the second and third layer of the BiLSTM model, namely a two-layer bidirectional LSTM and a maxpooling layer. For the context encoder, we experiment with two different architectures: a feedforward neural network and a one-layer bidirectional LSTM with dropout = 0.2 with a maxpooling layer on top. Both sentence and context encoder output a 64-dimensional vector which are, consequently, concatenated to a 128-dimensional vector. As in the BiLSTM model, the resulting vector is sent through two separate linear layers to output probability distributions for both the die/dat and POS prediction task." ], [ "As discussed in Section SECREF4, the POS ground truth labels in SoNaR-based datasets are more reliable than the POS labels in the Europarl-based datasets that are generated by TreeTagger. Consequently, only the SoNaR dataset is used for training and testing. The dataset is randomly divided into a training (70%), validation (15%) and test (15%) set. The data is fed into the model in batches of 516 samples and the data is reshuffled at every epoch. For die/dat prediction, the Binary Cross-Entropy loss function is minimized. The weights are optimized by Stochastic Gradient Descent with learning rate = 0.01 and momentum = 0.9. 
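For reference, a compact sketch of the multitask architecture with both a sentence and a context encoder is given below. The 200-dimensional embeddings, the 64-dimensional encoder outputs, the dropout of 0.2 and the two- and three-way output heads follow the description above; PyTorch, the shared embedding layer and the per-direction hidden sizes are our assumptions.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Bidirectional LSTM encoder followed by max-pooling over time."""
    def __init__(self, emb_dim, out_dim, num_layers):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, out_dim // 2, num_layers=num_layers,
                              batch_first=True, bidirectional=True,
                              dropout=0.2 if num_layers > 1 else 0.0)  # inter-layer dropout only

    def forward(self, x):                                  # x: (batch, seq_len, emb_dim)
        h, _ = self.bilstm(x)                              # (batch, seq_len, out_dim)
        return torch.max(h, dim=1).values                  # (batch, out_dim)

class MultitaskDieDat(nn.Module):
    def __init__(self, vocab_size, emb_dim=200, enc_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)      # initialized from the pre-trained matrix
        self.sentence_encoder = Encoder(emb_dim, enc_dim, num_layers=2)
        self.context_encoder = Encoder(emb_dim, enc_dim, num_layers=1)
        self.die_dat_head = nn.Linear(2 * enc_dim, 2)           # die vs dat
        self.pos_head = nn.Linear(2 * enc_dim, 3)               # subord. conj. / rel. pron. / dem. pron.

    def forward(self, sentence_ids, context_ids):
        s = self.sentence_encoder(self.embedding(sentence_ids))
        c = self.context_encoder(self.embedding(context_ids))
        joint = torch.cat([s, c], dim=-1)                       # (batch, 2 * enc_dim)
        return self.die_dat_head(joint), self.pos_head(joint)   # logits for both prediction tasks
```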
For POS prediction, the Cross-Entropy loss $-\sum_{i=1}^{N} \sum_{c=1}^{C} y_{i,c} \log (p_{i,c})$ is minimized,", "where $C$ is the number of classes (three in this case), $y_{i,c}$ is the binary indicator (0 or 1) of whether class label $c$ is the correct classification for input sentence $i$, and $p_{i,c}$ is the predicted probability of sentence $i$ having class label $c$. The weights are optimized using the Adam optimizer with learning rate = 0.0001. The data is fed to the model in 35 epochs." ], [ "An overview of the performance results for die/dat prediction is given in Table TABREF19 . The same dataset settings as for the binary classification model are used: full, in which the datasets contain full sentences; windowed, in which sentences are windowed around the unique prediction token without exceeding sentence boundaries (five tokens before and after the token, including the token); and windowed no_boundaries, in which the windows can exceed sentence boundaries. As mentioned in Section SECREF4, we only use the SoNaR dataset. The multitask classification models generally perform better with the windowed no_boundaries dataset setting. Concerning the model architectures, it can be concluded that altering the model architecture has no large impact on model performance for die/dat prediction. However, moving from an architecture with merely a sentence encoder to an architecture with both a sentence and a context encoder does have a more significant positive impact on model performance for POS prediction (Table TABREF20). For that prediction task, the multitask classification model with a bidirectional LSTM context encoder trained and tested on windowed SoNaR sentences reaches the best performance on almost all evaluation metrics." ], [ "In Section SECREF5, a first classification model based on neural networks is built to predict die and dat labels. The binary classification model consists of an embedding layer, a bidirectional LSTM, a maxpooling layer and a linear layer. The softmax is taken over the output of the last layer and provides a probability distribution over the die and dat prediction labels. Each sentence receives the prediction label with the highest probability. The model is trained, validated and tested four times using four different dataset settings. From an analysis of the performance metric results, several conclusions can be drawn. Firstly, in all cases, the model appears to predict the dat label more precisely than the die label. This may be caused by the higher number of dat than die instances in the training, validation and test datasets extracted from the Europarl and SoNaR corpora. Secondly, when the dataset is more balanced, as in the SoNaR corpus, the difference in performance between the die and dat labels decreases, as expected. Thirdly, die/dat prediction performance increases when the window over the sentences is not limited to sentence boundaries (SoNaR windowed, no_boundaries). A probable reason for this higher performance is the model's ability to detect antecedents in the preceding or following sentence, which it cannot exploit when it is trained and tested on boundary-constrained windowed sentences (SoNaR windowed). Lastly, performance drops significantly when the binary classification model is trained and tested on full sentences (Europarl full). In conclusion, the binary classification model performs best when it is trained on the larger, more evenly balanced SoNaR corpus and on windowed sentences that are not limited to sentence boundaries.
A clear performance overview of the best performing binary classification and multitask classification models for die/dat prediction can be found in Table TABREF21.", "In Section SECREF6, several multitask classification models are constructed to jointly execute two prediction tasks: die/dat prediction and POS prediction. The BiLSTM multitask classification model consists of an embedding layer, two consecutive bidirectional LSTMs and a maxpooling layer. The output of the maxpooling layer is used as input to two separate linear layers, each followed by a softmax layer. The two softmax layers yield probability distributions for the die/dat and POS labels, respectively. The model trained and tested on windowed SoNaR sentences that exceed sentence boundaries performs better than the models trained on boundary-constrained windowed sentences and on full sentences. The best performing BiLSTM multitask classification model (Model 2) outperforms the best binary classification model (Model 1) on every evaluation metric for die/dat prediction. This could arguably be due to the increased batch size, the doubled embedding dimension, the extra bidirectional LSTM layer, the influence of the second prediction task and/or the split into a sentence and a context encoder. Firstly, the data is divided into batch sizes of 512 instead of 128. Table TABREF22 shows, however, that there is little consistent difference in performance when the batch size is 512 or 128. Therefore, it can be suggested that an increased batch size has no direct positive influence on model performance. Secondly, the input data is transformed into 200-dimensional word embeddings instead of 100-dimensional word embeddings. From the results displayed in Table TABREF22, it appears that a change in word embedding dimension could be causing a slight increase in model performance. Thirdly, the multitask model contains two bidirectional LSTM layers, as opposed to the binary model, which has only one layer. Table TABREF23 shows the influence of the number of layers on the performance of the binary classification model. When the binary classification model has an additional bidirectional LSTM layer, all the evaluation metrics rise by approximately 2%. However, when the binary classification model has three bidirectional LSTM layers, model performance drops significantly. It appears that the doubled number of layers is indeed one of the reasons why the multitask classification models perform better than the binary classification model. However, not every increase in the number of layers necessarily influences a model's performance in a positive manner. Concerning the influence of the POS prediction task on die/dat prediction performance and syntactic knowledge in general, a comparison between a two-layer bidirectional LSTM binary classification model and the two-layer bidirectional LSTM multitask classification model is made and displayed in Table TABREF24. It seems that the integration of POS knowledge positively influences die/dat prediction performance, as all evaluation metrics increase. To examine the influence of a context encoder on die/dat prediction performance (Models 3 and 4), the evaluation metrics of Models 2, 3 and 4 are compared. The metric scores are fairly similar, which leads to the conclusion that the addition of a context encoder has little to no further influence on die/dat prediction performance.
Moreover, the encoder architecture does not cause a considerable difference in die/dat prediction performance between the model with a feedforward context encoder (Model 3) and the model with a bidirectional LSTM context encoder (Model 4). It can thus be suggested that a model does not necessarily profit from a different encoder architecture and that an extra focus on the immediate context is not additionally advantageous for the die/dat prediction task.", "In contrast to its little to no impact on die/dat prediction performance, the context encoder - especially the bidirectional LSTM context encoder - does have a direct positive impact on POS prediction performance. The difference in POS prediction performance between the three multitask prediction models can be found in Table TABREF25. The model with the bidirectional LSTM context encoder outperforms the other two multitask classification models on every evaluation metric. Considering its highest POS prediction performance and high die/dat prediction performance, we suggest that the multitask prediction model with a bidirectional LSTM context encoder (Model 4) is the overall best model." ], [ "Deciding which pronoun to use in various contexts can be a complicated task. The correct use of die and dat as Dutch pronouns entails knowing the antecedent and - if the antecedent is a noun - its grammatical gender and number. We experimented with neural network models to examine whether die and dat instances in sentences can be computationally predicted and, if necessary, corrected. Our binary classification model reaches a promising 84.56 % accuracy. In addition, we extended that model to three multitask classification models that not only predict die and dat, but also predict the POS (demonstrative pronoun, relative pronoun and subordinating conjunction). By increasing the word embedding dimension, doubling the number of bidirectional LSTM layers and integrating POS knowledge into the model, the multitask classification models raise die/dat prediction performance by approximately 4 %. Concerning POS prediction performance, the multitask classification model consisting of a sentence and a context encoder performs best on all evaluation metrics and reaches an accuracy of 87.78 %.", "There are ample opportunities to further analyze, enhance and/or extend the die/dat prediction model. A qualitative study of the learned model weights, for example, could provide more insight into the prediction mechanism of the models. We already obtain excellent results with a simple neural architecture comprising relatively few parameters. We believe that more complex architectures such as a transformer architecture BIBREF13 with multihead attention will improve results. It might also be interesting to look at the possibility of integrating a language model such as BERT BIBREF14 into the classification model (e.g., as pretrained embeddings). Moreover, the binary classification task could be extended to a multiclass classification task to predict not only die and dat labels, but also the respectively equivalent deze and dit labels. The difference between die/dat and deze/dit, however, entails a difference in temporal and spatial information: while die/dat indicates a physically near or earlier mentioned antecedent, deze/dit implies that the antecedent is physically distant or later mentioned in the text. That difference may possibly cause a prediction model to base its predictions on other tokens in a text." ] ] }
{ "question": [ "What are the sizes of both datasets?" ], "question_id": [ "010e3793eb1342225857d3f95e147d8f8467192a" ], "nlp_background": [ "" ], "topic_background": [ "" ], "paper_read": [ "" ], "search_query": [ "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "The Dutch section consists of 2,333,816 sentences and 53,487,257 words.", "The SONAR500 corpus consists of more than 500 million words obtained from different domains." ], "yes_no": null, "free_form_answer": "", "evidence": [ "The datasets used for training, validation and testing contain sentences extracted from the Europarl corpus BIBREF1 and SoNaR corpus BIBREF2. The Europarl corpus is an open-source parallel corpus containing proceedings of the European Parliament. The Dutch section consists of 2,333,816 sentences and 53,487,257 words. The SoNaR corpus comprises two corpora: SONAR500 and SONAR1. The SONAR500 corpus consists of more than 500 million words obtained from different domains. Examples of text types are newsletters, newspaper articles, legal texts, subtitles and blog posts. All texts except for texts from social media have been automatically tokenized, POS tagged and lemmatized. It contains significantly more data and more varied data than the Europarl corpus. Due to the high amount of data in the corpus, only three subparts are used: Wikipedia texts, reports and newspaper articles. These subparts are chosen because the number of wrongly used die and dat is expected to be low." ], "highlighted_evidence": [ "The datasets used for training, validation and testing contain sentences extracted from the Europarl corpus BIBREF1 and SoNaR corpus BIBREF2. The Europarl corpus is an open-source parallel corpus containing proceedings of the European Parliament. The Dutch section consists of 2,333,816 sentences and 53,487,257 words. The SoNaR corpus comprises two corpora: SONAR500 and SONAR1. The SONAR500 corpus consists of more than 500 million words obtained from different domains." ] } ], "annotation_id": [ "e6384d6727bc9ea8054643172c7cdd2424fa23e7" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Table 1: Grammar concerning die and dat", "Table 2: Overview of datasets", "Figure 1: Model architecture of the binary classification model", "Table 3: Performance results of the binary classification model on the Europarl dataset containing full sentences (1), the Europarl dataset containing windowed sentences within sentence boundaries (2), the SoNaR dataset containing windowed sentences within sentence boundaries (3) and the SoNaR dataset containing windowed sentences exceeding sentence boundaries (4).", "Figure 2: Overview of the two multitask classification model architectures", "Table 4: Performance of the three multitask classification models for die/dat prediction", "Table 5: Performance results of three multitask classification tasks for POS prediction: subordinating conjunction(sc), relative pronoun (rp) and demonstrative pronoun (dp)", "Table 6: Comparison of die/dat prediction performance between best performing binary classification model (model 1, SoNaR windowed, no boundaries), multitask classification model (model 2, SoNaR windowed, no boundaries), multitask classification model with feedforward context encoder (model 3, SoNaR windowed) and multitask classification model with bidirectional LSTM context encoder (model 4, SoNaR windowed)", "Table 7: The influence of batch size and embedding dimension on performance of the SoNaR-based, sentence-exceeding windowed trained multitask classification model (Model 2, SoNaR windowed, no boundaries)", "Table 8: The influence of number of layers on performance of the SoNaR-based, sentence-exceeding windowed trained binary classification model (Model 1, SoNaR windowed, no boundaries)", "Table 9: The influence of integrated POS knowledge on die/dat prediction performance. Comparison between Model 1 with an extra BiLSTM layer (No) and Model 2 (Yes), both trained and tested using SoNaR windowed, no boundaries dataset", "Table 10: Comparison of POS prediction performance between best performing multitask classification model (model 2, SoNaR windowed, no boundaries), multitask classification model with feedforward context encoder (model 3, SoNaR windowed) and multitask classification model with bidirectional LSTM context encoder (model 4, SoNaR windowed)" ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "4-Figure1-1.png", "5-Table3-1.png", "6-Figure2-1.png", "7-Table4-1.png", "8-Table5-1.png", "9-Table6-1.png", "10-Table7-1.png", "11-Table8-1.png", "11-Table9-1.png", "12-Table10-1.png" ] }
1911.04952
'Warriors of the Word' -- Deciphering Lyrical Topics in Music and Their Connection to Audio Feature Dimensions Based on a Corpus of Over 100,000 Metal Songs
We look into the connection between the musical and lyrical content of metal music by combining automated extraction of high-level audio features and quantitative text analysis on a corpus of 124,288 song lyrics from this genre. Based on this text corpus, a topic model was first constructed using Latent Dirichlet Allocation (LDA). For a subsample of 503 songs, scores for predicting perceived musical hardness/heaviness and darkness/gloominess were extracted using audio feature models. By combining both audio feature and text analysis, we (1) offer a comprehensive overview of the lyrical topics present within the metal genre and (2) are able to establish whether or not levels of hardness and other music dimensions are associated with the occurrence of particularly harsh (and other) textual topics. Twenty typical topics were identified and projected into a topic space using multidimensional scaling (MDS). After Bonferroni correction, positive correlations were found between musical hardness and darkness and textual topics dealing with 'brutal death', 'dystopia', 'archaisms and occultism', 'religion and satanism', 'battle' and '(psychological) madness', while there are negative associations with topics like 'personal life' and 'love and romance'.
{ "section_name": [ "Introduction", "Methodology", "Methodology ::: Text Corpus Creation and Cleaning", "Methodology ::: Topic Modelling via Latent Dirichlet Allocation", "Methodology ::: High-Level Audio Feature Extraction", "Methodology ::: Investigating the Connection between Audio and Text Features", "Results ::: Textual Topics", "Results ::: Correlations with Musical Dimensions", "Conclusion and Outlook" ], "paragraphs": [ [ "As audio and text features provide complementary layers of information on songs, a combination of both data types has been shown to improve the automatic classification of high-level attributes in music such as genre, mood and emotion BIBREF0, BIBREF1, BIBREF2, BIBREF3. Multi-modal approaches interlinking these features offer insights into possible relations between lyrical and musical information (see BIBREF4, BIBREF5, BIBREF6).", "In the case of metal music, sound dimensions like loudness, distortion and particularly hardness (or heaviness) play an essential role in defining the sound of this genre BIBREF7, BIBREF8, BIBREF9, BIBREF10. Specific subgenres – especially doom metal, gothic metal and black metal – are further associated with a sound that is often described as dark or gloomy BIBREF11, BIBREF12.", "These characteristics are typically not limited to the acoustic and musical level. In a research strand that has so far been generally treated separately from the audio dimensions, lyrics from the metal genre have come under relatively close scrutiny (cf. BIBREF13). Topics typically ascribed to metal lyrics include sadness, death, freedom, nature, occultism or unpleasant/disgusting objects and are overall characterized as harsh, gloomy, dystopian, or satanic BIBREF14, BIBREF13, BIBREF15, BIBREF16, BIBREF17.", "Until now, investigations on metal lyrics were limited to individual cases or relatively small corpora – with a maximum of 1,152 songs in BIBREF17. Besides this, the relation between the musical and the textual domain has not yet been explored. Therefore, we examine a large corpus of metal song lyrics, addressing the following questions:", "Which topics are present within the corpus of metal lyrics?", "Is there a connection between characteristic musical dimensions like hardness and darkness and certain topics occurring within the textual domain?" ], [ "In our sequential research design, the distribution of textual topics within the corpus was analyzed using latent Dirichlet allocation (LDA). This resulted in a topic model, which was used for a probabilistic assignment of topics to each of the song documents. Additionally, for a subset of these songs, audio features were extracted using models for high-level music dimensions. The use of automatic models for the extraction of both text as well as musical features allows for scalability as it enables a large corpus to be studied without depending on the process of manual annotation for each of the songs. The resulting feature vectors were then subjected to a correlation analysis. Figure FIGREF6 outlines the sequence of the steps taken in processing the data. The individual steps are explained in the following subsections." ], [ "For gathering the data corpus, a web crawler was programmed using the Python packages Requests and BeautifulSoup. In total, 152,916 metal music lyrics were extracted from www.darklyrics.com.", "Using Python’s langdetect package, all non-English texts were excluded. 
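A minimal sketch of this language filtering step could look as follows; the function name and the handling of undetectable texts are assumptions, not the original crawler code:

```python
from langdetect import detect

def keep_english(lyrics):
    english = []
    for text in lyrics:
        try:
            if detect(text) == 'en':   # keep only texts classified as English
                english.append(text)
        except Exception:              # langdetect raises an error on empty or odd input
            continue
    return english
```
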
With the help of regular expressions, the texts were scanned for tokens indicating meta-information, which is not part of the actual lyrics. To this end, a list of stopwords referring to musical instruments or the production process (e.g. ‘recorded’, ‘mixed’, ‘arrangement by’, ‘band photos’) was defined in addition to common stopwords. After these cleaning procedures, 124,288 texts remained in the subsample. For text normalization, stemming and lemmatization were applied as further preprocessing steps." ], [ "We performed a LDA BIBREF18 on the remaining subsample to construct a probabilistic topic model. The LDA models were created by using the Python library Gensim BIBREF19. The lyrics were first converted to a bag-of-words format, and standard weighting of terms provided by the Gensim package was applied.", "Log perplexity BIBREF20 and log UMass coherence BIBREF21 were calculated as goodness-of-fit measures evaluating topic models ranging from 10 to 100 topics. Considering these performance measures as well as qualitative interpretability of the resulting topic models, we chose a topic model including 20 topics – an approach comparable with BIBREF22. We then examined the most salient and most typical words for each topic.", "Moreover, we used the ldavis package to analyze the structure of the resulting topic space BIBREF23. In order to do so, the Jensen-Shannon divergence between topics was calculated in a first step. In a second step, we applied multidimensional scaling (MDS) to project the inter-topic distances onto a two-dimensional plane. MDS is based on the idea of calculating dissimilarities between pairs of items of an input matrix while minimizing the strain function BIBREF24. In this case, the closer the topics are located to one another on the two-dimensional plane, the more they share salient terms and the more likely a combination of these topics appear in a song." ], [ "The high-level audio feature models used had been constructed in previous examinations BIBREF25, BIBREF26. In those music perception studies, ratings were obtained for 212 music stimuli in an online listening experiment by 40 raters.", "2", "Based on this ground truth, prediction models for the automatic extraction of high-level music dimensions – including the concepts of perceived hardness/heaviness and darkness/gloominess in music – had been trained using machine learning methods. In a second step, the model obtained for hardness had been evaluated using further listening experiments on a new unseen set of audio stimuli BIBREF26. The model has been refined against this backdrop, resulting in an $R^2$ value of 0.80 for hardness/heaviness and 0.60 for darkness/gloominess using five-fold cross-validation.", "The resulting models embedded features implemented in LibROSA BIBREF27, Essentia BIBREF28 as well as the timbral models developed as part of the AudioCommons project BIBREF29." ], [ "Finally, we drew a random sample of 503 songs and used Spearman's $\\rho $ to identify correlations between the topics retrieved and the audio dimensions obtained by the high-level audio feature models. We opted for Spearman’s $\\rho $ since it does not assume normal distribution of the data, is less prone to outliers and zero-inflation than Pearson’s $r$. Bonferroni correction was applied in order to account for multiple-testing." ], [ "Table TABREF10 displays the twenty resulting topics found within the text corpus using LDA. 
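For concreteness, the core of the Gensim pipeline behind this model, as described in the methodology, could be sketched as follows; the toy corpus stands in for the cleaned, stemmed and lemmatized lyrics, and all variable names are assumptions:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

# toy stand-in for the cleaned, tokenized lyrics corpus
tokenized_lyrics = [['dark', 'night', 'blood'], ['love', 'heart', 'night'],
                    ['war', 'blood', 'steel'], ['heart', 'fire', 'love']] * 50

dictionary = Dictionary(tokenized_lyrics)
bow_corpus = [dictionary.doc2bow(doc) for doc in tokenized_lyrics]

lda = LdaModel(corpus=bow_corpus, id2word=dictionary, num_topics=20,
               passes=10, random_state=0)

# goodness-of-fit measures used to compare models with 10 to 100 topics
log_perplexity = lda.log_perplexity(bow_corpus)
umass = CoherenceModel(model=lda, corpus=bow_corpus,
                       coherence='u_mass').get_coherence()

for topic_id, terms in lda.show_topics(num_topics=20, num_words=10, formatted=False):
    print(topic_id, [word for word, _ in terms])
```
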
The topics are numbered in descending order according to their prevalence (weight) in the text corpus. For each topic, a qualitative interpretation is given along with the 10 most salient terms.", "The salient terms of the first topic – and in parts also the second – appear relatively generic, as terms like e.g. ‘know’, ‘never’, and ‘time’ occur in many contexts. However, the majority of the remaining topics reveal distinct lyrical themes described as characteristic of the metal genre. ‘Religion & satanism’ (topic #5) and descriptions of ‘brutal death’ (topic #7) can be considered typical of black metal and death metal respectively, whereas ‘battle’ (topic #6), ‘landscape & journey’ (topic #11), ‘struggle for freedom’ (topic #12), and ‘dystopia’ (topic #15) are associated with power metal and other metal subgenres.", "This is highlighted in detail in Figure FIGREF11. Here, the topic distributions for two exemplary bands contained within the sample are presented. For these heat maps, data has been aggregated over individual songs, showing the topic distribution at the level of albums over a band’s history. The examples chosen illustrate the dependence between textual topics and musical subgenres. For the band Manowar, which is associated with the genre of heavy metal, power metal or true metal, a prevalence of topic #6 (‘battle’) can be observed, while a distinctive prevalence of topic #7 (‘brutal death’) becomes apparent for Cannibal Corpse – a band belonging to the subgenre of death metal.", "Within the topic configuration obtained via multidimensional scaling (see Figure FIGREF12), two latent dimensions can be identified. The first dimension (PC1) distinguishes topics with more common wordings on the right-hand side from topics with less common wording on the left-hand side. This also correlates with the weight of the topics within the corpus. The second dimension (PC2) is characterized by a contrast between transcendent and sinister topics dealing with occultism, metaphysics, satanism, darkness, and mourning (#9, #3, #5, #13, and #16) at the top and comparatively shallow content dealing with personal life and Rock’n’Roll lifestyle using a rather mundane or vulgar vocabulary (#1, #8, and #19) at the bottom. This contrast can be interpreted as ‘otherworldliness / individual-transcending narratives’ vs. ‘worldliness / personal life’." ], [ "In the final step of our analysis, we calculated the association between the twenty topics discussed above and the two high-level audio features hardness and darkness using Spearman's $\rho $. The results are visualized in Figure FIGREF13 and the $\rho $ values are listed in Table TABREF10.", "Significant positive associations can be observed between musical hardness and the topics ‘brutal death’, ‘dystopia’, ‘archaisms & occultism’, ‘religion & satanism’, and ‘battle’, while it is negatively linked to relatively mundane topics concerning ‘personal life’ and ‘love & romance’. The situation is similar for dark/gloomy sounding music, which in turn is specifically related to themes such as ‘dystopia’ and ‘(psychological) madness’. Overall, the strength of the associations is moderate at best, with a tendency towards higher associations for hardness than for darkness. The strongest association exists between hardness and the topic ‘brutal death’ ($\rho = 0.267$, $p < 0.01$)."
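, "The correlation step described above can be sketched as follows; the arrays here are random placeholders for the per-song topic weights and the hardness/darkness scores, and the variable names are assumptions:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
topic_weights = rng.dirichlet(np.ones(20), size=503)   # placeholder LDA topic weights
hardness = rng.random(503)                             # placeholder audio feature scores
darkness = rng.random(503)

n_tests = 2 * topic_weights.shape[1]      # 20 topics x 2 audio dimensions
alpha = 0.05 / n_tests                    # Bonferroni-corrected level (0.00125)

for name, audio in [('hardness', hardness), ('darkness', darkness)]:
    for k in range(topic_weights.shape[1]):
        rho, p = spearmanr(topic_weights[:, k], audio)
        if p < alpha:
            print(f'{name} vs. topic #{k + 1}: rho={rho:.3f}, p={p:.5f}')
```

With the placeholder data nothing is expected to pass the corrected threshold; on the real feature matrices this loop reproduces the analysis summarized above."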
], [ "Applying the example of metal music, our work examined the textual topics found in song lyrics and investigated the association between these topics and high-level music features. By using LDA and MDS in order to explore prevalent topics and the topic space, typical text topics identified in qualitative analyses could be confirmed and objectified based on a large text corpus. These include e.g. satanism, dystopia or disgusting objects. It was shown that musical hardness is particularly associated with harsh topics like ‘brutal death’ and ‘dystopia’, while it is negatively linked to relatively mundane topics concerning personal life and love. We expect that even stronger correlations could be found for metal-specific topics when including more genres covering a wider range of hardness/darkness values.", "Therefore, we suggest transferring the method to a sample including multiple genres. Moreover, an integration with metadata such as genre information would allow for the testing of associations between topics, genres and high-level audio features. This could help to better understand the role of different domains in an overall perception of genre-defining attributes such as hardness." ] ] }
{ "question": [ "Why are the scores for predicting perceived musical hardness and darkness extracted only for subsample of 503 songs?", "How long is the model trained?", "What are lyrical topics present in the metal genre?" ], "question_id": [ "c20bb0847ced490a793657fbaf6afb5ef54dad81", "ff8557d93704120b65d9b597a4fab40b49d24b6d", "447eb98e602616c01187960c9c3011c62afd7c27" ], "nlp_background": [ "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "3aa3494e466955e4dafe6bcc39b0b8860c76fcd9" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "113b8e3d7f892dd6d1686662021e9a13987eb99f" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Table TABREF10 displays the twenty resulting topics" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Table TABREF10 displays the twenty resulting topics found within the text corpus using LDA. The topics are numbered in descending order according to their prevalence (weight) in the text corpus. For each topic, a qualitative interpretation is given along with the 10 most salient terms.", "FLOAT SELECTED: Table 1: Overview of the resulting topics found within the corpus of metal lyrics (n = 124,288) and their correlation to the dimensions hardness and darkness obtained from the audio signal (see section 3.2)" ], "highlighted_evidence": [ "Table TABREF10 displays the twenty resulting topics found within the text corpus using LDA.", "FLOAT SELECTED: Table 1: Overview of the resulting topics found within the corpus of metal lyrics (n = 124,288) and their correlation to the dimensions hardness and darkness obtained from the audio signal (see section 3.2)" ] } ], "annotation_id": [ "1904a06a5673a96187e2255ef65b9b6d7970150f" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: Processing steps of the approach illustrating the parallel analysis of text and audio features", "Table 1: Overview of the resulting topics found within the corpus of metal lyrics (n = 124,288) and their correlation to the dimensions hardness and darkness obtained from the audio signal (see section 3.2)", "Figure 2: Comparison of the topic distributions for all included albums by the bands Manowar and Cannibal Corpse showing a prevalence of the topics ‘battle’ and ‘brutal death’ respectively", "Figure 3: Topic configuration obtained via multidimensional scaling. The radius of the circles is proportional to the percentage of tokens covered by the topics (topic weight).", "Figure 4: Correlations between lyrical topics and the musical dimensions hardness and darkness; ∗: p < 0.05, ∗∗: p < 0.00125 (Bonferroni-corrected significance level)" ], "file": [ "2-Figure1-1.png", "3-Table1-1.png", "4-Figure2-1.png", "4-Figure3-1.png", "5-Figure4-1.png" ] }
1910.00825
Abstractive Dialog Summarization with Semantic Scaffolds
The demand for abstractive dialog summaries is growing in real-world applications. For example, customer service centers or hospitals would like to summarize customer service interactions and doctor-patient interactions. However, few researchers have explored abstractive summarization on dialogs due to the lack of suitable datasets. We propose an abstractive dialog summarization dataset based on MultiWOZ. If we directly apply previous state-of-the-art document summarization methods on dialogs, there are two significant drawbacks: informative entities such as restaurant names are difficult to preserve, and the contents from different dialog domains are sometimes mismatched. To address these two drawbacks, we propose Scaffold Pointer Network (SPNet) to utilize the existing annotation of speaker role, semantic slot and dialog domain. SPNet incorporates these semantic scaffolds for dialog summarization. Since ROUGE cannot capture the two drawbacks mentioned, we also propose a new evaluation metric that considers critical informative entities in the text. On MultiWOZ, our proposed SPNet outperforms state-of-the-art abstractive summarization methods on all the automatic and human evaluation metrics.
{ "section_name": [ "Introduction", "Related Work", "Proposed Method", "Proposed Method ::: Background", "Proposed Method ::: Scaffold Pointer Network (SPNet)", "Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Speaker Role Scaffold", "Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Semantic Slot Scaffold", "Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Dialog Domain Scaffold", "Experimental Settings ::: Dataset", "Experimental Settings ::: Evaluation Metrics", "Experimental Settings ::: Implementation Details", "Results and Discussions ::: Automatic Evaluation Results", "Results and Discussions ::: Human Evaluation Results", "Results and Discussions ::: Case study", "Conclusion and Future Work" ], "paragraphs": [ [ "Summarization aims to condense a piece of text to a shorter version, retaining the critical information. On dialogs, summarization has various promising applications in the real world. For instance, the automatic doctor-patient interaction summary can save doctors' massive amount of time used for filling medical records. There is also a general demand for summarizing meetings in order to track project progress in the industry. Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents. Hence, dialog summarization will be a potential field in summarization track.", "There are two types of summarization: extractive and abstractive. Extractive summarization selects sentences or phrases directly from the source text and merges them to a summary, while abstractive summarization attempts to generate novel expressions to condense information. Previous dialog summarization research mostly study extractive summarization BIBREF1, BIBREF2. Extractive methods merge selected important utterances from a dialog to form summary. Because dialogs are highly dependant on their histories, it is difficult to produce coherent discourses with a set of non-consecutive conversation turns. Therefore, extractive summarization is not the best approach to summarize dialogs. However, most modern abstractive methods focus on single-speaker documents rather than dialogs due to the lack of dialog summarization corpora. Popular abstractive summarization dataset like CNN/Daily Mail BIBREF3 is on news documents. AMI meeting corpus BIBREF4 is the common benchmark, but it only has extractive summary.", "In this work, we introduce a dataset for abstractive dialog summarization based on MultiWOZ BIBREF0. Seq2Seq models such as Pointer-Generator BIBREF5 have achieved high-quality summaries of news document. However, directly applying a news summarizer to dialog results in two drawbacks: informative entities such as place name are difficult to capture precisely and contents in different domains are summarized unequally. To address these problems, we propose Scaffold Pointer Network (SPNet). SPNet incorporates three types of semantic scaffolds in dialog: speaker role, semantic slot, and dialog domain. Firstly, SPNet adapts separate encoder to attentional Seq2Seq framework, producing distinct semantic representations for different speaker roles. Then, our method inputs delexicalized utterances for producing delexicalized summary, and fills in slot values to generate complete summary. Finally, we incorporate dialog domain scaffold by jointly optimizing dialog domain classification task along with the summarization task. We evaluate SPNet with both automatic and human evaluation metrics on MultiWOZ. 
SPNet outperforms Pointer-Generator BIBREF5 and Transformer BIBREF6 on all the metrics." ], [ "BIBREF7 first applied modern neural models to abstractive summarization. Their approach is based on Seq2Seq framework BIBREF8 and attention mechanism BIBREF9, achieving state-of-the-art results on Gigaword and DUC-2004 dataset. BIBREF10 proposed copy mechanism in summarization, demonstrating its effectiveness by combining the advantages of extractive and abstractive approach. BIBREF5 applied pointing BIBREF11 as copy mechanism and use coverage mechanism BIBREF12 to discourage repetition. Most recently, reinforcement learning (RL) has been employed in abstractive summarization. RL-based approaches directly optimize the objectives of summarization BIBREF13, BIBREF14. However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias BIBREF15.", "Recently, pre-training methods are popular in NLP applications. BERT BIBREF16 and GPT BIBREF17 have achieved state-of-the-art performance in many tasks, including summarization. For instance, BIBREF18 proposed a method to pre-train hierarchical document encoder for extractive summarization. BIBREF19 proposed two strategies to incorporate a pre-trained model (GPT) to perform the abstractive summarizer and achieved a better performance. However, there has not been much research on adapting pre-trained models to dialog summarization.", "Dialog summarization, specifically meeting summarization, has been studied extensively. Previous work generally focused on statistical machine learning methods in extractive dialog summarization: BIBREF20 used skip-chain conditional random fields (CRFs) BIBREF21 as a ranking method in extractive meeting summarization. BIBREF22 compared support vector machines (SVMs) BIBREF23 with LDA-based topic models BIBREF24 for producing decision summaries. However, abstractive dialog summarization was less explored due to the lack of a suitable benchmark. Recent work BIBREF25, BIBREF26, BIBREF27 created abstractive dialog summary benchmarks with existing dialog corpus. BIBREF26 annotated topic descriptions in AMI meeting corpus as the summary. However, topics they defined are coarse, such as “industrial designer presentation\". They also proposed a model with a sentence-gated mechanism incorporating dialog acts to perform abstractive summarization. Moreover, BIBREF28 first built a model to summarize audio-visual meeting data with an abstractive method. However, previous work has not investigated the utilization of semantic patterns in dialog, so we explore it in-depth in our work." ], [ "As discussed above, state-of-the-art document summarizers are not applicable in conversation settings. We propose Scaffold Pointer Network (SPNet) based on Pointer-Generator BIBREF5. SPNet incorporates three types of semantic scaffolds to improve abstractive dialog summarization: speaker role, semantic slot and dialog domain." ], [ "We first introduce Pointer-Generator BIBREF5. It is a hybrid model of the typical Seq2Seq attention model BIBREF29 and pointer network BIBREF11. Seq2Seq framework encodes source sequence and generates the target sequence with the decoder. The input sequence is fed into the encoder token by token, producing the encoder hidden states $h_i$ in each encoding step. The decoder receives word embedding of the previous word and generates a distribution to decide the target element in this step, retaining decoder hidden states $s_t$. 
In Pointer-Generator, attention distribution $a^t$ is computed as in BIBREF9:", "where $W_h$, $W_s$, $v$ and $b_{attn}$ are all learnable parameters.", "With the attention distribution $a^t$, context vector $h_t^*$ is computed as the weighted sum of encoder's hidden states. Context vector is regarded as the attentional information in the source text:", "Pointer-Generator differs from typical Seq2Seq attention model in the generation process. The pointing mechanism combines copying words directly from the source text with generating words from a fixed vocabulary. Generation probability $p_{gen}$ is calculated as “a soft switch\" to choose from copy and generation:", "where $x_t$ is the decoder input, $w_{h^*}$, $w_s$, $w_x$ and $b_{ptr}$ are all learnable parameters. $\\sigma $ is sigmoid function, so the generation probability $p_{gen}$ has a range of $[0, 1]$.", "The ability to select from copy and generation corresponds to a dynamic vocabulary. Pointer network forms an extended vocabulary for the copied tokens, including all the out-of-vocabulary(OOV) words appeared in the source text. The final probability distribution $P(w)$ on extended vocabulary is computed as follows:", "where $P_{vocab}$ is the distribution on the original vocabulary, $V^{\\prime }$, $V$, $b$ and $b^{\\prime }$ are learnable parameters used to calculate such distribution." ], [ "Our Scaffold Pointer Network (depicted in Figure FIGREF7) is based on Pointer-Generator BIBREF5. The contribution of SPNet is three-fold: separate encoding for different roles, incorporating semantic slot scaffold and dialog domain scaffold." ], [ "Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ . The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:", "The pointing mechanism in our model follows the Equation DISPLAY_FORM4, and we obtain the context vector $h_t^{*}$:" ], [ "We integrate semantic slot scaffold by performing delexicalization on original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces the slot values with its semantic slot name(e.g. replace 18:00 with [time]). It is easier for the language modeling to process delexicalized texts, as they have a reduced vocabulary size. But these generated sentences lack the semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed single delexicalized utterance BIBREF31 as generated response. We propose to perform delexicalization in dialog summary, since delexicalized utterances can simplify dialog modeling. We fill the generated templates with slots with the copy and pointing mechanism.", "We first train the model with the delexicalized utterance. Attention distribution $a^t$ over the source tokens instructs the decoder to fill up the slots with lexicalized values:", "Note that $w_{slot}$ specifies the tokens that represents the slot name (e.g. [hotel_place], [time]). Decoder directly copies lexicalized value $value(w_i)$ conditioned on attention distribution $a_i^t$. If $w$ is not a slot token, then the probability $P(w)$ is calculated as Equation DISPLAY_FORM5." 
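, "To make the delexicalization step more concrete, it can be sketched roughly as follows; the helper name, some slot names and the example values are illustrative assumptions rather than the actual MultiWOZ preprocessing:

```python
def delexicalize(utterance, slot_values):
    # replace annotated slot values with their slot names, longest values first
    for slot, value in sorted(slot_values.items(), key=lambda kv: -len(kv[1])):
        utterance = utterance.replace(value, f'[{slot}]')
    return utterance

utterance = 'I booked Pizza Hut City Centre for 6 people at 18:45 on Sunday .'
slots = {'restaurant_name': 'Pizza Hut City Centre', 'people': '6',
         'time': '18:45', 'day': 'Sunday'}
print(delexicalize(utterance, slots))
# -> I booked [restaurant_name] for [people] people at [time] on [day] .
```

At generation time, SPNet fills these slot tokens back in with lexicalized values via the pointing mechanism described above."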
], [ "We integrate dialog domain scaffold through a multi-task framework. Dialog domain indicates different conversation task content, for example, booking hotel, restaurant and taxi in MultiWOZ dataset. Generally, the content in different domains varies so multi-domain task summarization is more difficult than single-domain. We include domain classification as the auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing domain probability $d$. The $i^{th}$ element $d_i$ in $d$ represents the probability of the $i^{th}$ domain:", "where $U$, $U^{\\prime }$, $b_{d}$ and $b_{d}^{\\prime }$ are all trainable parameters in the classifier. We denote the loss function of summarization as $loss_1$ and domain classification as $loss_2$. Assume target word at timestep $t$ is $w_t^{*}$, $loss_1$ is the arithmetic mean of the negative log likelihood of $w_t^{*}$ over the generated sequence:", "The domain classification task is a multi-label binary classification problem. We use binary cross entropy loss between the $i^{th}$ domain label $\\hat{d_i}$ and predict probability $d_i$ for this task:", "where $|D|$ is the number of domains. Finally, we reweight the classification loss with hyperparameter $\\lambda $ and the objective function is:" ], [ "We validate SPNet on MultiWOZ-2.0 dataset BIBREF0. MultiWOZ consists of multi-domain conversations between a tourist and a information center clerk on varies booking tasks or domains, such as booking restaurants, hotels, taxis, etc. There are 10,438 dialogs, spanning over seven domains. 3,406 of them are single-domain (8.93 turns on average) and 7,302 are multi-domain (15.39 turns on average). During MultiWOZ data collection, instruction is provided for crowd workers to perform the task. We use the instructions as the dialog summary, and an example data is shown in Table TABREF25. Dialog domain label is extracted from existing MultiWOZ annotation. In the experiment, we split the dataset into 8,438 training, 1,000 validation, and 1,000 testing." ], [ "ROUGE BIBREF32 is a standard metric for summarization, designed to measure the surface word alignment between a generated summary and a human written summary. We evaluate our model with ROUGE-1, ROUGE-2 and ROUGE-L. They measure the word-overlap, bigram-overlap, and longest common sequence between the reference summary and the generated summary respectively. We obtain ROUGE scores using the files2rouge package. However, ROUGE is insufficient to measure summarization performance. The following example shows its limitations:", "Reference: You are going to [restaurant_name] at [time].", "Summary: You are going to [restaurant_name] at.", "In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, for leaving out one of the most critical information: [time]. ROUGE treats each word equally in computing n-gram overlap while the informativeness actually varies: common words or phrases (e.g. “You are going to\") significantly contribute to the ROUGE score and readability, but they are almost irrelevant to essential contents. The semantic slot values (e.g. 
[restaurant_name], [time]) are more essential compared to other words in the summary. However, ROUGE did not take this into consideration. To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:", "where $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.", "CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain." ], [ "We implemented our baselines with OpenNMT framework BIBREF33. We delexicalize utterances according to the belief span annotation. To maintain the generalizability of SPNet, we combine the slots that refer to the same information from different dialog domains into one slot (e.g. time). Instead of using pre-trained word embeddings like GloVe BIBREF34, we train word embeddings from scratch with a 128-dimension embedding layer. We set the hidden states of the bidirectional LSTM encoders to 256 dimensions, and the unidirectional LSTM decoder to 512 dimension. Our model is optimized using Adam BIBREF35 with a learning rate of 0.001, $\\beta _1=0.9$, $\\beta _2=0.999$. We reduce the learning rate to half to avoid overfitting when the validation loss increases. We set the hyperparameter $\\lambda $ to 0.5 in the objective function and the batch size to eight. We use beam search with a beam size of three during decoding. We use the validation set to select the model parameter. Our model with and without multi-task takes about 15 epochs and seven epochs to converge, respectively." ], [ "To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. In inference, we use length penalty and coverage penalty mentioned in BIBREF36. The hyperparameters in the original implementation BIBREF5 were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation.", "We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. 
We observe that SPNet outperforms other methods in all the automatic evaluation metrics with a big margin, as it incorporates all the three semantic scaffolds. Semantic slot contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics." ], [ "We also perform human evaluation to verify if our method's increased performance on automatic evaluation metrics entails better human perceived quality. We randomly select 100 test samples from MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, reference summary, as well as summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).", "We present human evaluation results in Table TABREF27. In the scoring part, our model outperforms Pointer-Generator in all three evaluation metrics. SPNet scored better than Pointer-Generator on relevance and readability. All generated summaries are relatively concise; therefore, they score very similar in conciseness. Ground truth is still perceived as more relevant and readable than SPNet results. However, ground truth does not get a high absolute score. From the feedback of the evaluators, we found that they think that the ground truth has not covered all the necessary information in the conversation, and the description is not so natural. This motivates us to collect a dialog summarization dataset with high-quality human-written summaries in the future. Results in the ranking evaluation show more differences between different summaries. SPNet outperforms Pointer-Generator with a large margin. Its performance is relatively close to the ground truth summary." ], [ "Table TABREF25 shows an example summary from all models along with ground truth summary. We observe that Pointer-Generator ignores some essential fragments, such as the restaurant booking information (6 people, Sunday, 18:45). Missing information always belongs to the last several domains (restaurant in this case) in a multi-domain dialog. We also observe that separately encoding two speakers reduces repetition and inconsistency. For instance, Pointer-Generator's summary mentions “free wifi\" several times and has conflicting requirements on wifi. This is because dialogs has information redundancy, but single-speaker model ignores such dialog property.", "Our method has limitations. In the example shown in Table TABREF25, our summary does not mention the hotel name (Alexander Bed and Breakfast) and its address (517a Coldham Lane) referred in the source. It occurs because the ground truth summary doe not cover it in the training data. As a supervised method, SPNet is hard to generate a summary containing additional information beyond the ground truth. However, in some cases, SPNet can also correctly summarize the content not covered in the reference summary (see Table TABREF31 in Appendix).", "Furthermore, although our SPNet achieves a much-improved performance, the application of SPNet still needs extra annotations for semantic scaffolds. For a dialog dataset, speaker role scaffold is a natural pattern for modeling. Most multi-domain dialog corpus has the domain annotation. While for texts, for example news, its topic categorization such as sports or entertainment can be used as domain annotation. 
We find that semantic slot scaffold brings the most significant improvement, but it is seldom explicitly annotated. However, the semantic slot scaffold can be relaxed to any critical entities in the corpus, such as team name in sports news or professional terminology in a technical meeting." ], [ "We adapt a dialog generation dataset, MultiWOZ to an abstractive dialog summarization dataset. We propose SPNet, an end-to-end model that incorporates the speaker role, semantic slot and dialog domain as the semantic scaffolds to improve abstractive summary quality. We also propose an automatic evaluation metric CIC that considers semantic slot relevance to serve as a complementary metric to ROUGE. SPNet outperforms baseline methods in both automatic and human evaluation metrics. It suggests that involving semantic scaffolds efficiently improves abstractive summarization quality in the dialog scene.", "Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries." ] ] }
{ "question": [ "By how much does SPNet outperforms state-of-the-art abstractive summarization methods on evaluation metrics?", "What automatic and human evaluation metrics are used to compare SPNet to its counterparts?", "Is proposed abstractive dialog summarization dataset open source?", "Is it expected to have speaker role, semantic slot and dialog domain annotations in real world datasets?", "How does SPNet utilize additional speaker role, semantic slot and dialog domain annotations?", "What are previous state-of-the-art document summarization methods used?", "How does new evaluation metric considers critical informative entities?", "Is new evaluation metric extension of ROGUE?" ], "question_id": [ "f398587b9a0008628278a5ea858e01d3f5559f65", "d5f8707ddc21741d52b3c2a9ab1af2871dc6c90b", "58f3bfbd01ba9768172be45a819faaa0de2ddfa4", "73633afbefa191b36cca594977204c6511f9dad4", "db39a71080e323ba2ddf958f93778e2b875dcd24", "6da2cb3187d3f28b75ac0a61f6562a8adf716109", "c47e87efab11f661993a14cf2d7506be641375e4", "14684ad200915ff1e3fc2a89cb614e472a1a2854" ], "nlp_background": [ "zero", "zero", "zero", "zero", "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "SPNet vs best baseline:\nROUGE-1: 90.97 vs 90.68\nCIC: 70.45 vs 70.25", "evidence": [ "We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. We observe that SPNet outperforms other methods in all the automatic evaluation metrics with a big margin, as it incorporates all the three semantic scaffolds. Semantic slot contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics.", "FLOAT SELECTED: Table 1: Automatic evaluation results on MultiWOZ. We use Pointer-Generator as the base model and gradually add different semantic scaffolds." ], "highlighted_evidence": [ "We show all the models' results in Table TABREF24", "FLOAT SELECTED: Table 1: Automatic evaluation results on MultiWOZ. We use Pointer-Generator as the base model and gradually add different semantic scaffolds." 
] } ], "annotation_id": [ "d214c4bc382c51d8f0cd08b640a46c76afbbbd86" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "ROUGE and CIC", "relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We also perform human evaluation to verify if our method's increased performance on automatic evaluation metrics entails better human perceived quality. We randomly select 100 test samples from MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, reference summary, as well as summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).", "We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. We observe that SPNet outperforms other methods in all the automatic evaluation metrics with a big margin, as it incorporates all the three semantic scaffolds. Semantic slot contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics." ], "highlighted_evidence": [ "The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).", "We observe that SPNet reaches the highest score in both ROUGE and CIC" ] } ], "annotation_id": [ "87489cb800ee2bd74ed869331e049f50df8490cd" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "dd2e932f857b22b80622c71fdff3724951a7b2ef" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Not at the moment, but summaries can be additionaly extended with this annotations.", "evidence": [ "Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries." ], "highlighted_evidence": [ "We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries." 
] } ], "annotation_id": [ "658e80b812db9c136734de7fac04f01050ba7696" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Our encoder-decoder framework employs separate encoding for different speakers in the dialog.", "We integrate semantic slot scaffold by performing delexicalization on original dialogs.", "We integrate dialog domain scaffold through a multi-task framework." ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ . The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:", "We integrate semantic slot scaffold by performing delexicalization on original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces the slot values with its semantic slot name(e.g. replace 18:00 with [time]). It is easier for the language modeling to process delexicalized texts, as they have a reduced vocabulary size. But these generated sentences lack the semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed single delexicalized utterance BIBREF31 as generated response. We propose to perform delexicalization in dialog summary, since delexicalized utterances can simplify dialog modeling. We fill the generated templates with slots with the copy and pointing mechanism.", "We integrate dialog domain scaffold through a multi-task framework. Dialog domain indicates different conversation task content, for example, booking hotel, restaurant and taxi in MultiWOZ dataset. Generally, the content in different domains varies so multi-domain task summarization is more difficult than single-domain. We include domain classification as the auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing domain probability $d$. The $i^{th}$ element $d_i$ in $d$ represents the probability of the $i^{th}$ domain:" ], "highlighted_evidence": [ "Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ .", "We integrate semantic slot scaffold by performing delexicalization on original dialogs. Delexicalization is a common pre-processing step in dialog modeling.", "We integrate dialog domain scaffold through a multi-task framework. Dialog domain indicates different conversation task content, for example, booking hotel, restaurant and taxi in MultiWOZ dataset." 
] } ], "annotation_id": [ "8c16d083a2893633aec9f3bcfddc03ede96237de" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Pointer-Generator", "Transformer" ], "yes_no": null, "free_form_answer": "", "evidence": [ "To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. In inference, we use length penalty and coverage penalty mentioned in BIBREF36. The hyperparameters in the original implementation BIBREF5 were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation." ], "highlighted_evidence": [ "To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6." ] } ], "annotation_id": [ "5274d125124da018bd4cea634e16b14af46f9fe4" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Answer with content missing: (formula for CIC) it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities", "evidence": [ "In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, for leaving out one of the most critical information: [time]. ROUGE treats each word equally in computing n-gram overlap while the informativeness actually varies: common words or phrases (e.g. “You are going to\") significantly contribute to the ROUGE score and readability, but they are almost irrelevant to essential contents. The semantic slot values (e.g. [restaurant_name], [time]) are more essential compared to other words in the summary. However, ROUGE did not take this into consideration. To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:", "where $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.", "CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain." ], "highlighted_evidence": [ "To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. 
CIC is defined as follows:\n\nwhere $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.\n\nCIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities." ] } ], "annotation_id": [ "1162bf54756068e0894e0ec3e15af76802321f63" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain." ], "highlighted_evidence": [ "CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities." ] } ], "annotation_id": [ "5fa3ee21cd7d33a6a7d8bad663cc0b8a8cc5bab4" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
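For readers reimplementing the evaluation discussed in the record above, the following is a minimal Python sketch of how the CIC metric might be computed. The exact value-matching rules (casing, tokenization, handling of repeated values) are assumptions, since the record only gives the formula in words.

def cic(candidate, reference_values):
    # Critical Information Completeness: the recall of reference slot values
    # that also appear in the candidate summary.
    if not reference_values:
        return 0.0
    matched = sum(1 for v in reference_values if v.lower() in candidate.lower())
    return matched / len(reference_values)

def corpus_cic(domain_to_pairs):
    # Arithmetic mean of per-domain CIC, following the description above.
    # domain_to_pairs: {domain: [(candidate_summary, reference_slot_values), ...]}
    domain_scores = []
    for pairs in domain_to_pairs.values():
        scores = [cic(c, v) for c, v in pairs]
        domain_scores.append(sum(scores) / max(len(scores), 1))
    return sum(domain_scores) / max(len(domain_scores), 1)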
{ "caption": [ "Figure 1: SPNet overview. The blue and yellow box is the user and system encoder respectively. The encoders take the delexicalized conversation as input. The slots values are aligned with their slots position. Pointing mechanism merges attention distribution and vocabulary distribution to obtain the final distribution. We then fill the slots values into the slot tokens to convert the template to a complete summary. SPNet also performs domain classification to improve encoder representation.", "Table 1: Automatic evaluation results on MultiWOZ. We use Pointer-Generator as the base model and gradually add different semantic scaffolds.", "Table 2: An example dialog and Pointer-Generator, SPNet and ground truth summaries. We underline semantic slots in the conversation. Red denotes incorrect slot values and green denotes the correct ones.", "Table 3: The upper is the scoring part and the lower is the the ranking part. SPNet outperforms Pointer-Generator in all three human evaluation metrics and the differences are significant, with the confidence over 99.5% in student t test. In the ranking part, the percentage of each choice is shown in decimal. Win, lose and tie refer to the state of the former summary in ranking." ], "file": [ "4-Figure1-1.png", "6-Table1-1.png", "7-Table2-1.png", "8-Table3-1.png" ] }
1911.03705
CommonGen: A Constrained Text Generation Dataset Towards Generative Commonsense Reasoning
Rational humans can generate sentences that cover a certain set of concepts while describing natural and common scenes. For example, given {apple(noun), tree(noun), pick(verb)}, humans can easily come up with scenes like "a boy is picking an apple from a tree" via their generative commonsense reasoning ability. However, we find this capacity has not been well learned by machines. Most prior works in machine commonsense focus on discriminative reasoning tasks with a multi-choice question answering setting. Herein, we present CommonGen: a challenging dataset for testing generative commonsense reasoning with a constrained text generation task. We collect 37k concept-sets as inputs and 90k human-written sentences as associated outputs. Additionally, we also provide high-quality rationales behind the reasoning process for the development and test sets from the human annotators. We demonstrate the difficulty of the task by examining a wide range of sequence generation methods with both automatic metrics and human evaluation. The state-of-the-art pre-trained generation model, UniLM, is still far from human performance in this task. Our data and code is publicly available at this http URL .
{ "section_name": [ "Introduction", "Problem Formulation", "The CommonGen Dataset", "The CommonGen Dataset ::: Collecting Concept-Sets with Captions", "The CommonGen Dataset ::: Crowd-Sourcing via AMT", "The CommonGen Dataset ::: Statistics", "Methods", "Methods ::: Seq-to-Seq Learning", "Methods ::: A BERT-based Method: UniLM", "Methods ::: Other methods", "Methods ::: Incorporating Commonsense Rationales", "Evaluation", "Evaluation ::: Setup", "Evaluation ::: Automatic Metrics", "Evaluation ::: Experimental Results", "Evaluation ::: Human Evaluation", "Evaluation ::: Qualitative Analysis", "Related Work ::: Machine Common Sense", "Related Work ::: Constrained Text Generation", "Conclusion" ], "paragraphs": [ [ "Commonsense reasoning has long been acknowledged as a critical bottleneck of artificial intelligence and especially in natural language processing. It is an ability of combining commonsense facts and logic rules to make new presumptions about ordinary scenes in our daily life. A distinct property of commonsense reasoning problems is that they are generally trivial for human-beings while challenging for machine reasoners.", "There have been a few recent tasks and datasets for testing machine commonsense, while most of them frame their problems as multi-choice question answering, such as CSQA BIBREF0 and SWAG BIBREF1. We name this kind of tasks as deterministic commonsense reasoning because they focus on modeling the plausibility of given complete scenes. The systems for these tasks thus have to work with biased selection of distractors, and thus are less practical or challenging. Simply fine-tuning such large pre-trained language encoders can yield near or exceeding human performance BIBREF2. On the other hand, few work has been done so far in testing machine commonsense in a generative reasoning setting, where a reasoner is expected to complete scenes with several given concepts.", "Specifically, we would like to investigate if machine-reasoning models can generate a sentence that contains a required set of concepts (i.e. nouns or verbs) while describing a common scene in our daily life. For example, as shown in Figure FIGREF1, given an unordered collection of concepts “{apple (noun), bag (noun), pick (verb), place (verb), tree (noun)}”, a rational reasoner should be able to generate a sentence like “A boy picks some apples from a tree and places them into a bag.”, which describes a natural scene and contains all given concepts. The creation of this sentence is easy for humans while non-trivial for even state-of-the-art conditional language generation models. We argue that such an ability of recovering natural scenes of daily life can benefit a wide range of natural language generation (NLG) tasks including image/video captioning BIBREF3, BIBREF4, scene-based visual reasoning and VQA BIBREF5, storytelling BIBREF6, and dialogue systems BIBREF7, BIBREF8.", "Towards empowering machines with the generative commonsense reasoning ability, we create a large-scale dataset, named CommonGen, for the constrained text generation task. We collect $37,263$ concept-sets as the inputs, each of which contains three to five common concepts. These concept-sets are sampled from several large corpora of image/video captions, such that the concepts inside them are more likely to co-occur in natural scenes. Through crowd-sourcing via Amazon Mechanical Turk (AMT), we finally obtain $89,028$ human-written sentences as expected outputs. 
We investigate the performance of sophisticated sequence generation methods for the proposed task with both automatic metrics and human evaluation. The experiments show that all methods are far from human performance in generative commonsense reasoning. Our main contributions are as follows: 1) We introduce the first large-scale constrained text generation dataset targeting at generative commonsense reasoning; 2) We systematically compare methods for this (lexically) constrained text generation with extensive experiments and evaluation. 3) Our code and data are publicly available (w/ the URL in the abstract), so future research in this direction can be directly developed in a unified framework." ], [ "In this section, we formulate our task with mathematical notations and discuss its inherent challenges. The input to the task is a set of $n$ concepts $x=\lbrace c_1,c_2,\dots ,c_n\rbrace \in \mathcal {X}$, where $c_i\in \mathcal {C}$ is a common noun or verb. $\mathcal {X}$ denotes the space of concept-sets and $\mathcal {C}$ stands for the concept vocabulary. The expected output of this task is a simple, grammatical sentence $y\in \mathcal {Y}$, describing a natural scene in our daily life that covers all given concepts in $x$. Note that other forms of given concepts are also accepted, such as plural forms of nouns and verbs. In addition, we also provide rationales as an optional resource to model the generation process. For each pair of $(x, y)$, a rationale $r$ is a list of sentences that explains the background commonsense knowledge used in the scene recovering process.", "The task is to learn a structured predictive function $f:\mathcal {X} \rightarrow \mathcal {Y}$, which maps a concept-set to a sentence. Thus, it can be seen as a special case of constrained text generation BIBREF9. The unique challenges of our proposed task come from two main aspects as follows.", "Constrained Decoding. Lexically constrained decoding for sentence generation has been an important and challenging research topic in the machine translation community BIBREF10, where they focus on how to decode sentences when some words/phrases (e.g. terminology) must be present in target sentences (Section SECREF6). However, it is still an open problem how to efficiently generate sentences given an unordered set of multiple keywords with potential morphological changes (e.g. “pick” $\rightarrow $ “picks” in the previous case). Apart from that, the part-of-speech constraints bring even more difficulties (e.g. “place” can be a verb or a noun).", "Commonsense Reasoning. Apart from the challenge in constrained decoding, a generative commonsense reasoner also has to compositionally use (latent) commonsense knowledge for generating the most plausible scenes. Recalling the illustrative example in Figure FIGREF1, even such a simple scene generation process requires substantial commonsense knowledge, such as: 1) “apples grow on trees”; 2) “bags are containers that you can put something in”; 3) “you usually pick something and then place it in a container”. Expected reasoners have to prioritize target scenes over an infinite number of less plausible scenes like “A boy picks an apple tree and places it into bags.” or “A boy places some bags on a tree and picks an apple.”." ], [ "In this section, we present how we build the CommonGen dataset for testing machine commonsense with generative reasoning. The overall data collection process is as follows. 
1) We first collect a large amount of high-quality image/video caption sentences from several existing corpora, 2) Then, we compute co-occurrence statistics about concept-sets of different sizes ($3\sim 5$), such that we can find the concept-sets that are more likely to be present in the same scene. 3) Finally, we ask human crowd-workers from AMT to write scenes with rationales for every given concept-set, which serve as our development and test sets. The training set consists of carefully post-processed human-written caption sentences, which have little overlap with dev/test sets. We present the statistics and show the dataset's inherent challenges at the end of this section." ], [ "Following the general definition in the largest commonsense knowledge graph, ConceptNet BIBREF11, we understand a concept as a common noun or verb. We aim to test the ability of generating natural scenes with a given set of concepts. The expected concept-sets in our task are supposed to be likely co-occur in natural, daily-life scenes . The concepts in images/videos captions, which usually describe scenes in our daily life, thus possess the desired property. We therefore collect a large amount of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total.", "We assume that if a set of concepts are all mentioned together in more caption sentences, then this concept-set is more likely to co-occur. Thus, we compute the co-occurrence frequency of all possible concept-sets that have $3\sim 5$ concepts, named three/four/five-concept-sets, respectively. Each concept-set is associated with at least one caption sentence. We carefully post-process them and take the shortest ones with minimal overlaps as the final data. These initial concept-sets are further divided into three parts: train/dev/test. We then iterate over all training concept-sets and remove the ones that have more than two overlapping concepts with any concept-set in the dev or test set. Thus, the dev/test set can better measure the generalization ability of models on unseen combinations of concepts." ], [ "It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative for common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators.", "We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales in training data." ], [ "We present the statistical information of our final dataset. Firstly, we summarize the basic statistics in Table TABREF9, such as the number of unique concept-sets, scene sentences, and sentence lengths. 
In total, there are 3,706 unique concepts among all concept-sets, and 3,614/1,018/1,207 in the train/dev/test parts, respectively. Note that 4% of the dev concepts and 6% of the test concepts never appear in the training data, so we can better understand how well the trained models perform with unseen concepts.", "We analyze the overlap between training concept-sets and dev/test concept-sets. On average, we find that 98.8% of the training instances share no common concept at all with dev/test data, such that the dev/test sets can help us analyze model performance on new combinations of concepts.", "We also visualize the frequency distribution of our test concept-sets in Figure FIGREF7 by showing the frequency of the top 50 single concepts and co-occurring concept pairs." ], [ "In this section, we introduce the methods that we adopt for the proposed constrained text generation task. We group these methods into several types as follows. Basically, we have different kinds of encoder-decoder architectures with a copy attention mechanism, including both classic and recently proposed methods. Apart from that, we utilize the state-of-the-art pre-trained sentence generation model for our task. Moreover, we include three typical models for abstractive summarization, story generation, and keyword-based decoding of language models, respectively." ], [ "One very straightforward way is to frame this problem as a “sequence”-to-sequence task, where input sequences are randomly sorted sets of given concepts. In this way, encoder-decoder seq2seq architectures based on bidirectional RNN (bRNN) BIBREF17 or Transformer (Trans.) BIBREF18 can be directly applied to the task, just like many other conditional sequence generation problems (translation, summarization, etc.).", "Order-insensitive processing. However, these encoders may degrade because our inputs are actually order-insensitive. We thus try to use multi-layer perceptrons (MLP) with mean-pooling as the encoder (“mean encoder”) over sequences of word vectors to completely eliminate the order sensitivity. Similarly, we consider removing the positional embeddings in Transformers (Trans. w/o Pos).", "Copying mechanism. The above-mentioned architectures with vanilla attention can miss the words in input sequences and thus produce either unknown tokens or synonyms. To force the decoder to produce target sentences constrained by the input sentence, we utilize the copying mechanism BIBREF19 for all these models. We follow the implementation of these methods in OpenNMT-py BIBREF20.", "Non-autoregressive generation. Recent advances in conditional sentence generation have focused on edit-based models, which iteratively refine generated sequences (usually bounded by a fixed length). These models can potentially achieve better performance than auto-regressive methods because of their explicit modeling of iterative refinement. We study typical models including iNAT BIBREF21, Insertion Transformer (InsertTrans) BIBREF22, and Levenshtein Transformer (LevenTrans) BIBREF23." ], [ "We employ a new unified pre-trained language model, UniLM BIBREF24, which uses BERT BIBREF25 as the encoder and then fine-tunes the whole architecture with different generation-based objectives. To the best of our knowledge, the UniLM model is the state-of-the-art method for a wide range of conditional text generation tasks including summarization, question generation, and dialogue response generation."
], [ "Based on the similarity between our task and abstractive summarization and story generation (with given topic words), we also apply Pointer Generator Networks (“PointerGen”) BIBREF26 and Multi-scale Fusion Attention (“Fusion Attn.”) BIBREF27 model respectively for our task." ], [ "We explore how to utilize additional commonsense knowledge (i.e. rationales) as the input to the task. Like we mentioned in Section SECREF6, we search relevant sentences from the OMCS corpus as the additional distant rationales, and ground truth rationale sentences for dev/test data. The inputs are no longer the concept-sets themselves, but in a form of “[rationales$|$concept-set]” (i.e. concatenating the rationale sentences and original concept-set strings)." ], [ "Herein, we present the experimental results for comparing different baseline methods in the proposed setting. We first introduce the setup and automatic metrics, and then we present the results and analysis. Finally, we show human evaluation results and qualitative analysis." ], [ "We use the proposed CommonGen dataset in two setting: knowledge-agnostic and knowledge-aware. For the knowledge-agnostic setting, we simply apply the methods in Section SECREF4 while we concatenate rationales and input concept-sets together as the knowledge-aware inputs (“$+r$”)." ], [ "For automatically evaluating our methods, we propose to use widely used metric for image/video captioning. This is because the proposed CommonGen task can be regarded as also a caption task where the context are incomplete scenes with given concept-sets. Therefore, we choose BLEU-3/4 BIBREF28, ROUGE-2/L BIBREF29, CIDEr BIBREF30, and SPICE BIBREF31 as the main metrics. Apart from these classic metrics, we also include a novel embedding-based metric named BERTScore BIBREF32. To make the comparisons more clear, we show the delta of BERTScore results by subtracting the score of merely using input concept-sets as target sentences, named $\\triangle $BERTS.", "To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself). That is, if a model has the same reasoning ability with average performance of our crowd workers, its results should exceed this “human bound”." ], [ "We present the experimental results of five groups of methods that are introduced in Section SECREF4. We find that the model UniLM outperforms all other baseline methods by a large margin, which is expected due to it is pre-trained with the BERT encoder towards generation objectives. However, its performance is still way far from the human bound, and this margin is even larger in test data.", "We notice that the most recent edit-based model named LevenTrans archives the best performance among models without pre-training at all. This shows that edit-based sequence generation models can better deal with the cases where target sentences share similar vocabulary with source ones. Nonetheless, the other two models within the same sequence modeling framework (i.e. fairseq) are much worse, which might because of their specialty designed for machine translation.", "An order-insensitive sequence/set encoder, “mean encoder”, outperform order-sensitive counterparts like “bRNN”. However, such a marginal improvement is not seen in the comparison between “Trans.” vs “Trans. w/o Pos”. 
We assume that for short sequences the order sensitivity does not harm sequential encoders, while positional embeddings in Transformers can better improve the self-attention mechanism. Also, we find that Transformer-based seq2seq architectures do not outperform simpler models like bRNN.", "As for the use of additional retrieved sentences from the OMCS corpus and the human-written associated rationales, we find that they are not generally helpful in the investigated architectures. Although they increase the BLEU and ROUGE scores, the metrics specially designed for captioning, such as CIDEr and SPICE, drop. We argue that this might be because the OMCS sentences are not well aligned with the training data, and that more sophisticated methods are needed to encode such non-sequential facts in a more compositional way." ], [ "From the automatic evaluation results with multiple metrics, we have a rough idea of the performance of all models. However, no automatic metric is perfect, especially for a newly proposed generation task like CommonGen. We thus ask humans to rank 100 outputs of 6 selected typical models as well as one randomly picked reference sentence, forming seven systems in total. Annotators are instructed to rank results by their coverage, fluency, and plausibility in daily life. Then, we compute the cumulative gains of each system in all 100 cases:", "$S^{(k)}_i$ is the final score of the $i$-th system by the $k$-th annotator. $G^{k}_{i, j}$ is the rank position of the $i$-th system output for the $j$-th example. In our case, $N=100$, $K = 5$, $G^{k}_{i, j}\in [1,7]$.", "As shown in Table TABREF22, we compare the different systems, including the human bound, for both the above-introduced cumulative ranking scores and the average hit@top3 rates with standard deviations. We find that the correlations between human evaluation and CIDEr and SPICE are better than those of the other metrics (see Table TABREF15)." ], [ "To observe the performance of the models of interest more clearly, we present several real system outputs on the test set in Table TABREF24. We find that models usually cannot cover all given concepts, and can also produce repetitions of given concepts (e.g. “a dog catches a dog”, “a couple of couples”, and “at an object and an object .”). Moreover, we find that the order of actions may not be natural. For example, the model output “a man pulls a sword out of his mouth and swallows it” makes less sense because a man usually swallows a sword first before he pulls it out in such performances." ], [ "Machine common sense (MCS) has long been considered one of the most significant areas in artificial intelligence. Recently, various datasets have emerged for testing machine commonsense from different angles, such as commonsense extraction BIBREF33, BIBREF34, next situation prediction (SWAG BIBREF1, CODAH BIBREF35, HellaSWAG BIBREF36), cultural/social understanding BIBREF37, BIBREF38, BIBREF39, visual scene comprehension BIBREF40, and general commonsense question answering BIBREF0, BIBREF41. Most of them are in a multi-choice QA setting for discriminative commonsense reasoning, among which CSQA BIBREF0 and SWAG BIBREF1 are two typical examples. The input of the CSQA task is a question that needs commonsense reasoning and there are five candidate answers (words/phrases). 
The SWAG task asks models to select which situation is the most plausible next situation, given a sentence describing an event.", "The two tasks share very similar objectives with large pre-trained language encoders like BERT BIBREF42: Masked-LM can predict the missing words in an incomplete sentence, which is similar to the CSQA setting; NextSentPrediction classifies whether a sentence is the next sentence of the given sentence in the corpora, which can be seen as using distant supervision for the SWAG task. Thus, simply fine-tuning such large pre-trained language encoders can yield performance near or exceeding that of humans BIBREF43, BIBREF2, but it does not necessarily mean that machine reasoners can really produce new assumptions in an open and generative setting. The proposed CommonGen, to the best of our knowledge, is the first dataset and task for generative commonsense reasoning." ], [ "Constrained or controllable text generation aims to decode realistic sentences that have expected attributes such as sentiment BIBREF44, BIBREF9, tense BIBREF9, template BIBREF45, style BIBREF46, BIBREF47, BIBREF48, etc. The scenario most similar to our task is lexically constrained decoding, which has been studied mainly in the machine translation community BIBREF49, BIBREF50 for dealing with terminology and additional bilingual dictionaries.", "Classic methods usually modify the (beam) search algorithms to accommodate lexical constraints, such as Grid Beam Search BIBREF10. The most recent work in this line is the CGMH BIBREF51 model, which works in the inference stage to sample sentences with a sequence of multiple keywords from language models. However, our task brings more challenges: 1) we do not assume there is a fixed order of keywords in target sentences; 2) we allow morphological changes of the keywords; 3) the decoded sentences must describe highly plausible scenes in our daily life. Current methods cannot address these issues well and are also extremely slow at generating grammatical sentences. We instead mainly investigate sequence-to-sequence architectures, especially models that are based on editing operations and are non-autoregressive. Pre-trained seq2seq generation models like UniLM BIBREF24 and BRAT BIBREF52 are usually initialized with a pre-trained language encoder and then further fine-tuned with multiple NLG tasks. UniLM achieves the best performance on our proposed CommonGen task, while being far from human-level performance and hardly interpretable." ], [ "In this paper, we propose a novel constrained text generation task for generative commonsense reasoning. We introduce a new large-scale dataset named CommonGen and investigate various methods on it. Through our extensive experiments and human evaluation, we demonstrate that the inherent difficulties of the new task cannot be addressed by even the state-of-the-art pre-trained language generation model.", "For future research, we believe the following directions are highly valuable to explore: 1) specially designed metrics for automatic evaluation that focus on commonsense plausibility; 2) better mechanisms for retrieving and imposing useful commonsense knowledge into sentence generation processes; 3) explicitly modeling keyword-centric edits (e.g. insertion, deletion, morphological changes) such that relevant commonsense knowledge can be well utilized. 
We also believe that models that perform well on CommonGen can be easily transferred, with few annotations, to other commonsense-intensive reasoning tasks, including image/video captioning, visual question answering, and discriminative multi-choice commonsense question answering." ] ] }
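The "human bound" estimation described in the Evaluation sections above can be sketched as the leave-in procedure below. Here, metric_fn is a placeholder standing in for any reference-based metric (BLEU, CIDEr, SPICE, etc.), so the exact scoring call is an assumption rather than the authors' implementation.

def human_bound(dev_examples, metric_fn):
    # dev_examples: one list of human-written reference sentences per concept-set.
    # For each example, treat every reference in turn as the "prediction",
    # score it against all references (including itself), then average.
    scores = []
    for references in dev_examples:
        for reference in references:
            scores.append(metric_fn(reference, references))
    return sum(scores) / max(len(scores), 1)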
{ "question": [ "What measures were used for human evaluation?", "What automatic metrics are used for this task?", "Are the models required to also generate rationales?", "Are the rationales generated after the sentences were written?", "Are the sentences in the dataset written by humans who were shown the concept-sets?", "Where do the concept sets come from?" ], "question_id": [ "8d1f9d3aa2cc2e2e58d3da0f5edfc3047978f3ee", "5065ff56d3c295b8165cb20d8bcfcf3babe9b1b8", "c34a15f1d113083da431e4157aceb11266e9a1b2", "061682beb3dbd7c76cfa26f7ae650e548503d977", "3518d8eb84f6228407cfabaf509fd63d60351203", "617c77a600be5529b3391ab0c21504cd288cc7c7" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no", "no", "no" ], "search_query": [ "text reasoning", "text reasoning", "text reasoning", "text reasoning", "text reasoning", "text reasoning" ], "question_writer": [ "ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself)." ], "yes_no": null, "free_form_answer": "", "evidence": [ "To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself). That is, if a model has the same reasoning ability with average performance of our crowd workers, its results should exceed this “human bound”." ], "highlighted_evidence": [ "To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself). That is, if a model has the same reasoning ability with average performance of our crowd workers, its results should exceed this “human bound”." ] } ], "annotation_id": [ "11abf8a9688d03f0f9020b5fc7ce0e9a41c3642c" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "BLEU-3/4", "ROUGE-2/L", "CIDEr", "SPICE", "BERTScore" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For automatically evaluating our methods, we propose to use widely used metric for image/video captioning. This is because the proposed CommonGen task can be regarded as also a caption task where the context are incomplete scenes with given concept-sets. Therefore, we choose BLEU-3/4 BIBREF28, ROUGE-2/L BIBREF29, CIDEr BIBREF30, and SPICE BIBREF31 as the main metrics. Apart from these classic metrics, we also include a novel embedding-based metric named BERTScore BIBREF32. To make the comparisons more clear, we show the delta of BERTScore results by subtracting the score of merely using input concept-sets as target sentences, named $\\triangle $BERTS." ], "highlighted_evidence": [ "For automatically evaluating our methods, we propose to use widely used metric for image/video captioning. 
This is because the proposed CommonGen task can be regarded as also a caption task where the context are incomplete scenes with given concept-sets. Therefore, we choose BLEU-3/4 BIBREF28, ROUGE-2/L BIBREF29, CIDEr BIBREF30, and SPICE BIBREF31 as the main metrics. Apart from these classic metrics, we also include a novel embedding-based metric named BERTScore BIBREF32. To make the comparisons more clear, we show the delta of BERTScore results by subtracting the score of merely using input concept-sets as target sentences, named $\\triangle $BERTS." ] } ], "annotation_id": [ "a5f6994dbe5280e6fca93898d7c658b1cce3de1e" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "We explore how to utilize additional commonsense knowledge (i.e. rationales) as the input to the task. Like we mentioned in Section SECREF6, we search relevant sentences from the OMCS corpus as the additional distant rationales, and ground truth rationale sentences for dev/test data. The inputs are no longer the concept-sets themselves, but in a form of “[rationales$|$concept-set]” (i.e. concatenating the rationale sentences and original concept-set strings)." ], "highlighted_evidence": [ "We explore how to utilize additional commonsense knowledge (i.e. rationales) as the input to the task. Like we mentioned in Section SECREF6, we search relevant sentences from the OMCS corpus as the additional distant rationales, and ground truth rationale sentences for dev/test data. The inputs are no longer the concept-sets themselves, but in a form of “[rationales$|$concept-set]” (i.e. concatenating the rationale sentences and original concept-set strings)." ] } ], "annotation_id": [ "86bf1f40d410a67ebd40a89af9672808fa26cf2e" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales in training data." ], "highlighted_evidence": [ "We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes." ] } ], "annotation_id": [ "1742bd7774eeaffd07421b5965a23ddbefd41634" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. 
However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative for common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators.", "We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales in training data." ], "highlighted_evidence": [ "It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative for common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators.\n\nWe collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes." ] } ], "annotation_id": [ "f8fc7762ca9f7dab8c3a935c4846286e40a4cecc" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "These concept-sets are sampled from several large corpora of image/video captions" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Towards empowering machines with the generative commonsense reasoning ability, we create a large-scale dataset, named CommonGen, for the constrained text generation task. We collect $37,263$ concept-sets as the inputs, each of which contains three to five common concepts. These concept-sets are sampled from several large corpora of image/video captions, such that the concepts inside them are more likely to co-occur in natural scenes. Through crowd-sourcing via Amazon Mechanical Turk (AMT), we finally obtain $89,028$ human-written sentences as expected outputs. We investigate the performance of sophisticated sequence generation methods for the proposed task with both automatic metrics and human evaluation. The experiments show that all methods are far from human performance in generative commonsense reasoning. Our main contributions are as follows: 1) We introduce the first large-scale constrained text generation dataset targeting at generative commonsense reasoning; 2) We systematically compare methods for this (lexically) constrained text generation with extensive experiments and evaluation. 
3) Our code and data are publicly available (w/ the URL in the abstract), so future research in this direction can be directly developed in a unified framework.", "Following the general definition in the largest commonsense knowledge graph, ConceptNet BIBREF11, we understand a concept as a common noun or verb. We aim to test the ability of generating natural scenes with a given set of concepts. The expected concept-sets in our task are supposed to be likely co-occur in natural, daily-life scenes . The concepts in images/videos captions, which usually describe scenes in our daily life, thus possess the desired property. We therefore collect a large amount of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total." ], "highlighted_evidence": [ "We collect $37,263$ concept-sets as the inputs, each of which contains three to five common concepts. These concept-sets are sampled from several large corpora of image/video captions, such that the concepts inside them are more likely to co-occur in natural scenes.", "The expected concept-sets in our task are supposed to be likely co-occur in natural, daily-life scenes . The concepts in images/videos captions, which usually describe scenes in our daily life, thus possess the desired property. We therefore collect a large amount of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total." ] } ], "annotation_id": [ "5feee9f32509dbe70aec97b4eba68600c4ea973f" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
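The $\triangle $BERTS metric referenced in the answers above (BERTScore of the system outputs minus the BERTScore obtained by using the raw concept-sets themselves as "predictions") could be computed roughly as follows. The score() call assumes the public bert-score package's interface, which is an assumption rather than the authors' exact setup.

from bert_score import score  # pip install bert-score

def delta_berts(system_outputs, references, concept_sets):
    # BERTScore F1 of the system outputs ...
    _, _, f_system = score(system_outputs, references, lang="en")
    # ... minus the F1 obtained by "predicting" the concept-set itself.
    concept_baseline = [" ".join(concepts) for concepts in concept_sets]
    _, _, f_baseline = score(concept_baseline, references, lang="en")
    return (f_system.mean() - f_baseline.mean()).item()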
{ "caption": [ "Figure 1: A motivating example for generative commonsense reasoning and the COMMONGEN task. A reasoner gets a concept-set as the input and should generate a sentence that covers all given concepts while describing a common scene (in the green box) out of less plausible ones (in the red box).", "Figure 2: The frequency of top 50 single concepts (upper) and co-occurred concept-pairs (lower) in the test data.", "Table 1: The basic statistics of COMMONGEN.", "Table 2: Experimental results of different baseline methods on the COMMONGEN.", "Table 3: The average humane evaluation ranking scores and hit@top3 rates for each tested system." ], "file": [ "1-Figure1-1.png", "4-Figure2-1.png", "4-Table1-1.png", "6-Table2-1.png", "6-Table3-1.png" ] }
1910.00458
MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension
Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligence systems to understand human language. Multiple-Choice QA (MCQA) is one of the most difficult tasks in MRC because it often requires more advanced reading comprehension skills such as logical reasoning, summarization, and arithmetic operations, compared to the extractive counterpart where answers are usually spans of text within given passages. Moreover, most existing MCQA datasets are small in size, making the learning task even harder. We introduce MMM, a Multi-stage Multi-task learning framework for Multi-choice reading comprehension. Our method involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset to help model generalize better with limited data. Furthermore, we propose a novel multi-step attention network (MAN) as the top-level classifier for this task. We demonstrate MMM significantly advances the state-of-the-art on four representative MCQA datasets.
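Before the full_text below details MMM, here is a minimal sketch of the generic multiple-choice scoring setup it builds on: each (passage, question, option) triple is encoded as one sequence, mapped to a single logit, and a softmax over options picks the answer. The encoder, classifier, and tokenize callables are placeholders (assumptions), not the authors' implementation.

import torch
import torch.nn.functional as F

def score_options(encoder, classifier, tokenize, passage, question, options):
    # encoder: maps token ids to a sequence representation H.
    # classifier: maps H to a single scalar logit (e.g. an FCNN or MAN head).
    logits = []
    for option in options:
        token_ids = tokenize(passage, question + " " + option)  # one long sequence
        h = encoder(token_ids)
        logits.append(classifier(h).reshape(()))                # scalar logit
    logits = torch.stack(logits)
    probs = F.softmax(logits, dim=0)     # cross-entropy over these logits is
    return int(probs.argmax()), probs    # used as the training loss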
{ "section_name": [ "Introduction", "Methods", "Methods ::: Model Architecture", "Methods ::: Multi-step Attention Network", "Methods ::: Two Stage Training", "Methods ::: Two Stage Training ::: Coarse-tuning Stage", "Methods ::: Two Stage Training ::: Multi-task Learning Stage", "Experimental Setup ::: Datasets", "Experimental Setup ::: Speaker Normalization", "Experimental Setup ::: Multi-task Learning", "Experimental Setup ::: Training Details", "Results", "Discussion ::: Why does natural language inference help?", "Discussion ::: Can other tasks help with MCQA?", "Discussion ::: NLI dataset helps with convergence", "Discussion ::: Multi-stage or Multi-task", "Discussion ::: Multi-steps reasoning is important", "Discussion ::: Could the source dataset be benefited?", "Discussion ::: Error Analysis", "Related Work", "Conclusions" ], "paragraphs": [ [ "Building a system that comprehends text and answers questions is challenging but fascinating, which can be used to test the machine's ability to understand human language BIBREF0, BIBREF1. Many machine reading comprehension (MRC) based question answering (QA) scenarios and datasets have been introduced over the past few years, which differ from each other in various ways, including the source and format of the context documents, whether external knowledge is needed, the format of the answer, to name a few. We can divide these QA tasks into two categories: 1) extractive/abstractive QA such as SQuAD BIBREF2, and HotPotQA BIBREF3. 2) multiple-choice QA (MCQA) tasks such as MultiRC BIBREF4, and MCTest BIBREF5.", "In comparison to extractive/abstractive QA tasks, the answers of the MCQA datasets are in the form of open, natural language sentences and not restricted to spans in text. Various question types exist such as arithmetic, summarization, common sense, logical reasoning, language inference, and sentiment analysis. Therefore it requires more advanced reading skills for the machine to perform well on this task. Table TABREF1 shows one example from one of MCQA datasets, DREAM BIBREF6. To answer the first question in Table TABREF1, the system needs to comprehend the whole dialogue and use some common sense knowledge to infer that such a conversation can only happen between classmates rather than brother and sister. For the second question, the implicit inference relationship between the utterance “You'll forget your head if you're not careful.” in the passage and the answer option “He is too careless.” must be figured out by the model to obtain the correct answer. Many MCQA datasets were collected from language or science exams, which were purposely designed by educational experts and consequently require non-trivial reasoning techniques BIBREF7. As a result, the performance of machine readers on these tasks can more accurately gauge comprehension ability of a model.", "Recently large and powerful pre-trained language models such as BERT BIBREF8 have been achieving the state-of-the-art (SOTA) results on various tasks, however, its potency on MCQA datasets has been severely limited by the data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 compared with MC500 (8–10% gap) since the data size of the former is about three times smaller. 
To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models under the constraint of limited training data, using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11.", "We propose MMM, a Multi-stage Multi-task learning framework for Multi-choice question answering. Our framework involves two sequential stages: a coarse-tuning stage using out-of-domain datasets and a multi-task learning stage using a larger in-domain dataset. For the first stage, we coarse-tune our model with natural language inference (NLI) tasks. For the second multi-task fine-tuning stage, we leverage the current largest MCQA dataset, RACE, as the in-domain source dataset and simultaneously fine-tune the model on both source and target datasets via multi-task learning. Through extensive experiments, we demonstrate that the two-stage sequential fine-tuning strategy is the optimal choice for BERT-based models on MCQA datasets. Moreover, we also propose a Multi-step Attention Network (MAN) as the top-level classifier instead of the typical fully-connected neural network for this task and obtain better performance. Our proposed method improves BERT-based baseline models by at least 7% in absolute accuracy for all the MCQA datasets (except the SemEval dataset, where the baseline already achieves 88.1%). As a result, by leveraging BERT and its variant, RoBERTa BIBREF10, our approach advances the SOTA results for all the MCQA datasets, surpassing the previous SOTA by at least 16% in absolute accuracy (except the SemEval dataset)." ], [ "In MCQA, the inputs to the model are a passage, a question, and answer options. The passage, denoted as $P$, consists of a list of sentences. The question and each of the answer options, denoted by $Q$ and $O$, are both single sentences. An MCQA model aims to choose one correct answer from the answer options based on $P$ and $Q$." ], [ "Figure FIGREF3 illustrates the model architecture. Specifically, we concatenate the passage, the question, and one of the answer options into a long sequence. For a question with $n$ answer options, we obtain $n$ token sequences of length $l$. Afterwards, each sequence is encoded by a sentence encoder to get the representation vector $H \in \mathbb {R}^{d\times l}$, which is then projected into a single value $p=C(H)$ ($p\in \mathbb {R}^{1}$) via a top-level classifier $C$. In this way, we obtain the logit vector $\mathbf {p}=[p_1,p_2,...,p_n]$ for all options of a question, which is then transformed into the probability vector through a softmax layer. We choose the option with the highest logit value $p$ as the answer. Cross-entropy loss is used as the loss function. We use pre-trained bidirectional transformer encoders, i.e., BERT and RoBERTa, as the sentence encoder. The top-level classifier will be detailed in the next subsection." ], [ "For the top-level classifier upon the sentence encoder, the simplest choice is a two-layer fully-connected neural network (FCNN), which consists of one hidden layer with $tanh$ activation and one output layer without activation. This has been widely adopted when BERT is fine-tuned for downstream classification tasks and performs very well BIBREF8. Inspired by the success of the attention network widely used in the span-based QA task BIBREF11, we propose the multi-step attention network (MAN) as our top-level classifier. 
Similar to the dynamic or multi-hop memory network BIBREF12, BIBREF13, MAN maintains a state and iteratively refines its prediction via multi-step reasoning.", "The MAN classifier works as follows. A question and answer option pair together is considered as a whole segment, denoted as $QO$. Suppose the sequence length of the passage is $p$ and that of the question and option pair is $q$. We first construct the working memory of the passage $H^P\in \mathbb {R}^{d\times p}$ by extracting the hidden state vectors of the tokens that belong to $P$ from $H$ and concatenating them together in the original sequence order. Similarly, we obtain the working memory of the (question, option) pair, denoted as $H^{QO}\in \mathbb {R}^{d\times q}$. Alternatively, we can also encode the passage and (question, option) pair individually to get their representation vectors $H^P$ and $H^{QO}$, but we found that processing them in a pair performs better.", "We then perform $K$-step reasoning over the memory to output the final prediction. The initial state $\mathbf {s}^0$ at step 0 is the summary of $H^P$ via self-attention: $\mathbf {s}^0=\sum _i \alpha _i H_i^P$, where $\alpha _i=\frac{exp(w_1^TH_i^P)}{\sum _j exp(w_1^TH_j^P)}$. In the following steps $k \in \lbrace 1,2,...,K-1\rbrace $, the state is calculated by:", "where $\mathbf {x}^k=\sum _i\beta _iH_i^{QO}$ and $\beta _i=\frac{exp(w_2^T[\mathbf {s}^{k-1};H_i^{QO}])}{\sum _j exp(w_2^T[\mathbf {s}^{k-1};H_j^{QO}])}$. Here $[x;y]$ is the concatenation of the vectors $x$ and $y$. The final logit value is determined using the last step state:", "Basically, the MAN classifier calculates the attention scores between the passage and (question, option) pair step by step dynamically, such that the attention can refine itself through several steps of deliberation. The attention mechanism can help filter out irrelevant information in the passage with respect to the (question, option) pair." ], [ "We adopt a two-stage procedure to train our model with both in-domain and out-of-domain datasets as shown in Figure FIGREF10." ], [ "We first fine-tune the sentence encoder of our model with natural language inference (NLI) tasks. For exploration, we have also tried to fine-tune the sentence encoder on other types of tasks such as sentiment analysis, paraphrasing, and span-based question answering at this stage. However, we found that only the NLI task shows robust and significant improvements for our target multi-choice task. See Section SECREF5 for details." ], [ "After the coarse-tuning stage, we simultaneously fine-tune our model on a large in-domain source dataset and the target dataset together via multi-task learning. We share all model parameters, including the sentence encoder as well as the top-level classifier, for these two datasets." ], [ "We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7, as the in-domain source dataset. For all datasets, we use the official train/dev/test splits." ], [ "Passages in the DREAM dataset are dialogues between two or more persons. Every utterance in a dialogue starts with the speaker name. 
For example, in the utterance “m: How would he know?”, “m” is the abbreviation of “man”, indicating that this utterance is from a man. More than 90% of utterances have speaker names given as “w,” “f,” or “m,” which are all abbreviations. However, the speaker names mentioned in the questions are full names such as “woman” and “man.” To make it clear to the model which speaker a question is asking about, we used a speaker normalization strategy, replacing “w” or “f” with “woman” and “m” with “man” for the speaker names in the utterances. We found this simple strategy to be quite effective, providing a 1% improvement. We always use this strategy for the DREAM dataset unless explicitly stated otherwise." ], [ "For the multi-task learning stage, at each training step we randomly selected one of the two datasets (RACE or the target dataset) and then randomly fetched a batch of data from that dataset to train the model. This process was repeated until the predefined maximum number of steps was reached or the early stopping criterion was met. We adopted the proportional sampling strategy, where the probability of sampling a task is proportional to the relative size of each dataset compared to the cumulative size of all datasets BIBREF17." ], [ "We used a linear learning rate decay schedule with a warm-up proportion of $0.1$. We set the dropout rate to $0.1$. The maximum sequence length was set to 512. We clipped the gradient norm to 5 for the DREAM dataset and 0 for the other datasets. The learning rate and number of training epochs vary for different datasets and encoder types and are summarized in Section 1 of the Supplementary Material.", "In the TOEFL dataset, more than 90% of passages have more than 512 words, which exceeds the maximum sequence length that BERT supports; thus we cannot process the whole passage within one forward pass. To solve this issue, we propose the sliding window strategy, in which we split the long passage into several snippets of length 512 with overlaps between subsequent snippets, and each snippet from the same passage is assigned the same label. In the training phase, all snippets are used for training; in the inference phase, we aggregate the logit vectors of all snippets from the same passage and pick the option with the highest logit value as the prediction. In our experiments, we found an overlap of 256 words to be optimal, which improves the BERT-Base model from an accuracy of 50.0% to 53.2%. We adopted this sliding window strategy only for the TOEFL dataset (a sketch of the procedure is given below)." ], [ "We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report the performance of models that use our full proposed method, MMM (MAN classifier + speaker normalization + two-stage learning strategies). As direct comparisons, we also list, in parentheses, the accuracy increment between MMM and the baseline with the same sentence encoder, from which we can see that the performance gain is over 9% for BERT-Base and BERT-Large.
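As a concrete illustration of the sliding-window strategy described above, the sketch below splits a long passage into overlapping token windows and aggregates per-option logits at inference time. It is a simplified illustration rather than the authors' implementation: the tokenizer choice, the helper names, and summation as the aggregation function are assumptions; only the 512-token windows, the 256-token overlap, and the shared label per snippet follow the text.

```python
from transformers import BertTokenizerFast

def split_into_snippets(passage, tokenizer, max_len=512, overlap=256):
    """Split a long passage into overlapping windows of at most `max_len` tokens."""
    ids = tokenizer.encode(passage, add_special_tokens=False)
    stride = max_len - overlap
    snippets = []
    for start in range(0, max(len(ids) - overlap, 1), stride):
        snippets.append(tokenizer.decode(ids[start:start + max_len]))
        if start + max_len >= len(ids):
            break
    return snippets

def predict_passage(passage, question, options, tokenizer, score_fn):
    """Aggregate per-option logits over all snippets of one passage.

    `score_fn(snippet, question, option)` is a hypothetical callable returning
    the model logit for a single (snippet, question, option) triple.
    """
    snippets = split_into_snippets(passage, tokenizer)
    totals = [0.0] * len(options)
    for snippet in snippets:                 # every snippet shares the passage's label
        for i, option in enumerate(options):
            totals[i] += score_fn(snippet, question, option)
    return max(range(len(options)), key=lambda i: totals[i])

# Example usage (model download required):
# tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
# best_option = predict_passage(passage, question, options, tokenizer, score_fn)
```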
Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\sim $4% improvement, pushing the accuracy closer to human performance. Overall, MMM has achieved a new SOTA, i.e., a test accuracy of 88.9%, which exceeds the previous best by 16.9%.", "We also test our method on three other MCQA datasets: MCTest, including MC160 and MC500, TOEFL, and SemEval-2018 Task 11. The results are summarized in Table TABREF17. Similarly, we list the previous SOTA models with their scores for comparison. We compared our method with the baselines that use the same sentence encoder. Except for the SemEval dataset, our method improves the BERT-Large model by at least 10%. For both the MCTest and SemEval datasets, our best scores are very close to the reported human performance. The MC160 and MC500 datasets were curated in almost the same way BIBREF9, the only difference being that MC160 is around three times smaller than MC500. We can see from Table TABREF17 that both the BERT and RoBERTa baselines perform much worse on MC160 than on MC500. We think the reason is that the data size of MC160 is not enough to properly fine-tune the large models, which have a huge number of trainable parameters. However, by leveraging the transfer learning techniques we proposed, we can significantly improve the generalization capability of the BERT and RoBERTa models on the small datasets, so that the best performance on MC160 can even surpass that on MC500. This demonstrates the effectiveness of our method.", "To better understand why MMM is successful, we conducted an ablation study by removing one feature at a time from the BERT-Base model. The results are shown in Table TABREF18. We see that the removal of the second-stage multi-task learning part hurts our method most significantly, indicating that the majority of the improvement comes from the knowledge transferred from the in-domain dataset. The first stage of coarse-tuning using NLI datasets is also very important, as it provides the model with enhanced language inference ability. As for the top-level classifier, i.e., the MAN module, if we replace it with a typical two-layer FCNN as in BIBREF8, we see a 1–2% performance drop. Lastly, for the DREAM dataset, the speaker normalization strategy gives us another $\sim $1% improvement." ], [ "As shown in Table TABREF18, coarse-tuning on NLI tasks can help improve the performance of MCQA. We conjecture that one of the reasons is that, in many cases, picking the correct answer requires language inference capability. As an example, in Table TABREF1 the utterance highlighted in bold italics in the dialogue is the evidence sentence from which we can obtain the correct answer to Question 2. There is no token overlap between the evidence sentence and the correct answer, indicating that the model cannot solve this question by surface matching. Nevertheless, the correct answer is an entailment of the evidence sentence, while the wrong answers are not. Therefore, the capability of language inference enables the model to correctly predict the answer. On the other hand, we can view the passage and the (question, answer) pair as a premise and hypothesis. The process of choosing the right answer to a certain question is then similar to the process of choosing the hypothesis that is best entailed by the premise. In this sense, part of the MCQA task can be deemed an NLI task.
This also agrees with the argument that NLI is a fundamental capability of a natural language processing model and that it can help support other tasks that require a higher level of language processing ability BIBREF21. We provide several more examples that require language inference reading skills in Section 2 of the Supplementary Material; they are wrongly predicted by the BERT-Base baseline model but can be correctly solved by exposing the model to NLI data in the coarse-tuning stage." ], [ "By analyzing the MCQA datasets, we found that some questions ask about the attitude of one person towards something, and in some cases the correct answer is simply a paraphrase of the evidence sentence in the passage. This finding naturally leads to the question: could other kinds of tasks, such as sentiment classification and paraphrasing, also help with MCQA problems?", "To answer this question, we select several representative datasets for five categories as the upstream tasks: sentiment analysis, paraphrase, span-based QA, NLI, and MCQA. We conduct experiments in which we first train the BERT-Base models on each of the five categories and then further fine-tune our models on the target datasets: DREAM and MC500 (MCTest-MC500). For the sentiment analysis category, we used the Stanford Sentiment Treebank (SST-2) dataset from the GLUE benchmark BIBREF22 (around 60k train examples) and the Yelp dataset (around 430k train examples). For the paraphrase category, three paraphrasing datasets are used from the GLUE benchmark: Microsoft Research Paraphrase Corpus (MRPC), Semantic Textual Similarity Benchmark (STS-B), and Quora Question Pairs (QQP), which are denoted as “GLUE-Para.”. For span-based QA, we use SQuAD 1.1, SQuAD 2.0, and MRQA, a joint dataset including six popular span-based QA datasets. Table TABREF23 summarizes the results. We see that the sentiment analysis datasets do not help much with our target MCQA datasets, but the paraphrase datasets do bring some improvements for MCQA. For span-based QA, only SQuAD 2.0 helps to improve the performance on the target dataset. Interestingly, although MRQA is much larger than the other QA datasets (at least six times larger), it leads to the worst performance. This suggests that span-based QA might not be an appropriate source task for transfer learning for MCQA. We hypothesize this could be due to the fact that most of the questions are non-extractive (e.g., 84% of questions in DREAM are non-extractive), while all answers are extractive in the span-based QA datasets.", "For the completeness of our experiments, we also used various NLI datasets: MultiNLI, SNLI, Question NLI (QNLI), Recognizing Textual Entailment (RTE), and Winograd NLI (WNLI) from the GLUE benchmark. We used them in three kinds of combinations: MultiNLI alone, MultiNLI plus SNLI, denoted as “NLI”, and all five datasets combined, denoted as “GLUE-NLI”. As the results in Table TABREF23 show, NLI and GLUE-NLI are comparable and both can improve the target dataset by a large margin.", "Lastly, among all these tasks, using the MCQA task itself, i.e., pretraining on the RACE dataset, boosts the performance most. This result agrees with the intuition that an in-domain dataset is the ideal data for transfer learning.", "In conclusion, we find that among out-of-domain datasets, the NLI datasets are the most helpful to the MCQA task, indicating that natural language inference capability should be an important foundation of MCQA systems.
Besides, a larger in-domain dataset, i.e., another MCQA dataset, can also be very useful." ], [ "The first stage of coarse-tuning with NLI data can not only improve the accuracy but also help the model converge faster and better. Especially for the BERT-Large and RoBERTa-Large models, which have a much larger number of trainable parameters, convergence is very sensitive to the optimization settings. However, with the help of the NLI datasets, convergence for large models is no longer an issue, as shown in Figure FIGREF25. Under the same optimization hyper-parameters, compared with the baseline, coarse-tuning makes the training loss of the BERT-Base model decrease much faster. More importantly, for the BERT-Large model, without coarse-tuning the model does not converge at all in the first several epochs, which is completely resolved with the help of NLI data." ], [ "In a typical scenario where we have one source and one target dataset, a natural question is whether we should train a model on them simultaneously via multi-task learning or train first on the source dataset and then on the target dataset sequentially. Many previous works adopted the latter approach BIBREF19, BIBREF20, BIBREF23, and BIBREF20 demonstrated that the sequential fine-tuning approach outperforms the multi-task learning setting in their experiments. However, we had contradictory observations in our experiments. Specifically, we conducted a pair of control experiments: in one, we first fine-tune the BERT-Base model on the source dataset RACE and then further fine-tune it on the target dataset; in the other, we simultaneously train the model on RACE and the target dataset via multi-task learning. The comparison results are shown in Table TABREF27. We see that, compared with sequential fine-tuning, multi-task learning achieved better performance. We conjecture that in the sequential fine-tuning setting, while the model is being fine-tuned on the target dataset, some information or knowledge learned from the source dataset may be lost, since the model is no longer exposed to the source dataset in this stage. In comparison, this information is kept in the multi-task learning setting and thus can better help improve performance on the target dataset.", "Given that the multi-task learning approach outperforms the sequential fine-tuning setting, we naturally arrive at another question: what if we merged the coarse-tuning and multi-task learning stages? That is, what if we simultaneously trained on the NLI, source, and target datasets under the multi-task learning framework? We also conducted a pair of control experiments to investigate this. The results in Table TABREF27 show that casting the fine-tuning process on the three datasets into separate stages performs better, indicating that multi-stage training is also necessary. This verifies our MMM framework with coarse-tuning on out-of-domain datasets and fine-tuning on in-domain datasets." ], [ "Previous results show that the MAN classifier improves over the FCNN classifier, but we are also interested in how the performance changes as we vary the number of reasoning steps $K$, as shown in Figure FIGREF29. $K=0$ means that we do not use MAN but FCNN as the classifier. We observe a gradual improvement as we increase $K$ from 1 to 5, but after 5 steps the improvement saturates. This verifies that an appropriate number of reasoning steps is important for the memory network to show its benefits."
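To make the multi-step reasoning concrete, below is a minimal sketch of a MAN-style classifier head in PyTorch. The self-attended passage summary used as the initial state and the step-wise attention over the (question, option) memory follow the description above; the GRU cell used for the state update and the linear scorer over the concatenated state and attended vector are assumptions, since the corresponding equations are not reproduced in this text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MANHead(nn.Module):
    """Sketch of a multi-step attention classifier head (hidden size d, K reasoning steps)."""

    def __init__(self, d, K=5):
        super().__init__()
        self.K = K
        self.w1 = nn.Linear(d, 1, bias=False)      # self-attention over passage memory H^P
        self.w2 = nn.Linear(2 * d, 1, bias=False)  # attention over (question, option) memory H^QO
        self.cell = nn.GRUCell(d, d)               # assumed state-update rule
        self.scorer = nn.Linear(2 * d, 1)          # assumed final scoring layer

    def forward(self, H_p, H_qo):
        # H_p: (p, d) passage memory; H_qo: (q, d) question+option memory
        alpha = F.softmax(self.w1(H_p).squeeze(-1), dim=0)
        s = (alpha.unsqueeze(-1) * H_p).sum(dim=0)              # initial state s^0
        x = torch.zeros_like(s)
        for _ in range(1, self.K):
            pair = torch.cat([s.expand(H_qo.size(0), -1), H_qo], dim=-1)
            beta = F.softmax(self.w2(pair).squeeze(-1), dim=0)  # attention weights beta_i
            x = (beta.unsqueeze(-1) * H_qo).sum(dim=0)          # attended vector x^k
            s = self.cell(x.unsqueeze(0), s.unsqueeze(0)).squeeze(0)
        return self.scorer(torch.cat([s, x], dim=-1))           # logit for this option
```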
], [ "So far we have been discussing the case where we do multi-task learning with the source dataset RACE and various much smaller target datasets in order to improve performance on the targets. We also want to see whether our proposed techniques can benefit the source dataset itself. Table TABREF31 summarizes the results of the BERT-Base model on the RACE dataset obtained by adding the coarse-tuning stage, adding the multi-task training together with DREAM, and adding the MAN module. From this table, we see that all three techniques bring improvements over the baseline model for the source dataset RACE, among which the NLI coarse-tuning stage elevates the scores most.", "Since we found that all parts of MMM work well for the source dataset, we used them to improve the accuracy on RACE. The results are shown in Table TABREF32. We used four kinds of pre-trained sentence encoders: BERT-Base, BERT-Large, XLNet-Large, and RoBERTa-Large. For each encoder, we list the officially reported scores from the leaderboard. Compared with the baselines, MMM leads to improvements ranging from 0.5% to 3.0% in accuracy. Our best result is obtained with the RoBERTa-Large encoder." ], [ "In order to investigate how well our model performs for different types of questions, we conducted an error analysis by first randomly selecting 150 samples from the development set of the DREAM dataset that were wrongly predicted by the BERT-Base baseline model. We then manually classified them into several question types, as shown in Table TABREF34. The annotation criteria are described in Section 3 of the Supplementary Material. We see that the BERT-Base baseline model still does not do well on matching problems. We then evaluated our best model on these samples and report the accuracy for each question type in the last column of Table TABREF34. We find that our best model improves upon every question type significantly, especially for the matching problems, and, most surprisingly, our best model even greatly improves its ability at solving arithmetic problems, achieving an accuracy of 73.7%.", "However, could our model really do math? To investigate this question, we sampled some arithmetic questions that were correctly predicted by our model, made small alterations to the passage or question, and then checked whether our model could still make the correct choices. We found that our model is very fragile to these minor alterations, indicating that it is actually not that good at arithmetic problems. We provide one interesting example in Section 3 of the Supplementary Material." ], [ "There is increasing interest in machine reading comprehension (MRC) for question answering (QA). The extractive QA tasks primarily focus on locating text spans from the given document/corpus to answer questions BIBREF2. Answers in abstractive datasets such as MS MARCO BIBREF24, SearchQA BIBREF25, and NarrativeQA BIBREF26 are human-generated and based on source documents or summaries in free-text format. However, since annotators tend to copy spans as answers BIBREF27, the majority of answers in these datasets are still extractive. Multi-choice QA datasets are collected either via crowdsourcing or from examinations designed by educational experts BIBREF7.
In this type of QA dataset, besides token matching, a significant portion of questions require multi-sentence reasoning and external knowledge BIBREF5.", "Progress in MRC research relies first on breakthroughs in sentence encoders, from the basic LSTM to pre-trained transformer-based models BIBREF8, which have elevated the performance of all MRC models by a large margin. Besides, attention mechanisms between the context and the query can give neural models higher performance BIBREF11. In addition, techniques such as answer verification BIBREF28, multi-hop reasoning BIBREF29, and synthetic data augmentation can also be helpful.", "Transfer learning has been widely proven to be effective across many domains in NLP. In the QA domain, the most well-known example of transfer learning is fine-tuning pre-trained language models such as BERT on downstream QA datasets such as SQuAD BIBREF8. Besides, multi-task learning can also be deemed a type of transfer learning, since when training on multiple datasets from different domains for different tasks, knowledge is shared and transferred between tasks; this has been used to build a generalized QA model BIBREF30. However, no previous work has investigated whether knowledge from NLI datasets can also be transferred to improve the MCQA task." ], [ "We propose MMM, a multi-stage multi-task transfer learning method for multiple-choice question answering tasks. Our two-stage training strategy and the multi-step attention network achieved significant improvements for MCQA. We also conducted a detailed analysis to explore the importance of both our training strategies and different kinds of in-domain and out-of-domain datasets. We hope our work here can also shed light on new directions for other NLP domains." ] ] }
{ "question": [ "How big are improvements of MMM over state of the art?", "What out of domain datasets authors used for coarse-tuning stage?", "What are state of the art methods MMM is compared to?", "What four representative datasets are used for bechmark?" ], "question_id": [ "53d6cbee3606dd106494e2e98aa93fdd95920375", "9dc844f82f520daf986e83466de0c84d93953754", "9fe4a2a5b9e5cf29310ab428922cc8e7b2fc1d11", "36d892460eb863220cd0881d5823d73bbfda172c" ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "test accuracy of 88.9%, which exceeds the previous best by 16.9%" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report model performances that use all our proposed method, MMM (MAN classifier + speaker normalization + two stage learning strategies). As direct comparisons, we also list the accuracy increment between MMM and the baseline with the same sentence encoder marked by the parentheses, from which we can see that the performance augmentation is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\\sim $4% improvement, pushing the accuracy closer to the human performance. Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%." ], "highlighted_evidence": [ "Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%." ] } ], "annotation_id": [ "11e9dc8da152c948ba3f0ed165402dffad6fae49" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "MultiNLI BIBREF15 and SNLI BIBREF16 " ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7 as in-domain source dataset. For all datasets, we use the official train/dev/test splits." ], "highlighted_evidence": [ "For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. 
" ] } ], "annotation_id": [ "fb11cb05fe3d851cc4d17da20a5b958dad0af096" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "FTLM++, BERT-large, XLNet", "evidence": [ "FLOAT SELECTED: Table 3: Accuracy on the DREAM dataset. Performance marked by ? is reported by (Sun et al. 2019). Numbers in parentheses indicate the accuracy increased by MMM compared to the baselines." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Accuracy on the DREAM dataset. Performance marked by ? is reported by (Sun et al. 2019). Numbers in parentheses indicate the accuracy increased by MMM compared to the baselines." ] } ], "annotation_id": [ "6f65f4be18453d162510778c0b8c582ffc5f27f7" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "DREAM, MCTest, TOEFL, and SemEval-2018 Task 11" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Recently large and powerful pre-trained language models such as BERT BIBREF8 have been achieving the state-of-the-art (SOTA) results on various tasks, however, its potency on MCQA datasets has been severely limited by the data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 compared with MC500 (8–10% gap) since the data size of the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models with the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11." ], "highlighted_evidence": [ "To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models with the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11." ] } ], "annotation_id": [ "605df693493ead557174f3a1ebb05efb09517f15" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Table 1: Data samples of DREAM dataset. ( √ : the correct answer)", "Figure 1: Model architecture. “Encoder”is a pre-trained sentence encoder such as BERT. “Classifier” is a top-level classifier.", "Figure 2: Multi-stage and multi-task fine-tuning strategy.", "Table 2: Statistics of MCQA datasets. (crowd.: crowd-sourcing; ?: answer options are not text snippets from reference documents.)", "Table 3: Accuracy on the DREAM dataset. Performance marked by ? is reported by (Sun et al. 2019). Numbers in parentheses indicate the accuracy increased by MMM compared to the baselines.", "Table 4: Performance in accuracy (%) on test sets of other datasets: MCTest (MC160 and MC500), TOEFL, and SemEval. Performance marked by ? is reported by (Richardson, Burges, and Renshaw 2013) and that marked by † is from (Ostermann et al. 2018). Numbers in the parentheses indicate the accuracy increased by MMM. “-B” means the base model and “-L” means the large model.", "Table 5: Ablation study on the DREAM and MCTest-MC160 (MC160) datasets. Accuracy (%) is on the development set.", "Table 6: Transfer learning results for DREAM and MC500. The BERT-Base model is first fine-tuned on each source dataset and then further fine-tuned on the target dataset. Accuracy is on the the development set. A two-layer FCNN is used as the classifier.", "Table 7: Comparison between multi-task learning and sequential fine-tuning. BERT-Base model is used and the accuracy is on the development set. Target refers to the target dataset in transfer learning. A two-layer FCNN instead of MAN is used as the classifier.", "Figure 3: Train loss curve with respect to optimization steps. With prior coarse-tuning on NLI data, convergence becomes much faster and easier.", "Figure 4: Effects of the number of reasoning steps for the MAN classifier. 0 steps means using FCNN instead of MAN. The BERTBase model and DREAM dataset are used.", "Table 8: Ablation study for the RACE dataset. The accuracy is on the development set. All parts of MMM improve this source dataset.", "Table 9: Comparison of the test accuracy of the RACE dataset between our approach MMM and the official reports that are from the dataset leaderboard.", "Table 10: Error analysis on DREAM. The column of “Percent” reports the percentage of question types among 150 samples that are from the development set of DREAM dataset that are wrongly predicted by the BERT-Base baseline model. The column of “Accuracy” reports the accuracy of our best model (RoBERTa-Large+MMM) on these samples." ], "file": [ "1-Table1-1.png", "2-Figure1-1.png", "3-Figure2-1.png", "4-Table2-1.png", "4-Table3-1.png", "5-Table4-1.png", "5-Table5-1.png", "5-Table6-1.png", "6-Table7-1.png", "6-Figure3-1.png", "6-Figure4-1.png", "7-Table8-1.png", "7-Table9-1.png", "7-Table10-1.png" ] }
2001.11268
Data Mining in Clinical Trial Text: Transformers for Classification and Question Answering Tasks
This research on data extraction methods applies recent advances in natural language processing to evidence synthesis based on medical texts. Texts of interest include abstracts of clinical trials in English and in multilingual contexts. The main focus is on information characterized via the Population, Intervention, Comparator, and Outcome (PICO) framework, but data extraction is not limited to these fields. Recent neural network architectures based on transformers show capacities for transfer learning and increased performance on downstream natural language processing tasks such as universal reading comprehension, brought forward by this architecture's use of contextualized word embeddings and self-attention mechanisms. This paper contributes to solving problems related to ambiguity in PICO sentence prediction tasks, as well as highlighting how annotations for training named entity recognition systems are used to train a high-performing, but nevertheless flexible architecture for question answering in systematic review automation. Additionally, it demonstrates how the problem of insufficient amounts of training annotations for PICO entity extraction is tackled by augmentation. All models in this paper were created with the aim to support systematic review (semi)automation. They achieve high F1 scores, and demonstrate the feasibility of applying transformer-based classification methods to support data mining in the biomedical literature.
{ "section_name": [ "INTRODUCTION", "INTRODUCTION ::: Tools for SR automation and PICO classification", "INTRODUCTION ::: Sentence classification data", "INTRODUCTION ::: Question answering data ::: SQuAD", "INTRODUCTION ::: Question answering data ::: Ebm-nlp", "INTRODUCTION ::: Introduction to transformers", "INTRODUCTION ::: Weaknesses in the previous sentence classification approach", "INTRODUCTION ::: Contributions of this research", "METHODOLOGY ::: Feature representation and advantages of contextualization", "METHODOLOGY ::: Sentence classification ::: Preparation of the data", "METHODOLOGY ::: Sentence classification ::: Fine-tuning", "METHODOLOGY ::: Sentence classification ::: Post-training assignment of classes", "METHODOLOGY ::: Question answering ::: Preparation of the data", "METHODOLOGY ::: Question answering ::: Fine-tuning", "RESULTS ::: Feature representation and contextualization", "RESULTS ::: Sentence classification", "RESULTS ::: Question answering", "DISCUSSION", "DISCUSSION ::: Limitations", "CONCLUSION", "ACKNOWLEDGEMENTS", "FUNDING", "Availability of the code and data" ], "paragraphs": [ [ "Systematic reviews (SR) of randomized controlled trials (RCTs) are regarded as the gold standard for providing information about the effects of interventions to healthcare practitioners, policy makers and members of the public. The quality of these reviews is ensured through a strict methodology that seeks to include all relevant information on the review topic BIBREF0.", "A SR, as produced by the quality standards of Cochrane, is conducted to appraise and synthesize all research for a specific research question, therefore providing access to the best available medical evidence where needed BIBREF1. The research question is specified using the PICO (population; intervention; comparator; outcomes) framework. The researchers conduct very broad literature searches in order to retrieve every piece of clinical evidence that meets their review's inclusion criteria, commonly all RCTs of a particular healthcare intervention in a specific population. In a search, no piece of relevant information should be missed. In other words, the aim is to achieve a recall score of one. This implies that the searches are broad BIBREF2, and authors are often left to screen a large number of abstracts manually in order to identify a small fraction of relevant publications for inclusion in the SR BIBREF3.", "The number of RCTs is increasing, and with it increases the potential number of reviews and the amount of workload that is implied for each. Research on the basis of PubMed entries shows that both the number of publications and the number of SRs increased rapidly in the last ten years BIBREF4, which is why acceleration of the systematic reviewing process is of interest in order to decrease working hours of highly trained researchers and to make the process more efficient.", "", "In this work, we focus on the detection and annotation of information about the PICO elements of RCTs described in English PubMed abstracts. In practice, the comparators involved in the C of PICO are just additional interventions, so we often refer to PIO (populations; interventions; outcomes) rather than PICO. 
Focus points for the investigation are the problems of ambiguity in labelled PIO data, the integration of training data from different tasks and sources, and assessing our model's capacity for transfer learning and domain adaptation.", "Recent advances in natural language processing (NLP) offer the potential to automate or semi-automate the process of identifying information to be included in an SR. For example, an automated system might attempt to PICO-annotate large corpora of abstracts, such as RCTs indexed on PubMed, or assess the results retrieved in a literature search and predict which abstract or full-text article fits the inclusion criteria of a review. Such systems need to be able to classify and extract data of interest. We show that transformer models perform well on complex data-extraction tasks. Language models are moving away from the semantic but static representation of words used in Word2Vec BIBREF5, providing instead a richer and more flexible contextualized representation of input features within sentences or long sequences of text.", "The rest of this paper is organized as follows. The remainder of this section introduces related work and the contributions of our work. Section 2 describes the process of preparing training data and introduces approaches to fine-tuning for sentence classification and question answering tasks. Results are presented in Section 3, and Section 4 includes a critical evaluation and implications for practice." ], [ "The website systematicreviewtools.com BIBREF6 lists 36 software tools for study selection to date. Some tools are intended for organisational purposes and do not employ PICO classification, such as Covidence BIBREF7. The tool Rayyan uses support vector machines BIBREF8. RobotReviewer uses neural networks, word embeddings, and recently also a transformer for named entity recognition (NER) BIBREF9. Question answering systems for PICO data extraction exist based on matching words from knowledge bases, hand-crafted rules, and naïve Bayes classification, both on the entity and sentence level BIBREF10, BIBREF11, but they commonly focus on providing information to practicing clinicians rather than systematic reviewers BIBREF12.", "In the following, we introduce models related to our sentence and entity classification tasks and the data on which our experiments are based. We made use of previously published training and testing data in order to ensure comparability between models." ], [ "In the context of systematic review (semi)automation, sentence classification can be used in the screening process by highlighting relevant pieces of text. A long short-term memory (LSTM) neural network trained with sentences of structured abstracts from PubMed was published in 2018 BIBREF13. It uses a pre-trained Word2Vec embedding in order to represent each input word as a fixed vector. Due to the costs associated with labelling, its authors acquired sentence labels via automated annotation. Seven classes were assigned on the basis of structured headings within the text of each abstract. Table TABREF4 provides an overview of the class abbreviations and their meaning. In the following, we refer to this corpus as the PubMed data.", "The LSTM itself yields impressive results, with F1 scores for annotation of up to 0.85 for PIO elements; it generalizes across domains and assigns one label per sentence. We were able to confirm these scores by replicating a local version of this model."
], [ "The Stanford Question Answering Dataset (SQuAD) is a reading-comprehension dataset for machine learning tasks. It contains question contexts, questions, and answers, and is available in two versions. The older version contains only questions that can be answered based on the given context. In its newer version, the dataset also contains questions which cannot be answered on the basis of the given context. The SQuAD creators provide an evaluation script, as well as a public leaderboard to compare model performances BIBREF14." ], [ "In the PICO domain, the potential of NER was shown by Nye and colleagues using transformers, as well as LSTMs and conditional random fields BIBREF15. In the following, we refer to these data as the ebm-nlp corpus. The ebm-nlp corpus provided us with 5000 tokenized and annotated RCT abstracts for training, and 190 expert-annotated abstracts for testing. Annotations in this corpus include PIO classes, as well as more detailed information such as age, gender, or medical condition. We adapted the human-annotated ebm-nlp corpus of abstracts for training our QA-BERT question answering system." ], [ "In the following, the bidirectional encoder representations from transformers (BERT) architecture is introduced BIBREF16. This architecture's key strengths are rooted in both feature representation and training. A good feature representation is essential to ensure any model's performance, but often data sparsity in the unsupervised training of embedding mechanisms leads to losses in overall performance. By employing a word piece vocabulary, BERT eliminated the problem of previously unseen words: any word that is not present in the initial vocabulary is split into sub-word tokens. Especially in the biomedical domain, this enables richer semantic representations of words describing rare chemical compounds or conditions. A relevant example is the phrase ’two drops of ketorolac tromethamine’, where the initial three words stay intact, while the last words are tokenized to ’ket’, ’#oro’, ’#lac’, ’tro’, ’#meth’, ’#amine’, enabling the subsequent model to focus on relevant parts of the input sequence, such as syllables that indicate chemical compounds. When obtaining a numerical representation for its inputs, transformers apply a ’self-attention’ mechanism, which leads to a contextualized representation of each word with respect to its surrounding words.", "BERT's weights are pre-trained in an unsupervised manner, based on large corpora of unlabelled text and two pre-training objectives. To achieve bidirectionality, its first pre-training objective includes the prediction of randomly masked words. Secondly, a next-sentence prediction task trains the model to capture long-term dependencies. Pre-training is computationally expensive but needs to be carried out only once before sharing the weights together with the vocabulary. Fine-tuning to various downstream tasks can then be carried out on the basis of comparably small amounts of labelled data, by changing the upper layers of the neural network into classification layers for different tasks.", "SCIBERT is a model based on the BERT-base architecture, with further pre-trained weights based on texts from the Semantic Scholar search engine BIBREF17. We used these weights as one of our three starting points for fine-tuning a sentence classification architecture BIBREF18. Furthermore, BERT-base (uncased) and BERT multilingual (cased, base architecture) were included in the comparison BIBREF16.
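The sub-word behaviour described above is easy to inspect with the Transformers tokenizers. The snippet below is a small illustration rather than part of the original experiments; the exact segmentation of rare terms depends on the vocabulary shipped with each pre-trained model.

```python
from transformers import BertTokenizer

phrase = "two drops of ketorolac tromethamine"

bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
sci_tok = BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")

# Common words stay whole, while the rare drug name is split into word pieces;
# a domain-specific vocabulary may segment it differently.
print(bert_tok.tokenize(phrase))
print(sci_tok.tokenize(phrase))
```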
], [ "In the following, we discuss weaknesses in the PubMed data, and LSTM models trained on this type of labelled data. LSTM architectures commonly employ a trimmed version of Word2Vec embeddings as embedding layer. In our case, this leads to 20% of the input data being represented by generic `Unknown' tokens. These words are missing because they occur so rarely that no embedding vector was trained for them. Trimming means that the available embedding vocabulary is then further reduced to the known words of the training, development and testing data, in order to save memory and increase speed. The percentage of unknown tokens is likely to increase when predicting on previously unseen and unlabelled data. We tested our locally trained LSTM on 5000 abstracts from a study-based register BIBREF19 and found that 36% of all unique input features did not have a known representation.", "In the case of the labelled training and testing data itself, automatic annotation carries the risk of producing wrongly labelled data. But it also enables the training of neural networks in the first place because manual gold standard annotations for a project on the scale of a LSTM are expensive and time-consuming to produce. As we show later, the automated annotation technique causes noise in the evaluation because as the network learns, it can assign correct tags to wrongly labelled data. We also show that sentence labels are often ambiguous, and that the assignment of a single label limits the quality of the predictions for their use in real-world reviewing tasks.", "We acknowledge that the assignment of classes such as `Results' or `Conclusions' to sentences is potentially valuable for many use-cases. However, those sentences can contain additional information related to the PICO classes of interest. In the original LSTM-based model the A, M, R, and C data classes in Table TABREF4 are utilized for sequence optimization, which leads to increased classification scores. Their potential PICO content is neglected, although it represents crucial information in real-world reviewing tasks.", "A general weakness of predicting labels for whole sentences is the practical usability of the predictions. We will show sentence highlighting as a potential use-case for focusing reader's attention to passages of interest. However, the data obtained through this method are not fine-grained enough for usage in data extraction, or for the use in pipelines for automated evidence synthesis. Therefore, we expand our experiments to include QA-BERT, a question-answering model that predicts the locations of PICO entities within sentences." ], [ "In this work we investigate state-of-the-art methods for language modelling and sentence classification. Our contributions are centred around developing transformer-based fine-tuning approaches tailored to SR tasks. We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13. We demonstrate that models based on the BERT architecture solve problems related to ambiguous sentence labels by learning to predict multiple labels reliably. Further, we show that the improved feature representation and contextualization of embeddings lead to improved performance in biomedical data extraction tasks. These fine-tuned models show promising results while providing a level of flexibility to suit reviewing tasks, such as the screening of studies for inclusion in reviews. 
By predicting on multilingual and full-text contexts, we showed that the model's capabilities for transfer learning can be useful when dealing with diverse, real-world data.", "In the second fine-tuning approach, we apply a question answering architecture to the task of data extraction. Previous models for PICO question answering relied on vast knowledge bases and hand-crafted rules. Our fine-tuning approach shows that an abstract as context, together with a combination of annotated PICO entities and SQuAD data, can result in a system that outperforms contemporary entity recognition systems, while retaining general reading comprehension capabilities." ], [ "A language processing model's performance is limited by its capability of representing linguistic concepts numerically. In this preliminary experiment, we used the PubMed corpus for sentence classification to show the quality of PICO sentence embeddings retrieved from BERT. We mapped a random selection of 3000 population, intervention, and outcome sentences from the PubMed corpus to BERT-base uncased and SCIBERT. This resulted in each sentence being represented by a fixed-length vector of 768 dimensions in each layer, as defined by the model architecture's hidden size. These vectors can be obtained for each of the network's layers, and multiple layers can be represented together by concatenation and pooling. We used the t-distributed Stochastic Neighbour Embedding (t-SNE) algorithm to reduce each layer embedding into two-dimensional space and plotted the resulting values. Additionally, we computed adjusted Rand scores in order to evaluate how well each layer (or concatenation thereof, always using reduce_mean pooling) represents our input sequence. The Rand scores quantify the extent to which a naïve K-means (N=3) clustering algorithm applied to different layers alone led to correct grouping of the input sentences." ], [ "We used the PubMed corpus to fine-tune a sentence classification architecture. Class names and abbreviations are displayed in Table TABREF4. The corpus was supplied in pre-processed form, comprising 24,668 abstracts. For more information about the original dataset, we refer to its original publication BIBREF13. Because of the PICO framework, methods for systematic review (semi)automation commonly focus on P, I, and O detection. The A, M, R, and C classes are an additional feature of this corpus. They were included in the following experiment because they represent important information in abstracts and occur in the vast majority of published trial text. Their exclusion can lead to false classification of sentences in full abstracts. In a preliminary experiment, we summarized the A, M, R, and C sentences as a generic class named ’Other’ in order to shift the model's focus to the PIO classes. This resulted in high class imbalance, inferior classification scores, and a loss of the ability to predict these classes when supporting systematic reviewers during the screening process.", "In the following, abstracts that did not include a P, I, and O label were excluded. This left a total of 129,095 sentences for training and 14,344 for testing (90:10 split)." ], [ "We carried out fine-tuning for sentence classification based on BERT-base (uncased), multilingual BERT (cased), and SCIBERT. We changed the classification layer on top of the original BERT model. It remains a linear, fully connected layer, but now employs the sigmoid cross-entropy loss with logits function for optimization.
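A minimal sketch of such a multi-label classification layer is given below. It illustrates the approach described here rather than reproducing the original training script: the use of the pooled encoder output and the variable names are assumptions, while the seven-class sigmoid output trained with a cross-entropy-with-logits loss follows the text.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class MultiLabelSentenceClassifier(nn.Module):
    """BERT encoder with a linear layer trained under a sigmoid cross-entropy loss."""

    def __init__(self, model_name="bert-base-uncased", num_labels=7):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        pooled = self.encoder(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        return self.classifier(pooled)  # raw logits, one per class

model = MultiLabelSentenceClassifier()
loss_fn = nn.BCEWithLogitsLoss()  # sigmoid cross-entropy with logits
# Labels are multi-hot vectors, so a sentence may carry several of the seven classes:
# labels = torch.tensor([[1., 0., 0., 0., 1., 0., 0.]])
# loss = loss_fn(model(input_ids, attention_mask), labels)
```

In contrast to a softmax over the seven classes, independent sigmoid outputs allow a single sentence to receive more than one tag.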
During training, this layer is optimised for predicting probabilities over all seven possible sentence labels. Therefore, this architecture enables multi-class, multi-label predictions. In comparison, the original BERT fine-tuning approach for sentence classification employed a softmax layer in order to obtain multi-class, single-label predictions of the most probable class only. During the training process the model then predicts class labels from Table 1 for each sentence. After each training step, backpropagation then adjusts the model's internal weights. To save GPU resources, a maximal sequence length of 64, batch size 32, learning rate of $2\\times 10^{-5}$, a warm-up proportion of 0.1 and two epochs for training were used." ], [ "In the scope of the experiments for this paper, the model returns probabilities for the assignment of each class for every sentence. These probabilities were used to show effects of different probability thresholds (or simply assignment to the most probable class) on recall, precision and F1 scores. The number of classes was set to 7, thereby making use of the full PubMed dataset." ], [ "Both the training and testing subsets from the ebm-nlp data were adapted to fit the SQuAD format. We merged both datasets in order to train a model which firstly correctly answers PICO questions on the basis of being trained with labelled ebm-nlp data, and secondly retains the flexibility of general-purpose question answering on the basis of SQuAD. We created sets of general, differently phrased P, I, and O questions for the purpose of training a broad representation of each PICO element question.", "In this section we describe the process of adapting the ebm-nlp data to the second version of the SQuAD format, and then augmenting the training data with some of the original SQuAD data. Figure FIGREF19 shows an example of the converted data, together with a high-level software architecture description for our QA-BERT model. We created a conversion script to automate this task. To reduce context length, it first split each ebm-nlp abstract into sentences. For each P, I, and O class it checked the presence of annotated entity spans in the ebm-nlp source files. Then, a question was randomly drawn from our set of general questions for this class, to complete a context and a span-answer pair in forming a new SQuAD-like question element. In cases where a sentence did not contain a span, a question was still chosen, but the answer was marked as impossible, with the plausible answer span set to begin at character 0. In the absence of impossible answers, the model would always return some part of the context as answer, and hence be of no use for rarer entities such as P, which only occurs in only 30% of all context sentences.", "For the training data, each context can contain one possible answer, whereas for testing multiple question-answer pairs are permitted. An abstract is represented as a domain, subsuming its sentences and question answer-text pairs. In this format, our adapted data are compatible with the original SQuAD v.2 dataset, so we chose varying numbers of original SQuAD items and shuffled them into the training data. This augmentation of the training data aims to reduce the dependency on large labelled corpora for PICO entity extraction. 
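The conversion described above can be sketched as follows. The helper below is illustrative only: the question templates, the structure of the span annotations, and the exact field layout are simplified assumptions; what follows the text is the overall logic of drawing one class question per sentence and marking unanswerable entries via is_impossible, with a plausible answer span starting at character 0.

```python
import random

P_QUESTIONS = ["What population was studied?", "Who were the participants?"]  # assumed templates

def sentence_to_squad_item(sentence, spans, qid):
    """Build one SQuAD-v2-style entry for the P class of a single sentence.

    `spans` is a list of (start_char, text) tuples for annotated population
    entities in this sentence; it is empty if the sentence contains none.
    """
    question = random.choice(P_QUESTIONS)
    impossible = len(spans) == 0
    if impossible:
        answers = [{"answer_start": 0, "text": ""}]       # plausible-answer placeholder
    else:
        start, text = spans[0]
        answers = [{"answer_start": start, "text": text}]
    return {
        "context": sentence,
        "qas": [{
            "id": qid,
            "question": question,
            "is_impossible": impossible,
            "answers": [] if impossible else answers,
            "plausible_answers": answers if impossible else [],
        }],
    }
```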
Testing data can optionally be enriched in the same way, but for the presentation of our results we aimed to be comparable with previously published models and therefore chose to evaluate only on the subset of expert-annotated ebm-nlp testing data." ], [ "The Python Huggingface Transformers library was used for fine-tuning the question-answering models. This classification works by adding a span-classification head on top of a pre-trained transformer model. The span-classification mechanism learns to predict the most probable start and end positions of potential answers within a given context BIBREF22.", "The Transformers library offers classes for tokenizers, BERT, and other transformer models, and provides methods for feature representation and optimization. We used BertForQuestionAnswering. Training was carried out on Google's Colab, using the GPU runtime option. We used a batch size of 18 per GPU and a learning rate of $3\times 10^{-5}$. Training lasted for 2 epochs, and the context length was limited to 150. To reduce the time needed to train, we only used BERT-base (uncased) weights as starting points and used a maximum of 200 out of the 442 SQuAD domains.", "To date, the Transformers library includes several BERT, XLM, XLNet, DistilBERT, and ALBERT question answering models that can be fine-tuned with the scripts and data that we describe in this paper." ], [ "Figure FIGREF23 shows the dimensionality-reduced vectors for 3000 sentences in BERT-base, along with the positions of three exemplary sentences. All three examples were labelled as 'P' in the gold standard. This visualization highlights overlaps between the sentence data and ambiguity or noise in the labels.", "Sentences 1 and 2 are labelled incorrectly, and clearly appear far away from the population class centroid. Sentence 3 is an example of an ambiguous case. It appears very close to the population centroid, but neither its label nor its position reflects the intervention content. This supports a need for multiple tags per sentence, and for the fine-tuning of weights within the network.", "Figure FIGREF23 shows the same set of sentences, represented by concatenations of SCIBERT outputs. SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences. When clustered, its embeddings yielded an adjusted Rand score of 0.57 for a concatenation of the two layers, compared with 0.25 for BERT-base." ], [ "Precision, recall, and F1 scores, including a comparison with the LSTM, are summarized in Table TABREF22. Underlined scores represent the top score across all models, and scores in bold are the best results for the single- and multi-label cases, respectively. The LSTM assigns one label only and was outperformed in all classes of main interest (P, I, and O).", "A potential pitfall of turning this task into multi-label classification is an increase in false-positive predictions, as more labels are assigned than are given in the single-labelled testing data in the first place. However, the fine-tuned BERT models achieved high F1 scores and large improvements in terms of recall and precision. In its last row, Table TABREF22 shows different probability thresholds for class assignment when using the PubMed dataset and our fine-tuned SCIBERT model for multi-label prediction. After obtaining the model's predictions, a simple threshold parameter can be used to obtain the final class labels.
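This post-hoc assignment can be sketched as follows; the snippet is an illustration of the idea rather than the evaluation script used here, and the use of scikit-learn for the metrics is an assumption.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def sweep_thresholds(probs, gold, n_steps=50):
    """Trade precision against recall without re-running the model.

    probs: (n_sentences, n_classes) sigmoid outputs; gold: multi-hot gold labels.
    """
    results = []
    for t in np.linspace(0.0, 1.0, n_steps):
        pred = (probs >= t).astype(int)
        p, r, f1, _ = precision_recall_fscore_support(
            gold, pred, average="micro", zero_division=0)
        results.append((t, p, r, f1))
    return results
```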
On our labelled testing data, we tested 50 evenly spaced thresholds between 0 and 1 in order to obtain these graphs. Here, recall and precision scores in ranges between 0.92 and 0.97 are possible with F1 scores not dropping below 0.84 for the main classes of interest. In practice, the detachment between model predictions and assignment of labels means that a reviewer who wishes to switch between high recall and high precision results can do so very quickly, without obtaining new predictions from the model itself.", "More visualizations can be found in this project's GitHub repository , including true class labels and a detailed breakdown of true and false predictions for each class. The highest proportion of false classification appears between the results and conclusion classes.", "The fine-tuned multilingual model showed marginally inferior classification scores on the exclusively English testing data. However, this model's contribution is not limited to the English language because its interior weights embed a shared vocabulary of 100 languages, including German and Chinese. Our evaluation of the multilingual model's capacity for language transfer is of a qualitative nature, as there were no labelled Chinese or German data available. Table TABREF24 shows examples of two abstracts, as predicted by the model. Additionally, this table demonstrates how a sentence prediction model can be used to highlight text. With the current infrastructure it is possible to highlight PICOs selectively, to highlight all classes simultaneously, and to adjust thresholds for class assignment in order to increase or decrease the amount of highlighted sentences. When applied to full texts of RCTs and cohort studies, we found that the model retained its ability to identify and highlight key sentences correctly for each class.", "", "We tested various report types, as well as recent and old publications, but remain cautious that large scale testing on labelled data is needed to draw solid conclusions on these model's abilities for transfer learning. For further examples in the English language, we refer to our GitHub repository." ], [ "We trained and evaluated a model for each P, I, and O class. Table TABREF29 shows our results, indicated as QA-BERT, compared with the currently published leader board for the ebm-nlp data BIBREF25 and results reported by the authors of SCIBERT BIBREF18. For the P and I classes, our models outperformed the results on this leader board. The index in our model names indicates the amount of additional SQuAD domains added to the training data. We never used the full SQuAD data in order to reduce time for training but observed increased performance when adding additional data. For classifying I entities, an increase from 20 to 200 additional SQuAD domains resulted in an increase of 8% for the F1 score, whereas the increase for the O domain was less than 1%. After training a model with 200 additional SQuAD domains, we also evaluated it on the original SQuAD development set and obtained a F1 score of 0.72 for this general reading comprehension task.", "In this evaluation, the F1 scores represent the overlap of labelled and predicted answer spans on token level. We also obtained scores for the subgroups of sentences that did not contain an answer versus the ones that actually included PICO elements. These results are shown in Table TABREF30.", "For the P class, only 30% of all sentences included an entity, whereas its sub-classes age, gender, condition and size averaged 10% each. 
In the remaining classes, these percentages were higher. F1 scores for correctly detecting that a sentence includes no PICO element exceeded 0.92 in all classes. This indicates that the addition of impossible answer elements was successful, and that the model learned a representation of how to discriminate PICO contexts. The scores for correctly predicting PICOs in positive scenarios are lower. These results are presented in Table TABREF30. Here, two factors could influence this score in a negative way. First, labelled spans can be noisy. Training spans were annotated by crowd workers, and the authors of the original dataset noted inter-annotator disagreement. Often, these spans include full stops, other punctuation, or different levels of detail describing a PICO. The F1 score decreases if the model predicts a PICO but the predicted span includes marginal differences that were not marked up by the experts who annotated the testing set. Second, some spans include multiple PICOs, sometimes across sentence boundaries. Other spans mark up single PICOs in succession. In these cases, the model might find multiple PICOs in a row and annotate them as one, or vice versa." ], [ "In this work, we have shown possibilities for sentence classification and data extraction of PICO characteristics from abstracts of RCTs.", "For sentence classification, models based on transformers can predict multiple labels per sentence, even if trained on a corpus that assigns a single label only. Additionally, these architectures show a great level of flexibility with respect to adjusting precision and recall scores. Recall is an important metric in SR tasks, and the architectures proposed in this paper enable a post-classification trade-off setting that can be adjusted in the process of supporting reviewers in real-world reviewing tasks.", "However, tagging whole sentences with respect to populations, interventions, and outcomes might not be an ideal method to advance systematic review automation. Identifying a sentence's tag could be helpful for highlighting abstracts from literature searches. This focuses the reader's attention on sentences, but is less helpful for automatically determining whether a specific entity (e.g. the drug aspirin) is mentioned.", "Our implementation of the question answering task has shown that a substantial number of PICO entities can be identified in abstracts on the token level. This is an important step towards reliable systematic review automation. With our provided code and data, the QA-BERT model can be swapped for more advanced transformer architectures, including XLM, XLNet, DistilBERT, and ALBERT pre-trained models. More detailed investigations into multilingual predictions BIBREF26, pre-processing, and predicting more than one PICO per sentence are reserved for future work." ], [ "Limitations in the automatically annotated PubMed training data mostly consist of incomplete detection or noisy P, I, and O entities due to the single labelling. We did not have access to multilingual annotated PICO corpora for testing, and therefore tested the model on German abstracts found on PubMed, as well as Chinese data provided by the Cochrane Schizophrenia Group.", "For the question answering, we limited the use of original SQuAD domains to enrich our data. This was done in order to save computing resources, as an addition of 100 SQuAD domains resulted in training time increases of two hours, depending on various other parameter settings.
Adjusted parameters include increased batch size, and decreased maximal context length in order to reduce training time." ], [ "With this paper we aimed to explore state-of-the-art NLP methods to advance systematic review (semi)automation. Both of the presented fine-tuning approaches for transformers demonstrated flexibility and high performance. We contributed an approach to deal with ambiguity in whole-sentence predictions, and proposed the usage of a completely different approach to entity recognition in settings where training data are sparse.", "In conclusion we wish to emphasize our argument that for future applications, interoperability is important. Instead of developing yet another stand-alone organizational interface with a machine learning classifier that works on limited data only, the focus should be to develop and train cross-domain and neural models that can be integrated into the backend of existing platforms. The performance of these models should be comparable on standardized datasets, evaluation scripts and leader boards.", "The logical next step, which remains less explored in the current literature because of its complexity, is the task of predicting an RCT's included or excluded status on the basis of PICOs identified in its text. For this task, more complex architectures that include drug or intervention ontologies could be integrated. Additionally, information from already completed reviews could be re-used as training data." ], [ "We would like to thank Clive Adams for providing testing data and feedback for this project. We thank Vincent Cheng for the Chinese translation. Furthermore, we thank the BERT team at Google Research and Allenai for making their pre-trained model weights available. Finally, we acknowledge the Huggingface team and thank them for implementing the SQuAD classes for Transformers." ], [ "LS was funded by the National Institute for Health Research (NIHR Systematic Review Fellowship, RM-SR-2017-09-028). The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care." ], [ "Scripts and supplementary material, as well as further illustrations are available from https://github.com/L-ENA/HealthINF2020. Training data for sentence classification and question answering are freely available from the cited sources.", "Additionally, the Cochrane Schizophrenia Group extracted, annotated and made available data from studies included in over 200 systematic reviews. This aims at supporting the development of methods for reviewing tasks, and to increase the re-use of their data. These data include risk-of-bias assessment, results including all clean and published outcome data extracted by reviewers, data on PICOs, methods, and identifiers such as PubMed ID and a link to their study-based register. Additionally, a senior reviewer recently carried out a manual analysis of all 33,000 outcome names in these reviews, parsed and allocated to 15,000 unique outcomes in eight main categories BIBREF27." ] ] }
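The threshold analysis in the record above (50 evenly spaced cut-offs between 0 and 1, applied to cached class probabilities so a reviewer can move between high-recall and high-precision operating points without re-running the model) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the scikit-learn calls, function and variable names are assumptions.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

def threshold_sweep(probs, gold, n_thresholds=50):
    """Re-assign labels for one class under evenly spaced thresholds.

    probs: per-sentence probabilities for a single class (e.g. "Population").
    gold:  0/1 gold labels for the same class.
    """
    rows = []
    for t in np.linspace(0.0, 1.0, n_thresholds):
        pred = (probs >= t).astype(int)  # label assignment is decoupled from the model
        rows.append({
            "threshold": t,
            "precision": precision_score(gold, pred, zero_division=0),
            "recall": recall_score(gold, pred, zero_division=0),
            "f1": f1_score(gold, pred, zero_division=0),
        })
    return rows

# A reviewer can pick a high-recall or high-precision operating point from the
# same set of cached predictions, without querying the model again:
# sweep = threshold_sweep(cached_probs, gold_labels)
# high_recall_point = max(sweep, key=lambda r: (r["recall"], r["precision"]))
```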
{ "question": [ "What baselines did they consider?", "What are the problems related to ambiguity in PICO sentence prediction tasks?" ], "question_id": [ "4cbc56d0d53c4c03e459ac43e3c374b75fd48efe", "e5a965e7a109ae17a42dd22eddbf167be47fca75" ], "nlp_background": [ "infinity", "infinity" ], "topic_background": [ "familiar", "familiar" ], "paper_read": [ "no", "no" ], "search_query": [ "transformers", "transformers" ], "question_writer": [ "798ee385d7c8105b83b032c7acc2347588e09d61", "798ee385d7c8105b83b032c7acc2347588e09d61" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "LSTM", "SCIBERT" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this work we investigate state-of-the-art methods for language modelling and sentence classification. Our contributions are centred around developing transformer-based fine-tuning approaches tailored to SR tasks. We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13. We demonstrate that models based on the BERT architecture solve problems related to ambiguous sentence labels by learning to predict multiple labels reliably. Further, we show that the improved feature representation and contextualization of embeddings lead to improved performance in biomedical data extraction tasks. These fine-tuned models show promising results while providing a level of flexibility to suit reviewing tasks, such as the screening of studies for inclusion in reviews. By predicting on multilingual and full text contexts we showed that the model's capabilities for transfer learning can be useful when dealing with diverse, real-world data.", "Figure FIGREF23 shows the same set of sentences, represented by concatenations of SCIBERT outputs. SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences. When clustered, its embeddings yielded an adjusted rand score of 0.57 for a concatenation of the two layers, compared with 0.25 for BERT-base." ], "highlighted_evidence": [ "We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13.", "SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences. " ] } ], "annotation_id": [ "11ea0b3864122600cc8ab3c6e1d34caea0d87c8c" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Some sentences are associated to ambiguous dimensions in the hidden state output", "evidence": [ "Sentences 1 and 2 are labelled incorrectly, and clearly appear far away from the population class centroid. Sentence 3 is an example of an ambiguous case. It appears very close to the population centroid, but neither its label nor its position reflect the intervention content. This supports a need for multiple tags per sentence, and the fine-tuning of weights within the network.", "FLOAT SELECTED: Figure 2: Visualization of training sentences using BERTbase. The x and y-axis represent the two most dominant dimensions in the hidden state output, as selected by the t-SNE algorithm. This visualization uses the sixth layer from the top, and shows three examples of labelled P sentences and their embedded positions." ], "highlighted_evidence": [ "Sentence 3 is an example of an ambiguous case. 
It appears very close to the population centroid, but neither its label nor its position reflect the intervention content. ", "FLOAT SELECTED: Figure 2: Visualization of training sentences using BERTbase. The x and y-axis represent the two most dominant dimensions in the hidden state output, as selected by the t-SNE algorithm. This visualization uses the sixth layer from the top, and shows three examples of labelled P sentences and their embedded positions." ] } ], "annotation_id": [ "7c2e7cb2253cdf2c28dc3ebda63e2141052f4290" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] } ] }
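The clustering analysis referenced in the answers above (adjusted rand scores of 0.57 for SCIBERT versus 0.25 for BERT-base, with t-SNE used for the embedding figures) could be reproduced along these lines. The choice of k-means and all names are assumptions; only the metrics themselves follow the record.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.metrics import adjusted_rand_score

def cluster_quality(embeddings: np.ndarray, labels, n_classes: int) -> float:
    """Cluster sentence embeddings and compare the clusters to gold PICO labels."""
    clusters = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(embeddings)
    return adjusted_rand_score(labels, clusters)  # e.g. ~0.57 for SCIBERT vs ~0.25 for BERT-base

def tsne_projection(embeddings: np.ndarray) -> np.ndarray:
    """2-D projection of the hidden states, as used for the scatter plots."""
    return TSNE(n_components=2, random_state=0).fit_transform(embeddings)
```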
{ "caption": [ "Table 1: Classes for the sentence classification task.", "Figure 1: Colour coded example for a population entity annotation, converted to SQuAD v.2 format. Combined data are used to train and evaluate the system.", "Figure 2: Visualization of training sentences using BERTbase. The x and y-axis represent the two most dominant dimensions in the hidden state output, as selected by the t-SNE algorithm. This visualization uses the sixth layer from the top, and shows three examples of labelled P sentences and their embedded positions.", "Figure 3: Visualisation of training sentences using SCIBERT. The x and y-axes represent the two most dominant t-SNE reduced dimensions for each concatenation of layers", "Table 2: Summary of results for the sentence classification. task", "Table 3: Predicting PICOs in Chinese and German. Classes were assigned based on foreign language inputs only. For reference, translations were provided by native speakers.", "Table 4: Question Answering versus entity recognition results.", "Table 5: Subgroups of possible sentences versus impossible sentences.", "Table 6: This table shows two examples for intervention span predictions in QA-BERT200. On the official SQuAD development set, the same model achieved a good score, an exemplary question and prediction for this is given in the bottom row." ], "file": [ "2-Table1-1.png", "5-Figure1-1.png", "6-Figure2-1.png", "6-Figure3-1.png", "7-Table2-1.png", "8-Table3-1.png", "9-Table4-1.png", "10-Table5-1.png", "10-Table6-1.png" ] }
1706.07179
RelNet: End-to-End Modeling of Entities & Relations
We introduce RelNet: a new model for relational reasoning. RelNet is a memory-augmented neural network that models entities as abstract memory slots and is equipped with an additional relational memory that models relations between all memory pairs. The model thus builds an abstract knowledge graph over the entities and relations present in a document, which can then be used to answer questions about the document. It is trained end-to-end: the only supervision to the model is in the form of correct answers to the questions. We test the model on the 20 bAbI question-answering tasks with 10k examples per task and find that it solves all the tasks with a mean error of 0.3%, achieving 0% error on 11 of the 20 tasks.
{ "section_name": [ "Introduction", "RelNet Model", "Related Work", "Experiments", "Conclusion" ], "paragraphs": [ [ "Reasoning about entities and their relations is an important problem for achieving general artificial intelligence. Often such problems are formulated as reasoning over graph-structured representation of knowledge. Knowledge graphs, for example, consist of entities and relations between them BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Representation learning BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 and reasoning BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 with such structured representations is an important and active area of research.", "Most previous work on knowledge representation and reasoning relies on a pipeline of natural language processing systems, often consisting of named entity extraction BIBREF12 , entity resolution and coreference BIBREF13 , relationship extraction BIBREF4 , and knowledge graph inference BIBREF14 . While this cascaded approach of using NLP systems can be effective at reasoning with knowledge bases at scale, it also leads to a problem of compounding of the error from each component sub-system. The importance of each of these sub-component on a particular downstream application is also not clear.", "For the task of question-answering, we instead make an attempt at an end-to-end approach which directly models the entities and relations in the text as memory slots. While incorporating existing knowledge (from curated knowledge bases) for the purpose of question-answering BIBREF11 , BIBREF8 , BIBREF15 is an important area of research, we consider the simpler setting where all the information is contained within the text itself – which is the approach taken by many recent memory based neural network models BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 .", "Recently, BIBREF17 proposed a dynamic memory based neural network for implicitly modeling the state of entities present in the text for question answering. However, this model lacks any module for relational reasoning. In response, we propose RelNet, which extends memory-augmented neural networks with a relational memory to reason about relationships between multiple entities present within the text. Our end-to-end method reads text, and writes to both memory slots and edges between them. Intuitively, the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector. The only supervision signal for our method comes from answering questions on the text.", "We demonstrate the utility of the model through experiments on the bAbI tasks BIBREF18 and find that the model achieves smaller mean error across the tasks than the best previously published result BIBREF17 in the 10k examples regime and achieves 0% error on 11 of the 20 tasks." ], [ "We describe the RelNet model in this section. Figure 1 provides a high-level view of the model. The model is sequential in nature, consisting of the following steps: read text, process it into a dynamic relational memory and then attention conditioned on the question generates the answer. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory.", "There are three main components to the model: 1) input encoder 2) dynamic memory, and 3) output module. We will describe these three modules in details. 
The input encoder and output module implementations are similar to the Entity Network BIBREF17 and main novelty lies in the dynamic memory. We describe the operations executed by the network for a single example consisting of a document with $T$ sentences, where each sentence consists of a sequence of words represented with $K$ -dimensional word embeddings $\\lbrace e_1, \\ldots , e_N\\rbrace $ , a question on the document represented as another sequence of words and an answer to the question." ], [ "There is a long line of work in textual question-answering systems BIBREF21 , BIBREF22 . Recent successful approaches use memory based neural networks for question answering, for example BIBREF23 , BIBREF18 , BIBREF24 , BIBREF19 , BIBREF17 . Our model is also a memory network based model and is also related to the neural turing machine BIBREF25 . As described previously, the model is closely related to the Recurrent Entity Networks model BIBREF17 which describes an end-to-end approach to model entities in text but does not directly model relations. Other approaches to question answering use external knowledge, for instance external knowledge bases BIBREF26 , BIBREF11 , BIBREF27 , BIBREF28 , BIBREF9 or external text like Wikipedia BIBREF29 , BIBREF30 .", "Very recently, and in parallel to this work, a method for relational reasoning called relation networks BIBREF31 was proposed. They demonstrated that simple neural network modules are not as effective at relational reasoning and their proposed module is similar to our model. However, relation network is not a memory-based model and there is no mechanism to read and write relevant information for each pair. Moreover, while their approach scales as the square of the number of sentences, our approach scales as the square of the number of memory slots used per QA pair. The output module in our model can be seen as a type of relation network.", "Representation learning and reasoning over graph structured data is also relevant to this work. Graph based neural network models BIBREF32 , BIBREF33 , BIBREF34 have been proposed which take graph data as an input. The relational memory however does not rely on a specified graph structure and such models can potentially be used for multi-hop reasoning over the relational memory. BIBREF35 proposed a method for learning a graphical representation of the text data for question answering, however the model requires explicit supervision for the graph at every step whereas RelNet does not require explicit supervision for the graph." ], [ "We evaluate the model's performance on the bAbI tasks BIBREF18 , a collection of 20 question answering tasks which have become a benchmark for evaluating memory-augmented neural networks. We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17 . Performance is measured in terms of mean percentage error on the tasks.", "Training Details: We used Adam and did a grid search for the learning rate in {0.01, 0.005, 0.001} and choose a fixed learning rate of 0.005 based on performance on the validation set, and clip the gradient norm at 2. We keep all other details similar to BIBREF17 for a fair comparison. embedding dimensions were fixed to be 100, models were trained for a maximum of 250 epochs with mini-batches size of 32 for all tasks except 3 for which the batch size was 16. The document sizes were limited to most recent 70 sentences for all tasks, except for task 3 for which it was limited to 130. 
The RelNet models were run for 5 times with random seed on each task and the model with best validation performance was chosen as the final model. The baseline EntNet model was run for 10 times for each task BIBREF17 .", "The results are shown in Table 1 . The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks." ], [ "We demonstrated an end-to-end trained neural network augmented with a structured memory representation which can reason about entities and relations for question answering. Future work will investigate the performance of these models on more real world datasets, interpreting what the models learn, and scaling these models to answer questions about entities and relations from reading massive text corpora." ] ] }
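The record above describes RelNet only at a high level: entity memory slots plus a relational memory over all slot pairs, both updated as each sentence is read. The following PyTorch sketch illustrates that general layout — gated slot writes in the spirit of Recurrent Entity Networks plus a pairwise relation update — and is not the paper's exact update equations, which the excerpt does not reproduce; all names and gating choices are assumptions.

```python
import torch
import torch.nn as nn

class RelationalMemory(nn.Module):
    """Sketch: M entity memory slots h_i and an M x M grid of relation vectors r_ij."""

    def __init__(self, num_slots: int, dim: int):
        super().__init__()
        self.num_slots, self.dim = num_slots, dim
        self.keys = nn.Parameter(torch.randn(num_slots, dim))  # one key per slot
        self.U = nn.Linear(dim, dim, bias=False)                # slot-to-candidate transform
        self.V = nn.Linear(dim, dim, bias=False)                # sentence-to-candidate transform
        self.rel_update = nn.Linear(3 * dim, dim)               # relation update from (h_i, h_j, sentence)

    def init_state(self, batch: int):
        mem = self.keys.unsqueeze(0).expand(batch, -1, -1).contiguous()
        rel = torch.zeros(batch, self.num_slots, self.num_slots, self.dim, device=self.keys.device)
        return mem, rel

    def forward(self, sent, mem, rel):
        """sent: (batch, dim) encoding of the current sentence."""
        # Gated write to each entity slot, keyed on similarity with the sentence encoding.
        gate = torch.sigmoid(torch.einsum("bd,bmd->bm", sent, mem))        # (batch, M)
        cand = torch.tanh(self.U(mem) + self.V(sent).unsqueeze(1))         # (batch, M, dim)
        mem = mem + gate.unsqueeze(-1) * cand
        mem = mem / (mem.norm(dim=-1, keepdim=True) + 1e-8)
        # Update every pairwise relation vector from the two slots and the sentence.
        hi = mem.unsqueeze(2).expand(-1, -1, self.num_slots, -1)
        hj = mem.unsqueeze(1).expand(-1, self.num_slots, -1, -1)
        s = sent[:, None, None, :].expand_as(hi)
        rel = rel + torch.tanh(self.rel_update(torch.cat([hi, hj, s], dim=-1)))
        return mem, rel
```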
{ "question": [ "How is knowledge retrieved in the memory?", "How is knowledge stored in the memory?", "What are the relative improvements observed over existing methods?", "What is the architecture of the neural network?", "What methods is RelNet compared to?" ], "question_id": [ "082c88e132b4f1bf68abdc3a21ac4af180de1113", "74091e10f596428135b0ab06008608e09c051565", "43b4f7eade7a9bcfaf9cc0edba921a41d6036e9c", "a75861e6dd72d69fdf77ebd81c78d26c6f7d0864", "60fd7ef7986a5752b31d3bd12bbc7da6843547a4" ], "nlp_background": [ "five", "five", "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector." ], "yes_no": null, "free_form_answer": "", "evidence": [ "Recently, BIBREF17 proposed a dynamic memory based neural network for implicitly modeling the state of entities present in the text for question answering. However, this model lacks any module for relational reasoning. In response, we propose RelNet, which extends memory-augmented neural networks with a relational memory to reason about relationships between multiple entities present within the text. Our end-to-end method reads text, and writes to both memory slots and edges between them. Intuitively, the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector. The only supervision signal for our method comes from answering questions on the text." ], "highlighted_evidence": [ "Our end-to-end method reads text, and writes to both memory slots and edges between them. Intuitively, the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector." ] } ], "annotation_id": [ "a5d0953d56d8cd11ea834da09e2416aee83102ea" ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "entity memory and relational memory.", "evidence": [ "There are three main components to the model: 1) input encoder 2) dynamic memory, and 3) output module. We will describe these three modules in details. The input encoder and output module implementations are similar to the Entity Network BIBREF17 and main novelty lies in the dynamic memory. We describe the operations executed by the network for a single example consisting of a document with $T$ sentences, where each sentence consists of a sequence of words represented with $K$ -dimensional word embeddings $\\lbrace e_1, \\ldots , e_N\\rbrace $ , a question on the document represented as another sequence of words and an answer to the question." ], "highlighted_evidence": [ "There are three main components to the model: 1) input encoder 2) dynamic memory, and 3) output module. We will describe these three modules in details. The input encoder and output module implementations are similar to the Entity Network BIBREF17 and main novelty lies in the dynamic memory. 
We describe the operations executed by the network for a single example consisting of a document with $T$ sentences, where each sentence consists of a sequence of words represented with $K$ -dimensional word embeddings $\\lbrace e_1, \\ldots , e_N\\rbrace $ , a question on the document represented as another sequence of words and an answer to the question." ] } ], "annotation_id": [ "48d2fcec8e2a7967bf3f1ab2c12b0e95c778fd7e" ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks." ], "yes_no": null, "free_form_answer": "", "evidence": [ "The results are shown in Table 1 . The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks." ], "highlighted_evidence": [ " The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks.", "The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks." ] } ], "annotation_id": [ "7090d01d80d3d73861302db34a0bea96bcc9af89" ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "extends memory-augmented neural networks with a relational memory to reason about relationships between multiple entities present within the text. ", "The model is sequential in nature, consisting of the following steps: read text, process it into a dynamic relational memory and then attention conditioned on the question generates the answer. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory." ], "yes_no": null, "free_form_answer": "", "evidence": [ "We describe the RelNet model in this section. Figure 1 provides a high-level view of the model. The model is sequential in nature, consisting of the following steps: read text, process it into a dynamic relational memory and then attention conditioned on the question generates the answer. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory." ], "highlighted_evidence": [ "The model is sequential in nature, consisting of the following steps: read text, process it into a dynamic relational memory and then attention conditioned on the question generates the answer. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory." 
] } ], "annotation_id": [ "121f0702a2eab76c1ad0119ac520adc61edd716c" ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We evaluate the model's performance on the bAbI tasks BIBREF18 , a collection of 20 question answering tasks which have become a benchmark for evaluating memory-augmented neural networks. We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17 . Performance is measured in terms of mean percentage error on the tasks.", "The results are shown in Table 1 . The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks." ], "highlighted_evidence": [ "We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17 .", " The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks." ] } ], "annotation_id": [ "bd36e3e626f515050572af1723aa2049868fe1ec" ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] } ] }
{ "caption": [ "Figure 1: RelNet Model: The model represents the state of the world as a neural turing machine with relational memory. At each time step, the model reads the sentence into an encoding vector and updates both entity memories and all edges between them representing the relations.", "Table 1: Mean % Error on the 20 Babi tasks." ], "file": [ "2-Figure1-1.png", "4-Table1-1.png" ] }
1909.08824
Modeling Event Background for If-Then Commonsense Reasoning Using Context-aware Variational Autoencoder
Understanding events and event-centered commonsense reasoning are crucial for natural language processing (NLP). Given an observed event, it is trivial for humans to infer its intents and effects, while this type of If-Then reasoning still remains challenging for NLP systems. To facilitate this, an If-Then commonsense reasoning dataset, Atomic, is proposed, together with an RNN-based Seq2Seq model to conduct such reasoning. However, two fundamental problems still need to be addressed: first, the intents of an event may be multiple, while the generations of RNN-based Seq2Seq models are always semantically close; second, external knowledge of the event background may be necessary for understanding events and conducting the If-Then reasoning. To address these issues, we propose a novel context-aware variational autoencoder that effectively learns event background information to guide the If-Then reasoning. Experimental results show that our approach improves the accuracy and diversity of inferences compared with state-of-the-art baseline methods.
{ "section_name": [ "Introduction", "Background", "Context-aware Variational Autoencoder", "Context-aware Variational Autoencoder ::: Architecture of CWVAE", "Context-aware Variational Autoencoder ::: Optimizing", "Context-aware Variational Autoencoder ::: Training Details", "Experiments ::: Auxiliary Dataset", "Experiments ::: Baselines", "Experiments ::: Evaluation Metrics ::: Automatic Evaluation", "Experiments ::: Evaluation Metrics ::: Human Evaluation", "Experiments ::: Overall Results", "Experiments ::: Case Study", "Related Work ::: Event-Centered Commonsense Reasoning", "Related Work ::: Variational AutoEncoder-Decoder Based Natural Language Generation", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Recently, event-centered commonsense knowledge has attracted much attention BIBREF0, BIBREF1, BIBREF2, BIBREF3, because of understanding events is an important component of NLP. Given a daily-life event, human can easily understand it and reason about its causes, effects, and so on. However, it still remains a challenging task for NLP systems. This is partly due to most of them are trained for task-specific datasets or objectives, which results in models that are adapt at finding task-specific underlying correlation patterns but have limited capability in simple and explainable commonsense reasoning BIBREF4.", "To facilitate this, BIBREF5 (BIBREF5) build the Event2Mind dataset and BIBREF4 (BIBREF4) present the Atomic dataset, mainly focus on nine If-Then reasoning types to describe causes, effects, intents and participant characteristic about events. Together with these datasets, a simple RNN-based encoder-decoder framework is proposed to conduct the If-Then reasoning.", "However, there still remains two challenging problems. First, as illustrated in Figure FIGREF1, given an event “PersonX finds a new job”, the plausible feeling of PersonX about that event could be multiple (such as “needy/stressed out” and “relieved/joyful”). Previous work showed that for the one-to-many problem, conventional RNN-based encoder-decoder models tend to generate generic responses, rather than meaningful and specific answers BIBREF6, BIBREF7.", "Second, as a commonsense reasoning problem, rich background knowledge is necessary for generating reasonable inferences. For example, as shown in Figure FIGREF1, the feeling of PersonX upon the event “PersonX finds a new job” could be multiple. However, after given a context “PersonX was fired”, the plausible inferences would be narrowed down to “needy” or “stressed out”.", "To better solve these problems, we propose a context-aware variational autoencoder (CWVAE) together with a two-stage training procedure. Variational Autoencoder (VAE) based models have shown great potential in modeling the one-to-many problem and generate diversified inferences BIBREF8, BIBREF9.", "In addition to the traditional VAE structure, we introduces an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (consists of three narrative story corpora and contains rich event background knowledge), to learn the event background information by using the context-aware latent variable. 
Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target (e.g., intents, reactions, etc.).", "Experiments on the Event2Mind and Atomic dataset show that our proposed approach outperforms baseline methods in both the accuracy and diversity of inferences. The code is released at https://github.com/sjcfr/CWVAE." ], [ "Before specifically describing two dataset —- Event2Mind and Atomic used in this paper as well as the If-Then reasoning task, for clarity, we define the following terminologies:", "Base event: the prerequisite event in If-Then reasoning, organized as a verb phrase with a predicate and its arguments, such as the event “PersonX finds a new job” shown in Figure FIGREF1.", "Inference dimension: a particular If-Then reasoning type, e.g., intents, effects of the base event. Details are shown in Table TABREF2 and Table TABREF3.", "Target: the inferential results. For example, as shown in Figure FIGREF1, given a base event “PersonX finds a new job” and one inference dimension “xReact”, the targets could be “relieved” or “needy”. Notice that each inference dimension can have multiple targets.", "Event2Mind Dataset contains 25K base events and 300K targets, annotated through crowdsourcing. Event2Mind is organized in a hierarchical form: each base event has three types of inference dimensions, and given a base event, under one of inference dimensions, several targets may simultaneously exist. Table TABREF2 shows the (base event-inference dimension-target) hierarchical structure through an example from Event2Mind.", "Atomic Dataset Inspired by Event2Mind, the Atomic dataset shares the same hierarchical structure as Event2Mind, while scales up the size of dataset and expands the scope to nine types of inference dimensions. Table TABREF3 shows the (base event-inference dimension-target) hierarchical structure through an example from Atomic. Though Atomic covers the inference dimensions of Event2Mind, the base event collection of Event2Mind is nonidentical to that of Atomic.", "Problem Definition The If-Then reasoning task could be formally defined as a conditional one-to-many generation problem: given a base event $x$ and one inference dimension $d$, the model is required to generate targets $y=f(x, d)$ as close to the ground truths as possible. Both $x$ and $y$ consist of sequence of words: $x=\\lbrace x_1,\\dots , x_{m}\\rbrace $, and $y=\\lbrace y_1,\\dots , y_{n}\\rbrace $, where $m$ and $n$ denotes the length of $x$ and $y$, respectively.", "Conditional Variational Autoencoder The variational autoencoder (VAE) defines a generative framework suited for one-to-many generation problem BIBREF10. While conditional variational autoencoder (CVAE) BIBREF11 is an extension of VAE on the conditional generation problem. As shown in Figure FIGREF5 (a), CVAE characterizes the conditional one-to-many generation problem using three random variables: event $x$, target $y$ and a latent variable $z$, which is used for modeling the latent distribution of semantic over targets given an event. Hence, under a certain inference dimension, with regard to the latent semantic variable $z$, the conditional generation problem could be expressed as $p(y|x)=\\int p(y|x,z)p(z|x)dz$. CVAE models $p(y|x,z)$ and $p(z|x)$ using deep neural networks (parameterized by $\\theta $) $p_{\\theta }(y|x,z)$ and $p_{\\theta }(z|x)$. 
Then as illustrated in Figure FIGREF5 (b), $y$ could be generated from $x$ and $z$.", "CVAE is trained to maximize the conditional likelihood $p(y|x)$, which involves an intractable marginalization over the latent variable $z$. Instead, following BIBREF10 (BIBREF10), a practical way is to introduce another deep network (parameterized by $\\phi $) $q_{\\phi }(z|x,y)$ to approximate the true posterior distribution $p(z|x,y)$ and maximize the evidence lower bound (ELBO) of the log-likelihood function:", "Therefore, CVAE is composed of three neural networks in general. We refer to $p_{\\theta }(z|x)$ as a prior network, $q_{\\phi }(z|x,y)$ as a recognition network, and $p_{\\theta }(y|x,z)$ as a neural decoder." ], [ "Traditional CVAE can model the event-target relation. In other words, given an observed event, CVAE can generate its corresponding targets. While in this paper we model the If-Then reasoning as a [(background), event]-target process. It means that in addition to the observed event, we also want to involve the event background knowledge (which can be learned from event contexts) to generate the reasonable targets.", "To this end, we propose a context-aware variational autoencoder (CWVAE), with two additional latent variables: a context-acquiring latent variable $z_c$ to directly acquire context information, and a context-aware latent variable $z_{c^{\\prime }}$ to learn background knowledge from $z_c$, as shown in Figure FIGREF6 (a). However, the event context information is absent in the Event2Mind and Atomic dataset. To learn from the external event context information, we design the following two-stage training procedure for CWVAE.", "Pretrain: Learning Event Background Knowledge from Auxiliary Dataset In the pretrain stage, CWVAE is trained on three narrative story corpora with rich event context information. As shown in Figure FIGREF6 (a), context-acquiring latent variable $z_c$ is directly conditioned on the context $c$. Hence, $z_c$ could be employed for acquiring background knowledge from event contexts. Then, we minimize the distance between $z_c$ and the context-aware latent variable $z_{c^{\\prime }}$, by which the event background knowledge is transferred from $z_c$ to $z_{c^{\\prime }}$.", "Finetune: Adapt Event Background Knowledge to Each Inference Dimension In the finetune stage, as shown in Figure FIGREF6 (b), CWVAE is trained on the Event2Mind and Atomic dataset without the event context information. Pretrained CWVAE is finetuned to learn the specific inferential knowledge of each inference dimension. After the training procedure, as shown in Figure FIGREF6 (c), samples of $z$ is generated based on $x$ and samples of $z_{c^{\\prime }}$, where $z_{c^{\\prime }}$ contains rich event background knowledge helpful for If-Then reasoning." 
], [ "As shown in Figure FIGREF8, CWVAE is mainly composed of four parts: a neural encoder that provides distributed representations of base events/targets, a recognition network for inferring $q_{\\phi }(z|x,y)$, $q_{\\phi }(z_c|x,c)$ and $q_{\\phi }(z|z_{c^{\\prime }}, x)$, a prior network for modeling $p_{\\theta }(z_{c^{\\prime }}|x)$ and $p_{\\theta }(z|x, z_{c^{\\prime }})$, and a neural decoder that integrates the information from $z$ and $z_{c^{\\prime }}$ to generate targets.", "Neural Encoder We employ a bidirectional GRU as neural encoder, which encodes context $c$, event $x$ and target $y$ into distributed representations $h^c=\\lbrace h_1^c,\\dots ,h_{l_c}^c\\rbrace $, $h^x=\\lbrace h_1^x,\\dots ,h_{l_x}^x\\rbrace $ and $h^y=\\lbrace h_1^y,\\dots ,h_{l_y}^y\\rbrace $, where $l_c$, $l_x$ and $l_y$ is the length of $c$, $x$ and $y$, respectively.", "Recognition Network The recognition network models $q_{\\phi }(z|x,y)$, $q_{\\phi }(z_c|x,c)$, $q_{\\phi }(z|z_{c^{\\prime }}, x)$ based on $h^x$, $h^y$ and $h^c$.", "Following traditional VAE, the above-mentioned three distributions are assumed to be multivariate Gaussian distribution with a diagonal covariance structure:", "where $\\mu $ denotes the mean of the distribution, $\\sigma $ denotes the standard deviation of the distribution, and $I$ denotes the identity matrix.", "Given $h^x$, $h^y$ and $h^c$, we propose a novel attention-based inferer (ABI) module to estimate the mean and standard deviation of $q_{\\phi }(z_{c}|x,c)$, $q_{\\phi }(z_{c^{\\prime }}|x,y)$ and $q_{\\phi }(z|x,y)$:", "Briefly, through the attention mechanism, ABI can capture the semantic interaction between input sequences, and estimate the parameters of distributions based on it. We will introduce the specific structure of ABI in below.", "Prior Network Prior Network models $p_{\\theta }(z_{c^{\\prime }}|x)$ and $p_{\\theta }(z|x, z_{c^{\\prime }})$ based on $h^x$. The distribution of $p_{\\theta }(z_{c^{\\prime }}|x)$ and $p_{\\theta }(z|x, z_{c^{\\prime }})$ are still assumed to be multivariate Gaussian, whereas the parameters are different:", "where $\\mu ^{^{\\prime }}$ denotes the mean of the distribution, $\\sigma ^{^{\\prime }}$ denotes the standard deviation of the distribution and $I$ denotes the identity matrix.", "Then the attention-based inferer module is still employed to estimate parameters of distributions:", "Neural Decoder Given the base event $x$, the semantic latent variable $z$, and the context-aware latent variable $z_{c^{\\prime }}$, the neural decoder defines the generation probability of $y$ as following:", "where $p(y_j|y<j, z, z_{c^{\\prime }}, x)=g(y_{j-1}, s_{j-1}, e_j)$, $g(\\cdot )$ is an attention-based feed forward model, $e_j=\\sum _i \\alpha _{ji}h_i^{x}$ is the context vector and $s_{j-1}$ is the hidden state of the decoder. We obtain $g(\\cdot )$ and $e_j$ the same way as BIBREF12 (BIBREF12). Whereas our decoder differs from BIBREF12 (BIBREF12) in that our model integrates the context-aware latent variable $z_{c^{\\prime }}$ and semantic latent variable $z$ in the computation of $s_j=\\mathrm {GRU}([E_{yj};s_{j-1},z,z_{j-1}])$, where $E_{yj}$ is the word embeddings of target words.", "Note that through concatenating $z$ and $z_{c^{\\prime }}$ with $E_{yj}$ and $s_{j-1}$, $s_j$ could be affected by context-aware latent variable $z_{c^{\\prime }}$ and semantic latent variable $z$. This allows model to directly access to the event background knowledge from $z_{c^{\\prime }}$. 
In addition, the randomness of $z$ and $z_{c^{\\prime }}$ would increase the diversity of model generation.", "Attention-based Inferer Attention mechanism has shown strong ability in capturing semantic interactions BIBREF13. Inspired by the co-attention mechanism BIBREF14, we propose an attention-based inferer (ABI) to estimate the mean and standard deviation of a distribution belongs to $p_{\\theta }(\\cdot )$ or $q_{\\phi }(\\cdot )$ by capturing semantic interactions of input sequences.", "Specifically, given two input sequences (e.g., representations of contexts and events) $a=\\lbrace a_1,\\dots ,a_{l_a}\\rbrace $ and $b=\\lbrace b_1,\\dots ,b_{l_b}\\rbrace $ with length $l_a$ and $l_b$, we first obtain the attention scores from each side through:", "where $W_a \\in \\mathbb {R}^{d\\times d_a}$ and $W_b \\in \\mathbb {R}^{d\\times d_b}$ are parameter weights.", "With these attention scores, the context vectors of both sequences are given by:", "Then we perform a mean pooling operation on context vectors of both sequences:", "To obtain the mean and standard deviation, the pooled context vectors $\\bar{c^a}$ and $\\bar{c^b}$ which carry semantic interaction between two sequences, are concatenated and projected into a latent semantic space through a nonlinear transformation:", "Finally the mean and standard deviation are generated through a nonlinear transformation over $h_z$:" ], [ "With the incorporation of $z_{c^{\\prime }}$, the original loglikelihood could be decomposed as:", "Then following traditional CVAE, the ELBO of CWVAE is defined as follows:", "which is the objective function at the finetune stage.", "While in the pretrain stage, as we aim to learn background knowledge through minimizing the distance between $z_c$ and $z_{c^{\\prime }}$, in addition to $L^{ELBO}$, a context-aware regulation term is introduced:", "where the context aware regularization term is the KL distance between $z$ and $z_{c^{\\prime }}$. Through minimizing the context aware regularization term, we aim to pass event context knowledge from $z_c$ to the context aware latent variable $z_{c^{\\prime }}$." ], [ "To test the performance of CWVAE, we split the Event2Mind and Atomic dataset into training, development and test sets (80%, 10%, 10%) in the same way as BIBREF5 (BIBREF5) and BIBREF4 (BIBREF4), respectively. We initialize the embedding layer from 300d GloVe word embeddings. The neural encoder is chosen to be biGRU with 300 hidden units. For the ABI module, size of $W_a$ and $W_b$ is set to be $100 \\times d_a$ and $100 \\times d_b$ respectively. The dimension of $z_c$, $z_{c^{\\prime }}$ and $z$ is all set as 40. The neural decoder is set to be GRU with 300d hidden state. Regulation coefficient $\\lambda $ of context-aware regulation term is set to be 0.1. Models are trained using an Adam optimizer BIBREF15 with a learning rate of 0.001." ], [ "The auxiliary dataset is built upon three human-written story corpora: ROCStories BIBREF16, VIST BIBREF17 and WritingPrompts BIBREF18. ROCStories and VIST are composed of short stories with five sentences. We filter out stories of more than 1,000 words in WritingPrompts, and cut the remaining stories into five-sentence-paragraphs.", "For each five-sentence-paragraph, we define the first three sentences as contexts of the base event, the fourth sentence as the base event, and the fifth sentence as the inference target. 
For example, as shown in Table TABREF25, the first three sentences describe a context that Jason was unsatisfied about his job and applied for a new job. Hence, after happening the event “he got the job”, a plausible react about the event could be “jason was much happier at his new job”. In total, the auxiliary dataset contains 192,316 $(context, event, target)$ triples." ], [ "We compared our proposed model with the following four baseline methods:", "RNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.", "Variational Seq2Seq combines a latent variable with the encoder-decoder structure through converting the last hidden state of RNN encoder into a Gaussian distributed latent variable BIBREF8.", "VRNMT Propose by BIBREF19 (BIBREF19), VRNMT combines CVAE with attention-based encoder-decoder framework through introduces a latent variable to model the semantic distribution of targets.", "CWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.", "Note that, for each baseline method, we train distinct models for each distinct inference dimension, respectively." ], [ "We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of model to regenerate the exact targets, which is particular suitable for evaluating the model performance on one-to-many problem BIBREF20. Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. The distinct is normalized to $[0, 1]$ by dividing the total number of generated tokens." ], [ "Since automatic evaluation of generations is still a challenging task BIBREF22, we also conduct human evaluations on the model performance. Five human experts are employed to evaluate the coherence, diversity and fluency of generated targets. Experts are asked to vote for if a generation is fluent or coherent for each generated target, and give a 1-5 score for the diversity of generations. For both Event2Mind and Atomic datasets, 100 events are randomly selected from the test set. For each method, top 10 generated targets of each base event are used for evaluation. Finally we report three overall averaged scores of coherence, diversity and fluency on both datasets, respectively." ], [ "We list the perplexity and BLEU score of CWVAE and baseline methods on Event2Mind and Atomic in Table TABREF31 and Table TABREF33, respectively, and show the distinct-1 and distinct-2 score on Event2Mind and Atomic in Table TABREF32 and Table TABREF34, respectively. We find that:", "(1) As shown in Table TABREF32 and Table TABREF34, comparison between RNN-based Seq2Seq and variational-based methods, including Variational Seq2Seq, VRNMT, CWVAE-unpretrained and CWVAE shows that, variational-based methods could increase the diversity of generations. This confirms one of our motivations that variational-based methods could capture the latent semantic distribution within targets and increase the diversity of If-Then reasoning.", "(2) Comparing CWVAE-unpretrained with other baseline methods shows that, in general CWVAE improves the accuracy and diversity on both dataset. These results indicate the efficiency of CWVAE in capturing the latent semantic distribution of targets, and generate more reasonable inferential results.", "(3) Comparison between CWVAE and CWVAE-unpretrained shows that the pretrain stage could enhance the performance of CWVAE in both the accuracy and diversity. 
This is mainly because event knowledge could offer guidance for If-Then reasoning. In the pretrain stage, CWVAE could capture the event background knowledge through the context-aware latent variable, and such knowledge could be adapted to our task through the finetune stage.", "To further evaluate the effectiveness of our proposed approach, we also conduct human evaluations, the results of which are shown in Table TABREF39 and Table TABREF40. On both datasets, CWVAE-based methods achieve consistently better coherence, diversity and fluency. Compared with CWVAE-Unpretrained, the pretrain procedure improves the performance on coherence and fluency. The main reasons are twofold: first, CWVAE has an advantage in capturing the semantic distribution of targets; second, the event background knowledge learned in the pretrain stage is helpful for If-Then reasoning." ], [ "Table TABREF41 provides an example of model generations given the base event “PersonX works tirelessly” and the inference dimension “xIntent”. The generations under CWVAE mainly contain four kinds of semantics: (1) be productive, (2) finish his work soon, (3) accomplish goal, (4) earn more money. In contrast, the semantics of the generations from the baseline RNN-based Seq2Seq model are relatively limited. Furthermore, the first three kinds of semantics overlap with the three ground-truth targets, and the fourth is in accordance with daily-life commonsense. Compared to the RNN-based Seq2Seq model, our approach increases the diversity and rationality of generations while maintaining accuracy." ], [ "Understanding events and constructing event-centered commonsense knowledge are crucial to many NLP applications, such as intention recognition BIBREF23 and dialog generation BIBREF24. Recently, a growing number of studies have focused on event-centered commonsense reasoning, mainly concentrating on two areas: script event prediction and story ending generation/choosing.", "Script event prediction concerns the temporal relationships between script events BIBREF25, and requires models to choose the correct subsequent triple-organized event among candidates BIBREF2. Prior work mainly focused on modeling event pairs BIBREF25, event chains BIBREF2 and event graphs BIBREF3 to predict the subsequent event. Story ending generation focuses on generating plausible story endings BIBREF16, which requires models to understand the story context and keep generated endings logically consistent with it BIBREF26, BIBREF27. The above tasks mainly investigate the logical order of events, whereas the If-Then reasoning task focuses on inferring the mental states of event participants." ], [ "VAE BIBREF10 has been widely applied in various text generation tasks, such as dialogue and machine translation. In dialogue generation, BIBREF9 (BIBREF9) adapts VAE to the encoder-decoder framework to model the latent semantic distribution of answers, which can increase the diversity of generations. For machine translation, BIBREF19 (BIBREF19) and BIBREF28 (BIBREF28) employ a latent variable to capture the semantic interaction between the source and target sentence, and regard the latent variable as a supplement to the attention mechanism. BIBREF29 (BIBREF29) use a latent variable to model topic distributions in text generation. In this paper, we introduce an additional context-aware latent variable to effectively learn background knowledge and conduct If-Then reasoning under its guidance."
], [ "In this paper, we propose a novel context-aware VAE (CWVAE) framework with two training stages for If-Then commonsense reasoning. By introducing an additional context-aware latent variable, CWVAE is able to learn external background knowledge, and conduct If-Then reasoning under its guidance. In the pretrain stage, CWVAE learns event background knowledge, then in the finetune stage CWVAE adapts such knowledge to each inference dimension. Experimental results demonstrate that CWVAE outperforms baseline methods in both the accuracy and diversity of generations." ], [ "We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the National Key Research and Development Program of China (SQ2018AAA010010), the National Key Research and Development Program of China (2018YFB1005103), the National Natural Science Foundation of China (NSFC) via Grant 61702137." ] ] }
{ "question": [ "How do they measure the diversity of inferences?", "By how much do they improve the accuracy of inferences over state-of-the-art methods?", "Which models do they use as baselines on the Atomic dataset?", "How does the context-aware variational autoencoder learn event background information?", "What is the size of the Atomic dataset?" ], "question_id": [ "7d59374d9301a0c09ea5d023a22ceb6ce07fb490", "8e2b125426d1220691cceaeaf1875f76a6049cbd", "42bc4e0cd0f3e238a4891142f1b84ebcd6594bf1", "fb76e994e2e3fa129f1e94f1b043b274af8fb84c", "99ef97336c0112d9f60df108f58c8b04b519a854" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ " ", " ", " ", " ", " " ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "by number of distinct n-grams", "evidence": [ "We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of model to regenerate the exact targets, which is particular suitable for evaluating the model performance on one-to-many problem BIBREF20. Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. The distinct is normalized to $[0, 1]$ by dividing the total number of generated tokens." ], "highlighted_evidence": [ "Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. " ] } ], "annotation_id": [ "7f7d9a78c51f1de52959ee1634d8d01fc56c9efd" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "ON Event2Mind, the accuracy of proposed method is improved by absolute BLUE 2.9, 10.87, 1.79 for xIntent, xReact and oReact respectively.\nOn Atomic dataset, the accuracy of proposed method is improved by absolute BLUE 3.95. 4.11, 4.49 for xIntent, xReact and oReact.respectively.", "evidence": [ "We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of model to regenerate the exact targets, which is particular suitable for evaluating the model performance on one-to-many problem BIBREF20. Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. The distinct is normalized to $[0, 1]$ by dividing the total number of generated tokens.", "FLOAT SELECTED: Table 4: Average perplexity and BLEU score (reported in percentages) for the top 10 generations under each inference dimension of Event2Mind. The the best result for each dimension is emboldened.", "FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened." 
], "highlighted_evidence": [ "Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. ", "FLOAT SELECTED: Table 4: Average perplexity and BLEU score (reported in percentages) for the top 10 generations under each inference dimension of Event2Mind. The the best result for each dimension is emboldened.", "FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened." ] } ], "annotation_id": [ "5f5d24e05be705e9487a2032e7c9a8e3c69d41d7" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "RNN-based Seq2Seq", "Variational Seq2Seq", "VRNMT ", "CWVAE-Unpretrained" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We compared our proposed model with the following four baseline methods:", "RNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.", "Variational Seq2Seq combines a latent variable with the encoder-decoder structure through converting the last hidden state of RNN encoder into a Gaussian distributed latent variable BIBREF8.", "VRNMT Propose by BIBREF19 (BIBREF19), VRNMT combines CVAE with attention-based encoder-decoder framework through introduces a latent variable to model the semantic distribution of targets.", "CWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.", "Note that, for each baseline method, we train distinct models for each distinct inference dimension, respectively.", "FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened." ], "highlighted_evidence": [ "We compared our proposed model with the following four baseline methods:\n\nRNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.\n\nVariational Seq2Seq combines a latent variable with the encoder-decoder structure through converting the last hidden state of RNN encoder into a Gaussian distributed latent variable BIBREF8.\n\nVRNMT Propose by BIBREF19 (BIBREF19), VRNMT combines CVAE with attention-based encoder-decoder framework through introduces a latent variable to model the semantic distribution of targets.\n\nCWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.\n\nNote that, for each baseline method, we train distinct models for each distinct inference dimension, respectively.", "FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened." ] } ], "annotation_id": [ "667d47b73133321cfe695db94c2418e8b8c4d9bb" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": " CWVAE is trained on an auxiliary dataset to learn the event background information by using the context-aware latent variable. 
Then, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target.", "evidence": [ "In addition to the traditional VAE structure, we introduces an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (consists of three narrative story corpora and contains rich event background knowledge), to learn the event background information by using the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target (e.g., intents, reactions, etc.)." ], "highlighted_evidence": [ "In addition to the traditional VAE structure, we introduces an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (consists of three narrative story corpora and contains rich event background knowledge), to learn the event background information by using the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target (e.g., intents, reactions, etc.)." ] } ], "annotation_id": [ "d01baf34ae2b5ff6b706bad6ad645c4da7d42d1b" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "122017054a7e7b46d0ad276b7a3e5abd76b463ba" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ] }
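The answers in this record describe a conditional variational autoencoder whose recognition (posterior) network and prior network share a latent variable, pretrained on an auxiliary corpus and then fine-tuned. As a rough sketch of that generic mechanism — not the authors' actual CWVAE implementation — the training objective combines a reconstruction term with a KL divergence between two diagonal Gaussians; every dimension and value below is an invented placeholder.

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over dimensions.
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def negative_elbo(recon_log_likelihood, mu_q, logvar_q, mu_p, logvar_p):
    # Negative ELBO = -E[log p(target | z, context)] + KL(posterior || prior).
    kl = kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p)
    return -recon_log_likelihood + kl, kl

# Toy numbers: a 16-dimensional latent variable, posterior slightly shifted from the prior.
rng = np.random.default_rng(0)
mu_q, logvar_q = rng.normal(0.0, 0.1, 16), np.full(16, -1.0)
mu_p, logvar_p = np.zeros(16), np.zeros(16)
loss, kl = negative_elbo(recon_log_likelihood=-42.0, mu_q=mu_q, logvar_q=logvar_q,
                         mu_p=mu_p, logvar_p=logvar_p)
print(f"KL term: {kl:.3f}   negative ELBO: {loss:.3f}")
```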
{ "caption": [ "Figure 1: A illustration of two challenging problems in IfThen reasoning. (a) Given an observed event, the feelings about this event could be multiple. (b) Background knowledge is need for generating reasonable inferences, which is absent in the dataset (marked by dashed lines).", "Table 1: Hierarchical structure of Event2Mind dataset. For specific inference dimensions, “x” and “o” refers to PersonX and others respectively.", "Table 2: Hierarchical structure of Atomic dataset. For specific inference dimensions, “x” and “o” refers to PersonX and others respectively.", "Figure 2: Illustration of inference and generation process of CVAE in a directed graph. Dashed lines represent the inference of z. Solid lines represent the generation process.", "Figure 3: Illustration of pretrain, finetune and generation process of CWVAE in a directed graph. Dashed lines represent the inference of z, zc and zc′ . Solid lines represent the generation process. Red circle denotes the context-aware latent variable.", "Figure 4: Architecture of CWVAE. We mark Neural encoder in green, prior network in blue, recognition network in brown and neural decoder in orange, respectively.", "Table 3: An example for the construction of auxiliary dataset. For a five-sentence-paragraph, the first three sentences are taken as event context, while the fourth and fifth sentence is taken as base event and target respectively.", "Table 4: Average perplexity and BLEU score (reported in percentages) for the top 10 generations under each inference dimension of Event2Mind. The the best result for each dimension is emboldened.", "Table 5: Distinct-1 and distinct-2 scores for the top 10 generations under each inference dimension of Event2Mind. The the best result for each dimension is emboldened.", "Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened.", "Table 7: Distinct-1 and distinct-2 scores for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened.", "Table 9: Human evaluation results on Atomic.", "Table 8: Human evaluation results on Event2Mind.", "Table 10: An example of inferences made by CWVAE and RNN-based Seq2Seq model under inference dimension “xIntent”." ], "file": [ "1-Figure1-1.png", "2-Table1-1.png", "2-Table2-1.png", "3-Figure2-1.png", "3-Figure3-1.png", "4-Figure4-1.png", "6-Table3-1.png", "6-Table4-1.png", "6-Table5-1.png", "7-Table6-1.png", "7-Table7-1.png", "7-Table9-1.png", "7-Table8-1.png", "8-Table10-1.png" ] }
1708.08615
Comparing Human and Machine Errors in Conversational Speech Transcription
Recent work in automatic recognition of conversational telephone speech (CTS) has achieved accuracy levels comparable to human transcribers, although there is some debate how to precisely quantify human performance on this task, using the NIST 2000 CTS evaluation set. This raises the question what systematic differences, if any, may be found differentiating human from machine transcription errors. In this paper we approach this question by comparing the output of our most accurate CTS recognition system to that of a standard speech transcription vendor pipeline. We find that the most frequent substitution, deletion and insertion error types of both outputs show a high degree of overlap. The only notable exception is that the automatic recognizer tends to confuse filled pauses ("uh") and backchannel acknowledgments ("uhhuh"). Humans tend not to make this error, presumably due to the distinctive and opposing pragmatic functions attached to these words. Furthermore, we quantify the correlation between human and machine errors at the speaker level, and investigate the effect of speaker overlap between training and test data. Finally, we report on an informal"Turing test"asking humans to discriminate between automatic and human transcription error cases.
{ "section_name": [ "Introduction", "Measuring Human Error", "Machine Transcription System", "Error Distribution and Correlation", "Error types", "A Turing-like Experiment", "Conclusions", "Acknowledgments" ], "paragraphs": [ [ "Automatic speech recognition (ASR) systems have seen remarkable advances over the last half-decade from the use of deep, convolutional and recurrent neural network architectures, enabled by a combination of modeling advances, available training data, and increased computational resources. Given these advances, our research group recently embarked on an effort to reach human-level transcription accuracy using state-of-the-art ASR techniques on one of the genres of speech that has historically served as a difficult benchmark task: conversational telephone speech (CTS). About a decade ago, CTS recognition had served as an evaluation task for government-sponsored work in speech recognition, predating the take-over of deep learning approaches and still largely in the GMM-HMM modeling framework BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . It had proven to be a hard problem, due to the variable nature of conversational pronunciations, speaking styles, and regional accents. Seide at al. BIBREF6 demonstrated that deep networks as acoustic models could achieve significant improvements over GMM-HMM models on CTS data, and more recently researchers at IBM had achieved results on this task that represented a further significant advance BIBREF7 , BIBREF8 over those from a decade ago.", "The goal of reaching “human parity” in automatic CTS transcription raises the question of what should be considered human accuracy on this task. We operationalized the question by submitting the chosen test data to the same vendor-based transcription pipeline that is used at Microsoft for production data (for model training and internal evaluation purposes), and then comparing the results to ASR system output under the NIST scoring protocol. Using this methodology, and incorporating state-of-the-art convolutional and recurrent network architectures for both acoustic modeling BIBREF9 , BIBREF10 , BIBREF7 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 and language modeling BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 with extensive use of model combination, we obtained a machine error rate that was very slightly below that of the human transcription process (5.8% versus 5.9% on Switchboard data, and 11.0% versus 11.3% on CallHome English data) BIBREF19 . Since then, Saon et al. have reported even better results, along with a separate transcription experiment that puts the human error rate, on the same test data, at a lower point than measured by us (5.1% for Switchboard, 6.8% for CallHome) BIBREF20 .", "In this paper, we address the question whether there are major qualitative differences between the results of human transcriptions of conversational speech and those obtained by ASR systems, based on a detailed analysis of the data and system output from our human parity experiment BIBREF19 . 
The question becomes important if ASR is to replace humans as the first step in fully automatic speech understanding systems—if machine transcription errors are qualitatively different from humans then we would have to worry about the possible effects on downstream processing, and mitigation techniques so as to still achieve an overall “natural” user experience (e.g., in real-time conversational speech translation, such as in the Skype application).", "We start by discussing why human error rate on this task must themselves be considered a moving target. Next we ask whether speech that is difficult for ASR also tends to be hard for humans to transcribe (and vice-versa), and whether the speaker overlap with the training data that is found in a portion of the test data has a noticeable effect on the result, as was suggested in BIBREF20 . We then look at the most frequent word error types exhibited by the two transcription systems (human and machine), and finally report on a very preliminary but still informative experiment to see if humans could tell apart the transcription source (again, human versus machine), based on the errors they make." ], [ "The assessment of human transcription error on conversational speech has been somewhat murky. A widely cited figure is 4% word error rate (WER), based on BIBREF21 . However, the reference therein is only a “personal communication” without further data. The Linguistics Data Consortium quantified inter-transcriber disagreement for the NIST 2003 CTS evaluation data at between 4.1% and 4.5% with very careful multiple transcriptions BIBREF22 . For “quick transcription”, the disagreement increased to 9.6%. The CTS data in the NIST study is from the Switchboard (SWB) and Fisher corpora, and is therefore comparable to the SWB portion of our data, i.e., coming from telephone conversations between strangers discussing a general-interest topic. Still, the exact dataset is different, which may account for some of the discrepancy with error rates measured on the NIST 2000 set used by us (5.9%) and IBM (5.1%), although the numbers are remarkably close.", "As briefly described in the introduction, we measured human performance by leveraging an existing pipeline in which Microsoft data is transcribed on a weekly basis. This pipeline uses a large commercial vendor to perform two-pass transcription. In the first pass, a transcriber works from scratch to transcribe the data. In the second pass, a second listener monitors the data to do error correction. Dozens of hours of test data are processed in each batch, with no special instructions to the transcribers. The waveform segments, roughly corresponding to utterances, making up the test set are processed separately. This makes the task easier since the speakers are more clearly separated, but also more difficult since the two sides of the conversation are not interleaved and context may be missing. We performed that text normalization on the human transcripts to remove systematic discrepancies with the NIST scoring references. (Since this was done with some amount of trial and error it effectively was “cheating” for the benefit of the human transcribers.) We then applied the NIST scoring tools to obtain word error rates of 5.9% on the SWB portion, and 11.3% on the CallHome (CH) portion of the NIST 2000 test set. The latter corpus, unlike Switchboard, consists of conversations between friends and family, without seed topic, which would account for the much higher overall error rate. 
Clearly our method was not designed to achieve the highest possible human transcription accuracy; instead, as pointed out in BIBREF19 , our goal was to establish a benchmark corresponding to industry-standard (i.e. high-volume) professional transcript production.", "The authors in BIBREF20 undertook to measure human error on the same dataset, but using a more involved process. The major differences were: (1) The transcription vendor was cognizant of the experiment and actively involved. (2) Transcribers were chosen based on past performance and familiarized with the conventions used by LDC in generating the reference transcripts. (3) Three independent, parallel transcribers were used, plus a fourth one for 2nd-pass quality control (QC) of the 1st-pass output. All in all, the transcribers performed roughly 12 to 18 listening passes. (4) The final output was obtained by choosing the transcriber (with QC) who had obtained the lowest WER on the test data. As noted earlier, the resulting WERs were 5.1% and 6.8%, respectively. The considerably lower estimate for CH could be a result of the transcribers having access to the entire conversation (as per personal communication with the authors). This would be especially helpful in transcribing unfamiliar vocabulary and speaking styles (allowing the transcriber to “adapt” to the data more effectively).", "Clearly the IBM experiment made a much more thorough effort to probe the boundaries of human accuracy, and may in fact have come close to the inter-transcriber agreement previously measured by LDC on a different data set. However, it is important to realize that further improvements on the human side are no doubt achievable. For example, the number of transcribers could be scaled up further, or they could be allowed to confer with each other, to resolve disagreements. This raises the question of where to draw the line on human effort.", "Finally, it is important to realize that conversational speech has a high degree of inherent ambiguity. For example, conversational pronunciations are highly variable and often reduced BIBREF23 . Another source of ambiguity is the lack of context and knowledge shared by the speakers (especially in the case of CH). In the presence of inherent ambiguity, inter-transcriber agreement can be improved by agreed-upon disambiguation rules, although this would not necessarily reflect true agreement based on speech understanding." ], [ "The details of our conversational speech recognition system are described elsewhere BIBREF19 , so we only give a brief summary here. The system employs independent decodings by diverse acoustic models, including convolutional neural net (CNN) and bidirectional long short-term memory (BLSTM) models that differ by model architecture, number of senones, amount of training data, and other metaparameters. Decoding uses a pruned 4-gram N-gram language model (LM) to generate lattices, which are then expanded into 500-best lists using a larger N-gram LM. The N-best lists are rescored with multiple LSTM-LMs operating in forward and backward directions. Model scores are combined log-linearly at the utterance level and converted to posterior probabilities represented as word confusion networks. The various subsystems making up the final system are selected in a greedy search, and their weights are optimized via an expectation-maximization algorithm, on development data. 
The acoustic training data comprises all the publicly available CTS data (about 2000 hours), while the LMs are additionally trained on Broadcast News and Web data from U. Washington. The individual subsystems (based on different acoustic models) achieve word error rates between 6.4% and 7.7% on the Switchboard evaluation set, and between 12.2% and 17.0% on the CallHome portion. Combined, the system achieves 5.8% and 11.0% WER, respectively." ], [ "We note in passing that machine and human transcription WERs do not differ significantly according to the Wilcoxon and Matched Pairs Sentence Segment Word Error tests as applied by NIST, nor do they differ according to a Sign test comparing error counts at the utterance level.", "A first high-level question regarding the relation between word errors by machine and human transcribers is whether difficulty in one predicts difficulty in the other. Figure FIGREF1 shows scatter plots of speaker-level error rates (machine vs. human), separated by corpus. Each corpus subset has 40 conversation sides.", "Clearly the errors at that level are correlated, with INLINEFORM0 for SWB and INLINEFORM1 for CH. This suggests that properties of the speech, either as a function of the content, the speaker, or the channel (each speaker occurs in exactly one test conversation), cause errors for both machine and human transcription.", "We observe that the CH data has two speakers with outlier machine error rates (37.5% and 64.7% WER, solid red dots in Figure FIGREF1 ). These correspond to secondary speakers in their respective conversation sides, each with only a fraction of the speech of the dominant speaker. Note that the ASR system processes each conversation assuming only a single speaker per side. If we remove these outliers, the machine-human error correlation on CH increases to INLINEFORM0 . With secondary speakers excluded, we can also observe that the machine error rates cluster tighter than the human ones in both corpora (SWB: machine INLINEFORM1 vs. human INLINEFORM2 ; CH: machine INLINEFORM3 vs. human INLINEFORM4 ).", "In BIBREF20 it was suggested that one of the reasons for the much higher error rate on CH compared to SWB was that 36 of the 40 SWB test speakers occur in the portion of the SWB corpus that is used in training (due to what we surmise to be an oversight in the selection of the NIST 2000 test set). To assess this hypothesis we singled out the four speakers in the SWB portion that are not found in the training set; these are shown as solid black circles in Figure FIGREF1 . At first, it seems that the speaker-averaged WER for the “seen” speakers (machine WER 5.9%) is indeed much lower than for the speakers not found in training (7.5%). However, we can safely attribute this to bad luck and small sample size. The average machine WER of 7.5% for “unseen” speakers is well within one standard deviation of the “seen” speakers' WER distribution ( INLINEFORM0 ), and more tellingly, almost exactly the same relative difference in WERs between “seen” and “unseen” speakers is observed for human transcriptions (6.0% versus 7.7%). Clearly the human transcribers did not have the benefit of training on the “seen” speakers, so the difference must be due to the intrinsic difficulty of the speakers, which affects both transcription systems." ], [ "Tables TABREF3 – TABREF5 show the top ten types of substitutions, deletions and insertions for both ASR and human transcripts. 
Inspection reveals that the same short function words, discourse markers and filled pauses appear in the top ten errors for both systems. There is one notable exception, however. The top substitution error for the ASR system involves misrecognition of filled pauses (“%hesitation”, a word class label covering “uh” and “um” in various spellings) as backchannel acknowledgments (“%bcack”, standing for “uhhuh”, “mhm”, etc.). The same substitution error is much less frequent in human transcripts.", "A possible explanation for this asymmetry lies in the discourse functions of filled pauses and backchannels. Filled pauses serve to either claim or retain the floor, signaling that the speaker wants to either start or continue speaking. Backchannels, on the other hand, acknowledge that the speaker is listening, and that the other speaker should carry on. Since the two classes of words thus have exactly opposite functions in turn management, it stands to reason that humans are keenly aware of their differences and use all available phonetic, prosodic, and contextual cues to distinguish them. Our ASR system, by contrast, uses only its standard acoustic-phonetic and language models. Modeling dialog context in particular would be expected to improve this shortcoming." ], [ "Having established that human and machine transcriptions are quite similar in several aspects, including the word token types involved, we were wondering if higher-level error patterns could distinguish the two systems. For example, one might expect that human misrecognitions are guided by a strong “human” language and understanding model, whereas machine errors might be more likely to generate syntactic and semantic nonsense. To get at this question we designed a specialized version of the classic Turing test, in the sense that a human judge is asked to interact with a system with the goal of estimating whether it is underpinned by human or artificial “intelligence.” In our case, the task involved inspecting one randomly chosen utterance from the test set at a time, with a side-by-side display of the reference transcript, the human transcript, and the ASR output (after the text normalizations that are part of the scoring protocol). Only utterances having at least one transcription error and a discrepancy between the two versions are presented. Discrepancies between the transcript versions are highlighted, and the error type (substitution, insertion, deletion) is visually coded as well, as shown in Figure FIGREF7 .", "We ran this informal experiment during four days on the exhibitor floor of the 2017 IEEE ICASSP conference in New Orleans. The players were not formally recruited or characterized, but consisted of conference attendees who for the most part had some background or experience in speech processing. Subjects were introduced to the test by explaining the research background, and were allowed to play as many trials as they wanted. Out of a total of 353 trials, subjects identified the human transcript correctly 188 times, for an overall success rate of 53%. The successes included occasional gimmes like human misspellings or the asymmetry in the filled pause/backchannel substitution (which we pointed out to the subjects). According to a binomial test, this success rate does not differ significantly from the 50% chance rate ( INLINEFORM0 , one-tailed). 
While this result is obviously quite preliminary, it was a good demonstration that it is not easy distinguishing machine from human errors, even for technically sophisticated observers." ], [ "We have discussed methodological issues and reported first findings when comparing automatic conversational speech transcriptions to human performance, using data generated by our recent efforts to reach human parity in CTS recognition. While an exact characterization of the human benchmark remains a moving target that is subject to debate, our results so far have shown that machine transcription errors track those made by humans in several important aspects. At the speaker (as well as corpus) level the two error rates are strongly correlated, suggesting that common underlying factors in the speech data determine transcription difficulty for both humans and ASR systems. (A detailed characterization of those factors has precedent in ASR research and should be revisited while also considering human performance.) A partial overlap of Switchboard training and test speakers seems to have no major effect on error rates. We also find that the most frequent error patterns involve the same short function words and discourse particles for both humans and machines. The one notable exception is that ASR tends to confuse filled pauses and backchannels, a functional distinction that humans need to be very good at pragmatically. An informal Turing-like test also demonstrated that error patterns in the two types of transcriptions are not obviously distinguishable. Overall, we conclude that recent advances in ASR technology have not only achieved remarkable levels of accuracy, but also generate results that are qualitatively surprisingly similar to professional human transcriber output." ], [ "We thank our coauthors and collaborators on the Human Parity project: X. Huang, F. Seide, M. Seltzer, W. Xiong, D. Yu, and G. Zweig. Thanks to K. Riedhammer for sharing metadata on train/test speaker overlap." ] ] }
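Two of the quantitative claims in the full text above are straightforward to sanity-check: the one-tailed binomial test on the 188-out-of-353 Turing-style identifications, and the Pearson correlation between machine and human speaker-level error rates. The sketch below reproduces the binomial check from the reported counts; the WER vectors are invented toy numbers, not the paper's per-speaker data.

```python
from math import comb
import numpy as np

def one_tailed_binomial_p(successes, trials, p=0.5):
    """P(X >= successes) under Binomial(trials, p): the one-tailed test applied to the 53% result."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# 188 correct identifications out of 353 trials, chance level 50%.
print(f"one-tailed p = {one_tailed_binomial_p(188, 353):.3f}")

# Speaker-level WER correlation (toy numbers only, to illustrate the computation).
machine_wer = np.array([5.2, 6.1, 7.5, 4.8, 9.0, 6.7])
human_wer = np.array([5.5, 6.4, 8.1, 5.0, 8.7, 7.2])
print(f"Pearson r = {np.corrcoef(machine_wer, human_wer)[0, 1]:.2f}")
```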
{ "question": [ "what standard speech transcription pipeline was used?" ], "question_id": [ "95d8368b1055d97250df38d1e8c4a2b283d2b57e" ], "nlp_background": [ "" ], "topic_background": [ "" ], "paper_read": [ "" ], "search_query": [ "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "pipeline that is used at Microsoft for production data" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The goal of reaching “human parity” in automatic CTS transcription raises the question of what should be considered human accuracy on this task. We operationalized the question by submitting the chosen test data to the same vendor-based transcription pipeline that is used at Microsoft for production data (for model training and internal evaluation purposes), and then comparing the results to ASR system output under the NIST scoring protocol. Using this methodology, and incorporating state-of-the-art convolutional and recurrent network architectures for both acoustic modeling BIBREF9 , BIBREF10 , BIBREF7 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 and language modeling BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 with extensive use of model combination, we obtained a machine error rate that was very slightly below that of the human transcription process (5.8% versus 5.9% on Switchboard data, and 11.0% versus 11.3% on CallHome English data) BIBREF19 . Since then, Saon et al. have reported even better results, along with a separate transcription experiment that puts the human error rate, on the same test data, at a lower point than measured by us (5.1% for Switchboard, 6.8% for CallHome) BIBREF20 ." ], "highlighted_evidence": [ "We operationalized the question by submitting the chosen test data to the same vendor-based transcription pipeline that is used at Microsoft for production data (for model training and internal evaluation purposes), and then comparing the results to ASR system output under the NIST scoring protocol. " ] } ], "annotation_id": [ "1221d3bb8506cd725f8c6de105786d755804d8d2" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ] }
{ "caption": [ "Figure 1: Correlation between machine and human word error rates at speaker level. The solid black circles represent SWB speakers not seen in training. The solid red circles stand for secondary CH speakers that share a conversation side with a dominating primary speaker.", "Figure 2: Turing-like test challenging human players to tell machine from human transcripts", "Table 1: Most common substitutions for ASR system and humans. The number of times each error occurs is followed by the word in the reference, and what appears in the hypothesis instead.", "Table 2: Most common deletions for ASR system and humans.", "Table 3: Most common insertions for ASR system and humans." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png", "4-Table2-1.png", "4-Table3-1.png" ] }
1701.03214
An Empirical Comparison of Simple Domain Adaptation Methods for Neural Machine Translation
In this paper, we propose a novel domain adaptation method named"mixed fine tuning"for neural machine translation (NMT). We combine two existing approaches namely fine tuning and multi domain NMT. We first train an NMT model on an out-of-domain parallel corpus, and then fine tune it on a parallel corpus which is a mix of the in-domain and out-of-domain corpora. All corpora are augmented with artificial tags to indicate specific domains. We empirically compare our proposed method against fine tuning and multi domain methods and discuss its benefits and shortcomings.
{ "section_name": [ "Introduction", "Related Work", "Methods for Comparison", "Fine Tuning", "Multi Domain", "Mixed Fine Tuning", "Experimental Settings", "High Quality In-domain Corpus Setting", "Low Quality In-domain Corpus Setting", "MT Systems", "Results", "Conclusion" ], "paragraphs": [ [ "One of the most attractive features of neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 is that it is possible to train an end to end system without the need to deal with word alignments, translation rules and complicated decoding algorithms, which are a characteristic of statistical machine translation (SMT) systems. However, it is reported that NMT works better than SMT only when there is an abundance of parallel corpora. In the case of low resource domains, vanilla NMT is either worse than or comparable to SMT BIBREF3 .", "Domain adaptation has been shown to be effective for low resource NMT. The conventional domain adaptation method is fine tuning, in which an out-of-domain model is further trained on in-domain data BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . However, fine tuning tends to overfit quickly due to the small size of the in-domain data. On the other hand, multi domain NMT BIBREF8 involves training a single NMT model for multiple domains. This method adds tags “<2domain>\" by modifying the parallel corpora to indicate domains without any modifications to the NMT system architecture. However, this method has not been studied for domain adaptation in particular.", "Motivated by these two lines of studies, we propose a new domain adaptation method called “mixed fine tuning,\" where we first train an NMT model on an out-of-domain parallel corpus, and then fine tune it on a parallel corpus that is a mix of the in-domain and out-of-domain corpora. Fine tuning on the mixed corpus instead of the in-domain corpus can address the overfitting problem. All corpora are augmented with artificial tags to indicate specific domains. We tried two different corpora settings:", "We observed that “mixed fine tuning\" works significantly better than methods that use fine tuning and domain tag based approaches separately. Our contributions are twofold:" ], [ "Besides fine tuning and multi domian NMT using tags, another direction for domain adaptation is using in-domain monolingual data. Either training an in-domain recurrent neural language (RNN) language model for the NMT decoder BIBREF13 or generating synthetic data by back translating target in-domain monolingual data BIBREF5 have been studied." ], [ "All the methods that we compare are simple and do not need any modifications to the NMT system." ], [ "Fine tuning is the conventional way for domain adaptation, and thus serves as a baseline in this study. In this method, we first train an NMT system on a resource rich out-of-domain corpus till convergence, and then fine tune its parameters on a resource poor in-domain corpus (Figure 1 )." ], [ "The multi domain method is originally motivated by BIBREF14 , which uses tags to control the politeness of NMT translations. The overview of this method is shown in the dotted section in Figure 2 . In this method, we simply concatenate the corpora of multiple domains with two small modifications: a. Appending the domain tag “<2domain>\" to the source sentences of the respective corpora. This primes the NMT decoder to generate sentences for the specific domain. b. 
Oversampling the smaller corpus so that the training procedure pays equal attention to each domain.", "We can further fine tune the multi domain model on the in-domain data, which is named as “multi domain + fine tuning.”" ], [ "The proposed mixed fine tuning method is a combination of the above methods (shown in Figure 2 ). The training procedure is as follows:", "Train an NMT model on out-of-domain data till convergence.", "Resume training the NMT model from step 1 on a mix of in-domain and out-of-domain data (by oversampling the in-domain data) till convergence.", "By default, we utilize domain tags, but we also consider settings where we do not use them (i.e., “w/o tags”). We can further fine tune the model from step 2 on the in-domain data, which is named as “mixed fine tuning + fine tuning.”", "Note that in the “fine tuning” method, the vocabulary obtained from the out-of-domain data is used for the in-domain data; while for the “multi domain” and “mixed fine tuning” methods, we use a vocabulary obtained from the mixed in-domain and out-of-domain data for all the training stages." ], [ "We conducted NMT domain adaptation experiments in two different settings as follows:" ], [ "Chinese-to-English translation was the focus of the high quality in-domain corpus setting. We utilized the resource rich patent out-of-domain data to augment the resource poor spoken language in-domain data. The patent domain MT was conducted on the Chinese-English subtask (NTCIR-CE) of the patent MT task at the NTCIR-10 workshop BIBREF9 . The NTCIR-CE task uses 1000000, 2000, and 2000 sentences for training, development, and testing, respectively. The spoken domain MT was conducted on the Chinese-English subtask (IWSLT-CE) of the TED talk MT task at the IWSLT 2015 workshop BIBREF10 . The IWSLT-CE task contains 209,491 sentences for training. We used the dev 2010 set for development, containing 887 sentences. We evaluated all methods on the 2010, 2011, 2012, and 2013 test sets, containing 1570, 1245, 1397, and 1261 sentences, respectively." ], [ "Chinese-to-Japanese translation was the focus of the low quality in-domain corpus setting. We utilized the resource rich scientific out-of-domain data to augment the resource poor Wikipedia (essentially open) in-domain data. The scientific domain MT was conducted on the Chinese-Japanese paper excerpt corpus (ASPEC-CJ) BIBREF11 , which is one subtask of the workshop on Asian translation (WAT) BIBREF15 . The ASPEC-CJ task uses 672315, 2090, and 2107 sentences for training, development, and testing, respectively. The Wikipedia domain task was conducted on a Chinese-Japanese corpus automatically extracted from Wikipedia (WIKI-CJ) BIBREF12 using the ASPEC-CJ corpus as a seed. The WIKI-CJ task contains 136013, 198, and 198 sentences for training, development, and testing, respectively." ], [ "For NMT, we used the KyotoNMT system BIBREF16 . The NMT training settings are the same as those of the best systems that participated in WAT 2016. The sizes of the source and target vocabularies, the source and target side embeddings, the hidden states, the attention mechanism hidden states, and the deep softmax output with a 2-maxout layer were set to 32,000, 620, 1000, 1000, and 500, respectively. We used 2-layer LSTMs for both the source and target sides. ADAM was used as the learning algorithm, with a dropout rate of 20% for the inter-layer dropout, and L2 regularization with a weight decay coefficient of 1e-6. 
The mini batch size was 64, and sentences longer than 80 tokens were discarded. We early stopped the training process when we observed that the BLEU score of the development set converges. For testing, we self ensembled the three parameters of the best development loss, the best development BLEU, and the final parameters. Beam size was set to 100.", "For performance comparison, we also conducted experiments on phrase based SMT (PBSMT). We used the Moses PBSMT system BIBREF17 for all of our MT experiments. For the respective tasks, we trained 5-gram language models on the target side of the training data using the KenLM toolkit with interpolated Kneser-Ney discounting, respectively. In all of our experiments, we used the GIZA++ toolkit for word alignment; tuning was performed by minimum error rate training BIBREF18 , and it was re-run for every experiment.", "For both MT systems, we preprocessed the data as follows. For Chinese, we used KyotoMorph for segmentation, which was trained on the CTB version 5 (CTB5) and SCTB BIBREF19 . For English, we tokenized and lowercased the sentences using the tokenizer.perl script in Moses. Japanese was segmented using JUMAN BIBREF20 .", "For NMT, we further split the words into sub-words using byte pair encoding (BPE) BIBREF21 , which has been shown to be effective for the rare word problem in NMT. Another motivation of using sub-words is making the different domains share more vocabulary, which is important especially for the resource poor domain. For the Chinese-to-English tasks, we trained two BPE models on the Chinese and English vocabularies, respectively. For the Chinese-to-Japanese tasks, we trained a joint BPE model on both of the Chinese and Japanese vocabularies, because Chinese and Japanese could share some vocabularies of Chinese characters. The number of merge operations was set to 30,000 for all the tasks." ], [ "Tables 1 and 2 show the translation results on the Chinese-to-English and Chinese-to-Japanese tasks, respectively. The entries with SMT and NMT are the PBSMT and NMT systems, respectively; others are the different methods described in Section \"Methods for Comparison\" . In both tables, the numbers in bold indicate the best system and all systems that were not significantly different from the best system. The significance tests were performed using the bootstrap resampling method BIBREF22 at $p < 0.05$ .", "We can see that without domain adaptation, the SMT systems perform significantly better than the NMT system on the resource poor domains, i.e., IWSLT-CE and WIKI-CJ; while on the resource rich domains, i.e., NTCIR-CE and ASPEC-CJ, NMT outperforms SMT. Directly using the SMT/NMT models trained on the out-of-domain data to translate the in-domain data shows bad performance. With our proposed “Mixed fine tuning\" domain adaptation method, NMT significantly outperforms SMT on the in-domain tasks.", "Comparing different domain adaptation methods, “Mixed fine tuning” shows the best performance. We believe the reason for this is that “Mixed fine tuning” can address the over-fitting problem of “Fine tuning.” We observed that while “Fine tuning” overfits quickly after only 1 epoch of training, “Mixed fine tuning” only slightly overfits until covergence. In addition, “Mixed fine tuning” does not worsen the quality of out-of-domain translations, while “Fine tuning” and “Multi domain” do. 
One shortcoming of “Mixed fine tuning” is that compared to “fine tuning,” it took a longer time for the fine tuning process, as the time until convergence is essentially proportional to the size of the data used for fine tuning.", "“Multi domain” performs either as well as (IWSLT-CE) or worse than (WIKI-CJ) “Fine tuning,” but “Mixed fine tuning” performs either significantly better than (IWSLT-CE) or is comparable to (WIKI-CJ) “Fine tuning.” We believe the performance difference between the two tasks is due to their unique characteristics. As WIKI-CJ data is of relatively poorer quality, mixing it with out-of-domain data does not have the same level of positive effects as those obtained by the IWSLT-CE data.", "The domain tags are helpful for both “Multi domain” and “Mixed fine tuning.” Essentially, further fine tuning on in-domain data does not help for both “Multi domain” and “Mixed fine tuning.” We believe the reason for this is that the “Multi domain” and “Mixed fine tuning” methods already utilize the in-domain data used for fine tuning." ], [ "In this paper, we proposed a novel domain adaptation method named “mixed fine tuning” for NMT. We empirically compared our proposed method against fine tuning and multi domain methods, and have shown that it is effective but is sensitive to the quality of the in-domain data used.", "In the future, we plan to incorporate an RNN model into our current architecture to leverage abundant in-domain monolingual corpora. We also plan on exploring the effects of synthetic data by back translating large in-domain monolingual corpora. " ] ] }
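The multi domain and mixed fine tuning recipes described in this record require no changes to the NMT architecture: they only prepend an artificial domain tag to each source sentence and oversample the smaller in-domain corpus before training resumes. Below is a hedged sketch of that corpus-preparation step; the tag strings, the integer oversampling ratio, and the toy sentence pairs are all assumptions for illustration rather than the authors' exact setup.

```python
import random

def tag_and_mix(out_domain_pairs, in_domain_pairs, in_tag="<2in>", out_tag="<2out>", seed=0):
    """Prepend domain tags to source sentences and oversample the smaller in-domain corpus
    so both domains contribute roughly equally to the mixed fine-tuning corpus."""
    rng = random.Random(seed)
    tagged_out = [(f"{out_tag} {src}", tgt) for src, tgt in out_domain_pairs]
    tagged_in = [(f"{in_tag} {src}", tgt) for src, tgt in in_domain_pairs]
    ratio = max(1, len(tagged_out) // max(len(tagged_in), 1))
    mixed = tagged_out + tagged_in * ratio
    rng.shuffle(mixed)
    return mixed

out_domain = [(f"zh patent sentence {i}", f"en patent sentence {i}") for i in range(6)]
in_domain = [(f"zh ted sentence {i}", f"en ted sentence {i}") for i in range(2)]
for src, tgt in tag_and_mix(out_domain, in_domain)[:4]:
    print(src, "|||", tgt)
```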
{ "question": [ "How much improvement does their method get over the fine tuning baseline?", "What kinds of neural networks did they use in this paper?", "How did they use the domain tags?" ], "question_id": [ "a978a1ee73547ff3a80c66e6db3e6c3d3b6512f4", "46ee1cbbfbf0067747b28bdf4c8c2f7dc8955650", "4f12b41bd3bb2610abf7d7835291496aa69fb78c" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "domain adaptation", "domain adaptation", "domain adaptation" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "0.08 points on the 2011 test set, 0.44 points on the 2012 test set, 0.42 points on the 2013 test set for IWSLT-CE.", "evidence": [ "FLOAT SELECTED: Table 1: Domain adaptation results (BLEU-4 scores) for IWSLT-CE using NTCIR-CE." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Domain adaptation results (BLEU-4 scores) for IWSLT-CE using NTCIR-CE." ] } ], "annotation_id": [ "f92d4930c3a5af4cac3ed3b914ec9a554dfeade4" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "LSTMs" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For NMT, we used the KyotoNMT system BIBREF16 . The NMT training settings are the same as those of the best systems that participated in WAT 2016. The sizes of the source and target vocabularies, the source and target side embeddings, the hidden states, the attention mechanism hidden states, and the deep softmax output with a 2-maxout layer were set to 32,000, 620, 1000, 1000, and 500, respectively. We used 2-layer LSTMs for both the source and target sides. ADAM was used as the learning algorithm, with a dropout rate of 20% for the inter-layer dropout, and L2 regularization with a weight decay coefficient of 1e-6. The mini batch size was 64, and sentences longer than 80 tokens were discarded. We early stopped the training process when we observed that the BLEU score of the development set converges. For testing, we self ensembled the three parameters of the best development loss, the best development BLEU, and the final parameters. Beam size was set to 100." ], "highlighted_evidence": [ "For NMT, we used the KyotoNMT system BIBREF16 . The NMT training settings are the same as those of the best systems that participated in WAT 2016. The sizes of the source and target vocabularies, the source and target side embeddings, the hidden states, the attention mechanism hidden states, and the deep softmax output with a 2-maxout layer were set to 32,000, 620, 1000, 1000, and 500, respectively. We used 2-layer LSTMs for both the source and target sides. ADAM was used as the learning algorithm, with a dropout rate of 20% for the inter-layer dropout, and L2 regularization with a weight decay coefficient of 1e-6. The mini batch size was 64, and sentences longer than 80 tokens were discarded." 
] } ], "annotation_id": [ "12335d0c788b511cd38f82941b7e5bba2fe24e21" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Appending the domain tag “<2domain>\" to the source sentences of the respective corpora" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The multi domain method is originally motivated by BIBREF14 , which uses tags to control the politeness of NMT translations. The overview of this method is shown in the dotted section in Figure 2 . In this method, we simply concatenate the corpora of multiple domains with two small modifications: a. Appending the domain tag “<2domain>\" to the source sentences of the respective corpora. This primes the NMT decoder to generate sentences for the specific domain. b. Oversampling the smaller corpus so that the training procedure pays equal attention to each domain." ], "highlighted_evidence": [ "In this method, we simply concatenate the corpora of multiple domains with two small modifications: a. Appending the domain tag “<2domain>\" to the source sentences of the respective corpora. " ] } ], "annotation_id": [ "65f0a6719b495621b5ad95e39f4305074795673f" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Figure 1: Fine tuning for domain adaptation", "Figure 2: Tag based multi domain NMT", "Table 1: Domain adaptation results (BLEU-4 scores) for IWSLT-CE using NTCIR-CE.", "Table 2: Domain adaptation results (BLEU-4 scores) for WIKI-CJ using ASPEC-CJ." ], "file": [ "2-Figure1-1.png", "2-Figure2-1.png", "3-Table1-1.png", "3-Table2-1.png" ] }
1709.05411
Combining Search with Structured Data to Create a More Engaging User Experience in Open Domain Dialogue
The greatest challenges in building sophisticated open-domain conversational agents arise directly from the potential for ongoing mixed-initiative multi-turn dialogues, which do not follow a particular plan or pursue a particular fixed information need. In order to make coherent conversational contributions in this context, a conversational agent must be able to track the types and attributes of the entities under discussion in the conversation and know how they are related. In some cases, the agent can rely on structured information sources to help identify the relevant semantic relations and produce a turn, but in other cases, the only content available comes from search, and it may be unclear which semantic relations hold between the search results and the discourse context. A further constraint is that the system must produce its contribution to the ongoing conversation in real-time. This paper describes our experience building SlugBot for the 2017 Alexa Prize, and discusses how we leveraged search and structured data from different sources to help SlugBot produce dialogic turns and carry on conversations whose length over the semi-finals user evaluation period averaged 8:17 minutes.
{ "section_name": [ "Introduction", "Modeling Discourse Coherence", "Mixed Initiative Dialogue", "Natural Language Generation", "Conclusions" ], "paragraphs": [ [ "The Alexa Prize funded 12 international teams to compete to create a conversational agent that can discuss any topic for at least 20 minutes. UCSC's Slugbot was one of these funded teams. The greatest challenges with the competition arise directly from the potential for ongoing mixed-initiative multi-turn dialogues, which do not follow a particular plan or pursue a particular fixed information need. This paper describes some of the lessons we learned building SlugBot for the 2017 Alexa Prize, particularly focusing on the challenges of integrating content found via search with content from structured data in order to carry on an ongoing, coherent, open-domain, mixed-initiative conversation. SlugBot's conversations over the semi-finals user evaluation averaged 8:17 minutes.", "Unlike much previous work on conversational AI, SlugBot could not and did not assume that the user had an “information need” BIBREF0 , BIBREF1 , BIBREF2 . Rather, the design of the Alexa Prize was aimed at open conversations that could engage the user, through any type of dialogue or chitchat, discussing films and books, gossiping about celebrities, playing verbal games, telling stories or sharing experiences, or any other of many different types of activities that conversation is often used for.", "This open design foregrounds many longstanding challenges that have not been solved even for task-oriented dialogue systems. These include:", "This paper is structured around the “lessons learned” with respect to these challenges from our experience building SlugBot. To be clear, we are not offering a solution to these problems: instead our intention is simply to highlight the difficulties with developing adequate computational models of these phenomena that particularly arise in the context of open-domain conversations, where users cannot be assumed to be pursuing a particular task or information need. We will attempt to motivate our hypothesis that a comprehensive solution to these challenges for open-domain dialogue requires a much deeper understanding and utilization of the semantic relations that underly dialogue coherence.", "For example, consider dialogue focused on content related to the movie domain. This should be one of the easiest domains because it is well-structured, and there are existing systems handling conversations where there is a specified user information need or task, such as finding films with particular properties, finding out what is playing and where, or booking a movie ticket BIBREF3 , BIBREF4 , BIBREF5 . Moreover, the Internet Movie Database (IMDB) BIBREF6 provides information on plot, rating, and actors that can be leveraged to support conversations. IMDB also makes use of the Schema.org BIBREF7 structure to connect common entities to their related attribute types (such as Actor $\\rightarrow $ Person $\\rightarrow $ birthDate), allowing the system to retrieve a large set of possible next topics and related facts and entities.", "However, remember that SlugBot is based on the assumption that the user might simply enjoy talking about films and related entities and therefore may freely move the conversational focus among different movie entities, along with the vast array of semantically-associated movie attributes: movies have actors, genres, plots, and awards; actors have names, affiliations, other movies they were in, awards, etc. 
Actors are people, who have spouses, families and friends, and engage in other life activities besides acting, such as political advocacy.", "A potential dialogue is shown in Table 1 . The interaction might appear to be simple enough: the user chooses to discuss movies, and selects Jason Bourne as the specific movie she is interested in, the system finds the movie in IMDB, and then provides information on its rating, lead actor, and plot. The user then changes the topic to other movies with the same actor, and the conversation continues.", "Even with the availability of IMDB, however, the interaction is not totally straightforward. The RHS of Table 1 describes some of the required competencies and decisions SlugBot must make. First, Slugbot must be able to perform coreference resolution and recognize that the movie and it in turns U6 and U8 are coreferential. We estimate the accuracy of noun-phrase coreference resolution to only be about 70% for off-the-shelf tools applied to dialogue, since most of them are targeted to text BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 .", "More challenging is that at each system turn, there are a large number of conversational moves that are possible. Making good decisions about what to say next requires balancing a dialogue policy as to what dialogue acts might be good in this context, with real-time information as to what types of content might be possible to use in this context. Slugbot could offer an opinion as in turn S3, ask a follow-on question as in S3, take the initiative to provide unasked for information, as in S5, or decide, e.g. in the case of the user's request for plot information, to use search to retrieve some relevant content. Search cannot be used effectively here without constructing an appropriate query, or knowing in advance where plot information might be available. In a real-time system, live search may not be able to achieve the required speed and efficiency, so preprocessing or caching of relevant information may be necessary. Finally, most other domains do not have such high quality structured data available, leaving us to develop or try to rely on more general models of discourse coherence." ], [ "In open-domain conversation, dialogue coherence between related turns must be maintained. What underlies dialogue coherence goes beyond simple word overlap or similarity, and its clear that neural models of open-domain conversational dialogue do not yet capture it. Theories of discourse posit that there are a small number of semantic relations that can hold between adjacent turns: at the most general level these are contingency, comparison, expansion, and temporal order BIBREF16 , BIBREF17 , BIBREF18 . We posit that one way to allow SlugBot to take the initiative and produce a turn that maintains discourse coherence is to find content to use in Slugbot's next turn that instantiates a valid semantic relation between the current user turn and SlugBot's next turn. One of the strongest bases for such semantic relations are the relations captured by ontologies or frames, which give us related entities, e.g. movies have actors and directors BIBREF4 , BIBREF21 . These types of relations can be used to instantiate the expansion relation, which basically captures moving to strongly related subtopics, often by chaining off a particular discourse entity. 
To find content to instantiate the expansion relation to use in Slugbot's next turn (taking the initiative), we carry out the following pipeline:", "In the case of movies, the structure of IMDB, as discussed above, allows us to link between related entities and attributes using various DB keys. However other conversational domains do not have freely available richly structured information such as this. It is rare for a single resource to aggregate all the information that might be useful, so SlugBot must be able to leverage information and integrate information from multiple sources. But state-of-the-art knowledge bases and ontologies are still limited. Table 2 lists some of the resources that we have found to be most useful for search and structured information.", "Like movies, sports is another domain that has rich structure, and in which there is broad user interest. Search results for a query about \"Madison Bumgarner\" are in Figure 1 , showcasing a sample of the different information retrievable from each source (Step 2 of the pipeline).", "From the Google Knowledge Graph (Figure 1 result we are able to ascertain the entity type, a brief description, and a relevant Wikipedia page (Figure 1 ) which we can use to find accurate structured information. We may further augment our knowledge by using the information returned by the Google Knowledge Graph as parameters to our YAGO or DBpedia query which can more easily extract specific relationships between an entity-attribute. For example, the results returned by YAGO for the \"Madison Bumgarner\" query contains a connection to the headline Struggling MadBum might not garner next start, which is contextually relevant data not encapsulated anywhere in the previously examined results.", "There, however, exists a disconnect between the resources, i.e. some entities are available in one resource and not another, or there may be inconsistent information across resources. While it would be nice not to have to anticipate the types of integration that are needed, our take-away from this, is that at present, it appears we have to accomplish the steps in our pipeline by integrating knowledge from different resources in advance, even though projects such as YAGO have already been working on such integration for at least ten years.", "Other discourse coherence relations besides expansion are also viable candidates for selecting content for next turns, but finding content that instantiates these relations can be a challenging problem in itself. For example, in casual conversation, it is common to provide opinions and then perhaps further take the initiative and justify them. The justification of an opinion is a type of contingency relation: we describe how we curate content to provide justifications in Section \"Mixed Initiative Dialogue\" .", "We have also been able to use the temporal relation in a limited way by drawing on narratively structured sources, such as personal stories in blogs. Since these stories are told in temporal order, we can repurpose the content of these blogs to tell stories, maintaining pre-existing narrative coherence when the system produces a sequence of turns BIBREF33 . However, we posit that there is much more that could be done to make better use of deep semantic discourse relations for recognizing discourse relations and generating coherent conversational turns." 
], [ "Mixed Initiative dialogue is key to a natural conversational interaction BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 , BIBREF38 , BIBREF2 , and this is even more important for open domain dialogue than it is for task-oriented or information seeking dialogue. One of our primary hypotheses, as described above, is that good models of discourse coherence will help SlugBot identify content that can be used to take the initiative. However, models of discourse coherence have been rarely applied to conversation BIBREF39 , BIBREF40 , BIBREF41 and thus there is considerable work to be done simply in understanding how these relations can be instantiated in dialogue.", "In addition, a further challenge arises from the fact that both system and user options for dialogue acts are extremely varied at each turn, e.g. user intents can be to provide opinions, give or solicit information, contrast two possibilities, request the system to perform an action, and more. One reasonable taxonomy for the types of dialogue acts that might be available to SlugBot could be based for example on the dialogue act annotations in the Switchboard corpus BIBREF42 .", "Here, we consider a simple case combining discourse relations and dialogue acts that we have implemented in Slugbot in order to take the initiative in a way that we hoped the user would find interesting. Our aim was to utilize the contingency discourse relation to connect a statement of opinion and its justification. We designed a template containing both arguments of the contingency relation, namely I think $\\lbrace entity\\rbrace $ is $\\lbrace sentiment\\rbrace $ because $\\lbrace justification\\rbrace $ . We construct a table of argument pairs that can instantiate this relation, as shown in Table 3 . This table can be populated by crowd-sourcing or by using search as a pre-processing step.", "Table 4 illustrates how this is used in our conversations about comics. At Line 6, when the user asks Who is your favorite character?, it is most appropriate to provide an opinion. It is difficult to imagine retrieving search-based data which contains a contextually relevant opinion, but it is even more difficult to imagine that if search had returned such an opinion, that search could be used a second time in order to retrieve a justification for the provided opinion and answer the user's follow-up question in Line 8, Okay why?. The source text for the search would have to be annotated for the type of content that could be used to provide justifications, and search would have to support these types of semantic relations." ], [ "The current challenges for natural language generation, in our view, arise from the need to combine information from structured and unstructured sources when producing conversational utterances. SlugBot currently uses a combination of pre-written templates, sentence selection, and techniques for telling stories that are based on converting monologic stories to dialogic sequences BIBREF33 .", "Structured data, when available, can do more than structure a search result: it can also be easier to use within a conversation because it provides the necessary structure needed for high precision natural language generation BIBREF22 , BIBREF43 . More precisely, a small set of generic templates with various slots can be filled with information from structured data sources to insure high quality, accurate responses. 
These generic templates can be hand crafted, or prepared in advance by learning natural language generation templates automatically from appropriate conversational domain sources such as different types of user-generated content BIBREF44 , BIBREF23 , as illustrated in our justification initiatives above in Section \"Mixed Initiative Dialogue\" .", "For general fact-based questions, on the other hand, search content can be used directly. For example, at line 14 in Table 5 when the user asks What was the first movie to feature a vampire?, search provides us with a good response. This introduces however the challenge of updating the discourse context with the right representation of the two movies under discussion, so that they can then be available for follow-on coreference. This is an open problem.", "It is clear that in order to use a semi-structured approach, we need to determine when to utilize each source. Structured data can be easier to formulate into system responses and can often more easily handle on-topic follow-up questions, but is more limited in scope. An obvious approach, also used in the Watson Jeopardy system BIBREF45 , is to pool responses from both sources and rank them. We have not, to date, collected enough data to build a ranker.", "Our plan is to apply a combination of reinforcement learning and learning of ranking functions for utterance variants in a particular context to SlugBot conversations as we move forward with our own data collection, outside of the Alexa Prize competition BIBREF46 , BIBREF47 , BIBREF48 , BIBREF49 , BIBREF50 . The first step however is to use the Alexa Prize competition data to learn a Paradise-Open-Domain evaluation function, with additional metrics relevant to open-domain dialogue, e.g. independent variable metrics that predict overall dialogue quality such as response delay, vocabulary diversity, dialogue act sequence n-grams BIBREF51 , conversational depth, number of reprompts BIBREF52 , and other measures that can be automatically logged. Many of the required measures have been used over the last 20 years in Paradise to evaluate task-oriented dialogue systems and they remain highly relevant to overall dialogue quality in open-domain dialogue systems BIBREF53 , BIBREF54 , BIBREF55 . We predict this can potentially improve the overall performance of the system as demonstrated in Table 6 . Here, the structured data is sparse, resulting in an uninteresting response, while search returns a very robust answer. Our Paradise-Open-Domain evaluation function would need to learn to place priority on the result returned by search, through ranking, despite having structured data.", "For open domain NLG, we have also conducted experiments with neural sequence to sequence approaches using open domain corpora such as film dialogue, Big Bang theory scripts, and open subtitles. These approaches to date do not produce interesting utterances that maintain discourse coherence. It is possible that further curation and semantic annotation of these resources, e.g. by labelling semantic roles and identifying dialogue acts and discourse relations might be helpful, but this could also introduce data sparsity. For example in Switchboard the dialogue act distribution is highly skewed. Integrating information across multiple sources could also be further explored BIBREF33 . Recent work on hybrid neural generation approaches that use knowledge of sentence and discourse planning structures also seem promising BIBREF24 , BIBREF48 , BIBREF56 ." 
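As a concrete, if simplified, illustration of this template-plus-structured-data approach, the sketch below fills the contingency template from the Mixed Initiative Dialogue section, I think {entity} is {sentiment} because {justification}, from a small table of argument pairs. The rows here are invented placeholders standing in for the crowd-sourced or search-derived entries of Table 3.

```python
import random

# Hypothetical rows standing in for Table 3: (entity, sentiment, justification).
OPINION_TABLE = [
    ("Batman", "great", "he relies on intelligence and training rather than superpowers"),
    ("the Joker", "fascinating", "he works as a dark mirror image of the hero"),
]

TEMPLATE = "I think {entity} is {sentiment} because {justification}."

def justification_initiative(entity=None):
    """Pick a row (optionally constrained to an entity) and realize the contingency
    relation: the opinion is the first argument, the justification the second."""
    rows = [r for r in OPINION_TABLE if entity is None or r[0].lower() == entity.lower()]
    if not rows:
        return None  # fall back to search or another initiative type
    e, sentiment, justification = random.choice(rows)
    return TEMPLATE.format(entity=e, sentiment=sentiment, justification=justification)
```

Because the opinion and its justification live in the same row, a follow-up question like Okay why? in Table 4 can be answered coherently without a second retrieval step.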
], [ "In this paper, we describe some of the challenges we encountered building SlugBot, an open domain conversational agent funded by the Amazon Alexa Prize. We have introduced more problems than we have solved, and we have attempted to support our hypothesis that we need richer models of discourse coherence and discourse semantics to allow a conversational agent to take the initiative in open domain conversations. We illustrated how search and structured information can be combined in order for SlugBot to find content to use to take the initiative and respond to the user's utterances. We propose a hybrid approach for language generation that which combines templates to generate responses with sentence selection from search, and we show examples in different domains to demonstrate real-world use cases that make use of our approach. For future work, we plan to bring together resources that provide structured data from different sources into a single, accessible framework, to supply personal assistants with scalable knowledge bases that will power more natural, mixed initiative, and engaging conversations. We believe that it will be possible in the next few years to build conversational agents that can carry on a conversation for 20 minutes about many different topics." ] ] }
{ "question": [ "Why mixed initiative multi-turn dialogs are the greatest challenge in building open-domain conversational agents?" ], "question_id": [ "65e6a1cc2590b139729e7e44dce6d9af5dd2c3b5" ], "nlp_background": [ "infinity" ], "topic_background": [ "familiar" ], "paper_read": [ "no" ], "search_query": [ "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "do not follow a particular plan or pursue a particular fixed information need", " integrating content found via search with content from structured data", "at each system turn, there are a large number of conversational moves that are possible", "most other domains do not have such high quality structured data available", "live search may not be able to achieve the required speed and efficiency" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The Alexa Prize funded 12 international teams to compete to create a conversational agent that can discuss any topic for at least 20 minutes. UCSC's Slugbot was one of these funded teams. The greatest challenges with the competition arise directly from the potential for ongoing mixed-initiative multi-turn dialogues, which do not follow a particular plan or pursue a particular fixed information need. This paper describes some of the lessons we learned building SlugBot for the 2017 Alexa Prize, particularly focusing on the challenges of integrating content found via search with content from structured data in order to carry on an ongoing, coherent, open-domain, mixed-initiative conversation. SlugBot's conversations over the semi-finals user evaluation averaged 8:17 minutes.", "More challenging is that at each system turn, there are a large number of conversational moves that are possible. Making good decisions about what to say next requires balancing a dialogue policy as to what dialogue acts might be good in this context, with real-time information as to what types of content might be possible to use in this context. Slugbot could offer an opinion as in turn S3, ask a follow-on question as in S3, take the initiative to provide unasked for information, as in S5, or decide, e.g. in the case of the user's request for plot information, to use search to retrieve some relevant content. Search cannot be used effectively here without constructing an appropriate query, or knowing in advance where plot information might be available. In a real-time system, live search may not be able to achieve the required speed and efficiency, so preprocessing or caching of relevant information may be necessary. Finally, most other domains do not have such high quality structured data available, leaving us to develop or try to rely on more general models of discourse coherence." ], "highlighted_evidence": [ "The greatest challenges with the competition arise directly from the potential for ongoing mixed-initiative multi-turn dialogues, which do not follow a particular plan or pursue a particular fixed information need. 
", "This paper describes some of the lessons we learned building SlugBot for the 2017 Alexa Prize, particularly focusing on the challenges of integrating content found via search with content from structured data in order to carry on an ongoing, coherent, open-domain, mixed-initiative conversation", "More challenging is that at each system turn, there are a large number of conversational moves that are possible.", " Finally, most other domains do not have such high quality structured data available, leaving us to develop or try to rely on more general models of discourse coherence.", " Search cannot be used effectively here without constructing an appropriate query, or knowing in advance where plot information might be available. In a real-time system, live search may not be able to achieve the required speed and efficiency, so preprocessing or caching of relevant information may be necessary. " ] } ], "annotation_id": [ "124e995b04caa055ccba03e47ab8e7871cdd8af9" ], "worker_id": [ "08f81a5d78e451df16193028defb70150c4201c9" ] } ] }
{ "caption": [ "Table 1: Sample Dialogue about Movies. System content indicated as coming from search† or structured data‡.", "Table 2: Search and Structured Information Resources", "Figure 1: Sample Available Resources for Query “Madison Bumgarner”", "Table 4: Sample Dialogue about Comic Books. System content based on either search† or structured data‡.", "Table 5: Sample Dialogue about Monsters . System content is curated based on search† or structured data‡.", "Table 6: Using Structured Data vs Search" ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "4-Figure1-1.png", "4-Table4-1.png", "5-Table5-1.png", "5-Table6-1.png" ] }
1805.12032
Identifying and Understanding User Reactions to Deceptive and Trusted Social News Sources
In the age of social news, it is important to understand the types of reactions that news sources with various levels of credibility evoke. In the present work we seek to better understand how users react to trusted and deceptive news sources across two popular, and very different, social media platforms. To that end, (1) we develop a model to classify user reactions into one of nine types, such as answer, elaboration, and question, and (2) we measure the speed and the type of reaction for trusted and deceptive news sources across 10.8M Twitter posts and 6.2M Reddit comments. We show that there are significant differences in the speed and the type of reactions between trusted and deceptive news sources on Twitter, but far smaller differences on Reddit.
{ "section_name": [ "Introduction", "Reaction Type Classification", "Reddit Data", "Model", "Reaction Type Classification Results", "Measuring Reactions to Trusted and Deceptive News Sources", "Twitter and Reddit News Data", "Methodology", "Results and Discussion", "Related Work", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "As the reliance on social media as a source of news increases and the reliability of sources is increasingly debated, it is important to understand how users react to various sources of news. Most studies that investigate misinformation spread in social media focus on individual events and the role of the network structure in the spread BIBREF0 , BIBREF1 , BIBREF2 or detection of false information BIBREF3 . These studies have found that the size and shape of misinformation cascades within a social network depends heavily on the initial reactions of the users. Other work has focused on the language of misinformation in social media BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 to detect types of deceptive news.", "As an alternative to studying newsworthy events one at a time BIBREF10 , the current work applies linguistically-infused models to predict user reactions to deceptive and trusted news sources. Our analysis reveals differences in reaction types and speed across two social media platforms — Twitter and Reddit.", "The first metric we report is the reaction type. Recent studies have found that 59% of bitly-URLs on Twitter are shared without ever being read BIBREF11 , and 73% of Reddit posts were voted on without reading the linked article BIBREF12 . Instead, users tend to rely on the commentary added to retweets or the comments section of Reddit-posts for information on the content and its credibility. Faced with this reality, we ask: what kind of reactions do users find when they browse sources of varying credibility? Discourse acts, or speech acts, can be used to identify the use of language within a conversation, e.g., agreement, question, or answer. Recent work by Zhang et al. zhang2017characterizing classified Reddit comments by their primary discourse act (e.g., question, agreement, humor), and further analyzed patterns from these discussions.", "The second metric we report is reaction speed. A study by Jin et al. jin2013epidemiological found that trusted news stories spread faster than misinformation or rumor; Zeng et al. zeng2016rumors found that tweets which deny rumors had shorter delays than tweets of support. Our second goal is to determine if these trends are maintained for various types of news sources on Twitter and Reddit.", "Hence, the contributions of this work are two-fold: (1) we develop a linguistically-infused neural network model to classify reactions in social media posts, and (2) we apply our model to label 10.8M Twitter posts and 6.2M Reddit comments in order to evaluate the speed and type of user reactions to various news sources." ], [ "In this section, we describe our approach to classify user reactions into one of eight types of discourse: agreement, answer, appreciation, disagreement, elaboration, humor, negative reaction, or question, or as none of the given labels, which we call “other”, using linguistically-infused neural network models." ], [ "We use a manually annotated Reddit dataset from Zhang et al. zhang2017characterizing to train our reaction classification model. Annotations from 25 crowd-workers labelled the primary discourse act for 101,525 comments within 9,131 comment threads on Reddit. 
The Reddit IDs, but not the text content of the comments themselves, were released with the annotations. So we collected the content of Reddit posts and comments from a public archive of Reddit posts and comments. Some content was deleted prior to archival, so the dataset shown in Table TABREF3 is a subset of the original content. Despite the inability to capture all of the original dataset, Table TABREF3 shows a similar distribution between our dataset and the original." ], [ "We develop a neural network architecture that relies on content and other linguistic signals extracted from reactions and parent posts, and takes advantage of a “late fusion” approach previously used effectively in vision tasks BIBREF13 , BIBREF14 . More specifically, we combine a text sequence sub-network with a vector representation sub-network as shown in Figure FIGREF5 . The text sequence sub-network consists of an embedding layer initialized with 200-dimensional GloVe embeddings BIBREF15 followed by two 1-dimensional convolution layers, then a max-pooling layer followed by a dense layer. The vector representation sub-network consists of two dense layers. We incorporate information from both sub-networks through concatenated padded text sequences and vector representations of normalized Linguistic Inquiry and Word Count (LIWC) features BIBREF16 for the text of each post and its parent." ], [ "As shown in Figure FIGREF7 , our linguistically-infused neural network model that relies solely on the content of the reaction and its parent has comparable performance to the more-complex CRF model by Zhang et al. zhang2017characterizing, which relies on content as well as additional metadata like the author, thread (e.g., the size of the the thread, the number of branches), structure (e.g., the position within the thread), and community (i.e., the subreddit in which the comment is posted)." ], [ "In this section, we present key results of our analysis of how often and how quickly users react to content from sources of varying credibility using the reaction types predicted by our linguistically-infused neural network model." ], [ "We focus on trusted news sources that provide factual information with no intent to deceive and deceptive news sources. Deceptive sources are ranked by their intent to deceive as follows: clickbait (attention-grabbing, misleading, or vague headlines to attract an audience), conspiracy theory (uncorroborated or unreliable information to explain events or circumstances), propaganda (intentionally misleading information to advance a social or political agenda), and disinformation (fabricated or factually incorrect information meant to intentionally deceive readers).", "Trusted, clickbait, conspiracy, and propaganda sources were previously compiled by Volkova et al. volkova2017separating through a combination of crowd-sourcing and public resources. Trusted news sources with Twitter-verified accounts were manually labeled and clickbait, conspiracy, and propaganda news sources were collected from several public resources that annotate suspicious news accounts. We collected news sources identified as spreading disinformation by the European Union's East Strategic Communications Task Force from euvsdisinfo.eu. In total, there were 467 news sources: 251 trusted and 216 deceptive.", "We collected reaction data for two popular platforms, Reddit and Twitter, using public APIs over the 13 month period from January 2016 through January 2017. 
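A minimal sketch of the late-fusion architecture described above, written with tf.keras: a text-sequence branch (a GloVe-initialized embedding layer, two 1-D convolutions, max-pooling, and a dense layer) is concatenated with a dense branch over the normalized LIWC vectors of the post and its parent, followed by a softmax over the nine reaction types. Filter widths, layer sizes, the LIWC dimensionality, and the GloVe loading are assumptions for illustration, not values reported in the paper.

```python
import numpy as np
from tensorflow.keras import layers, Model

VOCAB_SIZE, EMBED_DIM, MAX_LEN = 20000, 200, 200   # assumed sizes
NUM_LIWC, NUM_CLASSES = 2 * 93, 9                   # LIWC features for post + parent (dimensionality assumed); 9 reaction types

# glove_matrix: (VOCAB_SIZE, EMBED_DIM) array built from pretrained GloVe vectors.
glove_matrix = np.random.normal(size=(VOCAB_SIZE, EMBED_DIM)).astype("float32")  # placeholder

# Text-sequence sub-network over padded token ids of the reaction and its parent.
text_in = layers.Input(shape=(MAX_LEN,), name="token_ids")
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM, weights=[glove_matrix], trainable=True)(text_in)
x = layers.Conv1D(128, 3, activation="relu")(x)
x = layers.Conv1D(128, 3, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dense(128, activation="relu")(x)

# Vector-representation sub-network over normalized LIWC features of post + parent.
liwc_in = layers.Input(shape=(NUM_LIWC,), name="liwc")
y = layers.Dense(64, activation="relu")(liwc_in)
y = layers.Dense(64, activation="relu")(y)

# "Late fusion": concatenate the two branches before the softmax over reaction types.
z = layers.Concatenate()([x, y])
out = layers.Dense(NUM_CLASSES, activation="softmax")(z)

model = Model(inputs=[text_in, liwc_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```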
For our Reddit dataset, we collected all Reddit posts submitted during the 13 month period that linked to domains associated with one of our labelled news sources. Then we collected all comments that directly responded to those posts. For our Twitter dataset, we collected all tweets posted in the 13 month period that explicitly @mentioned or directly retweeted content from a source and then assigned a label to each tweet based on the class of the source @mentioned or retweeted. A breakdown of each dataset by source type is shown in Table TABREF10 . Figure FIGREF11 illustrates the distribution of deceptive news sources and reactions across the four sub-categories of deceptive news sources. In our analysis, we consider the set of all deceptive sources and the set excluding the most extreme (disinformation)." ], [ "We use the linguistically-infused neural network model from Figure FIGREF5 to label the reaction type of each tweet or comment. Using these labels, we examine how often response types occur when users react to each type of news source. For clarity, we report the five most frequently occurring reaction types (expressed in at least 5% of reactions within each source type) and compare the distributions of reaction types for each type of news source.", "To examine whether users react to content from trusted sources differently than from deceptive sources, we measure the reaction delay, which we define as the time elapsed between the moment the link or content was posted/tweeted and the moment that the reaction comment or tweet occurred. We report the cumulative distribution functions (CDFs) for each source type and use Mann Whitney U (MWU) tests to compare whether users respond with a given reaction type with significantly different delays to news sources of different levels of credibility." ], [ "For both Twitter and Reddit datasets, we found that the primary reaction types were answer, appreciation, elaboration, question, or “other” (no label was predicted). Figure FIGREF13 illustrates the distribution of reaction types among Reddit comments (top plot) or tweets (bottom plot) responding to each type of source, as a percentage of all comments/tweets reacting to sources of the given type (i.e., trusted, all deceptive, and deceptive excluding disinformation sources).", "For Twitter, we report clear differences in user reactions to trusted vs. deceptive sources. Deceptive (including disinformation) sources have a much higher rate of appreciation reactions and a lower rate of elaboration responses, compared to trusted news sources. Differences are still significant ( INLINEFORM0 ) but the trends reverse if we do not include disinformation sources. We also see an increase in the rate of question-reactions compared to trusted news sources if we exclude disinformation sources.", "For Reddit, there appears to be a very similar distribution across reaction types for trusted and deceptive sources. However, MWU tests still found that the differences between trusted and deceptive news sources were statistically significant ( INLINEFORM0 ) — regardless of whether we include or exclude disinformation sources. Posts that link to deceptive sources have higher rates of question, appreciation, and answering reactions, while posts that link to trusted sources have higher rates of elaboration, agreement, and disagreement.", "Next, we compared the speed with which users reacted to posts of sources of varying credibility. 
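The speed comparison that follows boils down to two computations: an empirical CDF of reaction delays for each source type, and a Mann-Whitney U test between the delay distributions. A minimal sketch with NumPy and SciPy, assuming the delays have already been extracted as hours between the source post and each reaction (the arrays below are placeholder data, and the one-hour step mirrors the step size used for the CDF plots):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def empirical_cdf(delays_hours, max_hours=72, step=1.0):
    """Fraction of reactions occurring within t hours, for t = 0, step, ..., max_hours."""
    delays = np.asarray(delays_hours, dtype=float)
    grid = np.arange(0.0, max_hours + step, step)
    return grid, np.array([(delays <= t).mean() for t in grid])

# Placeholder delay samples standing in for the real Twitter/Reddit reaction delays.
delays_trusted = np.random.lognormal(mean=1.0, sigma=1.0, size=5000)
delays_deceptive = np.random.lognormal(mean=1.3, sigma=1.0, size=5000)

grid, cdf_trusted = empirical_cdf(delays_trusted)
_, cdf_deceptive = empirical_cdf(delays_deceptive)

# Two-sided Mann-Whitney U test: do the two delay distributions differ?
stat, p_value = mannwhitneyu(delays_trusted, delays_deceptive, alternative="two-sided")
print(f"U={stat:.1f}, p={p_value:.3g}")
```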
Our original hypothesis was that users react to posts of trusted sources faster than posts of deceptive sources. The CDFs for each source type and platform (solid and dashed lines represent Reddit and Twitter respectively) are shown in Figure FIGREF14 . We observe that the lifetime of direct reactions to news sources on Twitter is often more extended than for sources on Reddit. One exception is answer reactions which almost always occur within the first hour after the Twitter new source originally posted the tweet being answered. This may be due to the different ways that users consume content on the two platforms. Users follow accounts on Twitter, whereas on Reddit users “follow” topics through their subscriptions to various subreddits. Users can view the news feeds of individual sources on Twitter and view all of the sources' posts. Reddit, on the other hand, is not designed to highlight individual users or news sources; instead new posts (regardless of the source) are viewed based on their hotness score within each subreddit.", "In addition, we observe that reactions to posts linked to trusted sources are less heavily concentrated within the first 12 to 15 hours of the post's lifetime on Reddit. The opposite is found on Twitter. Twitter sources may have a larger range of reaction delays, but they are also more heavily concentrated in the lower end of that range ( INLINEFORM0 )." ], [ "As we noted above, most studies that examine misinformation spread focus on individual events such as natural disasters BIBREF17 , political elections BIBREF18 , or crises BIBREF19 and examine the response to the event on social media. A recent study by Vosoughi et al. vosoughi2018spread found that news stories that were fact-checked and found to be false spread faster and to more people than news items found to be true. In contrast, our methodology considers immediate reactions to news sources of varying credibility, so we can determine whether certain reactions or reactions to trusted or deceptive news sources evoke more or faster responses from social media users." ], [ "In the current work, we have presented a content-based model that classifies user reactions into one of nine types, such as answer, elaboration, and question, etc., and a large-scale analysis of Twitter posts and Reddit comments in response to content from news sources of varying credibility.", "Our analysis of user reactions to trusted and deceptive sources on Twitter and Reddit shows significant differences in the distribution of reaction types for trusted versus deceptive news. However, due to differences in the user interface, algorithmic design, or user-base, we find that Twitter users react to trusted and deceptive sources very differently than Reddit users. For instance, Twitter users questioned disinformation sources less often and more slowly than they did trusted news sources; Twitter users also expressed appreciation towards disinformation sources more often and faster than towards trusted sources. Results from Reddit show similar, but far less pronounced, reaction results.", "Future work may focus on analysis of reaction behavior from automated (i.e., 'bot'), individual, or organization accounts; on additional social media platforms and languages; or between more fine-grained categories of news source credibility." ], [ "The research described in this paper is based on Twitter and Reddit data collected by the University of Notre Dame using public APIs. 
The research was supported by the Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory, a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy. This research is also supported by the Defense Advanced Research Projects Agency (DARPA), contract W911NF-17-C-0094. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government." ] ] }
{ "question": [ "How is speed measured?", "What is the architecture of their model?", "What are the nine types?" ], "question_id": [ "b54fc86dc2cc6994e10c1819b6405de08c496c7b", "b43a8a0f4b8496b23c89730f0070172cd5dca06a", "b161febf86cdd58bd247a934120410068b24b7d1" ], "nlp_background": [ "", "", "" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "time elapsed between the moment the link or content was posted/tweeted and the moment that the reaction comment or tweet occurred" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The first metric we report is the reaction type. Recent studies have found that 59% of bitly-URLs on Twitter are shared without ever being read BIBREF11 , and 73% of Reddit posts were voted on without reading the linked article BIBREF12 . Instead, users tend to rely on the commentary added to retweets or the comments section of Reddit-posts for information on the content and its credibility. Faced with this reality, we ask: what kind of reactions do users find when they browse sources of varying credibility? Discourse acts, or speech acts, can be used to identify the use of language within a conversation, e.g., agreement, question, or answer. Recent work by Zhang et al. zhang2017characterizing classified Reddit comments by their primary discourse act (e.g., question, agreement, humor), and further analyzed patterns from these discussions.", "The second metric we report is reaction speed. A study by Jin et al. jin2013epidemiological found that trusted news stories spread faster than misinformation or rumor; Zeng et al. zeng2016rumors found that tweets which deny rumors had shorter delays than tweets of support. Our second goal is to determine if these trends are maintained for various types of news sources on Twitter and Reddit.", "To examine whether users react to content from trusted sources differently than from deceptive sources, we measure the reaction delay, which we define as the time elapsed between the moment the link or content was posted/tweeted and the moment that the reaction comment or tweet occurred. We report the cumulative distribution functions (CDFs) for each source type and use Mann Whitney U (MWU) tests to compare whether users respond with a given reaction type with significantly different delays to news sources of different levels of credibility." ], "highlighted_evidence": [ "The first metric we report is the reaction type.", "The second metric we report is reaction speed. A study by Jin et al. jin2013epidemiological found that trusted news stories spread faster than misinformation or rumor; Zeng et al. zeng2016rumors found that tweets which deny rumors had shorter delays than tweets of support. Our second goal is to determine if these trends are maintained for various types of news sources on Twitter and Reddit.", "To examine whether users react to content from trusted sources differently than from deceptive sources, we measure the reaction delay, which we define as the time elapsed between the moment the link or content was posted/tweeted and the moment that the reaction comment or tweet occurred. 
We report the cumulative distribution functions (CDFs) for each source type and use Mann Whitney U (MWU) tests to compare whether users respond with a given reaction type with significantly different delays to news sources of different levels of credibility." ] } ], "annotation_id": [ "1253580ddca3f5c80fad5ae7d5499d6e925817e4" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "we combine a text sequence sub-network with a vector representation sub-network as shown in Figure FIGREF5 . The text sequence sub-network consists of an embedding layer initialized with 200-dimensional GloVe embeddings BIBREF15 followed by two 1-dimensional convolution layers, then a max-pooling layer followed by a dense layer. The vector representation sub-network consists of two dense layers. We incorporate information from both sub-networks through concatenated padded text sequences and vector representations of normalized Linguistic Inquiry and Word Count (LIWC) features BIBREF16 for the text of each post and its parent." ], "yes_no": null, "free_form_answer": "", "evidence": [ "Hence, the contributions of this work are two-fold: (1) we develop a linguistically-infused neural network model to classify reactions in social media posts, and (2) we apply our model to label 10.8M Twitter posts and 6.2M Reddit comments in order to evaluate the speed and type of user reactions to various news sources.", "We develop a neural network architecture that relies on content and other linguistic signals extracted from reactions and parent posts, and takes advantage of a “late fusion” approach previously used effectively in vision tasks BIBREF13 , BIBREF14 . More specifically, we combine a text sequence sub-network with a vector representation sub-network as shown in Figure FIGREF5 . The text sequence sub-network consists of an embedding layer initialized with 200-dimensional GloVe embeddings BIBREF15 followed by two 1-dimensional convolution layers, then a max-pooling layer followed by a dense layer. The vector representation sub-network consists of two dense layers. We incorporate information from both sub-networks through concatenated padded text sequences and vector representations of normalized Linguistic Inquiry and Word Count (LIWC) features BIBREF16 for the text of each post and its parent." ], "highlighted_evidence": [ "Hence, the contributions of this work are two-fold: (1) we develop a linguistically-infused neural network model to classify reactions in social media posts, and (2) we apply our model to label 10.8M Twitter posts and 6.2M Reddit comments in order to evaluate the speed and type of user reactions to various news sources.", "We develop a neural network architecture that relies on content and other linguistic signals extracted from reactions and parent posts, and takes advantage of a “late fusion” approach previously used effectively in vision tasks BIBREF13 , BIBREF14 . More specifically, we combine a text sequence sub-network with a vector representation sub-network as shown in Figure FIGREF5 . The text sequence sub-network consists of an embedding layer initialized with 200-dimensional GloVe embeddings BIBREF15 followed by two 1-dimensional convolution layers, then a max-pooling layer followed by a dense layer. The vector representation sub-network consists of two dense layers. 
We incorporate information from both sub-networks through concatenated padded text sequences and vector representations of normalized Linguistic Inquiry and Word Count (LIWC) features BIBREF16 for the text of each post and its parent." ] } ], "annotation_id": [ "12715a92fe478e5dc21809d69376576407202018" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "agreement", "answer", "appreciation", "disagreement", "elaboration", "humor", "negative reaction", "question", "other" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this section, we describe our approach to classify user reactions into one of eight types of discourse: agreement, answer, appreciation, disagreement, elaboration, humor, negative reaction, or question, or as none of the given labels, which we call “other”, using linguistically-infused neural network models." ], "highlighted_evidence": [ "\n", "In this section, we describe our approach to classify user reactions into one of eight types of discourse: agreement, answer, appreciation, disagreement, elaboration, humor, negative reaction, or question, or as none of the given labels, which we call “other”, using linguistically-infused neural network models." ] } ], "annotation_id": [ "cc1f08762ac577fbe2edb092b9769ae0da03c409" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] } ] }
{ "caption": [ "Figure 1: Architecture of neural network model used to predict reaction types.", "Table 1: Summary of the training data we recovered compared to the data collected by Zhang et al. (2017) reported as distributions of comments across reaction types.", "Figure 2: Comparison of our model’s performance, measured using F1 score, trained only on content features, with the performance reported by Zhang et al. (2017) trained on content, author, thread, structure, and community features.", "Table 2: Summary of Twitter and Reddit datasets used to measure the speed and types of reactions to Trusted and Deceptive news sources excluding (no disinformation) or including (All) the most extreme of the deceptive sources — those identified as spreading disinformation.", "Figure 3: Distributions of Deceptive news sources and reactions to those sources (Reddit comments or tweets, respectively) for the Reddit and Twitter datasets across the four subcategories of deceptive news sources.", "Figure 4: Distributions of five most frequently occurring reaction types within comments on Reddit and tweets on Twitter for each news source type (MWU p < 0.01).", "Figure 5: CDF plots of the volumes of reactions by reaction delays for the frequently occurring reactions (i.e., , reactions that occur in at least 5% of comments) for each source-type, using a step size of one hour. The CDF for Elaboration-reactions to Deceptive (no disinformation) Twitter news sources is occluded by the CDF for Deceptive Twitter news sources. This figure is best viewed in color." ], "file": [ "2-Figure1-1.png", "2-Table1-1.png", "2-Figure2-1.png", "3-Table2-1.png", "3-Figure3-1.png", "4-Figure4-1.png", "4-Figure5-1.png" ] }
1611.02550
Discriminative Acoustic Word Embeddings: Recurrent Neural Network-Based Approaches
Acoustic word embeddings, i.e., fixed-dimensional vector representations of variable-length spoken word segments, have begun to be considered for tasks such as speech recognition and query-by-example search. Such embeddings can be learned discriminatively so that they are similar for speech segments corresponding to the same word, while being dissimilar for segments corresponding to different words. Recent work has found that acoustic word embeddings can outperform dynamic time warping on query-by-example search and related word discrimination tasks. However, the space of embedding models and training approaches is still relatively unexplored. In this paper we present new discriminative embedding models based on recurrent neural networks (RNNs). We consider training losses that have been successful in prior work, in particular a cross entropy loss for word classification and a contrastive loss that explicitly aims to separate same-word and different-word pairs in a "Siamese network" training setting. We find that both classifier-based and Siamese RNN embeddings improve over previously reported results on a word discrimination task, with Siamese RNNs outperforming classification models. In addition, we present analyses of the learned embeddings and the effects of variables such as dimensionality and network structure.
{ "section_name": [ "Introduction", "Related work", "Approach", "Training", "EXPERIMENTS", "Classification network details", "Siamese network details", "Results", "Effect of model structure", "Effect of embedding dimensionality", "Effect of training vocabulary", "Visualization of embeddings", "Conclusion" ], "paragraphs": [ [ "Many speech processing tasks – such as automatic speech recognition or spoken term detection – hinge on associating segments of speech signals with word labels. In most systems developed for such tasks, words are broken down into sub-word units such as phones, and models are built for the individual units. An alternative, which has been considered by some researchers, is to consider each entire word segment as a single unit, without assigning parts of it to sub-word units. One motivation for the use of whole-word approaches is that they avoid the need for sub-word models. This is helpful since, despite decades of work on sub-word modeling BIBREF0 , BIBREF1 , it still poses significant challenges. For example, speech processing systems are still hampered by differences in conversational pronunciations BIBREF2 . A second motivation is that considering whole words at once allows us to consider a more flexible set of features and reason over longer time spans.", "Whole-word approaches typically involve, at some level, template matching. For example, in template-based speech recognition BIBREF3 , BIBREF4 , word scores are computed from dynamic time warping (DTW) distances between an observed segment and training segments of the hypothesized word. In query-by-example search, putative matches are typically found by measuring the DTW distance between the query and segments of the search database BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . In other words, whole-word approaches often boil down to making decisions about whether two segments are examples of the same word or not.", "An alternative to DTW that has begun to be explored is the use of acoustic word embeddings (AWEs), or vector representations of spoken word segments. AWEs are representations that can be learned from data, ideally such that the embeddings of two segments corresponding to the same word are close, while embeddings of segments corresponding to different words are far apart. Once word segments are represented via fixed-dimensional embeddings, computing distances is as simple as measuring a cosine or Euclidean distance between two vectors.", "There has been some, thus far limited, work on acoustic word embeddings, focused on a number of embedding models, training approaches, and tasks BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . In this paper we explore new embedding models based on recurrent neural networks (RNNs), applied to a word discrimination task related to query-by-example search. RNNs are a natural model class for acoustic word embeddings, since they can handle arbitrary-length sequences. We compare several types of RNN-based embeddings and analyze their properties. Compared to prior embeddings tested on the same task, our best models achieve sizable improvements in average precision." ], [ "We next briefly describe the most closely related prior work.", "Maas et al. BIBREF9 and Bengio and Heigold BIBREF10 used acoustic word embeddings, based on convolutional neural networks (CNNs), to generate scores for word segments in automatic speech recognition. Maas et al. 
trained CNNs to predict (continuous-valued) embeddings of the word labels, and used the resulting embeddings to define feature functions in a segmental conditional random field BIBREF17 rescoring system. Bengio and Heigold also developed CNN-based embeddings for lattice rescoring, but with a contrastive loss to separate embeddings of a given word from embeddings of other words.", "Levin et al. BIBREF11 developed unsupervised embeddings based on representing each word as a vector of DTW distances to a collection of reference word segments. This representation was subsequently used in several applications: a segmental approach for query-by-example search BIBREF12 , lexical clustering BIBREF18 , and unsupervised speech recognition BIBREF19 . Voinea et al. BIBREF15 developed a representation also based on templates, in their case phone templates, designed to be invariant to specific transformations, and showed their robustness on digit classification.", "Kamper et al. BIBREF13 compared several types of acoustic word embeddings for a word discrimination task related to query-by-example search, finding that embeddings based on convolutional neural networks (CNNs) trained with a contrastive loss outperformed the reference vector approach of Levin et al. BIBREF11 as well as several other CNN and DNN embeddings and DTW using several feature types. There have now been a number of approaches compared on this same task and data BIBREF11 , BIBREF20 , BIBREF21 , BIBREF22 . For a direct comparison with this prior work, in this paper we use the same task and some of the same training losses as Kamper et al., but develop new embedding models based on RNNs.", "The only prior work of which we are aware using RNNs for acoustic word embeddings is that of Chen et al. BIBREF16 and Chung et al. BIBREF14 . Chen et al. learned a long short-term memory (LSTM) RNN for word classification and used the resulting hidden state vectors as a word embedding in a query-by-example task. The setting was quite specific, however, with a small number of queries and speaker-dependent training. Chung et al. BIBREF14 worked in an unsupervised setting and trained single-layer RNN autoencoders to produce embeddings for a word discrimination task. In this paper we focus on the supervised setting, and compare a variety of RNN-based structures trained with different losses.", "" ], [ "", "An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, INLINEFORM0 , where each INLINEFORM1 is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, INLINEFORM2 . The basic embedding model structure we use is shown in Fig. FIGREF1 . The model consists of a deep RNN with some number INLINEFORM3 of stacked layers, whose final hidden state vector is passed as input to a set of INLINEFORM4 of fully connected layers; the output of the final fully connected layer is the embedding INLINEFORM5 .", "The RNN hidden state at each time frame can be viewed as a representation of the input seen thus far, and its value in the last time frame INLINEFORM0 could itself serve as the final word embedding. The fully connected layers are added to account for the fact that some additional transformation may improve the representation. For example, the hidden state may need to be larger than the desired word embedding dimension, in order to be able to \"remember\" all of the needed intermediate information. Some of that information may not be needed in the final embedding. 
In addition, the information maintained in the hidden state may not necessarily be discriminative; some additional linear or non-linear transformation may help to learn a discriminative embedding.", "Within this class of embedding models, we focus on Long Short-Term Memory (LSTM) networks BIBREF23 and Gated Recurrent Unit (GRU) networks BIBREF24 . These are both types of RNNs that include a mechanism for selectively retaining or discarding information at each time frame when updating the hidden state, in order to better utilize long-term context. Both of these RNN variants have been used successfully in speech recognition BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 .", "In an LSTM RNN, at each time frame both the hidden state INLINEFORM0 and an associated “cell memory\" vector INLINEFORM1 , are updated and passed on to the next time frame. In other words, each forward edge in Figure FIGREF1 can be viewed as carrying both the cell memory and hidden state vectors. The updates are modulated by the values of several gating vectors, which control the degree to which the cell memory and hidden state are updated in light of new information in the current frame. For a single-layer LSTM network, the updates are as follows:", " INLINEFORM0 ", "where INLINEFORM0 , and INLINEFORM1 are all vectors of the same dimensionality, INLINEFORM2 , and INLINEFORM3 are learned weight matrices of the appropriate sizes, INLINEFORM4 and INLINEFORM5 are learned bias vectors, INLINEFORM6 is a componentwise logistic activation, and INLINEFORM7 refers to the Hadamard (componentwise) product.", "Similarly, in a GRU network, at each time step a GRU cell determines what components of old information are retained, overwritten, or modified in light of the next step in the input sequence. The output from a GRU cell is only the hidden state vector. A GRU cell uses a reset gate INLINEFORM0 and an update gate INLINEFORM1 as described below for a single-layer network: INLINEFORM2 ", "where INLINEFORM0 , and INLINEFORM1 are all the same dimensionality, INLINEFORM2 , and INLINEFORM3 are learned weight matrices of the appropriate size, and INLINEFORM4 , INLINEFORM5 and INLINEFORM6 are learned bias vectors.", "All of the above equations refer to single-layer networks. In a deep network, with multiple stacked layers, the same update equations are used in each layer, with the state, cell, and gate vectors replaced by layer-specific vectors INLINEFORM0 and so on for layer INLINEFORM1 . For all but the first layer, the input INLINEFORM2 is replaced by the hidden state vector from the previous layer INLINEFORM3 .", "For the fully connected layers, we use rectified linear unit (ReLU) BIBREF29 activation, except for the final layer which depends on the form of supervision and loss used in training.", "" ], [ "We train the RNN-based embedding models using a set of pre-segmented spoken words. We use two main training approaches, inspired by prior work but with some differences in the details. As in BIBREF13 , BIBREF10 , our first approach is to use the word labels of the training segments and train the networks to classify the word. In this case, the final layer of INLINEFORM0 is a log-softmax layer. Here we are limited to the subset of the training set that has a sufficient number of segments per word to train a good classifier, and the output dimensionality is equal to the number of words (but see BIBREF13 for a study of varying the dimensionality in such a classifier-based embedding model by introducing a bottleneck layer). 
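A minimal PyTorch sketch of the embedding model described above: a stack of LSTM layers over the MFCC frames whose final hidden state feeds a stack of fully connected layers, with a log-softmax over word labels for the classifier variant or a plain linear output for the Siamese variant. The layer sizes follow the settings reported in the experiments section (512-dimensional hidden state, 1024-dimensional fully connected layers, 1061 word labels), but the code is an illustrative reconstruction rather than the authors' original Torch implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RNNWordEmbedder(nn.Module):
    """Stacked LSTM over acoustic frames -> fully connected layers -> embedding."""
    def __init__(self, feat_dim=39, hidden=512, rnn_layers=3, fc_layers=3,
                 fc_dim=1024, out_dim=1061, classify=True):
        super().__init__()
        # dropout rate between stacked recurrent layers is assumed for illustration
        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=rnn_layers,
                           batch_first=True, dropout=0.3)
        dims = [hidden] + [fc_dim] * (fc_layers - 1) + [out_dim]
        self.fcs = nn.ModuleList([nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:])])
        self.classify = classify  # True: word classifier; False: Siamese embedding

    def forward(self, x, lengths):
        # x: (batch, max_frames, feat_dim); lengths: 1-D tensor of segment lengths in frames.
        packed = nn.utils.rnn.pack_padded_sequence(
            x, lengths.cpu(), batch_first=True, enforce_sorted=False)
        _, (h_n, _) = self.rnn(packed)
        h = h_n[-1]                       # final hidden state of the top LSTM layer
        for fc in self.fcs[:-1]:
            h = F.relu(fc(h))             # ReLU between fully connected layers
        h = self.fcs[-1](h)               # no non-linearity applied to the output layer
        return F.log_softmax(h, dim=1) if self.classify else h
```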
This model is trained end-to-end and is optimized with a cross entropy loss. Although labeled data is necessarily limited, the hope is that the learned models will be useful even when applied to spoken examples of words not previously seen in the training data. For words not seen in training, the embeddings should correspond to some measure of similarity of the word to the training words, measured via the posterior probabilities of the previously seen words. In the experiments below, we examine this assumption by analyzing performance on words that appear in the training data compared to those that do not.", "The second training approach, based on earlier work of Kamper et al. BIBREF13 , is to train \"Siamese\" networks BIBREF30 . In this approach, full supervision is not needed; rather, we use weak supervision in the form of pairs of segments labeled as same or different. The base model remains the same as before—an RNN followed by a set of fully connected layers—but the final layer is no longer a softmax but rather a linear activation layer of arbitrary size. In order to learn the parameters, we simultaneously feed three word segments through three copies of our model (i.e. three networks with shared weights). One input segment is an “anchor\", INLINEFORM0 , the second is another segment with the same word label, INLINEFORM1 , and the third is a segment corresponding to a different word label, INLINEFORM2 . Then, the network is trained using a “cos-hinge\" loss:", " DISPLAYFORM0 ", "where INLINEFORM0 is the cosine distance between INLINEFORM1 . Unlike cross entropy training, here we directly aim to optimize relative (cosine) distance between same and different word pairs. For tasks such as query-by-example search, this training loss better respects our end objective, and can use more data since neither fully labeled data nor any minimum number of examples of each word should be needed.", "" ], [ "", "Our end goal is to improve performance on downstream tasks requiring accurate word discrimination. In this paper we use an intermediate task that more directly tests whether same- and different-word pairs have the expected relationship. and that allows us to compare to a variety of prior work. Specifically, we use the word discrimination task of Carlin et al. BIBREF20 , which is similar to a query-by-example task where the word segmentations are known. The evaluation consists of determining, for each pair of evaluation segments, whether they are examples of the same or different words, and measuring performance via the average precision (AP). We do this by measuring the cosine similarity between their acoustic word embeddings and declaring them to be the same if the distance is below a threshold. By sweeping the threshold, we obtain a precision-recall curve from which we compute the AP.", "The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31 . The word segments range from 50 to 200 frames in length. The acoustic features in each frame (the input to the word embedding models INLINEFORM0 ) are 39-dimensional MFCCs+ INLINEFORM1 + INLINEFORM2 . We use the same train, development, and test partitions as in prior work BIBREF13 , BIBREF11 , and the same acoustic features as in BIBREF13 , for as direct a comparison as possible. The train set contains approximately 10k example segments, while dev and test each contain approximately 11k segments (corresponding to about 60M pairs for computing the dev/test AP). 
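Training the Siamese variant uses the cos-hinge loss introduced above, which penalizes any triplet in which the anchor is not at least a margin m closer, in cosine distance, to its same-word example than to its different-word example. A minimal PyTorch sketch of that objective, written to match the description (a reconstruction, not the authors' code), using the margin of 0.4 reported in the Siamese network details:

```python
import torch
import torch.nn.functional as F

def cos_hinge_loss(anchor, same, diff, margin=0.4):
    """l = max(0, m + d_cos(x_a, x_s) - d_cos(x_a, x_d)), averaged over the batch.

    anchor, same, diff: (batch, embed_dim) embeddings of the anchor segment,
    a segment with the same word label, and a segment with a different label.
    """
    d_same = 1.0 - F.cosine_similarity(anchor, same, dim=1)  # cosine distance
    d_diff = 1.0 - F.cosine_similarity(anchor, diff, dim=1)
    return torch.clamp(margin + d_same - d_diff, min=0.0).mean()

# Each triplet (a, s, d) can also be fed as (s, a, d), so that every segment
# serves as an anchor, as described in the Siamese network details below.
```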
As in BIBREF13 , when training the classification-based embeddings, we use a subset of the training set containing all word types with a minimum of 3 occurrences, reducing the training set size to approximately 9k segments.", "When training the Siamese networks, the training data consists of all of the same-word pairs in the full training set (approximately 100k pairs). For each such training pair, we randomly sample a third example belonging to a different word type, as required for the INLINEFORM0 loss.", "" ], [ "Our classifier-based embeddings use LSTM or GRU networks with 2–4 stacked layers and 1–3 fully connected layers. The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061. The recurrent hidden state dimensionality is fixed at 512 and dropout BIBREF32 between stacked recurrent layers is used with probability INLINEFORM0 . The fully connected hidden layer dimensionality is fixed at 1024. Rectified linear unit (ReLU) non-linearities and dropout with INLINEFORM1 are used between fully-connected layers. However, between the final recurrent hidden state output and the first fully-connected layer no non-linearity or dropout is applied. These settings were determined through experiments on the development set.", "The classifier network is trained with a cross entropy loss and optimized using stochastic gradient descent (SGD) with Nesterov momentum BIBREF33 . The learning rate is initialized at 0.1 and is reduced by a factor of 10 according to the following heuristic: If 99% of the current epoch's average batch loss is greater than the running average of batch losses over the last 3 epochs, this is considered a plateau; if there are 3 consecutive plateau epochs, then the learning rate is reduced. Training stops when reducing the learning rate no longer improves dev set AP. Then, the model from the epoch corresponding to the the best dev set AP is chosen. Several other optimizers—Adagrad BIBREF34 , Adadelta BIBREF35 , and Adam BIBREF36 —were explored in initial experiments on the dev set, but all reported results were obtained using SGD with Nesterov momentum.", "" ], [ "For experiments with Siamese networks, we initialize (warm-start) the networks with the tuned classification network, removing the final log-softmax layer and replacing it with a linear layer of size equal to the desired embedding dimensionality. We explored embeddings with dimensionalities between 8 and 2048. We use a margin of 0.4 in the cos-hinge loss.", "In training the Siamese networks, each training mini-batch consists of INLINEFORM0 triplets. INLINEFORM1 triplets are of the form INLINEFORM2 where INLINEFORM3 and INLINEFORM4 are examples of the same class (a pair from the 100k same-word pair set) and INLINEFORM5 is a randomly sampled example from a different class. Then, for each of these INLINEFORM6 triplets INLINEFORM7 , an additional triplet INLINEFORM8 is added to the mini-batch to allow all segments to serve as anchors. This is a slight departure from earlier work BIBREF13 , which we found to improve stability in training and performance on the development set.", "In preliminary experiments, we compared two methods for choosing the negative examples INLINEFORM0 during training, a uniform sampling approach and a non-uniform one. In the case of uniform sampling, we sample INLINEFORM1 uniformly at random from the full set of training examples with labels different from INLINEFORM2 . This sampling method requires only word-pair supervision. 
In the case of non-uniform sampling, INLINEFORM3 is sampled in two steps. First, we construct a distribution INLINEFORM4 over word labels INLINEFORM5 and sample a different label from it. Second, we sample an example uniformly from within the subset with the chosen label. The goal of this method is to speed up training by targeting pairs that violate the margin constraint. To construct the multinomial PMF INLINEFORM6 , we maintain an INLINEFORM7 matrix INLINEFORM8 , where INLINEFORM9 is the number of unique word labels in training. Each word label corresponds to an integer INLINEFORM10 INLINEFORM11 [1, INLINEFORM12 ] and therefore a row in INLINEFORM13 . The values in a row of INLINEFORM14 are considered similarity scores, and we can retrieve the desired PMF for each row by normalizing by its sum.", "At the start of each epoch, we initialize INLINEFORM0 with 0's along the diagonal and 1's elsewhere (which reduces to uniform sampling). For each training pair INLINEFORM1 , we update INLINEFORM2 for both INLINEFORM3 and INLINEFORM4 :", " INLINEFORM0 ", "The PMFs INLINEFORM0 are updated after the forward pass of an entire mini-batch. The constant INLINEFORM1 enforces a potentially stronger constraint than is used in the INLINEFORM2 loss, in order to promote diverse sampling. In all experiments, we set INLINEFORM3 . This is a heuristic approach, and it would be interesting to consider various alternatives. Preliminary experiments showed that the non-uniform sampling method outperformed uniform sampling, and in the following we report results with non-uniform sampling.", "We optimize the Siamese network model using SGD with Nesterov momentum for 15 epochs. The learning rate is initialized to 0.001 and dropped every 3 epochs until no improvement is seen on the dev set. The final model is taken from the epoch with the highest dev set AP. All models were implemented in Torch BIBREF37 and used the rnn library of BIBREF38 .", "" ], [ " Based on development set results, our final embedding models are LSTM networks with 3 stacked layers and 3 fully connected layers, with output dimensionality of 1024 in the case of Siamese networks. Final test set results are given in Table TABREF7 . We include a comparison with the best prior results on this task from BIBREF13 , as well as the result of using standard DTW on the input MFCCs (reproduced from BIBREF13 ) and the best prior result using DTW, obtained with frame features learned with correlated autoencoders BIBREF21 . Both classifier and Siamese LSTM embedding models outperform all prior results on this task of which we are aware.", "We next analyze the effects of model design choices, as well as the learned embeddings themselves.", "" ], [ "Table TABREF10 shows the effect on development set performance of the number of stacked layers INLINEFORM0 , the number of fully connected layers INLINEFORM1 , and LSTM vs. GRU cells, for classifier-based embeddings. The best performance in this experiment is achieved by the LSTM network with INLINEFORM2 . However, performance still seems to be improving with additional layers, suggesting that we may be able to further improve performance by adding even more layers of either type. However, we fixed the model to INLINEFORM3 in order to allow for more experimentation and analysis within a reasonable time.", "Table TABREF10 reveals an interesting trend. When only one fully connected layer is used, the GRU networks outperform the LSTMs given a sufficient number of stacked layers. 
On the other hand, once we add more fully connected layers, the LSTMs outperform the GRUs. In the first few lines of Table TABREF10 , we use 2, 3, and 4 layer stacks of LSTMs and GRUs while holding fixed the number of fully-connected layers at INLINEFORM0 . There is clear utility in stacking additional layers; however, even with 4 stacked layers the RNNs still underperform the CNN-based embeddings of BIBREF13 until we begin adding fully connected layers.", "After exploring a variety of stacked RNNs, we fixed the stack to 3 layers and varied the number of fully connected layers. The value of each additional fully connected layer is clearly greater than that of adding stacked layers. All networks trained with 2 or 3 fully connected layers obtain more than 0.4 AP on the development set, while stacked RNNs with 1 fully connected layer are at around 0.3 AP or less. This may raise the question of whether some simple fully connected model may be all that is needed; however, previous work has shown that this approach is not competitive BIBREF13 , and convolutional or recurrent layers are needed to summarize arbitrary-length segments into a fixed-dimensional representation.", "" ], [ "For the Siamese networks, we varied the output embedding dimensionality, as shown in Fig. FIGREF11 . This analysis shows that the embeddings learned by the Siamese RNN network are quite robust to reduced dimensionality, outperforming the classifier model for all dimensionalities 32 or higher and outperforming previously reported dev set performance with CNN-based embeddings BIBREF13 for all dimensionalities INLINEFORM0 .", "" ], [ "We might expect the learned embeddings to be more accurate for words that are seen in training than for ones that are not. Fig. FIGREF11 measures this effect by showing performance as a function of the number of occurrences of the dev words in the training set. Indeed, both model types are much more successful for in-vocabulary words, and their performance improves the higher the training frequency of the words. However, performance increases more quickly for the Siamese network than for the classifier as training frequency increases. This may be due to the fact that, if a word type occurs at least INLINEFORM0 times in the classifier training set, then it occurs at least INLINEFORM1 times in the Siamese paired training data.", "" ], [ "In order to gain a better qualitative understanding of the differences between clasiffier and Siamese-based embeddings, and of the learned embedding space more generally, we plot a two-dimensional visualization of some of our learned embeddings via t-SNE BIBREF40 in Fig. FIGREF12 . For both classifier and Siamese embeddings, there is a marked difference in the quality of clusters formed by embeddings of words that were previously seen vs. previously unseen in training. However, the Siamese network embeddings appear to have better relative distances between word clusters with similar and dissimilar pronunciations. For example, the word programs appears equidistant from problems and problem in the classifier-based embedding space, but in the Siamese embedding space problems falls between problem and programs. Similarly, the cluster for democracy shifts with respect to actually and especially to better respect differences in pronunciation. More study of learned embeddings, using more data and word types, is needed to confirm such patterns in general. 
Improvements in unseen word embeddings from the classifier embedding space to the Siamese embedding space (such as for democracy, morning, and basketball) are a likely result of optimizing the model for relative distances between words.", "" ], [ "", "Our main finding is that RNN-based acoustic word embeddings outperform prior approaches, as measured via a word discrimination task related to query-by-example search. Our best results are obtained with deep LSTM RNNs with a combination of several stacked layers and several fully connected layers, optimized with a contrastive Siamese loss. Siamese networks have the benefit that, for any given training data set, they are effectively trained on a much larger set, in the sense that they measure a loss and gradient for every possible pair of data points. Our experiments suggest that the models could still be improved with additional layers. In addition, we have found that, for the purposes of acoustic word embeddings, fully connected layers are very important and have a more significant effect per layer than stacked layers, particularly when trained with the cross entropy loss function.", "These experiments represent an initial exploration of sequential neural models for acoustic word embeddings. There are a number of directions for further work. For example, while our analyses suggest that Siamese networks are better than classifier-based models at embedding previously unseen words, our best embeddings are still much poorer for unseen words. Improvements in this direction may come from larger training sets, or may require new models that better model the shared structure between words. Other directions for future work include additional forms of supervision and training, as well as application to downstream tasks." ] ] }
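The contrastive objective referenced throughout the Siamese-network sections above is the cos-hinge triplet loss with a margin of 0.4, computed over triplets of an anchor segment, a same-word segment, and a sampled different-word segment. The NumPy sketch below illustrates one standard formulation of that loss; it is a minimal illustration rather than the authors' Torch implementation, and the toy embedding values are invented for the example.

```python
import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity between two embedding vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def cos_hinge_loss(anchor, same, diff, margin=0.4):
    """Triplet cos-hinge loss: same-word pairs should be closer than
    different-word pairs by at least `margin`."""
    return max(0.0, margin + cosine_distance(anchor, same) - cosine_distance(anchor, diff))

# Toy 8-dimensional embeddings (invented values, for illustration only).
rng = np.random.default_rng(0)
x_anchor = rng.normal(size=8)                 # a segment of one word type
x_same = x_anchor + 0.1 * rng.normal(size=8)  # another segment of the same word
x_diff = rng.normal(size=8)                   # a segment of a different word

print(cos_hinge_loss(x_anchor, x_same, x_diff))
```

In training, the mini-batch loss would be the sum of this quantity over all sampled triplets, including the mirrored triplets in which the second same-word segment serves as the anchor.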
{ "question": [ "How do they represent input features of their model to train embeddings?", "Which dimensionality do they use for their embeddings?", "Which dataset do they use?", "By how much do they outpeform previous results on the word discrimination task?" ], "question_id": [ "d40662236eed26f17dd2a3a9052a4cee1482d7d6", "1d791713d1aa77358f11501f05c108045f53c8aa", "6b6360fab2edc836901195c0aba973eae4891975", "b6b5f92a1d9fa623b25c70c1ac67d59d84d9eec8" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "a vector of frame-level acoustic features" ], "yes_no": null, "free_form_answer": "", "evidence": [ "An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, INLINEFORM0 , where each INLINEFORM1 is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, INLINEFORM2 . The basic embedding model structure we use is shown in Fig. FIGREF1 . The model consists of a deep RNN with some number INLINEFORM3 of stacked layers, whose final hidden state vector is passed as input to a set of INLINEFORM4 of fully connected layers; the output of the final fully connected layer is the embedding INLINEFORM5 ." ], "highlighted_evidence": [ "An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, INLINEFORM0 , where each INLINEFORM1 is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, INLINEFORM2 ." ] } ], "annotation_id": [ "1fd4f3fbe7b6046c29581d726d5cfe3e080fd7c8" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "1061" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our classifier-based embeddings use LSTM or GRU networks with 2–4 stacked layers and 1–3 fully connected layers. The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061. The recurrent hidden state dimensionality is fixed at 512 and dropout BIBREF32 between stacked recurrent layers is used with probability INLINEFORM0 . The fully connected hidden layer dimensionality is fixed at 1024. Rectified linear unit (ReLU) non-linearities and dropout with INLINEFORM1 are used between fully-connected layers. However, between the final recurrent hidden state output and the first fully-connected layer no non-linearity or dropout is applied. These settings were determined through experiments on the development set." ], "highlighted_evidence": [ "The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061." ] } ], "annotation_id": [ "1296db0535d800668b7dfc49d903edf11643d543" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Switchboard conversational English corpus" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31 . 
The word segments range from 50 to 200 frames in length. The acoustic features in each frame (the input to the word embedding models INLINEFORM0 ) are 39-dimensional MFCCs+ INLINEFORM1 + INLINEFORM2 . We use the same train, development, and test partitions as in prior work BIBREF13 , BIBREF11 , and the same acoustic features as in BIBREF13 , for as direct a comparison as possible. The train set contains approximately 10k example segments, while dev and test each contain approximately 11k segments (corresponding to about 60M pairs for computing the dev/test AP). As in BIBREF13 , when training the classification-based embeddings, we use a subset of the training set containing all word types with a minimum of 3 occurrences, reducing the training set size to approximately 9k segments." ], "highlighted_evidence": [ "The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31 ." ] } ], "annotation_id": [ "2aa70ad856356c985fd3ab88b850c08da935d830" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Their best average precision tops previous best result by 0.202", "evidence": [ "FLOAT SELECTED: Table 1: Final test set results in terms of average precision (AP). Dimensionalities marked with * refer to dimensionality per frame for DTW-based approaches. For CNN and LSTM models, results are given as means over several training runs (5 and 10, respectively) along with their standard deviations." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Final test set results in terms of average precision (AP). Dimensionalities marked with * refer to dimensionality per frame for DTW-based approaches. For CNN and LSTM models, results are given as means over several training runs (5 and 10, respectively) along with their standard deviations." ] } ], "annotation_id": [ "e29d3437584259c203f003372b6df706a73753c3" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ] }
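The non-uniform negative sampling described in the training sections above keeps a label-by-label score matrix M, turns each row into a PMF by normalizing by its sum, samples a negative word label from that PMF, and then samples a segment uniformly from within that label. The sketch below shows only those mechanics; the per-pair update to M is not recoverable from the text (its formula was lost in extraction), so the `update_scores` function here is a hypothetical placeholder rather than the rule used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
V = 5  # number of unique word labels (1061 in the paper; tiny here)

# Epoch start: zeros on the diagonal, ones elsewhere, i.e. uniform over other labels.
M = np.ones((V, V)) - np.eye(V)

def sample_negative_label(M, label):
    """Sample a *different* word label from the row-normalized PMF of M."""
    pmf = M[label] / M[label].sum()
    return int(rng.choice(len(pmf), p=pmf))

def update_scores(M, label_a, label_b, amount):
    """Hypothetical symmetric update: boost the chance that two confusable
    labels are sampled as negatives for each other."""
    M[label_a, label_b] += amount
    M[label_b, label_a] += amount

update_scores(M, 2, 3, amount=0.8)   # pretend labels 2 and 3 violate the margin
print(sample_negative_label(M, 2))   # label 3 is now sampled more often
```

The diagonal stays at zero throughout, so a word label is never sampled as its own negative.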
{ "caption": [ "Fig. 1: LSTM-based acoustic word embedding model. For GRUbased models, the structure is the same, but the LSTM cells are replaced with GRU cells, and there is no cell activation vector; the recurrent connections only carry the hidden state vector hlt.", "Fig. 2: Effect of embedding dimensionality (left) and occurrences in training set (right).", "Table 1: Final test set results in terms of average precision (AP). Dimensionalities marked with * refer to dimensionality per frame for DTW-based approaches. For CNN and LSTM models, results are given as means over several training runs (5 and 10, respectively) along with their standard deviations.", "Table 2: Average precision on the dev set, using classifier-based embeddings. S = # stacked layers, F = # fully connected layers.", "Fig. 3: t-SNE visualization of word embeddings from the dev set produced by the classifier (top) vs. Siamese (bottom) models. Word labels seen at training time are denoted by triangles and word labels unseen at training time are denoted by circles." ], "file": [ "2-Figure1-1.png", "5-Figure2-1.png", "5-Table1-1.png", "5-Table2-1.png", "6-Figure3-1.png" ] }
2003.05522
Semantic Holism and Word Representations in Artificial Neural Networks
Artificial neural networks are a state-of-the-art solution for many problems in natural language processing. What can we learn about language and meaning from the way artificial neural networks represent it? Word representations obtained from the Skip-gram variant of the word2vec model exhibit interesting semantic properties. This is usually explained by referring to the general distributional hypothesis, which states that the meaning of the word is given by the contexts where it occurs. We propose a more specific approach based on Frege's holistic and functional approach to meaning. Taking Tugendhat's formal reinterpretation of Frege's work as a starting point, we demonstrate that it is analogical to the process of training the Skip-gram model and offers a possible explanation of its semantic properties.
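The semantic properties mentioned in this abstract are usually demonstrated through vector arithmetic over the learned embeddings, e.g. the vector for king minus man plus woman landing nearest to queen. The sketch below shows that nearest-neighbour check with a hand-made toy embedding table; real Skip-gram vectors would come from a trained model, and the numbers here are invented purely for illustration.

```python
import numpy as np

# Toy embedding table (invented values); a trained Skip-gram model would supply these.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "apple": np.array([0.5, 0.5, 0.5]),
}

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def nearest(query, exclude):
    """Vocabulary word whose vector is most cosine-similar to the query vector."""
    return max((w for w in emb if w not in exclude), key=lambda w: cos(emb[w], query))

query = emb["king"] - emb["man"] + emb["woman"]
print(nearest(query, exclude={"king", "man", "woman"}))  # prints "queen" for these toy vectors
```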
{ "section_name": [ "INTRODUCTION", "INTRODUCTION ::: Related work", "SEMANTIC HOLISM AND ATOMISM", "SEMANTIC HOLISM AND ATOMISM ::: Atomism", "SEMANTIC HOLISM AND ATOMISM ::: Holism", "WORD REPRESENTATIONS IN AI", "WORD REPRESENTATIONS IN AI ::: Semantic properties of the Skip-Gram model", "RELEVANT THEORIES OF MEANING", "RELEVANT THEORIES OF MEANING ::: The distributional hypothesis", "RELEVANT THEORIES OF MEANING ::: The use theory of meaning", "RELEVANT THEORIES OF MEANING ::: Structuralism", "SKIP-GRAM AND TRUTH-VALUE POTENTIAL", "SKIP-GRAM AND TRUTH-VALUE POTENTIAL ::: The truth-value potential", "SKIP-GRAM AND TRUTH-VALUE POTENTIAL ::: Word2vec models and semantic holism", "CONCLUSION AND FUTURE WORK" ], "paragraphs": [ [ "“Meaning is, therefore, something that words have in sentences; and it's something that sentences have in a language.” BIBREF0 On the other hand, meaning could also be something that words have on their own, with sentences being compositions and language a collection of words. This is the question of semantic holism versus atomism, which was important in the philosophy of language in the second half of the 20th century and has not been satisfyingly answered yet.", "Artificial neural networks are the state-of-the-art solution for many problems in natural language processing (and machine learning in general). They produce word representation with interesting properties, but the way they work is little understood from the perspective of linguistics or the philosophy of language.", "We believe that by finding parallels between concepts in AI and the philosophy of language, we can better understand both areas.", "In this paper, we present an analogy between meaning defined as truth-value potential (a reformulation of Fregean holistic and functional approach) and a variant of language representation model, therefore pointing out a possibility that its “striking syntactic and semantic properties” BIBREF1 are formed due to adhering to holistic principles." ], [ "We have found only one work concerning the philosophical aspects of neural language models BIBREF2. It is, however, concentrating on Self-Organizing Maps and Quine's version of semantic holism.", "There are papers showing that Skip-gram with negative sampling is implicitly a factorization of a word-context matrix (e.g. BIBREF3, although this result was later contested by various authors, such as BIBREF4 and BIBREF5), or deriving the equations in an alternative way BIBREF6 (discussed more in Section SECREF3). This may tell us something about the model, but it does not answer the principal question: why should the matrix factorized in a certain way contain semantic information?" ], [ "Semantic holism (or meaning holism) is “the thesis that what a linguistic expression means depends on its relations to many or all other expressions within the same totality. [...] The totality in question may be the language to which the expressions belong, or a theory formulation in that language.” BIBREF7 The opposing view is called semantic atomism, and it claims that there are expressions (typically words), whose meaning does not depend on the meaning of other expressions. The meaning of these expressions is given by something outside language (e.g. their relation to physical or mental objects).", "In the following sections, we will specify the implications of both alternatives for semantics. 
The question also plays a role in cognitive science (content identity and similarity), epistemology (commensurability of theories) and seems to be strongly connected with the analytic/synthetic distinction BIBREF0. There are other positions in between these two, such as semantic molecularism or the belief that neither relations external nor internal are primary in forming meaning. However, to keep this text simple, we will only concentrate on extreme positions. We will also only talk about words, although the same argument can be used with smaller meaningful language units (e.g. parts of a compound word).", "Our goal is not to asses whether the truth lies with holism, atomism or neither of them. We will only show that holism is a useful perspective when understanding neural language models is concerned.", "Before we get into details of the two perspectives, let us point out two critical aspects of their difference: holism proclaims interdependence of meanings of words, contrary to their independence in atomism. And holism favours decomposition over composition." ], [ "“It is a widely held view that much of the history of the philosophy of language consists of a failed attempt to make semantic atomism work.” BIBREF0", "Atomism played an important role in analytic philosophy, starting with Bertrand Russell's logical atomism and continuing with logical positivism, as exemplified in this quote by Carnap BIBREF8:", "A language consists of a vocabulary and a syntax, i.e. a set of words which have meanings and rules of sentence formation. These rules indicate how sentences may be formed out of the various sorts of words.", "For logical positivists, words have meaning, because they refer to objects (be it physical, sensual, logical, mathematical or other). The rules of composition determine the meaning of sentences (and rule out senseless sequences of words).", "Under this (or similar) view, the fact that words refer to the outside world is presupposed. Their references are independent of each other (that “dog” refers to dog is independent of that “horse” refers to horse). There is strong emphasis on compositionality, that reached its peak in Chomskian linguistics and is still relevant today.", "Crucially, this means that a word can have meaning on its own (e.g. by referring to something). The meaning of larger units, such as sentences, is derived by the rules of composition from the meaning of words." ], [ "Semantic holism accents the interdependence of meaning. The whole (language, theory, ...) is the primary vehicle of meaning. The meaning of smaller units is derived by decomposition.", "This view is motivated by the same word having a different meaning in a different context. 
Gottlob Frege has shown BIBREF9 that even such seemingly unambiguous words as numbers play distinct roles in different situations: “5 is a prime number” and “there are 5 cows on the meadow” are different at least in that the first “5” signifies a complete (abstract) object, while the second one needs to be supplemented with information that it is cattle of which there are 5 specimens, otherwise the expression would not be grammatical.", "Frege promoted what we could call sentence holism: “Only in the context of a sentence does a word have a meaning.” BIBREF10 We will later use its modern reformulation to show an analogy with certain neural language models and therefore their holistic character.", "Another group of arguments for holism consist of variations on the theme of impossibility of knowing or using a word without being able to use other words. For example, it could be argued that a person could not correctly use the word “mammal”, without also knowing (at least some of) “bird”, “animal” and kinds of animals. Therefore the meaning of words cannot be formed in isolation.", "Something that is harder to explain under holism than under atomism is the fact that words refer to objects. If the meaning of words is given by other words, how is it connected to the world around us? However, not all words refer to something. And even if subscribing to holism makes explaining reference harder, it may be because it is a hard problem to explain.", "Another thing that is simpler under atomism is compositionality. While in atomism it plays a central role as one of the presupposed properties of language, holism may not need it. But it does not claim that words do not have meanining at all, only that it is derived (by some sort of decomposition) from the meaning of the whole." ], [ "Although all artificial neural networks that work with language must have some way of representing it, the most interesting representations come from neural language models. Language modelling is a task of predicting a missing word from a sequence or generating text. There is also a similar class of models that are designed specifically to produce representations of language units, which we will call neural language representation models.", "The representations (also called embeddings) are high dimensional vectors of real numbers. They are either learned together with the rest of the network for the particular task or pretrained by a general language representation model (typically on a larger dataset not specific for the task).", "Some neural language (representation) models produce representation with semantic properties, although the task of language modeling itself is not (at least at the first sight) directly connected with semantics and no explicit semantic annotation is given to the neural network.", "These semantic properties became popular with the invention of the word2vec software and the Skip-gram model, whose author said about it BIBREF1:", "The model itself has no knowledge of syntax or morphology or semantics. Remarkably, training such a purely lexical model to maximize likelihood will induce word representations with striking syntactic and semantic properties.", "However, they did not present any explanation of the phenomenon.", "Goldberg and Levy BIBREF6 present a detailed derivation of the central equation of the Skip-gram model. In the last section they say:", "Why does this produce good word representations?", "Good question. 
We don't really know.", "The distributional hypothesis states that words in similar contexts have similar meanings. The objective [of the Skip-gram model] clearly tries to increase the [dot product of the context and the word representations] for good word-context pairs, and decrease it for bad ones. Intuitively, this means that words that share many contexts will be similar to each other (note also that contexts sharing many words will also be similar to each other). This is, however, very hand-wavy. Can we make this intuition more precise? We'd really like to see something more formal.", "We believe that the implicit holistic component of this “hand-wavy” approach is central to the quality of Skip-gram representations and we can make the intuition more precise by analogy with the definition of the truth-value potential." ], [ "The Skip-gram model was introduced by Tomáš Mikolov et al. BIBREF11 as a method to efficiently train word embeddings. It exceeded state-of-the-art in various semantic tasks. The embeddings have interesting semantic properties, most notably the vector arithmetic illustrated by Figure FIGREF4 and the following equation BIBREF1:", "meaning that starting with the word “king”, if we subtract the vector for the word “man” and add the vector for the word “woman”, the nearest vector in the embedding space will be the one that corresponds to the word “queen”. This means that queen is to woman as king is to man.", "Hollis et al. BIBREF12 show that it is possible to infer various psycholinguistic and semantic properties of words from the Skip-gram embeddings. Mikolov et al. BIBREF13 also trained the Skip-gram model with phrases, resulting in even simpler and more elegant equations, such as", "Mikolov et al. BIBREF11 proposed another shallow neural language model, Continuous Bag of Words (CBOW). The main difference between CBOW and Skip-gram (see Figure FIGREF6) is that while Skip-gram predicts context words from a given word, CBOW predicts a word from a given context." ], [ "In this section, we discuss theories of meaning that are relevant to word representations in artificial neural networks. Notice that even though they strictly speaking do not require meaning holism, they all lean towards it quite strongly." ], [ "Holism is generally a better alternative in cases where there is nothing beside language itself to anchor meaning to. This is the case of neural language (representation) models. If they represent meaning at all, it must be derived from the training corpus. This may be the reason behind the popularity of the distributional hypothesis in neural language model literature. The famous saying by Firth BIBREF14, “You shall know a word by the company it keeps!”, is quoted in majority of papers concerned with vector space models of language.", "The general distributional hypothesis states that the meaning of a word is given by the contexts in which it occurs. It is, however, worth noticing that in Firth's theory, collocation is just one among multiple levels of meaning and his text does not support the idea of meaning based on context alone.", "A more suitable formulation of the distributional hypothesis (referenced in connection to Skip-gram in BIBREF15) is found in Distributional structure BIBREF16, where it is suggested that distribution may be used for comparing meanings and that “difference of meaning correlates with difference of distribution”.", "Although this certainly describes a basic principle of neural language models, it is still rather vague." 
], [ "The use theory of meaning can be summed up as “the meaning of a word is its use in the language” BIBREF17. It is associated with late Wittgenstein's concept of language game. In Philosophical Investigations BIBREF17, he writes:", "To say “This combination of words makes no sense” excludes it from the sphere of language and thereby bounds the domain of language. [...] When a sentence is called senseless, it is not as it were its sense that is senseless. But a combination of words is being excluded from the language, withdrawn from circulation.", "This “bounding of the domain of language” is precisely what language model does, therefore the use theory may be one way to connect language modelling and semantics.", "That “knowledge of language emerges from language use” is also one of main hypotheses of cognitive linguistics BIBREF18." ], [ "In structuralism BIBREF19, the meaning of a word is given by its relation to the other words of the language:", "The elements of a structure have neither extrinsic designation, nor intrinsic signification. Then what is left? [...] [N]othing other than a sense [...]: a sense which is necessarily and uniquely “positional.” BIBREF20", "This holds for word representations in artificial neural networks as well. The vectors representing the words do not have any other meaning than their position among the rest of the vectors and a single vector does not have any significance outside the model. This is also demonstrated by the vectors being different every time the model is trained because of random initialization." ], [ "In this section, we introduce the truth-value potential and show that Skip-gram corresponds to it better than CBOW." ], [ "Tugendhat's compact reformulation of Frege's sentence holism, the definition of meaning as truth-value potential is BIBREF21:", "[T]wo expressions $\\phi $ and $\\psi $ have the same truth-value potential if and only if, whenever each is completed by the same expression to form a sentence, the two sentences have the same truth-value.", "We can also express this definition in the following form:", "where $M$ is the truth-value potential (meaning), $T$ is the truth-value of the sentence and $x(\\omega )$ is the result of completing the expression $\\omega $ by the expression $x$ to form a sentence.", "One important aspect of this definition is that, following Frege BIBREF10, it is based on an assumption that the sentence (or rather the corresponding judgement) is the basic unit of meaning." ], [ "The definition of meaning as truth-value potential is analogous to the process of training a model for word representations. One difference is that when we are training a model, we do not have the whole of language at our disposal. Even after approximating the language with a finite corpus, it still is not practical to compare all the contexts for a given word at the same time, therefore the universal quantifier has to be replaced by an iterative process of examining the contexts one by one (or actually batch by batch, which is a step back towards the totality that is being estimated). And we have no means to asses whether the sentences from the corpus are true or false. We can either assume that they are mostly true, or try to replace the concept of truth with something else (maybe language use). Even the first option seems to be enough—imagine a corpus full of false sentences about cats, e.g. “Cats can fly.”, “Cats are cetaceans.” etc. 
We cannot expect the representation of the word “cats” in a model trained on this corpus to be any good, therefore the requirement for the corpus to consist mostly of true sentences is not excessive.", "The simplest model that corresponds to this analogy is the Skip-gram model. It does just what is described in the definition – it fixes a word and goes through all the possible contexts. It compares the words based on the context. The context words are predicted and their representations are fixed (in a single training step), while the representation of a single word is learned. By learning the representation of a word from the representation of the context, Skip-gram complies to the principles of semantic holism. The analogy between the definition of truth-value potential and the process of training the Skip-gram model is one possible explanation for its semantic properties and its performance in semantic tasks.", "The complementary CBOW architecture (see Figure FIGREF6) performs much worse in the evaluation of the semantic tasks BIBREF11. In CBOW, a missing word is predicted from its context. Therefore, in a single learning step, the representation of the missing word is fixed. What changes (and is learned) is the representation of the context words. By learning the representation of the context from the representation of the word, CBOW is implicitly conforming to semantic atomism: words are the basic units of meaning and the meaning of the broader context is derived from the atomic meaning of words. This may be the reason why CBOW does not exhibit the same semantic properties as Skip-gram and it performs worse in semantic tasks." ], [ "The distributional hypothesis as an explanation for the semantic properties of neural language models should be expanded into a more detailed account. We show one possible way to do that via a Fregean approach to meaning.", "Both the distributional hypothesis itself and Tugendhat's interpretation of Frege's work are examples of holistic approaches to meaning, where the meaning of the whole determines the meaning of parts. As we demonstrated on the opposition between Skip-gram and CBOW models, the distinction between semantic holism and atomism may play an essential role in semantic properties of neural language representations models.", "We have demonstrated the connection between the Skip-gram model and the definition of meaning as truth-value potential. Although this is an isolated observation of an analogy between a specific model and a specific theory about meaning, it is a crucial step towards finding a theory of meaning that would correspond to the current results of NLP research, increasing our understanding of NLP and ultimately the language itself.", "The direction of research from successful language technologies to properties of language itself offers many opportunities for inquiry, with very few being explored so far.", "Many state-of-the-art models for natural language processing use smaller units than words for their input and output. This analysis could be extended to take this into account.", "It might also be interesting to think about the philosophy of science in technical fields dominated by machine learning, but that is far beyond the scope of this paper.", "This work has been supported by the grant 18-02196S of the Czech Science Foundation. This research was partially supported by SVV project number 260 575." ] ] }
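The asymmetry this paper draws between Skip-gram and CBOW (learning a word's representation from its context versus learning the context representations from the word) can be made concrete with a single negative-sampling-style gradient step. The sketch below is a simplified illustration of the direction of each update, not the full word2vec implementation: it uses one context word, one negative sample, and toy dimensions, and the update equations follow the standard negative-sampling objective rather than anything specific to the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
V, d = 6, 4                           # toy vocabulary size and embedding dimension
W_in = rng.normal(0.0, 0.1, (V, d))   # word ("input") embeddings
W_out = rng.normal(0.0, 0.1, (V, d))  # context ("output") embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def skipgram_step(word, context, negative, lr=0.1):
    """Skip-gram direction: the focus word's vector is pulled toward a true
    context word and away from a sampled negative, so the word representation
    is shaped by the representations of its contexts."""
    for c, label in ((context, 1.0), (negative, 0.0)):
        g = sigmoid(np.dot(W_in[word], W_out[c])) - label
        grad_word = g * W_out[c]      # gradient w.r.t. the word vector
        W_out[c] -= lr * g * W_in[word]
        W_in[word] -= lr * grad_word

def cbow_step(context, word, negative, lr=0.1):
    """CBOW direction (single context word for simplicity): the context vector
    is pulled toward the observed word, so the context representation is shaped
    by the representation of the word."""
    for w, label in ((word, 1.0), (negative, 0.0)):
        g = sigmoid(np.dot(W_in[context], W_out[w])) - label
        grad_ctx = g * W_out[w]
        W_out[w] -= lr * g * W_in[context]
        W_in[context] -= lr * grad_ctx

skipgram_step(word=0, context=1, negative=4)
cbow_step(context=1, word=0, negative=4)
```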
{ "question": [ "How does Frege's holistic and functional approach to meaning relates to general distributional hypothesis?", "What does Frege's holistic and functional approach to meaning states?" ], "question_id": [ "86a93a2d1c19cd0cd21ad1608f2a336240725700", "6090d3187c41829613abe785f0f3665d9ecd90d9" ], "nlp_background": [ "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no" ], "search_query": [ "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "interpretation of Frege's work are examples of holistic approaches to meaning" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Both the distributional hypothesis itself and Tugendhat's interpretation of Frege's work are examples of holistic approaches to meaning, where the meaning of the whole determines the meaning of parts. As we demonstrated on the opposition between Skip-gram and CBOW models, the distinction between semantic holism and atomism may play an essential role in semantic properties of neural language representations models." ], "highlighted_evidence": [ "Both the distributional hypothesis itself and Tugendhat's interpretation of Frege's work are examples of holistic approaches to meaning, where the meaning of the whole determines the meaning of parts." ] } ], "annotation_id": [ "12cbe7b5338668d7496f2ee6247b5343f0c35ae3" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Only in the context of a sentence does a word have a meaning." ], "yes_no": null, "free_form_answer": "", "evidence": [ "Frege promoted what we could call sentence holism: “Only in the context of a sentence does a word have a meaning.” BIBREF10 We will later use its modern reformulation to show an analogy with certain neural language models and therefore their holistic character." ], "highlighted_evidence": [ "Frege promoted what we could call sentence holism: “Only in the context of a sentence does a word have a meaning.” BIBREF10" ] } ], "annotation_id": [ "68e12003b1ff69b600deee00c2035adeba083bc3" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1. Examples of embeddings semantic relations according to [18].", "Figure 2. CBOW and Skip-gram language models according to [16]." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png" ] }
1601.06068
Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing
One of the limitations of semantic parsing approaches to open-domain question answering is the lexicosyntactic gap between natural language questions and knowledge base entries -- there are many ways to ask a question, all with the same answer. In this paper we propose to bridge this gap by generating paraphrases of the input question with the goal that at least one of them will be correctly mapped to a knowledge-base query. We introduce a novel grammar model for paraphrase generation that does not require any sentence-aligned paraphrase corpus. Our key idea is to leverage the flexibility and scalability of latent-variable probabilistic context-free grammars to sample paraphrases. We do an extrinsic evaluation of our paraphrases by plugging them into a semantic parser for Freebase. Our evaluation experiments on the WebQuestions benchmark dataset show that the performance of the semantic parser significantly improves over strong baselines.
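This abstract's central operation, sampling paraphrase derivations top-down from a probabilistic context-free grammar, can be illustrated with a tiny hand-written grammar. The rules, probabilities, and vocabulary below are invented for the example and are unrelated to the L-PCFG estimated from Paralex; latent-state refinements and the word-lattice constraint described later are omitted to keep the sketch minimal.

```python
import random

random.seed(0)

# Toy PCFG: nonterminal -> list of (probability, right-hand-side) rules.
GRAMMAR = {
    "SBARQ": [(1.0, ["WHNP", "SQ", "?"])],
    "WHNP":  [(0.5, ["what", "NN"]), (0.5, ["which", "NN"])],
    "NN":    [(0.6, ["language"]), (0.4, ["tongue"])],
    "SQ":    [(0.5, ["do", "people", "in", "czech", "republic", "speak"]),
              (0.5, ["is", "spoken", "in", "czech", "republic"])],
}

def sample(symbol):
    """Top-down sampling: expand nonterminals recursively, keep terminals as-is."""
    if symbol not in GRAMMAR:  # terminal symbol
        return [symbol]
    probs, rhss = zip(*GRAMMAR[symbol])
    rhs = random.choices(rhss, weights=probs, k=1)[0]
    return [token for child in rhs for token in sample(child)]

for _ in range(3):
    print(" ".join(sample("SBARQ")))
```

Repeating the sampling step yields lexically and syntactically varied questions from the same start symbol, which is the behaviour the paper's controlled, lattice-constrained sampling builds on.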
{ "section_name": [ "Introduction", "Paraphrase Generation Using Grammars", "Paraphrases Generation Algorithm", "Bi-Layered L-PCFGs", "Paraphrase Classification", "Semantic Parsing using Paraphrasing", "Ungrounded Graphs from Paraphrases", "Grounded Graphs from Ungrounded Graphs", "Learning", "Experimental Setup", "Evaluation Data and Metric", "Baselines", "Implementation Details", "Results and Discussion", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "Semantic parsers map sentences onto logical forms that can be used to query databases BIBREF0 , BIBREF1 , instruct robots BIBREF2 , extract information BIBREF3 , or describe visual scenes BIBREF4 . In this paper we consider the problem of semantically parsing questions into Freebase logical forms for the goal of question answering. Current systems accomplish this by learning task-specific grammars BIBREF5 , strongly-typed CCG grammars BIBREF6 , BIBREF7 , or neural networks without requiring any grammar BIBREF8 . These methods are sensitive to the words used in a question and their word order, making them vulnerable to unseen words and phrases. Furthermore, mismatch between natural language and Freebase makes the problem even harder. For example, Freebase expresses the fact that “Czech is the official language of Czech Republic” (encoded as a graph), whereas to answer a question like “What do people in Czech Republic speak?” one should infer people in Czech Republic refers to Czech Republic and What refers to the language and speak refers to the predicate official language.", "We address the above problems by using paraphrases of the original question. Paraphrasing has shown to be promising for semantic parsing BIBREF9 , BIBREF10 , BIBREF11 . We propose a novel framework for paraphrasing using latent-variable PCFGs (L-PCFGs). Earlier approaches to paraphrasing used phrase-based machine translation for text-based QA BIBREF12 , BIBREF13 , or hand annotated grammars for KB-based QA BIBREF10 . We find that phrase-based statistical machine translation (MT) approaches mainly produce lexical paraphrases without much syntactic diversity, whereas our grammar-based approach is capable of producing both lexically and syntactically diverse paraphrases. Unlike MT based approaches, our system does not require aligned parallel paraphrase corpora. In addition we do not require hand annotated grammars for paraphrase generation but instead learn the grammar directly from a large scale question corpus.", "The main contributions of this paper are two fold. First, we present an algorithm (§ \"Paraphrase Generation Using Grammars\" ) to generate paraphrases using latent-variable PCFGs. We use the spectral method of narayan-15 to estimate L-PCFGs on a large scale question treebank. Our grammar model leads to a robust and an efficient system for paraphrase generation in open-domain question answering. While CFGs have been explored for paraphrasing using bilingual parallel corpus BIBREF14 , ours is the first implementation of CFG that uses only monolingual data. Second, we show that generated paraphrases can be used to improve semantic parsing of questions into Freebase logical forms (§ \"Semantic Parsing using Paraphrasing\" ). We build on a strong baseline of reddylargescale2014 and show that our grammar model competes with MT baseline even without using any parallel paraphrase resources." ], [ "Our paraphrase generation algorithm is based on a model in the form of an L-PCFG. 
L-PCFGs are PCFGs where the nonterminals are refined with latent states that provide some contextual information about each node in a given derivation. L-PCFGs have been used in various ways, most commonly for syntactic parsing BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 .", "In our estimation of L-PCFGs, we use the spectral method of narayan-15, instead of using EM, as has been used in the past by matsuzaki-2005 and petrov-2006. The spectral method we use enables the choice of a set of feature functions that indicate the latent states, which proves to be useful in our case. It also leads to sparse grammar estimates and compact models.", "The spectral method works by identifying feature functions for “inside” and “outside” trees, and then clusters them into latent states. Then it follows with a maximum likelihood estimation step, that assumes the latent states are represented by clusters obtained through the feature function clustering. For more details about these constructions, we refer the reader to cohen-13 and narayan-15.", "The rest of this section describes our paraphrase generation algorithm." ], [ "We define our paraphrase generation task as a sampling problem from an L-PCFG $G_{\\mathrm {syn}}$ , which is estimated from a large corpus of parsed questions. Once this grammar is estimated, our algorithm follows a pipeline with two major steps.", "We first build a word lattice $W_q$ for the input question $q$ . We use the lattice to constrain our paraphrases to a specific choice of words and phrases that can be used. Once this lattice is created, a grammar $G_{\\mathrm {syn}}^{\\prime }$ is then extracted from $G_{\\mathrm {syn}}$ . This grammar is constrained to the lattice.", "We experiment with three ways of constructing word lattices: naïve word lattices representing the words from the input question only, word lattices constructed with the Paraphrase Database BIBREF14 and word lattices constructed with a bi-layered L-PCFG, described in § \"Bi-Layered L-PCFGs\" . For example, Figure 1 shows an example word lattice for the question What language do people in Czech Republic speak? using the lexical and phrasal rules from the PPDB.", "Once $G_{\\mathrm {syn}}^{\\prime }$ is generated, we sample paraphrases of the input question $q$ . These paraphrases are further filtered with a classifier to improve the precision of the generated paraphrases.", "We train the L-PCFG $G_{\\mathrm {syn}}$ on the Paralex corpus BIBREF9 . Paralex is a large monolingual parallel corpus, containing 18 million pairs of question paraphrases with 2.4M distinct questions in the corpus. It is suitable for our task of generating paraphrases since its large scale makes our model robust for open-domain questions. We construct a treebank by parsing 2.4M distinct questions from Paralex using the BLLIP parser BIBREF25 .", "Given the treebank, we use the spectral algorithm of narayan-15 to learn an L-PCFG for constituency parsing to learn $G_{\\mathrm {syn}}$ . We follow narayan-15 and use the same feature functions for the inside and outside trees as they use, capturing contextual syntactic information about nonterminals. We refer the reader to narayan-15 for more detailed description of these features. In our experiments, we set the number of latent states to 24.", "Once we estimate $G_{\\mathrm {syn}}$ from the Paralex corpus, we restrict it for each question to a grammar $G_{\\mathrm {syn}}^{\\prime }$ by keeping only the rules that could lead to a derivation over the lattice. 
This step is similar to lexical pruning in standard grammar-based generation process to avoid an intermediate derivation which can never lead to a successful derivation BIBREF26 , BIBREF27 .", "Sampling a question from the grammar $G_{\\mathrm {syn}}^{\\prime }$ is done by recursively sampling nodes in the derivation tree, together with their latent states, in a top-down breadth-first fashion. Sampling from the pruned grammar $G_{\\mathrm {syn}}^{\\prime }$ raises an issue of oversampling words that are more frequent in the training data. To lessen this problem, we follow a controlled sampling approach where sampling is guided by the word lattice $W_q$ . Once a word $w$ from a path $e$ in $W_q$ is sampled, all other parallel or conflicting paths to $e$ are removed from $W_q$ . For example, generating for the word lattice in Figure 1 , when we sample the word citizens, we drop out the paths “human beings”, “people's”, “the population”, “people” and “members of the public” from $W_q$ and accordingly update the grammar. The controlled sampling ensures that each sampled question uses words from a single start-to-end path in $W_q$ . For example, we could sample a question what is Czech Republic 's language? by sampling words from the path (what, language, do, people 's, in, Czech, Republic, is speaking, ?) in Figure 1 . We repeat this sampling process to generate multiple potential paraphrases.", "The resulting generation algorithm has multiple advantages over existing grammar generation methods. First, the sampling from an L-PCFG grammar lessens the lexical ambiguity problem evident in lexicalized grammars such as tree adjoining grammars BIBREF27 and combinatory categorial grammars BIBREF28 . Our grammar is not lexicalized, only unary context-free rules are lexicalized. Second, the top-down sampling restricts the combinatorics inherent to bottom-up search BIBREF29 . Third, we do not restrict the generation by the order information in the input. The lack of order information in the input often raises the high combinatorics in lexicalist approaches BIBREF30 . In our case, however, we use sampling to reduce this problem, and it allows us to produce syntactically diverse questions. And fourth, we impose no constraints on the grammar thereby making it easier to maintain bi-directional (recursive) grammars that can be used both for parsing and for generation BIBREF31 ." ], [ "As mentioned earlier, one of our lattice types is based on bi-layered PCFGs introduced here.", "In their traditional use, the latent states in L-PCFGs aim to capture syntactic information. We introduce here the use of an L-PCFG with two layers of latent states: one layer is intended to capture the usual syntactic information, and the other aims to capture semantic and topical information by using a large set of states with specific feature functions.", "To create the bi-layered L-PCFG, we again use the spectral algorithm of narayan-15 to estimate a grammar $G_{\\mathrm {par}}$ from the Paralex corpus. We use the word alignment of paraphrase question pairs in Paralex to map inside and outside trees of each nonterminals in the treebank to bag of word features. The number of latent states we use is 1,000.", "Once the two feature functions (syntactic in $G_{\\mathrm {syn}}$ and semantic in $G_{\\mathrm {par}}$ ) are created, each nonterminal in the training treebank is assigned two latent states (cluster identifiers). Figure 2 shows an example annotation of trees for three paraphrase questions from the Paralex corpus. 
We compute the parameters of the bi-layered L-PCFG $G_{\\mathrm {layered}}$ with a simple frequency count maximum likelihood estimate over this annotated treebank. As such, $G_{\\mathrm {layered}}$ is a combination of $G_{\\mathrm {syn}}$ and $G_{\\mathrm {par}}$ , resulting in 24,000 latent states (24 syntactic x 1000 semantic).", "Consider an example where we want to generate paraphrases for the question what day is nochebuena. Parsing it with $G_{\\mathrm {layered}}$ will lead to the leftmost hybrid structure as shown in Figure 2 . The assignment of the first latent states for each nonterminals ensures that we retrieve the correct syntactic representation of the sentence. Here, however, we are more interested in the second latent states assigned to each nonterminals which capture the paraphrase information of the sentence at various levels. For example, we have a unary lexical rule (NN-*-142 day) indicating that we observe day with NN of the paraphrase type 142. We could use this information to extract unary rules of the form (NN-*-142 $w$ ) in the treebank that will generate words $w$ which are paraphrases to day. Similarly, any node WHNP-*-291 in the treebank will generate paraphrases for what day, SBARQ-*-403, for what day is nochebuena. This way we will be able to generate paraphrases when is nochebuena and when is nochebuena celebrated as they both have SBARQ-*-403 as their roots.", "To generate a word lattice $W_q$ for a given question $q$ , we parse $q$ with the bi-layered grammar $G_{\\mathrm {layered}}$ . For each rule of the form $X$ - $m_1$ - $m_2 \\rightarrow w$ in the bi-layered tree with $X \\in {\\cal P}$ , $m_1 \\in \\lbrace 1, \\ldots , 24 \\rbrace $ , $m_2 \\in \\lbrace 1, \\ldots , 1000 \\rbrace $ and $q$0 a word in $q$1 , we extract rules of the form $q$2 - $q$3 - $q$4 from $q$5 such that $q$6 . For each such $q$7 , we add a path $q$8 parallel to $q$9 in the word lattice." ], [ "Our sampling algorithm overgenerates paraphrases which are incorrect. To improve its precision, we build a binary classifier to filter the generated paraphrases. We randomly select 100 distinct questions from the Paralex corpus and generate paraphrases using our generation algorithm with various lattice settings. We randomly select 1,000 pairs of input-sampled sentences and manually annotate them as “correct” or “incorrect” paraphrases. We train our classifier on this manually created training data. We follow madnani2012, who used MT metrics for paraphrase identification, and experiment with 8 MT metrics as features for our binary classifier. In addition, we experiment with a binary feature which checks if the sampled paraphrase preserves named entities from the input sentence. We use WEKA BIBREF32 to replicate the classifier of madnani2012 with our new feature. We tune the feature set for our classifier on the development data." ], [ "In this section we describe how the paraphrase algorithm is used for converting natural language to Freebase queries. Following reddylargescale2014, we formalize the semantic parsing problem as a graph matching problem, i.e., finding the Freebase subgraph (grounded graph) that is isomorphic to the input question semantic structure (ungrounded graph).", "This formulation has a major limitation that can be alleviated by using our paraphrase generation algorithm. Consider the question What language do people in Czech Republic speak?. The ungrounded graph corresponding to this question is shown in Figure 3 . 
The Freebase grounded graph which results in correct answer is shown in Figure 3 . Note that these two graphs are non-isomorphic making it impossible to derive the correct grounding from the ungrounded graph. In fact, at least 15% of the examples in our development set fail to satisfy isomorphic assumption. In order to address this problem, we use paraphrases of the input question to generate additional ungrounded graphs, with the aim that one of those paraphrases will have a structure isomorphic to the correct grounding. Figure 3 and Figure 3 are two such paraphrases which can be converted to Figure 3 as described in sec:groundedGraphs.", "For a given input question, first we build ungrounded graphs from its paraphrases. We convert these graphs to Freebase graphs. To learn this mapping, we rely on manually assembled question-answer pairs. For each training question, we first find the set of oracle grounded graphs—Freebase subgraphs which when executed yield the correct answer—derivable from the question's ungrounded graphs. These oracle graphs are then used to train a structured perceptron model. These steps are discussed in detail below." ], [ "We use GraphParser BIBREF7 to convert paraphrases to ungrounded graphs. This conversion involves three steps: 1) parsing the paraphrase using a CCG parser to extract syntactic derivations BIBREF33 , 2) extracting logical forms from the CCG derivations BIBREF34 , and 3) converting the logical forms to an ungrounded graph. The ungrounded graph for the example question and its paraphrases are shown in Figure 3 , Figure 3 and Figure 3 , respectively." ], [ "The ungrounded graphs are grounded to Freebase subgraphs by mapping entity nodes, entity-entity edges and entity type nodes in the ungrounded graph to Freebase entities, relations and types, respectively. For example, the graph in Figure 3 can be converted to a Freebase graph in Figure 3 by replacing the entity node Czech Republic with the Freebase entity CzechRepublic, the edge (speak.arg $_2$ , speak.in) between $x$ and Czech Republic with the Freebase relation (location.country.official_language.2, location.country.official_language.1), the type node language with the Freebase type language.human_language, and the target node remains intact. The rest of the nodes, edges and types are grounded to null. In a similar fashion, Figure 3 can be grounded to Figure 3 , but not Figure 3 to Figure 3 . If no paraphrase is isomorphic to the target grounded grounded graph, our grounding fails." ], [ "We use a linear model to map ungrounded graphs to grounded ones. The parameters of the model are learned from question-answer pairs. For example, the question What language do people in Czech Republic speak? paired with its answer $\\lbrace \\textsc {CzechLanguage}\\rbrace $ . In line with most work on question answering against Freebase, we do not rely on annotated logical forms associated with the question for training and treat the mapping of a question to its grounded graph as latent.", "Let $q$ be a question, let $p$ be a paraphrase, let $u$ be an ungrounded graph for $p$ , and let $g$ be a grounded graph formed by grounding the nodes and edges of $u$ to the knowledge base $\\mathcal {K}$ (throughout we use Freebase as the knowledge base). 
Following reddylargescale2014, we use beam search to find the highest scoring tuple of paraphrase, ungrounded and grounded graphs $(\\hat{p}, \\hat{u}, \\hat{g})$ under the model $\\theta \\in \\mathbb {R}^n$ : $\n({\\hat{p},\\hat{u},\\hat{g}}) = \\operatornamewithlimits{arg\\,max}_{(p,u,g)} \\theta \\cdot \\Phi (p,u,g,q,\\mathcal {K})\\,,\n$ ", "where $\\Phi (p, u, g, q, \\mathcal {K}) \\in \\mathbb {R}^n$ denotes the features for the tuple of paraphrase, ungrounded and grounded graphs. The feature function has access to the paraphrase, ungrounded and grounded graphs, the original question, as well as to the content of the knowledge base and the denotation $|g|_\\mathcal {K}$ (the denotation of a grounded graph is defined as the set of entities or attributes reachable at its target node). See sec:details for the features employed. The model parameters are estimated with the averaged structured perceptron BIBREF35 . Given a training question-answer pair $(q,\\mathcal {A})$ , the update is: $\n\\theta ^{t+1} \\leftarrow \\theta ^{t} + \\Phi (p^+, u^+, g^+, q,\n\\mathcal {K}) - \\Phi (\\hat{p}, \\hat{u}, \\hat{g}, q, \\mathcal {K})\\,,\n$ ", "where $({p^+,u^+,g^+})$ denotes the tuple of gold paraphrase, gold ungrounded and grounded graphs for $q$ . Since we do not have direct access to the gold paraphrase and graphs, we instead rely on the set of oracle tuples, $\\mathcal {O}_{\\mathcal {K}, \\mathcal {A}}(q)$ , as a proxy: $\n(p^{+},u^{+},{g^{+}}) = \\operatornamewithlimits{arg\\,max}_{(p,u,g) \\in \\mathcal {O}_{\\mathcal {K},\\mathcal {A}}(q)} \\theta \\cdot \\Phi ({p,u,g,q,\\mathcal {K}})\\,,\n$ ", "where $\\mathcal {O}_{\\mathcal {K}, \\mathcal {A}}(q)$ is defined as the set of tuples ( $p$ , $u$ , $g$ ) derivable from the question $q$ , whose denotation $|g|_\\mathcal {K}$ has minimal $F_1$ -loss against the gold answer $\\mathcal {A}$ . We find the oracle graphs for each question a priori by performing beam-search with a very large beam." ], [ "Below, we give details on the evaluation dataset and baselines used for comparison. We also describe the model features and provide implementation details." ], [ "We evaluate our approach on the WebQuestions dataset BIBREF5 . WebQuestions consists of 5,810 question-answer pairs where questions represents real Google search queries. We use the standard train/test splits, with 3,778 train and 2,032 test questions. For our development experiments we tune the models on held-out data consisting of 30% training questions, while for final testing we use the complete training data. We use average precision (avg P.), average recall (avg R.) and average F $_1$ (avg F $_1$ ) proposed by berantsemantic2013 as evaluation metrics." ], [ "We use GraphParser without paraphrases as our baseline. This gives an idea about the impact of using paraphrases.", "We compare our paraphrasing models with monolingual machine translation based model for paraphrase generation BIBREF24 , BIBREF36 . In particular, we use Moses BIBREF37 to train a monolingual phrase-based MT system on the Paralex corpus. Finally, we use Moses decoder to generate 10-best distinct paraphrases for the test questions." ], [ "For WebQuestions, we use 8 handcrafted part-of-speech patterns (e.g., the pattern (DT)?(JJ.? $\\mid $ NN.?){0,2}NN.? matches the noun phrase the big lebowski) to identify candidate named entity mention spans. We use the Stanford CoreNLP caseless tagger for part-of-speech tagging BIBREF38 . 
For each candidate mention span, we retrieve the top 10 entities according to the Freebase API. We then create a lattice in which the nodes correspond to mention-entity pairs, scored by their Freebase API scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. We take the top 10 paths through the lattice as possible entity disambiguations. For each possibility, we generate $n$ -best paraphrases that contains the entity mention spans. In the end, this process creates a total of $10n$ paraphrases. We generate ungrounded graphs for these paraphrases and treat the final entity disambiguation and paraphrase selection as part of the semantic parsing problem.", "We use the features from reddylargescale2014. These include edge alignments and stem overlaps between ungrounded and grounded graphs, and contextual features such as word and grounded relation pairs. In addition to these features, we add two new real-valued features – the paraphrase classifier's score and the entity disambiguation lattice score.", "We use beam search to infer the highest scoring graph pair for a question. The search operates over entity-entity edges and entity type nodes of each ungrounded graph. For an entity-entity edge, there are two operations: ground the edge to a Freebase relation, or skip the edge. Similarly, for an entity type node, there are two operations: ground the node to a Freebase type, or skip the node. We use a beam size of 100 in all our experiments." ], [ "In this section, we present results from five different systems for our question-answering experiments: original, mt, naive, ppdb and bilayered. First two are baseline systems. Other three systems use paraphrases generated from an L-PCFG grammar. naive uses a word lattice with a single start-to-end path representing the input question itself, ppdb uses a word lattice constructed using the PPDB rules, and bilayered uses bi-layered L-PCFG to build word lattices. Note that naive does not require any parallel resource to train, ppdb requires an external paraphrase database, and bilayered, like mt, needs a parallel corpus with paraphrase pairs. We tune our classifier features and GraphParser features on the development data. We use the best setting from tuning for evaluation on the test data." ], [ "We described a grammar method to generate paraphrases for questions, and applied it to a question answering system based on semantic parsing. We showed that using paraphrases for a question answering system is a useful way to improve its performance. Our method is rather generic and can be applied to any question answering system." ], [ "The authors would like to thank Nitin Madnani for his help with the implementation of the paraphrase classifier. We would like to thank our anonymous reviewers for their insightful comments. This research was supported by an EPSRC grant (EP/L02411X/1), the H2020 project SUMMA (under grant agreement 688139), and a Google PhD Fellowship for the second author." ] ] }
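To make the learning procedure described above more concrete, the following is a minimal, illustrative sketch of the averaged structured perceptron update with beam-search decoding and F1-based oracle selection. It is not the authors' implementation: the helper functions beam_search, oracle_tuples and features, as well as the sparse-dictionary feature representation, are hypothetical simplifications of the components described in the text.

from collections import defaultdict

def f1(denotation, gold_answer):
    # F1 between a grounded graph's denotation |g|_K and the gold answer set A;
    # this is the loss used to pre-select the oracle tuples O_{K,A}(q).
    pred, gold = set(denotation), set(gold_answer)
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    p, r = tp / len(pred), tp / len(gold)
    return 2 * p * r / (p + r)

def score(theta, phi):
    # theta . Phi(p, u, g, q, K), with phi a sparse feature dictionary
    return sum(theta[f] * v for f, v in phi.items())

def perceptron_epoch(train, theta, beam_search, oracle_tuples, features):
    # One pass of the averaged structured perceptron update described above.
    # theta is a defaultdict(float); the helpers are hypothetical stand-ins:
    #   beam_search(q, theta) -> candidate (paraphrase, ungrounded, grounded) tuples
    #   oracle_tuples(q)      -> pre-computed oracle tuples for question q
    #   features(p, u, g, q)  -> sparse feature dictionary Phi
    running_sum, steps = defaultdict(float), 0
    for q, _answer in train:
        prediction = max(beam_search(q, theta),
                         key=lambda t: score(theta, features(*t, q)))
        proxy_gold = max(oracle_tuples(q),
                         key=lambda t: score(theta, features(*t, q)))
        phi_gold, phi_pred = features(*proxy_gold, q), features(*prediction, q)
        for f in set(phi_gold) | set(phi_pred):   # theta += Phi(+) - Phi(^)
            theta[f] += phi_gold.get(f, 0.0) - phi_pred.get(f, 0.0)
        steps += 1
        for f, w in theta.items():
            running_sum[f] += w
    return {f: w / max(steps, 1) for f, w in running_sum.items()}  # averaged theta

As noted in the text above, in practice the oracle tuples are computed once, a priori, with a very large beam, rather than inside the training loop.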
{ "question": [ "Do they evaluate the quality of the paraphrasing model?", "How many paraphrases are generated per question?", "What latent variables are modeled in the PCFG?", "What are the baselines?" ], "question_id": [ "117aa7811ed60e84d40cd8f9cb3ca78781935a98", "c359ab8ebef6f60c5a38f5244e8c18d85e92761d", "ad362365656b0b218ba324ae60701eb25fe664c1", "423bb905e404e88a168e7e807950e24ca166306c" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "semantic parsing", "semantic parsing", "semantic parsing", "semantic parsing" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "208951f0d5f93c878368122d70fd94c337104a5e" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "10*n paraphrases, where n depends on the number of paraphrases that contain the entity mention spans", "evidence": [ "For WebQuestions, we use 8 handcrafted part-of-speech patterns (e.g., the pattern (DT)?(JJ.? $\\mid $ NN.?){0,2}NN.? matches the noun phrase the big lebowski) to identify candidate named entity mention spans. We use the Stanford CoreNLP caseless tagger for part-of-speech tagging BIBREF38 . For each candidate mention span, we retrieve the top 10 entities according to the Freebase API. We then create a lattice in which the nodes correspond to mention-entity pairs, scored by their Freebase API scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. We take the top 10 paths through the lattice as possible entity disambiguations. For each possibility, we generate $n$ -best paraphrases that contains the entity mention spans. In the end, this process creates a total of $10n$ paraphrases. We generate ungrounded graphs for these paraphrases and treat the final entity disambiguation and paraphrase selection as part of the semantic parsing problem." ], "highlighted_evidence": [ "For each candidate mention span, we retrieve the top 10 entities according to the Freebase API. We then create a lattice in which the nodes correspond to mention-entity pairs, scored by their Freebase API scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. We take the top 10 paths through the lattice as possible entity disambiguations. For each possibility, we generate $n$ -best paraphrases that contains the entity mention spans. In the end, this process creates a total of $10n$ paraphrases. " ] } ], "annotation_id": [ "12f2e670e6d94fab6636a8ef24121fc2f2100eeb" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "syntactic information", "semantic and topical information" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In their traditional use, the latent states in L-PCFGs aim to capture syntactic information. 
We introduce here the use of an L-PCFG with two layers of latent states: one layer is intended to capture the usual syntactic information, and the other aims to capture semantic and topical information by using a large set of states with specific feature functions." ], "highlighted_evidence": [ "We introduce here the use of an L-PCFG with two layers of latent states: one layer is intended to capture the usual syntactic information, and the other aims to capture semantic and topical information by using a large set of states with specific feature functions." ] } ], "annotation_id": [ "727ec6309fb3d7beb4d8cf4455fe5c4778bb660e" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "GraphParser without paraphrases", "monolingual machine translation based model for paraphrase generation" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use GraphParser without paraphrases as our baseline. This gives an idea about the impact of using paraphrases.", "We compare our paraphrasing models with monolingual machine translation based model for paraphrase generation BIBREF24 , BIBREF36 . In particular, we use Moses BIBREF37 to train a monolingual phrase-based MT system on the Paralex corpus. Finally, we use Moses decoder to generate 10-best distinct paraphrases for the test questions." ], "highlighted_evidence": [ "We use GraphParser without paraphrases as our baseline. This gives an idea about the impact of using paraphrases", "We compare our paraphrasing models with monolingual machine translation based model for paraphrase generation BIBREF24 , BIBREF36 . In particular, we use Moses BIBREF37 to train a monolingual phrase-based MT system on the Paralex corpus. Finally, we use Moses decoder to generate 10-best distinct paraphrases for the test questions." ] } ], "annotation_id": [ "32749f613e7b20e5fde56cfe720b1ecddf2646ff" ], "worker_id": [ "ab1027fb3232572ed0261cb9521d6d9f472e86e2" ] } ] }
{ "caption": [ "Figure 1: An example word lattice for the question What language do people in Czech Republic speak? using the lexical and phrasal rules from the PPDB.", "Figure 2: Trees used for bi-layered L-PCFG training. The questions what day is nochebuena, when is nochebuena and when is nochebuena celebrated are paraphrases from the Paralex corpus. Each nonterminal is decorated with a syntactic label and two identifiers, e.g., for WP-7-254, WP is the syntactic label assigned by the BLLIP parser, 7 is the syntactic latent state, and 254 is the semantic latent state.", "Figure 3: Ungrounded graphs for an input question and its paraphrases along with its correct grounded graph. The green squares indicate NL or Freebase entities, the yellow rectangles indicate unary NL predicates or Freebase types, the circles indicate NL or Freebase events, the edge labels indicate binary NL predicates or Freebase relations, and the red diamonds attach to the entity of interest (the answer to the question).", "Table 1: Oracle statistics and results on the WebQuestions development set.", "Table 2: Results on WebQuestions test dataset." ], "file": [ "4-Figure1-1.png", "5-Figure2-1.png", "7-Figure3-1.png", "9-Table1-1.png", "9-Table2-1.png" ] }
1709.07916
Characterizing Diabetes, Diet, Exercise, and Obesity Comments on Twitter
Social media provide a platform for users to express their opinions and share information. Understanding public health opinions on social media, such as Twitter, offers a unique approach to characterizing common health issues such as diabetes, diet, exercise, and obesity (DDEO); however, collecting and analyzing a large-scale conversational public health data set is a challenging research task. The goal of this research is to analyze the characteristics of the general public's opinions in regard to diabetes, diet, exercise and obesity (DDEO) as expressed on Twitter. A multi-component semantic and linguistic framework was developed to collect Twitter data, discover topics of interest about DDEO, and analyze the topics. Of the 4.5 million tweets extracted, 8% discussed diabetes, 23.7% diet, 16.6% exercise, and 51.7% obesity. The strongest correlation among the topics was found between exercise and obesity. Other notable correlations were: diabetes and obesity, and diet and obesity. DDEO terms were also identified as subtopics of each of the DDEO topics. The frequent subtopics discussed along with Diabetes, excluding the DDEO terms themselves, were blood pressure, heart attack, yoga, and Alzheimer. The non-DDEO subtopics for Diet included vegetarian, pregnancy, celebrities, weight loss, religious, and mental health, while subtopics for Exercise included computer games, brain, fitness, and daily plan. Non-DDEO subtopics for Obesity included Alzheimer, cancer, and children. With 2.67 billion social media users in 2016, publicly available data such as Twitter posts can be utilized to support clinical providers, public health experts, and social scientists in better understanding common public opinions in regard to diabetes, diet, exercise, and obesity.
{ "section_name": [ "Introduction", "Methods", "Data Collection", "Topic Discovery", "Topic Content Analysis", "Results", "Discussion", "Conclusion", "Conflict of interest", "Acknowledgement" ], "paragraphs": [ [ "The global prevalence of obesity has doubled between 1980 and 2014, with more than 1.9 billion adults considered as overweight and over 600 million adults considered as obese in 2014 BIBREF0 . Since the 1970s, obesity has risen 37 percent affecting 25 percent of the U.S. adults BIBREF1 . Similar upward trends of obesity have been found in youth populations, with a 60% increase in preschool aged children between 1990 and 2010 BIBREF2 . Overweight and obesity are the fifth leading risk for global deaths according to the European Association for the Study of Obesity BIBREF0 . Excess energy intake and inadequate energy expenditure both contribute to weight gain and diabetes BIBREF3 , BIBREF4 .", "Obesity can be reduced through modifiable lifestyle behaviors such as diet and exercise BIBREF4 . There are several comorbidities associated with being overweight or obese, such as diabetes BIBREF5 . The prevalence of diabetes in adults has risen globally from 4.7% in 1980 to 8.5% in 2014. Current projections estimate that by 2050, 29 million Americans will be diagnosed with type 2 diabetes, which is a 165% increase from the 11 million diagnosed in 2002 BIBREF6 . Studies show that there are strong relations among diabetes, diet, exercise, and obesity (DDEO) BIBREF7 , BIBREF4 , BIBREF8 , BIBREF9 ; however, the general public's perception of DDEO remains limited to survey-based studies BIBREF10 .", "The growth of social media has provided a research opportunity to track public behaviors, information, and opinions about common health issues. It is estimated that the number of social media users will increase from 2.34 billion in 2016 to 2.95 billion in 2020 BIBREF11 . Twitter has 316 million users worldwide BIBREF12 providing a unique opportunity to understand users' opinions with respect to the most common health issues BIBREF13 . Publicly available Twitter posts have facilitated data collection and leveraged the research at the intersection of public health and data science; thus, informing the research community of major opinions and topics of interest among the general population BIBREF14 , BIBREF15 , BIBREF16 that cannot otherwise be collected through traditional means of research (e.g., surveys, interviews, focus groups) BIBREF17 , BIBREF18 . Furthermore, analyzing Twitter data can help health organizations such as state health departments and large healthcare systems to provide health advice and track health opinions of their populations and provide effective health advice when needed BIBREF13 .", "Among computational methods to analyze tweets, computational linguistics is a well-known developed approach to gain insight into a population, track health issues, and discover new knowledge BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . Twitter data has been used for a wide range of health and non-health related applications, such as stock market BIBREF23 and election analysis BIBREF24 . 
Some examples of Twitter data analysis for health-related topics include: flu BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 , mental health BIBREF31 , Ebola BIBREF32 , BIBREF33 , Zika BIBREF34 , medication use BIBREF35 , BIBREF36 , BIBREF37 , diabetes BIBREF38 , and weight loss and obesity BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF21 .", "The previous Twitter studies have dealt with extracting common topics of one health issue discussed by the users to better understand common themes; however, this study utilizes an innovative approach to computationally analyze unstructured health related text data exchanged via Twitter to characterize health opinions regarding four common health issues, including diabetes, diet, exercise and obesity (DDEO) on a population level. This study identifies the characteristics of the most common health opinions with respect to DDEO and discloses public perception of the relationship among diabetes, diet, exercise and obesity. These common public opinions/topics and perceptions can be used by providers and public health agencies to better understand the common opinions of their population denominators in regard to DDEO, and reflect upon those opinions accordingly." ], [ "Our approach uses semantic and linguistics analyses for disclosing health characteristics of opinions in tweets containing DDEO words. The present study included three phases: data collection, topic discovery, and topic-content analysis." ], [ "This phase collected tweets using Twitter's Application Programming Interfaces (API) BIBREF43 . Within the Twitter API, diabetes, diet, exercise, and obesity were selected as the related words BIBREF4 and the related health areas BIBREF19 . Twitter's APIs provides both historic and real-time data collections. The latter method randomly collects 1% of publicly available tweets. This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame. We used the queries to collect approximately 4.5 million related tweets between 06/01/2016 and 06/30/2016. The data will be available in the first author's website. Figure FIGREF3 shows a sample of collected tweets in this research." ], [ "To discover topics from the collected tweets, we used a topic modeling approach that fuzzy clusters the semantically related words such as assigning “diabetes\", “cancer\", and “influenza\" into a topic that has an overall “disease\" theme BIBREF44 , BIBREF45 . Topic modeling has a wide range of applications in health and medical domains such as predicting protein-protein relationships based on the literature knowledge BIBREF46 , discovering relevant clinical concepts and structures in patients' health records BIBREF47 , and identifying patterns of clinical events in a cohort of brain cancer patients BIBREF48 .", "Among topic models, Latent Dirichlet Allocation (LDA) BIBREF49 is the most popular effective model BIBREF50 , BIBREF19 as studies have shown that LDA is an effective computational linguistics model for discovering topics in a corpus BIBREF51 , BIBREF52 . LDA assumes that a corpus contains topics such that each word in each document can be assigned to the topics with different degrees of membership BIBREF53 , BIBREF54 , BIBREF55 .", "Twitter users can post their opinions or share information about a subject to the public. 
Identifying the main topics of users' tweets provides an interesting point of reference, but conceptualizing larger subtopics of millions of tweets can reveal valuable insight to users' opinions. The topic discovery component of the study approach uses LDA to find main topics, themes, and opinions in the collected tweets.", "We used the Mallet implementation of LDA BIBREF49 , BIBREF56 with its default settings to explore opinions in the tweets. Before identifying the opinions, two pre-processing steps were implemented: (1) using a standard list for removing stop words, that do not have semantic value for analysis (such as “the\"); and, (2) finding the optimum number of topics. To determine a proper number of topics, log-likelihood estimation with 80% of tweets for training and 20% of tweets for testing was used to find the highest log-likelihood, as it is the optimum number of topics BIBREF57 . The highest log-likelihood was determined 425 topics." ], [ "The topic content analysis component used an objective interpretation approach with a lexicon-based approach to analyze the content of topics. The lexicon-based approach uses dictionaries to disclose the semantic orientation of words in a topic. Linguistic Inquiry and Word Count (LIWC) is a linguistics analysis tool that reveals thoughts, feelings, personality, and motivations in a corpus BIBREF58 , BIBREF59 , BIBREF60 . LIWC has accepted rate of sensitivity, specificity, and English proficiency measures BIBREF61 . LIWC has a health related dictionary that can help to find whether a topic contains words associated with health. In this analysis, we used LIWC to find health related topics." ], [ "Obesity and Diabetes showed the highest and the lowest number of tweets (51.7% and 8.0%). Diet and Exercise formed 23.7% and 16.6% of the tweets (Table TABREF6 ).", "Out of all 4.5 million DDEO-related tweets returned by Tweeter's API, the LDA found 425 topics. We used LIWC to filter the detected 425 topics and found 222 health-related topics. Additionally, we labeled topics based on the availability of DDEO words. For example, if a topic had “diet\", we labeled it as a diet-related topic. As expected and driven by the initial Tweeter API's query, common topics were Diabetes, Diet, Exercise, and Obesity (DDEO). (Table TABREF7 ) shows that the highest and the lowest number of topics were related to exercise and diabetes (80 and 21 out of 222). Diet and Obesity had almost similar rates (58 and 63 out of 222).", "Each of the DDEO topics included several common subtopics including both DDEO and non-DDEO terms discovered by the LDA algorithm (Table TABREF7 ). Common subtopics for “Diabetes\", in order of frequency, included type 2 diabetes, obesity, diet, exercise, blood pressure, heart attack, yoga, and Alzheimer. Common subtopics for “Diet\" included obesity, exercise, weight loss [medicine], celebrities, vegetarian, diabetes, religious diet, pregnancy, and mental health. Frequent subtopics for “Exercise\" included fitness, obesity, daily plan, diet, brain, diabetes, and computer games. And finally, the most common subtopics for “Obesity\" included diet, exercise, children, diabetes, Alzheimer, and cancer (Table TABREF7 ). Table TABREF8 provides illustrative examples for each of the topics and subtopics.", "Further exploration of the subtopics revealed additional patterns of interest (Tables TABREF7 and TABREF8 ). We found 21 diabetes-related topics with 8 subtopics. 
While type 2 diabetes was the most frequent of the sub-topics, heart attack, Yoga, and Alzheimer are the least frequent subtopics for diabetes. Diet had a wide variety of emerging themes ranging from celebrity diet (e.g., Beyonce) to religious diet (e.g., Ramadan). Diet was detected in 63 topics with 10 subtopics; obesity, and pregnancy and mental health were the most and the least discussed obesity-related topics, respectively. Exploring the themes for Exercise subtopics revealed subjects such as computer games (e.g., Pokemon-Go) and brain exercises (e.g., memory improvement). Exercise had 7 subtopics with fitness as the most discussed subtopic and computer games as the least discussed subtopic. Finally, Obesity themes showed topics such as Alzheimer (e.g., research studies) and cancer (e.g., breast cancer). Obesity had the lowest diversity of subtopics: six with diet as the most discussed subtopic, and Alzheimer and cancer as the least discussed subtopics.", "Diabetes subtopics show the relation between diabetes and exercise, diet, and obesity. Subtopics of diabetes revealed that users post about the relationship between diabetes and other diseases such as heart attack (Tables TABREF7 and TABREF8 ). The subtopic Alzheimer is also shown in the obesity subtopics. This overlap between categories prompts the discussion of research and linkages among obesity, diabetes, and Alzheimer's disease. Type 2 diabetes was another subtopic expressed by users and scientifically documented in the literature.", "The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics. The words with italic and underline styles in Table 2 demonstrate the relation among the four DDEO areas. Our results show users' interest about posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9 ). The strongest correlation among the topics was determined to be between exercise and obesity ( INLINEFORM0 ). Other notable correlations were: diabetes and obesity ( INLINEFORM1 ), and diet and obesity ( INLINEFORM2 )." ], [ "Diabetes, diet, exercise, and obesity are common public health related opinions. Analyzing individual- level opinions by automated algorithmic techniques can be a useful approach to better characterize health opinions of a population. Traditional public health polls and surveys are limited by a small sample size; however, Twitter provides a platform to capture an array of opinions and shared information a expressed in the words of the tweeter. Studies show that Twitter data can be used to discover trending topics, and that there is a strong correlation between Twitter health conversations and Centers for Disease Control and Prevention (CDC) statistics BIBREF62 .", "This research provides a computational content analysis approach to conduct a deep analysis using a large data set of tweets. Our framework decodes public health opinions in DDEO related tweets, which can be applied to other public health issues. Among health-related subtopics, there are a wide range of topics from diseases to personal experiences such as participating in religious activities or vegetarian diets.", "Diabetes subtopics showed the relationship between diabetes and exercise, diet, and obesity (Tables TABREF7 and TABREF8 ). Subtopics of diabetes revealed that users posted about the relation between diabetes and other diseases such as heart attack. 
The subtopic Alzheimer is also shown in the obesity subtopics. This overlap between categories prompts the discussion of research and linkages among obesity, diabetes, and Alzheimer's disease. Type 2 diabetes was another subtopic that was also expressed by users and scientifically documented in the literature. The inclusion of Yoga in posts about diabetes is interesting. While yoga would certainly be labeled as a form of fitness, when considering the post, it was insightful to see discussion on the mental health benefits that yoga offers to those living with diabetes BIBREF63 .", "Diet had the highest number of subtopics. For example, religious diet activities such as fasting during the month of Ramadan for Muslims incorporated two subtopics categorized under the diet topic (Tables TABREF7 and TABREF8 ). This information has implications for the type of diets that are being practiced in the religious community, but may help inform religious scholars who focus on health and psychological conditions during fasting. Other religions such as Judaism, Christianity, and Taoism have periods of fasting that were not captured in our data collection, which may have been due to lack of posts or the timeframe in which we collected data. The diet plans of celebrities were also considered influential to explaining and informing diet opinions of Twitter users BIBREF64 .", "Exercise themes show the Twitter users' association of exercise with “brain\" benefits such as increased memory and cognitive performance (Tables TABREF7 and TABREF8 ) BIBREF65 . The topics also confirm that exercising is associated with controlling diabetes and assisting with meal planning BIBREF66 , BIBREF9 , and obesity BIBREF67 . Additionally, we found the Twitter users mentioned exercise topics about the use of computer games that assist with exercising. The recent mobile gaming phenomenon Pokeman-Go game BIBREF68 was highly associated with the exercise topic. Pokemon-Go allows users to operate in a virtual environment while simultaneously functioning in the real word. Capturing Pokemons, battling characters, and finding physical locations for meeting other users required physically activity to reach predefined locations. These themes reflect on the potential of augmented reality in increasing patients' physical activity levels BIBREF69 .", "Obesity had the lowest number of subtopics in our study. Three of the subtopics were related to other diseases such as diabetes (Tables TABREF7 and TABREF8 ). The scholarly literature has well documented the possible linkages between obesity and chronic diseases such as diabetes BIBREF1 as supported by the study results. The topic of children is another prominent subtopic associated with obesity. There has been an increasing number of opinions in regard to child obesity and national health campaigns that have been developed to encourage physical activity among children BIBREF70 . Alzheimer was also identified as a topic under obesity. Although considered a perplexing finding, recent studies have been conducted to identify possible correlation between obesity and Alzheimer's disease BIBREF71 , BIBREF72 , BIBREF73 . Indeed, Twitter users have expressed opinions about the study of Alzheimer's disease and the linkage between these two topics.", "This paper addresses a need for clinical providers, public health experts, and social scientists to utilize a large conversational dataset to collect and utilize population level opinions and information needs. 
Although our framework is applied to Twitter, the applications from this study can be used in patient communication devices monitored by physicians or weight management interventions with social media accounts, and support large scale population-wide initiatives to promote healthy behaviors and preventative measures for diabetes, diet, exercise, and obesity.", "This research has some limitations. First, our DDEO analysis does not take geographical location of the Twitter users into consideration and thus does not reveal if certain geographical differences exists. Second, we used a limited number of queries to select the initial pool of tweets, thus perhaps missing tweets that may have been relevant to DDEO but have used unusual terms referenced. Third, our analysis only included tweets generated in one month; however, as our previous work has demonstrated BIBREF42 , public opinion can change during a year. Additionally, we did not track individuals across time to detect changes in common themes discussed. Our future research plans includes introducing a dynamic framework to collect and analyze DDEO related tweets during extended time periods (multiple months) and incorporating spatial analysis of DDEO-related tweets." ], [ "This study represents the first step in developing routine processes to collect, analyze, and interpret DDEO-related posts to social media around health-related topics and presents a transdisciplinary approach to analyzing public discussions around health topics. With 2.34 billion social media users in 2016, the ability to collect and synthesize social media data will continue to grow. Developing methods to make this process more streamlined and robust will allow for more rapid identification of public health trends in real time.", "Note: Amir Karami will handle correspondence at all stages of refereeing and publication." ], [ "The authors state that they have no conflict of interest." ], [ "This research was partially supported by the first author's startup research funding provided by the University of South Carolina, School of Library and Information Science. We thank Jill Chappell-Fail and Jeff Salter at the University of South Carolina College of Information and Communications for assistance with technical support.", "References" ] ] }
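As a concrete illustration of the topic-discovery and filtering steps described in the Methods section above, here is a short Python sketch. The paper itself used the Mallet implementation of LDA and the LIWC health dictionary; this sketch substitutes gensim's LdaModel and a small hand-made keyword set (HEALTH_WORDS), so it should be read as an assumption-laden approximation of the pipeline rather than a reproduction of it.

# Hypothetical inputs: `tweets` is a list of already cleaned, tokenized tweets,
# e.g. [["diabetes", "blood", "pressure", ...], ...]; HEALTH_WORDS stands in for
# the LIWC health dictionary used in the paper.
from gensim import corpora, models

HEALTH_WORDS = {"diabetes", "diet", "exercise", "obesity", "blood", "heart",
                "cancer", "weight", "fitness", "yoga", "alzheimer"}

def fit_lda(tweets, candidate_topic_numbers=(100, 200, 300, 425, 500)):
    dictionary = corpora.Dictionary(tweets)
    corpus = [dictionary.doc2bow(t) for t in tweets]
    split = int(0.8 * len(corpus))                    # 80% train / 20% held out
    train, held_out = corpus[:split], corpus[split:]
    best_model, best_loglik = None, float("-inf")
    for k in candidate_topic_numbers:
        lda = models.LdaModel(corpus=train, id2word=dictionary,
                              num_topics=k, passes=5, random_state=0)
        loglik = lda.log_perplexity(held_out)         # held-out per-word likelihood bound
        if loglik > best_loglik:
            best_model, best_loglik = lda, loglik
    return best_model, dictionary

def health_related_topics(lda, topn=20, min_hits=2):
    # Crude stand-in for the LIWC filtering step: keep topics whose top words
    # contain at least `min_hits` health-related terms.
    kept = []
    for topic_id in range(lda.num_topics):
        top_words = [w for w, _ in lda.show_topic(topic_id, topn=topn)]
        if sum(w in HEALTH_WORDS for w in top_words) >= min_hits:
            kept.append((topic_id, top_words))
    return kept

In the paper, the analogous held-out log-likelihood criterion selected 425 topics, of which 222 passed the health-related filter.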
{ "question": [ "Do they evaluate only on English data?", "How strong was the correlation between exercise and diabetes?", "How were topics of interest about DDEO identified?" ], "question_id": [ "e5ae8ac51946db7475bb20b96e0a22083b366a6d", "18288c7b0f8bd7839ae92f9c293e7fb85c7e146a", "b5e883b15e63029eb07d6ff42df703a64613a18a" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "twitter", "twitter", "twitter" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "This phase collected tweets using Twitter's Application Programming Interfaces (API) BIBREF43 . Within the Twitter API, diabetes, diet, exercise, and obesity were selected as the related words BIBREF4 and the related health areas BIBREF19 . Twitter's APIs provides both historic and real-time data collections. The latter method randomly collects 1% of publicly available tweets. This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame. We used the queries to collect approximately 4.5 million related tweets between 06/01/2016 and 06/30/2016. The data will be available in the first author's website. Figure FIGREF3 shows a sample of collected tweets in this research." ], "highlighted_evidence": [ "This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame. " ] } ], "annotation_id": [ "13493df9ec75ae877c9904e23729ff119814671f" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "weak correlation with p-value of 0.08", "evidence": [ "The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics. The words with italic and underline styles in Table 2 demonstrate the relation among the four DDEO areas. Our results show users' interest about posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9 ). The strongest correlation among the topics was determined to be between exercise and obesity ( INLINEFORM0 ). 
Other notable correlations were: diabetes and obesity ( INLINEFORM1 ), and diet and obesity ( INLINEFORM2 ).", "FLOAT SELECTED: Figure 2: DDEO Correlation P-Value" ], "highlighted_evidence": [ "The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics.", "Our results show users' interest about posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9 ).", "FLOAT SELECTED: Figure 2: DDEO Correlation P-Value" ] } ], "annotation_id": [ "ea7f28bf7cf3afc36dfd4eade6a0235621cd2869" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "using topic modeling model Latent Dirichlet Allocation (LDA)", "evidence": [ "To discover topics from the collected tweets, we used a topic modeling approach that fuzzy clusters the semantically related words such as assigning “diabetes\", “cancer\", and “influenza\" into a topic that has an overall “disease\" theme BIBREF44 , BIBREF45 . Topic modeling has a wide range of applications in health and medical domains such as predicting protein-protein relationships based on the literature knowledge BIBREF46 , discovering relevant clinical concepts and structures in patients' health records BIBREF47 , and identifying patterns of clinical events in a cohort of brain cancer patients BIBREF48 .", "Among topic models, Latent Dirichlet Allocation (LDA) BIBREF49 is the most popular effective model BIBREF50 , BIBREF19 as studies have shown that LDA is an effective computational linguistics model for discovering topics in a corpus BIBREF51 , BIBREF52 . LDA assumes that a corpus contains topics such that each word in each document can be assigned to the topics with different degrees of membership BIBREF53 , BIBREF54 , BIBREF55 .", "We used the Mallet implementation of LDA BIBREF49 , BIBREF56 with its default settings to explore opinions in the tweets. Before identifying the opinions, two pre-processing steps were implemented: (1) using a standard list for removing stop words, that do not have semantic value for analysis (such as “the\"); and, (2) finding the optimum number of topics. To determine a proper number of topics, log-likelihood estimation with 80% of tweets for training and 20% of tweets for testing was used to find the highest log-likelihood, as it is the optimum number of topics BIBREF57 . The highest log-likelihood was determined 425 topics." ], "highlighted_evidence": [ "To discover topics from the collected tweets, we used a topic modeling approach that fuzzy clusters the semantically related words such as assigning “diabetes\", “cancer\", and “influenza\" into a topic that has an overall “disease\" theme BIBREF44 , BIBREF45 .", "Among topic models, Latent Dirichlet Allocation (LDA) BIBREF49 is the most popular effective model BIBREF50 , BIBREF19 as studies have shown that LDA is an effective computational linguistics model for discovering topics in a corpus BIBREF51 , BIBREF52 .", "We used the Mallet implementation of LDA BIBREF49 , BIBREF56 with its default settings to explore opinions in the tweets." ] } ], "annotation_id": [ "33c66527e46da56cb4033d4a47173f9aa136265d" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
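The answer above refers to the pairwise correlations between the DDEO topics that the paper reports as p-values (Figure 2). The exact computation is not spelled out in the text, so the following is a small, hypothetical sketch of how such pairwise topic correlations could be obtained from tweet-level topic indicators; the DataFrame `flags` and its binary columns are assumptions made purely for illustration, not the authors' procedure.

from itertools import combinations
from scipy.stats import pearsonr
import pandas as pd

def ddeo_correlations(flags: pd.DataFrame):
    # `flags` is assumed to have one row per tweet and one binary column per
    # DDEO topic (1 if the topic was assigned to the tweet, 0 otherwise).
    results = {}
    for a, b in combinations(["diabetes", "diet", "exercise", "obesity"], 2):
        r, p = pearsonr(flags[a], flags[b])
        results[(a, b)] = {"r": r, "p_value": p}
    return results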
{ "caption": [ "Figure 1: A Sample of Tweets", "Table 1: DDEO Queries", "Table 2: DDEO Topics and Subtopics - Diabetes, Diet, Exercise, and Obesity are shown with italic and underline styles in subtopics", "Figure 2: DDEO Correlation P-Value", "Table 3: Topics Examples" ], "file": [ "3-Figure1-1.png", "5-Table1-1.png", "6-Table2-1.png", "6-Figure2-1.png", "7-Table3-1.png" ] }
1909.00154
Rethinking travel behavior modeling representations through embeddings
This paper introduces the concept of travel behavior embeddings, a method for re-representing discrete variables that are typically used in travel demand modeling, such as mode, trip purpose, education level, family type or occupation. This re-representation process essentially maps those variables into a latent space called the embedding space. The benefit of this is that such spaces allow for richer nuances than the typical transformations used for categorical variables (e.g. dummy encoding, contrasted encoding, principal components analysis). While the usage of latent variable representations is not new per se in travel demand modeling, the idea presented here brings several innovations: it is an entirely data-driven algorithm; it is informative and consistent, since the latent space can be visualized and interpreted based on distances between different categories; it preserves interpretability of coefficients, despite being based on Neural Network principles; and it is transferable, in that embeddings learned from one dataset can be reused for other ones, as long as travel behavior remains consistent between the datasets. ::: The idea is strongly inspired by natural language processing techniques, namely the word2vec algorithm. Such algorithms are behind recent developments such as automatic translation and next-word prediction. Our method is demonstrated using a mode choice model, and shows improvements of up to 60% with respect to the initial likelihood, and up to 20% with respect to the likelihood of the corresponding traditional model (i.e. using dummy variables) in out-of-sample evaluation. We provide a new Python package, called PyTre (PYthon TRavel Embeddings), that others can straightforwardly use to replicate our results or improve their own models. Our experiments are themselves based on an open dataset (swissmetro).
{ "section_name": [ "Introduction", "Representing categorical variables", "The concept of text embeddings", "Travel behaviour embeddings", "Travel behaviour embeddings ::: The general idea", "Travel behaviour embeddings ::: Methodology", "An experiment with mode choice", "An experiment with mode choice ::: The Swissmetro dataset", "An experiment with mode choice ::: Principles for the model specification" ], "paragraphs": [ [ "Since their early days, representation in random utility behavior models has followed generally quite clear principles. For example, numeric quantities like travel time and cost may be directly used or transformed depending on observed non-linear efects (e.g. using log). Numeric variables that are not “quantities\" per se, such as age or even geographic coordinates tend to be discretized and then transformed into vectors of dummy variables. Similarly, categorical variables such as education level or trip purpose are already discrete, and thus are also usually “dummyfied\". Then, we may interact any subset of the above by combining (typically, multiplying) them, as long as we get in the end a vector of numeric values that can be incorporated in a statistical model, a linear one in the case of the most common logit model.", "There are however phenomena that are hard to represent, and modelers end up struggling to find the right representation. For example, influence of social interactions between different persons, hierarchical decision making, autocorrelated nature of time and space, or abstract concepts such as accessibility, attitudes, personality traits and so on. The point here, is that the nature of our models seems to enforce a compromise between the true semantics of a variable (i.e. the “meaning\" of a certain information for the decision making process) and its realisation in practice. And that further research should be done to find new representation paradigms.", "Historically speaking, the natural language processing (NLP) field has had similar dilemmas for decades, and for a while two general trends were competing: the statistical modeling approaches, and the linguistic theory based approaches. The former relied on simple representations, such as vector frequencies, or dummy variables, to become practical, while the latter used domain knowledge such as grammars or logic. Until recently, neither had considerable success in making machines able to understand or generate human language, but developments in deep neural networks together with overwhelmingly massive amounts of data (i.e. the World Wide Web) brought them to a new area, where the two are approaching each other and achieving hitherto results considered extremely hard, such as question answering, translation, next word prediction. One of the key concepts in this revolution is that of embeddings, which will be further explained in this paper.", "Our focus here is on the representation of categorical variables. The default paradigm is dummy variables (also known as “one-hot-encoding\" in machine learning literature), which have well-known limitations, namely the explosion of dimensionality and enforced ortogonality. The former happens because we assign one new “dummy\" variable to each of D-1 categories, and easily go from a small original variable specification to one with hundreds of variables, bringing problems in model estimation and analysis. This often affects the data collection process itself. 
Since one doesn't want to end up with too many categories, we might as well give less options in a survey, or decrease the resolution of a sensor. The problem of enforced ortogonality relates to the fact that, in a dummy encoding, all categories become equidistant. The similarity between “student\" and “employed\" is the same as between “student\" and “retired\", which in many cases (e.g. mode choice, departure time choice) goes against intuition. Other encoding methods exist, such as contrasted encoding or principal components analysis (PCA). The former ends up being a subtle variation on the dummy approach, but the latter already provides an interesting answer to the problem: categories are no longer forcibly equidistant, and the number of variables can be much reduced. However, it is a non-supervised approach. The distance between “student\" and “employed\" will always be the same, regardless of the problem we are solving, but this may be intuitively illogical if we consider car ownership versus departure time choice models for example.", "The key idea in this paper is to introduce a method, called Travel Behavior embeddings, that borrows much from the NLP concept. This method serves to encode categorical variables, and is dependent on the problem at hand. We will focus on mode choice, and test on a well-known dataset, by comparing with both dummy and PCA encoding. All the dataset and code are made openly available, and the reader can follow and generate results him/herself using an iPython notebook included. Our ultimate goal is certainly that the reader reuses our PyTre package for own purposes.", "This paper presents some results and conclusions, after a relatively long exploration and analysis process, including other datasets and code variations not mentioned here for interest of clarity and replicability. While we show these concepts to be promising and innovative in this paper, one should be wary of over-hyping yet another Machine Learning/Artificial Intelligence concept: after all, Machine Learning is still essentially based on statistics. In NLP, the number of different words in consideration at a given moment can be in order of tens of thousands, while our categorical variables rarely go beyond a few dozens. This means that, for example, it becomes clear later that the least number of original categories, the less the benefit of embeddings (in the limit, a binary variable like gender, is useless to do embeddings with), and also that if we do get a significantly large and statistically representative dataset, a dummy variables representation is sufficient. We will quickly see, however, that complexity can grow quick enough to justify an embeddings based method even if without the shockingly better performance observed in NLP applications." ], [ "We are generally concerned with random utility maximization (RUM) models, for they have a dominant role in travel behavior modeling. The nature of such models is predominantly numeric, linear, and quite often strictly flat (notwithstanding hierarchical variations, such as nested models BIBREF1, hierarchical Bayes BIBREF2, or non-linear transformations). As a consequence, while numerical variables (e.g. travel time, cost, or income) can be directly used as available, perhaps subject to transformations or segmentation, nominal ones bring about a greater challenge. 
We tend to enforce a limited set of treatments such as:", "Dummy variables, or one-hot encoding - for each categorical variable $v$ with D categories, we get D-1 binary variables (the “dummies\"). At each input vector $x_n$, with categorical value $v=d$, the value “1\" will be assigned to the corresponding dummy, while “0\" to all others. If $v$ corresponds to the “default\" category, all dummies are “0\".", "Contrast encoding BIBREF3 - same as dummy encoding, but instead of “1\" for each category, we have a value that results from a contrasting formula. There are many different formulas (e.g. Helmert, Sum, Backward Difference), but all consist of subtracting the mean of the target variable, for a given category, with a general stastic (e.g. the mean of the dependent variable for all categories; the mean of the dependent variable in the previous category in an ordered list).", "Principal Components Analysis (PCA) - run the PCA algorithm on the data matrix obtained by dummy representation of the categorical variable, then re-represent it with the corresponding projected eigenvector coefficients. One selects K eigenvectors (e.g. according to a variance explained rule), and thus each category is mapped to a vector of K real values.", "Segmenting models, mixture models - A general alternative to categorical data representation is in fact to avoid it in the first place. One obvious method would be through creating hierarchical disaggregate methods (e.g. one per category). This is not in itself a representation paradigm, but an alternative way to see this problem. It certainly raises scalability and inference concerns.", "In datasets where behavior heterogeneity is high, and number of observations is significantly smaller than population size, increasing dimensionality by adding a variable per each category is very risky because the amount of data that is in practice usable to estimate each new coefficient becomes insufficient. A simple intuition here is by considering that, for a dummy variable that is only “1\" for a few observations in the dataset, its coefficient will be “activated\" only that small number of times. If there is a lot of variance in the associated behavior, the variance of the coefficient will also be large, and the coefficient will be considered statistically insignificant.", "The benefit of representations that map into a latent space, like embeddings and PCA, is that such a space is inevitably shared, and thus every observation contributes indirectly to all category variables. This comes with no interpretability cost, because one can always map to the “dummy\" space and analyse the individual coefficients, as will be shown in our experiments." ], [ "The idea of text embeddings comes from a simple re-representation necessity. A natural-language processing system is itself also a numeric machine, therefore it requires each individual word in a dictionary to match its own numeric representation. Just as in our travel models, a possible solution has been to use dummy variables, and it is quite obvious that the dimensionality of such a one-hot encoding vector, quickly becomes overwhelming. Think for example next word prediction algorithm, like the one we have in our smartphones. It is essentially a skip-gram BIBREF4 model that predicts the next word, given the n words before. 
The English dictionary has about 300000 words, and if we have about 5 words before for context, the number of independent variables of the model would become 1.5 million!", "The goal of text embeddings algorithms (e.g. Word2Vec BIBREF5) is to a) reduce the representation of each word to a computationally acceptable dimension, while simultaneously b) learning the semantic distance between different words. In other words, the euclidean distance of semantically related words (e.g. “dog\" and “cat\") in this new space should be smaller than unrelated words (e.g. “dog\" and “optimize\"). As mentioned before, in a dummy (or one-hot) encoding, all distances between words are equal by definition.", "The word embeddings methodology is very well explained in several webpages such as BIBREF6, so the reader is strongly encouraged to visit them first. However, for the sake of completeness, we summarize here the general idea.", "Imagine the following task: given a word $w_i$ in a text, predict the next word $w_o$. If we solve it with a neural network model, we could have the architecture in Figure FIGREF8, where the input consists simply of the one-hot-encoding representation of the word (i.e. one dummy variable for each word in a dictionary of dimensionality $D$), and the output corresponds to the probability of each word in the dictionary being the next one (also a vector with dimensionality $D$).", "The output layer thus consists simply of a softmax function. In other words, exactly the classical multinomial logit formulation that we would have in an RUM, in which each different word corresponds to an “alternative\".", "The concept of embeddings is directly associated to the hidden layer, which is a set of linear activation neurons, typically with a dimensionality $K<<D$. Each such neuron is simply an identity function: it sums all inputs; then propagates this sum to the output layer. Since only one input neuron is activated at a time (remember that the input is a one-hot-encoding vector, with one “1\" and the rest with “0\"), each hidden layer neuron just propagates the (single) weight that links to that input neuron. If we have enough data for training this model, we will eventually land on a situation where, for each input word, there is a fixed vector of weights that are directly used in the output (softmax) function, to generate the prediction. With more data, this weight vector will not change (down to some small delta threshold). These stable vectors are what we call embeddings, and the dimensionality of these vectors is called embedding size.", "Formally, we have a dataset $\\mathcal {D}=\\lbrace x_n, y_n\\rbrace , n=1\\ldots N$, where each $x_n$ and $y_n$ are one-hot (dummy) encodings of categorical variables. The dimensionality of $x_n$ is $D\\times 1$, with $D$ being the number of different categories in $x_n$, while the dimensionality of $y_n$ is $C\\times 1$, with $C$ being the number of categories (alternatives) in $y_n$. The full expression for the embeddings model as described is:", "where $W$ is the embeddings matrix of size $K\\times D$, where $K$ is called the embeddings size. $B$ is a matrix of coefficients ($C\\times K$) for the softmax layer, so $B_c$ is simply the coefficients (row) vector for output class (alternative) $c$, and $\\alpha _c$ is the corresponding intercept. 
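To make the formal definition above concrete, the following short numpy sketch implements the forward pass softmax(B W x_n + alpha) for a single categorical input. The dimensions follow the notation in the text; the specific numbers are arbitrary illustrations and nothing here corresponds to values estimated in the paper.

import numpy as np

D, K, C = 12, 3, 4                      # categories, embedding size, alternatives
rng = np.random.default_rng(0)
W = rng.normal(size=(K, D))             # embeddings matrix (one K-vector per category)
B = rng.normal(size=(C, K))             # softmax coefficients B_c
alpha = np.zeros(C)                     # intercepts alpha_c

def predict_proba(category_index):
    # with x_n one-hot, W @ x_n just selects a column of W
    embedding = W[:, category_index]
    logits = B @ embedding + alpha
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

print(predict_proba(5))                 # class probabilities for category 5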
The typical loss function used in such models is called the categorical cross entropy:", "Where $\\delta _{i}$ is the kronecker delta ($\\delta _{true}=1; \\delta _{false}=0$), and $\\mathcal {L}(n)$ is the cumulative loss for an individual data point. This formalization is the simplest version, without loss of generality. In practice, as seen below, we will model multiple embeddings matrices simultaneously, and will add regularization terms to the loss function, so the models tested in this paper consist of compositions of the above.", "So these so called embeddings are in fact a relatively shallow data representation in a simple neural network. What is their added value? Obviously, the first practical benefit is dimensionality reduction, because now there is a mapping between each of the $C$ words to a unique vector of size $K$. The second aspect is that this new representation is the one that maximizes the performance towards a specific task (in our example, prediction of the next word), therefore it is a supervised process, as opposed for example to PCA. The third and more interesting aspect relates with semantic similarity. A natural consequence of the mentioned algorithm is that words that have similar output distributions (i.e. next words) will tend to be close to each other. Figure FIGREF10 shows a 2D visualization (t-SNE) with a subset of english words. In such a visualization, data is projected in 2D space by maintaining the same vector-to-vector distances as in the original ($K$ order space). Therefore the X and Y axes have no specific meaning, only distances between every pair of points are relevant.", "We can see that semantically similar concepts, more specifically concepts that tend to have the same distribution of “next words\", are placed closer. Another intriguing consequence is that, since the words are now in the $K$ dimensional, embeddings space, we can also do some linear algebra on them. A well known formulation is $King-Man+Woman=Queen$. Essentially, the vector $King-Man$ corresponds to the concept of “crowning\" (therefore $Woman+crowning=Queen$). The same could be done with many other concept pairs. Figure FIGREF11 show also an alternative interpretation of “man-female\", as well as examples with cities and verb tense.", "Finally, another relevant note on the embeddings representation is that, just like the PCA encoding, one can always project back into the original space and use this for interpretability. In other words, since there is a 1-to-1 mapping from each category to its encoding, there is also a 1-to-1 mapping between a model that uses dummy variables and a model using such encodings. This may be useful for interpretability, since in the case of dummy variables we have a direct interpretation (e.g. a beta coefficient value in a logit model) for the effect of a given category, while the same doesn't happen for an encoded variable (i.e. there is no meaning for the value of a single beta coefficient in an embeddings encoding when K>1). In order to preserve statistical significance information (e.g. p-values) we only need to follow the well known rules of normal random variables.", "There are open databases available (e.g. GLoVe BIBREF9, FastText BIBREF7) that provide word embedding tables for the entire English language (Glove provides several embedding tables, up to embedding size between 100 and 300). 
In our next word application example, we now talk about models with 500-1500 variables, which is very manageable for our machines today.", "Summarizing, the general idea of word embeddings is to re-represent a categorical variable into a lower dimensional representation with continuous values . Whenever such a variable is to be used in a model, one can simply replace it with the corresponding embeddings vector. We have previously demonstrated the value of such word embeddings in demand prediction in special events BIBREF10, where we collected event textual descriptions, and used Glove embedding vectors to incorporate such information in a neural network model.", "Finally, an interesting point to mention relates to the typical difference in dataset size between the original embeddings training model (Glove, approximately 6 billion input word vectors from 37 million texts) and the model one implements to solve a particular problem (in our special events case, less than 1000 short event descriptions, with at most few hundred words each). Instead of creating ourselves a new embeddings model using the events dataset, we reused the pre-trained GloVe dataset. The benefit is significant because in practice we trained our model to deal with all words in the dictionary, much beyond the limited vocabulary that we obtained in our 1000 short texts. In practice we have used a very small percentage of the english dictionary. When, in an out-of-sample test, our model finds words that were not in the training set, it still works perfectly well." ], [ "Differently to textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily of categorical nature, but typically end up as dummy encoding, due to segmentation, such as age, income, or even origin/destination pair.", "Our hypothesis is that, given the limitations of dummy variables that are commonly used and the unsupervised nature of PCA, using instead an embeddings mechanism should improve significantly the quality of our models, both in terms of loglikelihood but also in terms of allowing for lower complexity (i.e. less variables). Ultimately, one could think of a framework such as GLoVe, where embeddings for such variables could be trivially shared with the community. For example, we could have a “Travel behavior embeddings\" database, incrementally built from travel surveys from around the world. Such database could have embeddings for mode choice target variables, but also for departure time, destination choice, car ownership, and so on. Whenever a modeler wanted to estimate a new model, she could just download the right encodings and use them directly. This is particularly relevant if one considers the complicated challenges for opening or sharing travel survey datasets in our field. Of course, a major question arises: are behaviors that consistent across the world? There are certainly nuances across the world, but we believe that general patterns would emerge (e.g. a “business\" trip purpose will be closer to “work\" than “leisure\", in a departure time choice model; “student\" will be closer to “unemployed\" than to “retired\" in a car ownership model)." ], [ "We believe that, as with word embeddings, a mapping that preserves semantic distance relative to a certain choice problem, should be useful for modeling. 
As with a PCA encoding, another benefit is that by sharing parameters in the learning process, the model can generalize better, as opposed to a dummy encoding, where each categorical value has its own parameter, that is only active when observed.", "The general idea is thus to create a mapping between a variable for which we want to find an embeddings representation, and a target variable, as in Figure FIGREF15. We call the mapping function “PyTre Embeddings\", because that is the name of the object in our proposed Python “Travel Embeddings\" package.", "From an experimental design and application perspective, the approach followed in this paper is the following:", "Create list of categorical variables to encode (the encoding set)", "Split dataset into train, development and test sets", "For each variable in encoding set, learn the new embeddings using the embeddings train set . This should be done simultaneously (all variable embeddings estimated at once, as explained in the next section).", "Encode choice models for train, development and test sets using the learned embeddings", "Estimate choice model accordingly using its train set", "Evaluate the new model using the test set", "Since there is stochasticity in the embeddings training model, we will repeat the above multiple times, for the different experiments in the paper (and report the respective mean and standard deviation statistics). Whenever we want to analyse a particular model (e.g. to check the coefficients of a choice model), we select the one with the highest likelihood at the development set (i.e. in practice, its out-of-sample generalization performance), and report its performance on the test set." ], [ "Since a choice model will typically involve other variables than the categorical ones that we learn the embeddings for, it is important to take into account their effects. Figure FIGREF24 shows the simplest travel embeddings model. As an example, the categorical variable is trip purpose, and there are a few other variables such as gender, cost of the alternatives, distance, and so on. Notice that they are directly fed into the softmax output layer, together with the embeddings output.", "The dataset sizes in transportation behavior modeling are substantially smaller than typical word embeddings ones, and the risk of overfitting is therefore higher. To mitigate this problem, besides adding regularization penalties in the objective function, we add what we call a regularizer layer for each embedding, which is no more than a softmax layer that penalizes whenever it cannot recover the original one-hot-encoding vectors (Figure FIGREF25, left). We call the combination of embeddings and its regularizer network, a Travel Embeddings layer. Finally, it is obviously better to train all embeddings simultaneously, so that they accommodate each other's effects (Figure FIGREF25, right)." ], [ "The goal of this paper is to test the potential of embeddings in a simple and well-known choice model context, comparing it to well-known baseline techniques. Therefore, the general model specification follows quite simple assumptions. We expect that in future work from us or others, more elaborate derivations can take advantage of embeddings such as nested, mixed logit or latent class choice models (LCCM), for example.", "We will apply the methodology to the well-known “Swissmetro\" dataset. We will compare it with a dummy variables and PCA baselines. 
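Before the experimental design, a sketch may help make the Travel Embeddings layer described above concrete. This is our illustration rather than the PyTre implementation: it encodes a single categorical variable (e.g., trip purpose), feeds its embedding together with the remaining variables into a softmax choice output, and attaches the softmax "regularizer" head that penalizes the embedding when the original one-hot category cannot be recovered. All sizes and the penalty weight are assumptions.

```python
import torch
import torch.nn as nn

class TravelEmbeddingChoiceModel(nn.Module):
    """One categorical variable -> embedding; embedding + other covariates -> choice softmax.
    A parallel 'regularizer' head tries to recover the original category from the embedding."""
    def __init__(self, n_categories: int, embed_dim: int, n_other_vars: int, n_alternatives: int):
        super().__init__()
        self.embedding = nn.Embedding(n_categories, embed_dim)
        self.regularizer_head = nn.Linear(embed_dim, n_categories)       # one-hot reconstruction
        self.choice_head = nn.Linear(embed_dim + n_other_vars, n_alternatives)

    def forward(self, category_ids, other_vars):
        emb = self.embedding(category_ids)
        choice_logits = self.choice_head(torch.cat([emb, other_vars], dim=1))
        reconstruction_logits = self.regularizer_head(emb)
        return choice_logits, reconstruction_logits

ce = nn.CrossEntropyLoss()

def loss_fn(choice_logits, reconstruction_logits, chosen_alt, category_ids, alpha=0.1):
    # Choice loss plus the reconstruction penalty of the regularizer layer;
    # alpha (the weight of that penalty) is an assumed value.
    return ce(choice_logits, chosen_alt) + alpha * ce(reconstruction_logits, category_ids)

# Toy usage with made-up data: 9 categories, 6 other covariates, 3 alternatives.
model = TravelEmbeddingChoiceModel(n_categories=9, embed_dim=3, n_other_vars=6, n_alternatives=3)
cat_ids, others = torch.randint(0, 9, (8,)), torch.randn(8, 6)
chosen = torch.randint(0, 3, (8,))
loss = loss_fn(*model(cat_ids, others), chosen, cat_ids)
```

The complete model in the paper stacks several such layers, one per encoded variable, and trains them simultaneously so that the embeddings accommodate each other's effects; the sketch shows a single layer only.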
We will follow the 3-way experimental design mentioned before: split the dataset into train, development and test sets, so that the embeddings, PCA eigenvectors and the choice model are estimated from the same train and development sets, and validate it out-of-sample. For the sake of interpretability, we will also project back coefficients from the embeddings as well as PCA models into the dummy variable space.", "All experiment code is available as a Jupyter notebook in a package we created for this work (which we called PyTre). For estimating the multinomial logit model (MNL), we used the PyLogit BIBREF11 package." ], [ "The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. According to its description BIBREF0, the respondents provided information in order to analyze the impact of the modal innovation in transportation, represented by the Swissmetro, a revolutionary mag-lev underground system, against the usual transport modes represented by car and train. After discarding respondents for which some variables were not available (e.g. age, purpose), a total of 10469 responses from 1188 individuals were used for the experiments.", "We split the dataset into 3 different parts:", "Embeddings train set: 60% of the dataset (6373 vectors)", "Development set: 20% of the dataset (2003 vectors)", "Test set: 20% of the dataset (2003 vectors)" ], [ "The PyLogit package BIBREF11 also uses Swissmetro as an example. Therefore, our model specifications will extend the default one from this package. We re-estimated this model with the train set and validated it with the test set. The results are shown in Tables TABREF31 and TABREF32. Since we are comparing the models on the test set, the key indicators should be pseudo R-square and log-likelihood. Indicators that consider model complexity (robust R-square and AIC) are less important on the test set in our view because the overfitting effect (i.e. improving fit just by adding more variables) will no longer be verifiable in this way. Instead, one sees overfitting if test set performance is considerably inferior to the training set." ] ] }
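The projection back into the dummy-variable space mentioned in this section follows directly from a linear-in-parameters utility: because category $c$ contributes the inner product of its embedding vector with the estimated coefficients, the equivalent dummy-variable coefficient for $c$ is that inner product. A small sketch (random numbers stand in for estimated quantities):

```python
import numpy as np

C, K = 9, 3                      # illustrative: C original categories, K embedding dimensions
E = np.random.randn(C, K)        # stands in for the learned embedding table of one variable
beta_embed = np.random.randn(K)  # stands in for the choice-model coefficients on the K dimensions

# Equivalent dummy-variable coefficients: one per original category.
beta_dummy = E @ beta_embed      # shape (C,)
print(beta_dummy)
```

The same mapping applies analogously to the PCA baseline, with the eigenvector matrix in place of $E$.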
{ "question": [ "What datasets are used for evaluation?", "How do their train their embeddings?", "How do they model travel behavior?", "How do their interpret the coefficients?" ], "question_id": [ "c45a160d31ca8eddbfea79907ec8e59f543aab86", "7358a1ce2eae380af423d4feeaa67d2bd23ae9dd", "1165fb0b400ec1c521c1aef7a4e590f76fee1279", "f2c5da398e601e53f9f545947f61de5f40ede1ee" ], "nlp_background": [ "five", "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Swissmetro dataset" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. According to its description BIBREF0, the respondents provided information in order to analyze the impact of the modal innovation in transportation, represented by the Swissmetro, a revolutionary mag-lev underground system, against the usual transport modes represented by car and train. After discarding respondents for which some variables were not available (e.g. age, purpose), a total of 10469 responses from 1188 individuals were used for the experiments." ], "highlighted_evidence": [ "The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. " ] } ], "annotation_id": [ "5ac34eb67f1f8386ca9654d0d56e6e970c8f6cde" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "The embeddings are learned several times using the training set, then the average is taken.", "evidence": [ "For each variable in encoding set, learn the new embeddings using the embeddings train set . This should be done simultaneously (all variable embeddings estimated at once, as explained in the next section).", "Since there is stochasticity in the embeddings training model, we will repeat the above multiple times, for the different experiments in the paper (and report the respective mean and standard deviation statistics). Whenever we want to analyse a particular model (e.g. to check the coefficients of a choice model), we select the one with the highest likelihood at the development set (i.e. in practice, its out-of-sample generalization performance), and report its performance on the test set." ], "highlighted_evidence": [ "For each variable in encoding set, learn the new embeddings using the embeddings train set .", "Since there is stochasticity in the embeddings training model, we will repeat the above multiple times, for the different experiments in the paper (and report the respective mean and standard deviation statistics)." 
] } ], "annotation_id": [ "e7fa4a9302fccb534138aec8e7fcdff69791ab63" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "The data from collected travel surveys is used to model travel behavior.", "evidence": [ "Differently to textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily of categorical nature, but typically end up as dummy encoding, due to segmentation, such as age, income, or even origin/destination pair." ], "highlighted_evidence": [ "Differently to textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily of categorical nature, but typically end up as dummy encoding, due to segmentation, such as age, income, or even origin/destination pair." ] } ], "annotation_id": [ "135e6e05c3d4c16db9e073bdeb856ed2f91820a2" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "The coefficients are projected back to the dummy variable space.", "evidence": [ "We will apply the methodology to the well-known “Swissmetro\" dataset. We will compare it with a dummy variables and PCA baselines. We will follow the 3-way experimental design mentioned before: split the dataset into train, development and test sets, so that the embeddings, PCA eigenvectors and the choice model are estimated from the same train and development sets, and validate it out-of-sample. For the sake of interpretability, we will also project back coefficients from the embeddings as well as PCA models into the dummy variable space." ], "highlighted_evidence": [ "For the sake of interpretability, we will also project back coefficients from the embeddings as well as PCA models into the dummy variable space." ] } ], "annotation_id": [ "cefa81dfd716c6568a263ac073777e97fc32f783" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ] }
{ "caption": [ "Figure 1: The skip gram architecture [7]", "Figure 2: Visualization of a subset of words from FastText word embeddings database [8]", "Figure 3: Some classical examples of embeddings algebra [9]", "Figure 4: The general idea", "Figure 5: Travel embeddings model", "Figure 6: Travel embeddings model with regularization (left); Complete model, combining multiple travel embeddings layers (right).", "Table 1: Multinomial Logit Model Regression Results - original model", "Table 2: Multinomial Logit Model Regression coefficients - original model (**= p<0.05)", "Table 3: New dimensionality (K) of encoding set variables", "Figure 7: Embeddings model training performance", "Figure 8: MDS visualizations of embeddings results", "Figure 9: Switzerland’s cantons", "Table 4: Testset results for embeddings model", "Table 5: Multinomial Logit Model Regression Results - embeddings model (* = p<0.1; ** = p<0.05)", "Table 6: Multinomial Logit Model Regression Results - embeddings model projected into dummy variable space (* = p<0.1; ** = p<0.05)", "Table 7: Multinomial Logit Model Regression Results for dummy variable model with OD variables", "Table 8: Multinomial Logit Model Regression Results for dummy variable model without OD variables", "Table 9: Multinomial Logit Model Regression coefficients for dummy variable model without OD variables", "Table 10: Results for PCA model", "Table 11: Multinomial Logit Model Regression Results for PCA model", "Table 12: Summary of results", "Figure 10: R-square performance with percentage of “expensive\" survey. Left: light+detailed survey; Right: Big Data+detailed survey Note: Absence of data points means either negative R-squared, or model not possible to estimate (e.g. due to singular matrix)" ], "file": [ "6-Figure1-1.png", "7-Figure2-1.png", "8-Figure3-1.png", "10-Figure4-1.png", "11-Figure5-1.png", "12-Figure6-1.png", "13-Table1-1.png", "13-Table2-1.png", "14-Table3-1.png", "15-Figure7-1.png", "16-Figure8-1.png", "17-Figure9-1.png", "17-Table4-1.png", "18-Table5-1.png", "19-Table6-1.png", "21-Table7-1.png", "21-Table8-1.png", "22-Table9-1.png", "23-Table10-1.png", "24-Table11-1.png", "24-Table12-1.png", "25-Figure10-1.png" ] }
1908.05434
Sex Trafficking Detection with Ordinal Regression Neural Networks
Sex trafficking is a global epidemic. Escort websites are a primary vehicle for selling the services of such trafficking victims and thus a major driver of trafficker revenue. Many law enforcement agencies do not have the resources to manually identify leads from the millions of escort ads posted across dozens of public websites. We propose an ordinal regression neural network to identify escort ads that are likely linked to sex trafficking. Our model uses a modified cost function to mitigate inconsistencies in predictions often associated with nonparametric ordinal regression and leverages recent advancements in deep learning to improve prediction accuracy. The proposed method significantly improves on the previous state-of-the-art on Trafficking-10K, an expert-annotated dataset of escort ads. Additionally, because traffickers use acronyms, deliberate typographical errors, and emojis to replace explicit keywords, we demonstrate how to expand the lexicon of trafficking flags through word embeddings and t-SNE.
{ "section_name": [ "Introduction", "Related Work", "Method", "Word Embeddings", "Gated-Feedback Recurrent Neural Network", "Multi-Labeled Logistic Regression Layer", "Experiments", "Datasets", "Comparison with Baselines", "Ablation Test", "Qualitative Analysis of Predictions", "Emoji Analysis", "Discussion", "Acknowledgments", "Hyperparameters of the proposed ordinal regression neural network", "Access to the source materials" ], "paragraphs": [ [ "Globally, human trafficking is one of the fastest growing crimes and, with annual profits estimated to be in excess of 150 billion USD, it is also among the most lucrative BIBREF0 . Sex trafficking is a form of human trafficking which involves sexual exploitation through coercion. Recent estimates suggest that nearly 4 million adults and 1 million children are being victimized globally on any given day; furthermore, it is estimated that 99 percent of victims are female BIBREF1 . Escort websites are an increasingly popular vehicle for selling the services of trafficking victims. According to a recent survivor survey BIBREF2 , 38% of underage trafficking victims who were enslaved prior to 2004 were advertised online, and that number rose to 75% for those enslaved after 2004. Prior to its shutdown in April 2018, the website Backpage was the most frequently used online advertising platform; other popular escort websites include Craigslist, Redbook, SugarDaddy, and Facebook BIBREF2 . Despite the seizure of Backpage, there were nearly 150,000 new online sex advertisements posted per day in the U.S. alone in late 2018 BIBREF3 ; even with many of these new ads being re-posts of existing ads and traffickers often posting multiple ads for the same victims BIBREF2 , this volume is staggering.", "Because of their ubiquity and public access, escort websites are a rich resource for anti-trafficking operations. However, many law enforcement agencies do not have the resources to sift through the volume of escort ads to identify those coming from potential traffickers. One scalable and efficient solution is to build a statistical model to predict the likelihood of an ad coming from a trafficker using a dataset annotated by anti-trafficking experts. We propose an ordinal regression neural network tailored for text input. This model comprises three components: (i) a Word2Vec model BIBREF4 that maps each word from the text input to a numeric vector, (ii) a gated-feedback recurrent neural network BIBREF5 that sequentially processes the word vectors, and (iii) an ordinal regression layer BIBREF6 that produces a predicted ordinal label. We use a modified cost function to mitigate inconsistencies in predictions associated with nonparametric ordinal regression. We also leverage several regularization techniques for deep neural networks to further improve model performance, such as residual connection BIBREF7 and batch normalization BIBREF8 . We conduct our experiments on Trafficking-10k BIBREF9 , a dataset of escort ads for which anti-trafficking experts assigned each sample one of seven ordered labels ranging from “1: Very Unlikely (to come from traffickers)” to “7: Very Likely”. Our proposed model significantly outperforms previously published models BIBREF9 on Trafficking-10k as well as a variety of baseline ordinal regression models. 
In addition, we analyze the emojis used in escort ads with Word2Vec and t-SNE BIBREF10 , and we show that the lexicon of trafficking-related emojis can be subsequently expanded.", "In Section SECREF2 , we discuss related work on human trafficking detection and ordinal regression. In Section SECREF3 , we present our proposed model and detail its components. In Section SECREF4 , we present the experimental results, including the Trafficking-10K benchmark, a qualitative analysis of the predictions on raw data, and the emoji analysis. In Section SECREF5 , we summarize our findings and discuss future work." ], [ "Trafficking detection: There have been several software products designed to aid anti-trafficking efforts. Examples include Memex which focuses on search functionalities in the dark web; Spotlight which flags suspicious ads and links images appearing in multiple ads; Traffic Jam which seeks to identify patterns that connect multiple ads to the same trafficking organization; and TraffickCam which aims to construct a crowd-sourced database of hotel room images to geo-locate victims. These research efforts have largely been isolated, and few research articles on machine learning for trafficking detection have been published. Closest to our work is the Human Trafficking Deep Network (HTDN) BIBREF9 . HTDN has three main components: a language network that uses pretrained word embeddings and a long short-term memory network (LSTM) to process text input; a vision network that uses a convolutional network to process image input; and another convolutional network to combine the output of the previous two networks and produce a binary classification. Compared to the language network in HTDN, our model replaces LSTM with a gated-feedback recurrent neural network, adopts certain regularizations, and uses an ordinal regression layer on top. It significantly improves HTDN's benchmark despite only using text input. As in the work of E. Tong et al. ( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads, however, we go further by analyzing the emojis' embeddings and thereby expand the trafficking lexicon.", "Ordinal regression: We briefly review ordinal regression before introducing the proposed methodology. We assume that the training data are INLINEFORM0 , where INLINEFORM1 are the features and INLINEFORM2 is the response; INLINEFORM3 is the set of INLINEFORM4 ordered labels INLINEFORM5 with INLINEFORM6 . Many ordinal regression methods learn a composite map INLINEFORM7 , where INLINEFORM8 and INLINEFORM9 have the interpretation that INLINEFORM10 is a latent “score” which is subsequently discretized into a category by INLINEFORM11 . INLINEFORM12 is often estimated by empirical risk minimization, i.e., by minimizing a loss function INLINEFORM13 averaged over the training data. Standard choices of INLINEFORM14 and INLINEFORM15 are reviewed by J. Rennie & N. Srebro ( BIBREF11 ).", "Another common approach to ordinal regression, which we adopt in our proposed method, is to transform the label prediction into a series of INLINEFORM0 binary classification sub-problems, wherein the INLINEFORM1 th sub-problem is to predict whether or not the true label exceeds INLINEFORM2 BIBREF12 , BIBREF13 . For example, one might use a series of logistic regression models to estimate the conditional probabilities INLINEFORM3 for each INLINEFORM4 . J. Cheng et al. 
( BIBREF6 ) estimated these probabilities jointly using a neural network; this was later extended to image data BIBREF14 as well as text data BIBREF15 , BIBREF16 . However, as acknowledged by J. Cheng et al. ( BIBREF6 ), the estimated probabilities need not respect the ordering INLINEFORM5 for all INLINEFORM6 and INLINEFORM7 . We force our estimator to respect this ordering through a penalty on its violation." ], [ "Our proposed ordinal regression model consists of the following three components: Word embeddings pre-trained by a Skip-gram model, a gated-feedback recurrent neural network that constructs summary features from sentences, and a multi-labeled logistic regression layer tailored for ordinal regression. See Figure SECREF3 for a schematic. The details of its components and their respective alternatives are discussed below.", " figure Overview of the ordinal regression neural network for text input. INLINEFORM0 represents a hidden state in a gated-feedback recurrent neural network." ], [ "Vector representations of words, also known as word embeddings, can be obtained through unsupervised learning on a large text corpus so that certain linguistic regularities and patterns are encoded. Compared to Latent Semantic Analysis BIBREF17 , embedding algorithms using neural networks are particularly good at preserving linear regularities among words in addition to grouping similar words together BIBREF18 . Such embeddings can in turn help other algorithms achieve better performances in various natural language processing tasks BIBREF4 .", "Unfortunately, the escort ads contain a plethora of emojis, acronyms, and (sometimes deliberate) typographical errors that are not encountered in more standard text data, which suggests that it is likely better to learn word embeddings from scratch on a large collection of escort ads instead of using previously published embeddings BIBREF9 . We use 168,337 ads scraped from Backpage as our training corpus and the Skip-gram model with Negative sampling BIBREF4 as our model." ], [ "To process entire sentences and paragraphs after mapping the words to embeddings, we need a model to handle sequential data. Recurrent neural networks (RNNs) have recently seen great success at modeling sequential data, especially in natural language processing tasks BIBREF19 . On a high level, an RNN is a neural network that processes a sequence of inputs one at a time, taking the summary of the sequence seen so far from the previous time point as an additional input and producing a summary for the next time point. One of the most widely used variations of RNNs, a Long short-term memory network (LSTM), uses various gates to control the information flow and is able to better preserve long-term dependencies in the running summary compared to a basic RNN BIBREF20 . In our implementation, we use a further refinement of multi-layed LSTMs, Gated-feedback recurrent neural networks (GF-RNNs), which tend to capture dependencies across different timescales more easily BIBREF5 .", "Regularization techniques for neural networks including Dropout BIBREF21 , Residual connection BIBREF7 , and Batch normalization BIBREF8 are added to GF-RNN for further improvements.", "After GF-RNN processes an entire escort ad, the average of the hidden states of the last layer becomes the input for the multi-labeled logistic regression layer which we discuss next." 
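As a concrete illustration of the pre-training step described above, the skip-gram model with negative sampling can be fit on the tokenized ads as follows. This is our sketch: gensim is our choice of library, not necessarily the one used by the authors; the hyperparameter values mirror those reported in the paper's appendix, and the two-sentence corpus is a placeholder for the roughly 170,000 scraped ads.

```python
from gensim.models import Word2Vec

# Placeholder corpus: each ad is a list of tokens, cleaned as described in the paper
# (lower-cased, emojis surrounded by spaces so they tokenize, phone numbers removed).
ads = [
    ["new", "in", "town", "💋", "call", "now"],
    ["sweet", "girl", "visiting", "this", "week", "🍒"],
] * 200  # repeated only so this toy example has enough occurrences to train

model = Word2Vec(
    sentences=ads,
    vector_size=128,  # embedding size
    window=5,         # context window
    min_count=5,      # discard rare tokens
    sg=1,             # skip-gram rather than CBOW
    negative=100,     # negative sampling
    epochs=50,
)
# Emojis are ordinary tokens here: model.wv["💋"] is a 128-dimensional vector learned
# from the contexts in which the emoji appears.
```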
], [ "As noted previously, the ordinal regression problem can be cast into a series of binary classification problems and thereby utilize the large repository of available classification algorithms BIBREF12 , BIBREF13 , BIBREF14 . One formulation is as follows. Given INLINEFORM0 total ranks, the INLINEFORM1 -th binary classifier is trained to predict the probability that a sample INLINEFORM2 has rank larger than INLINEFORM3 . Then the predicted rank is INLINEFORM4 ", "In a classification task, the final layer of a deep neural network is typically a softmax layer with dimension equal to the number of classes BIBREF20 . Using the ordinal-regression-to-binary-classifications formulation described above, J. Cheng et al. ( BIBREF6 ) replaced the softmax layer in their neural network with a INLINEFORM0 -dimensional sigmoid layer, where each neuron serves as a binary classifier (see Figure SECREF7 but without the order penalty to be discussed later).", "With the sigmoid activation function, the output of the INLINEFORM0 th neuron can be viewed as the predicted probability that the sample has rank greater than INLINEFORM5 . Alternatively, the entire sigmoid layer can be viewed as performing multi-labeled logistic regression, where the INLINEFORM6 th label is the indicator of the sample's rank being greater than INLINEFORM7 . The training data are thus re-formatted accordingly so that the response variable for a sample with rank $k$ becomes a vector of binary labels, one per threshold, whose first $k-1$ entries are 1 and whose remaining entries are 0. J. Cheng et al.'s ( BIBREF6 ) final layer was preceded by a simple feed-forward network. In our case, word embeddings and GF-RNN allow us to construct a feature vector of fixed length from text input, so we can simply attach the multi-labeled logistic regression layer to the output of GF-RNN to complete an ordinal regression neural network for text input.", "The violation of the monotonicity in the estimated probabilities (e.g., INLINEFORM0 for some INLINEFORM1 and INLINEFORM2 ) has remained an open issue since the original ordinal regression neural network proposal of J. Cheng et al. ( BIBREF6 ). This is perhaps owed in part to the belief that correcting this issue would significantly increase training complexity BIBREF14 . We propose an effective and computationally efficient solution to avoid the conflicting predictions as follows: penalize such conflicts in the training phase by adding INLINEFORM3 ", "to the loss function for a sample INLINEFORM0 , where INLINEFORM1 is a penalty parameter (Figure SECREF7 ). For sufficiently large INLINEFORM2 the estimated probabilities will respect the monotonicity condition; respecting this condition improves the interpretability of the predictions, which is vital in applications like the one we consider here as stakeholders are given the estimated probabilities. We also hypothesize that the order penalty may serve as a regularizer to improve each binary classifier (see the ablation test in Section SECREF15 ).", " figure Ordinal regression layer with order penalty.", "All three components of our model (word embeddings, GF-RNN, and multi-labeled logistic regression layer) can be trained jointly, with word embeddings optionally held fixed or given a smaller learning rate for fine-tuning. The hyperparameters for all components are given in the Appendix. They are selected according to either literature or grid-search." ], [ "We first describe the datasets we use to train and evaluate our models.
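For concreteness, and before turning to the experimental details, the following is a minimal sketch of the order-penalized sigmoid output layer described in the preceding section. It is not the authors' code: the exact penalty expression is elided above, so the hinge on adjacent probabilities below is an assumption consistent with the stated goal of discouraging $P(\text{rank}>k+1) > P(\text{rank}>k)$; the default penalty weight of 0.5 mirrors the conflict penalty listed in the appendix.

```python
import torch
import torch.nn as nn

class OrdinalHead(nn.Module):
    """K-1 sigmoid units; unit k estimates P(rank > k), as in the multi-labeled formulation."""
    def __init__(self, feature_dim: int, n_ranks: int):
        super().__init__()
        self.linear = nn.Linear(feature_dim, n_ranks - 1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.linear(features))            # shape: (batch, K-1)

def ordinal_loss(probs: torch.Tensor, ranks: torch.Tensor, penalty: float = 0.5) -> torch.Tensor:
    """Binary cross entropy over the K-1 labels plus a penalty on monotonicity violations."""
    thresholds = torch.arange(1, probs.size(1) + 1, device=probs.device)
    targets = (ranks.unsqueeze(1) > thresholds).float()        # first rank-1 entries equal 1
    bce = nn.functional.binary_cross_entropy(probs, targets)
    order_violation = torch.relu(probs[:, 1:] - probs[:, :-1]).sum(dim=1).mean()
    return bce + penalty * order_violation

# Toy usage: 7 ordered labels as in Trafficking-10K, 128-dimensional features from the encoder.
head = OrdinalHead(feature_dim=128, n_ranks=7)
features, ranks = torch.randn(4, 128), torch.tensor([1, 3, 5, 7])
loss = ordinal_loss(head(features), ranks)
predicted_rank = 1 + (head(features) > 0.5).sum(dim=1)         # 1 + number of exceeded thresholds
```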
Then we present a detailed comparison of our proposed model with commonly used ordinal regression models as well as the previous state-of-the-art classification model by E. Tong et al. ( BIBREF9 ). To assess the effect of each component in our model, we perform an ablation test where the components are swapped by their more standard alternatives one at a time. Next, we perform a qualitative analysis on the model predictions on the raw data, which are scraped from a different escort website than the one that provides the labeled training data. Finally, we conduct an emoji analysis using the word embeddings trained on raw escort ads." ], [ "We use raw texts scraped from Backpage and TNABoard to pre-train the word embeddings, and use the same labeled texts E. Tong et al. ( BIBREF9 ) used to conduct model comparisons. The raw text dataset consists of 44,105 ads from TNABoard and 124,220 ads from Backpage. Data cleaning/preprocessing includes joining the title and the body of an ad; adding white spaces around every emoji so that it can be tokenized properly; stripping tabs, line breaks, punctuations, and extra white spaces; removing phone numbers; and converting all letters to lower case. We have ensured that the raw dataset has no overlap with the labeled dataset to avoid bias in test accuracy. While it is possible to scrape more raw data, we did not observe significant improvements in model performances when the size of raw data increased from INLINEFORM0 70,000 to INLINEFORM1 170,000, hence we assume that the current raw dataset is sufficiently large.", "The labeled dataset is called Trafficking-10k. It consists of 12,350 ads from Backpage labeled by experts in human trafficking detection BIBREF9 . Each label is one of seven ordered levels of likelihood that the corresponding ad comes from a human trafficker. Descriptions and sample proportions of the labels are in Table TABREF11 . The original Trafficking-10K includes both texts and images, but as mentioned in Section SECREF1 , only the texts are used in our case. We apply the same preprocessing to Trafficking-10k as we do to raw data." ], [ "We compare our proposed ordinal regression neural network (ORNN) to Immediate-Threshold ordinal logistic regression (IT) BIBREF11 , All-Threshold ordinal logistic regression (AT) BIBREF11 , Least Absolute Deviation (LAD) BIBREF22 , BIBREF23 , and multi-class logistic regression (MC) which ignores the ordering. The primary evaluation metrics are Mean Absolute Error (MAE) and macro-averaged Mean Absolute Error ( INLINEFORM0 ) BIBREF24 . To compare our model with the previous state-of-the-art classification model for escort ads, the Human Trafficking Deep Network (HTDN) BIBREF9 , we also polarize the true and predicted labels into two classes, “1-4: Unlikely” and “5-7: Likely”; then we compute the binary classification accuracy (Acc.) as well as the weighted binary classification accuracy (Wt. Acc.) given by INLINEFORM1 ", "Note that for applications in human trafficking detection, MAE and Acc. are of primary interest. Whereas for a more general comparison among the models, the class imbalance robust metrics, INLINEFORM0 and Wt. Acc., might be more suitable. Bootstrapping or increasing the weight of samples in smaller classes can improve INLINEFORM1 and Wt. Acc. at the cost of MAE and Acc..", "The text data need to be vectorized before they can be fed into the baseline models (whereas vectorization is built into ORNN). 
The standard practice is to tokenize the texts using n-grams and then create weighted term frequency vectors using the term frequency (TF)-inverse document frequency (IDF) scheme BIBREF25 , BIBREF26 . The specific variation we use is the recommended unigram + sublinear TF + smooth IDF BIBREF26 , BIBREF27 . Dimension reduction techniques such as Latent Semantic Analysis BIBREF17 can be optionally applied to the frequency vectors, but B. Schuller et al. ( BIBREF28 ) concluded from their experiments that dimension reduction on frequency vectors actually hurts model performance, which our preliminary experiments agree with.", "All models are trained and evaluated using the same (w.r.t. data shuffle and split) 10-fold cross-validation (CV) on Trafficking-10k, except for HTDN, whose result is read from the original paper BIBREF9 . During each train-test split, INLINEFORM0 of the training set is further reserved as the validation set for tuning hyperparameters such as L2-penalty in IT, AT and LAD, and learning rate in ORNN. So the overall train-validation-test ratio is 70%-20%-10%. We report the mean metrics from the CV in Table TABREF14 . As previous research has pointed out that there is no unbiased estimator of the variance of CV BIBREF29 , we report the naive standard error treating metrics across CV as independent.", "We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd best Wt. Acc. among all models. Its Wt. Acc. is a substantial improvement over HTDN despite the fact that the latter use both text and image data. It is important to note that HTDN is trained using binary labels, whereas the other models are trained using ordinal labels and then have their ordinal predictions converted to binary predictions. This is most likely the reason that even the baseline models except for LAD can yield better Wt. Acc. than HTDN, confirming our earlier claim that polarizing the ordinal labels during training may lead to information loss." ], [ "To ensure that we do not unnecessarily complicate our ORNN model, and to assess the impact of each component on the final model performance, we perform an ablation test. Using the same CV and evaluation metrics, we make the following replacements separately and re-evaluate the model: 1. Replace word embeddings pre-trained from skip-gram model with randomly initialized word embeddings; 2. replace gated-feedback recurrent neural network with long short-term memory network (LSTM); 3. disable batch normalization; 4. disable residual connection; 5. replace the multi-labeled logistic regression layer with a softmax layer (i.e., let the model perform classification, treating the ordinal response variable as a categorical variable with INLINEFORM0 classes); 6. replace the multi-labeled logistic regression layer with a 1-dimensional linear layer (i.e., let the model perform regression, treating the ordinal response variable as a continuous variable) and round the prediction to the nearest integer during testing; 7. set the order penalty to 0. The results are shown in Table TABREF16 .", "The proposed ORNN once again has all the best metrics except for Wt. Acc. which is the 2nd best. This suggests that each component indeed makes a contribution. Note that if we disregard the ordinal labels and perform classification or regression, MAE falls off by a large margin. Setting order penalty to 0 does not deteriorate the performance by much, however, the percent of conflicting binary predictions (see Section SECREF7 ) rises from 1.4% to 5.2%. 
So adding an order penalty helps produce more interpretable results." ], [ "To qualitatively evaluate how well our model predicts on raw data and observe potential patterns in the flagged samples, we obtain predictions on the 44,105 unlabelled ads from TNABoard with the ORNN model trained on Trafficking-10k, then we examine the samples with high predicted likelihood to come from traffickers. Below are the top three samples that the model considers likely:", "[itemsep=0pt]", "“amazing reviewed crystal only here till fri book now please check our site for the services the girls provide all updates specials photos rates reviews njfantasygirls ...look who s back amazing reviewed model samantha...brand new spinner jessica special rate today 250 hr 21 5 4 120 34b total gfe total anything goes no limits...”", "“2 hot toght 18y o spinners 4 amazing providers today specials...”", "“asian college girl is visiting bellevue service type escort hair color brown eyes brown age 23 height 5 4 body type slim cup size c cup ethnicity asian service type escort i am here for you settle men i am a tiny asian girl who is waiting for a gentlemen...”", "Some interesting patterns in the samples with high predicted likelihood (here we only showed three) include: mentioning of multiple names or INLINEFORM0 providers in a single ad; possibly intentional typos and abbreviations for the sensitive words such as “tight” INLINEFORM1 “toght” and “18 year old” INLINEFORM2 “18y o”; keywords that indicate traveling of the providers such as “till fri”, “look who s back”, and “visiting”; keywords that hint on the providers potentially being underage such as “18y o”, “college girl”, and “tiny”; and switching between third person and first person narratives." ], [ "The fight against human traffickers is adversarial and dynamic. Traffickers often avoid using explicit keywords when advertising victims, but instead use acronyms, intentional typos, and emojis BIBREF9 . Law enforcement maintains a lexicon of trafficking flags mapping certain emojis to their potential true meanings (e.g., the cherry emoji can indicate an underaged victim), but compiling such a lexicon manually is expensive, requires frequent updating, and relies on domain expertise that is hard to obtain (e.g., insider information from traffickers or their victims). To make matters worse, traffickers change their dictionaries over time and regularly switch to new emojis to replace certain keywords BIBREF9 . In such a dynamic and adversarial environment, the need for a data-driven approach in updating the existing lexicon is evident.", "As mentioned in Section SECREF5 , training a skip-gram model on a text corpus can map words (including emojis) used in similar contexts to similar numeric vectors. Besides using the vectors learned from the raw escort ads to train ORNN, we can directly visualize the vectors for the emojis to help identify their relationships, by mapping the vectors to a 2-dimensional space using t-SNE BIBREF10 (Figure FIGREF24 ).", "We can first empirically assess the quality of the emoji map by noting that similar emojis do seem clustered together: the smileys near the coordinate (2, 3), the flowers near (-6, -1), the heart shapes near (-8, 1), the phones near (-2, 4) and so on. 
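A sketch of how such an emoji map can be produced (our illustration, not the authors' pipeline): take the vectors of the emoji tokens from the skip-gram model, project them to two dimensions with t-SNE, and plot them. The emoji-detection rule below is a crude stand-in, and `model` is assumed to be the gensim model from the earlier sketch, trained on the full ad corpus.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# `model` is assumed to be a gensim Word2Vec trained on the ad corpus (see the earlier sketch).
emojis = [tok for tok in model.wv.index_to_key if not tok.isascii()]   # rough emoji filter
vectors = model.wv[emojis]                                             # one vector per emoji

perplexity = min(30, max(1, len(emojis) - 1))   # t-SNE requires perplexity < number of points
coords = TSNE(n_components=2, perplexity=perplexity, random_state=0).fit_transform(vectors)

plt.figure(figsize=(6, 6))
plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), emoji in zip(coords, emojis):
    plt.annotate(emoji, (x, y))                 # nearby emojis were used in similar ad contexts
plt.title("t-SNE map of emoji vectors")
plt.show()
```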
It is worth emphasizing that the skip-gram model learns the vectors of these emojis based on their contexts in escort ads and not their visual representations, so the fact that the visually similar emojis are close to one another in the map suggests that the vectors have been learned as desired.", "The emoji map can assist anti-trafficking experts in expanding the existing lexicon of trafficking flags. For example, according to the lexicon we obtained from Global Emancipation Network, the cherry emoji and the lollipop emoji are both flags for underaged victims. Near (-3, -4) in the map, right next to these two emojis are the porcelain dolls emoji, the grapes emoji, the strawberry emoji, the candy emoji, the ice cream emojis, and maybe the 18-slash emoji, indicating that they are all used in similar contexts and perhaps should all be flags for underaged victims in the updated lexicon.", "If we re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones, assisting anti-trafficking experts in expanding the lexicon of trafficking flags. This approach also works for acronyms and deliberate typos." ], [ "Human trafficking is a form of modern day slavery that victimizes millions of people. It has become the norm for sex traffickers to use escort websites to openly advertise their victims. We designed an ordinal regression neural network (ORNN) to predict the likelihood that an escort ad comes from a trafficker, which can drastically narrow down the set of possible leads for law enforcement. Our ORNN achieved the state-of-the-art performance on Trafficking-10K BIBREF9 , outperforming all baseline ordinal regression models as well as improving the classification accuracy over the Human Trafficking Deep Network BIBREF9 . We also conducted an emoji analysis and showed how to use word embeddings learned from raw text data to help expand the lexicon of trafficking flags.", "Since our experiments, there have been considerable advancements in language representation models, such as BERT BIBREF30 . The new language representation models can be combined with our ordinal regression layer, replacing the skip-gram model and GF-RNN, to potentially further improve our results. However, our contributions of improving the cost function for ordinal regression neural networks, qualitatively analyzing patterns in the predicted samples, and expanding the trafficking lexicon through a data-driven approach are not dependent on a particular choice of language representation model.", "As for future work in trafficking detection, we can design multi-modal ordinal regression networks that utilize both image and text data. But given the time and resources required to label escort ads, we may explore more unsupervised learning or transfer learning algorithms, such as using object detection BIBREF31 and matching algorithms to match hotel rooms in the images." ], [ "We thank Cara Jones and Marinus Analytics LLC for sharing the Trafficking-10K dataset. We thank Praveen Bodigutla for his suggestions on Natural Language Processing literature." 
], [ "Word Embeddings: pretraining model type: Skip-gram; speedup method: negative sampling; number of negative samples: 100; noise distribution: unigram distribution raised to 3/4rd; batch size: 16; window size: 5; minimum word count: 5; number of epochs: 50; embedding size: 128; pretraining learning rate: 0.2; fine-tuning learning rate scale: 1.0.", "GF-RNN: hidden size: 128; dropout: 0.2; number of layers: 3; gradient clipping norm: 0.25; L2 penalty: 0.00001; learning rate decay factor: 2.0; learning rate decay patience: 3; early stop patience: 9; batch size: 200; batch normalization: true; residual connection: true; output layer type: mean-pooling; minimum word count: 5; maximum input length: 120.", "Multi-labeled logistic regression layer: task weight scheme: uniform; conflict penalty: 0.5." ], [ "The fight against human trafficking is adversarial, hence the access to the source materials in anti-trafficking research is typically not available to the general public by choice, but granted to researchers and law enforcement individually upon request.", "Source code:", "https://gitlab.com/BlazingBlade/TrafficKill", "Trafficking-10k: Contact", "cara@marinusanalytics.com", "Trafficking lexicon: Contact", "sherrie@globalemancipation.ngo" ] ] }
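The hyperparameters above map directly onto a compact text encoder. The sketch below ties them together, using a standard stacked LSTM with mean pooling as a stand-in for the gated-feedback RNN (which standard PyTorch does not provide as a built-in module); batch normalization and the residual connections listed above are omitted for brevity, so this approximates rather than reproduces the architecture.

```python
import torch
import torch.nn as nn

class AdEncoder(nn.Module):
    """Token ids -> 128-d embeddings -> 3-layer recurrent encoder -> mean-pooled 128-d feature.
    A stacked LSTM stands in for the gated-feedback RNN used in the paper."""
    def __init__(self, vocab_size: int, embed_dim: int = 128,
                 hidden_size: int = 128, num_layers: int = 3, dropout: float = 0.2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.LSTM(embed_dim, hidden_size, num_layers=num_layers,
                           dropout=dropout, batch_first=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        hidden_states, _ = self.rnn(self.embedding(token_ids))   # (batch, seq_len, hidden)
        return hidden_states.mean(dim=1)                         # mean pooling over time

encoder = AdEncoder(vocab_size=20_000)                           # vocabulary size is illustrative
ordinal_head = nn.Sequential(nn.Linear(128, 6), nn.Sigmoid())    # 7 ranks -> 6 sigmoid units

token_ids = torch.randint(1, 20_000, (4, 120))                   # maximum input length 120
probs = ordinal_head(encoder(token_ids))                         # P(rank > k) for k = 1..6
```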
{ "question": [ "By how much do they outperform previous state-of-the-art models?", "Do they use pretrained word embeddings?", "How is the lexicon of trafficking flags expanded?" ], "question_id": [ "2d4d0735c50749aa8087d1502ab7499faa2f0dd8", "43761478c26ad65bec4f0fd511ec3181a100681c", "01866fe392d9196dda1d0b472290edbd48a99f66" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Proposed ORNN has 0.769, 1.238, 0.818, 0.772 compared to 0.778, 1.244, 0.813, 0.781 of best state of the art result on Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.)", "evidence": [ "All models are trained and evaluated using the same (w.r.t. data shuffle and split) 10-fold cross-validation (CV) on Trafficking-10k, except for HTDN, whose result is read from the original paper BIBREF9 . During each train-test split, INLINEFORM0 of the training set is further reserved as the validation set for tuning hyperparameters such as L2-penalty in IT, AT and LAD, and learning rate in ORNN. So the overall train-validation-test ratio is 70%-20%-10%. We report the mean metrics from the CV in Table TABREF14 . As previous research has pointed out that there is no unbiased estimator of the variance of CV BIBREF29 , we report the naive standard error treating metrics across CV as independent.", "We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd best Wt. Acc. among all models. Its Wt. Acc. is a substantial improvement over HTDN despite the fact that the latter use both text and image data. It is important to note that HTDN is trained using binary labels, whereas the other models are trained using ordinal labels and then have their ordinal predictions converted to binary predictions. This is most likely the reason that even the baseline models except for LAD can yield better Wt. Acc. than HTDN, confirming our earlier claim that polarizing the ordinal labels during training may lead to information loss.", "FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted.", "FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) 
and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted." ], "highlighted_evidence": [ "We report the mean metrics from the CV in Table TABREF14 .", "We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd best Wt. Acc. among all models.", "FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted.", "FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted." ] } ], "annotation_id": [ "1384b1e2ddc8d8417896cb3664c4586037474138" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Trafficking detection: There have been several software products designed to aid anti-trafficking efforts. Examples include Memex which focuses on search functionalities in the dark web; Spotlight which flags suspicious ads and links images appearing in multiple ads; Traffic Jam which seeks to identify patterns that connect multiple ads to the same trafficking organization; and TraffickCam which aims to construct a crowd-sourced database of hotel room images to geo-locate victims. These research efforts have largely been isolated, and few research articles on machine learning for trafficking detection have been published. Closest to our work is the Human Trafficking Deep Network (HTDN) BIBREF9 . HTDN has three main components: a language network that uses pretrained word embeddings and a long short-term memory network (LSTM) to process text input; a vision network that uses a convolutional network to process image input; and another convolutional network to combine the output of the previous two networks and produce a binary classification. Compared to the language network in HTDN, our model replaces LSTM with a gated-feedback recurrent neural network, adopts certain regularizations, and uses an ordinal regression layer on top. It significantly improves HTDN's benchmark despite only using text input. As in the work of E. Tong et al. 
( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads, however, we go further by analyzing the emojis' embeddings and thereby expand the trafficking lexicon." ], "highlighted_evidence": [ "As in the work of E. Tong et al. ( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads, however, we go further by analyzing the emojis' embeddings and thereby expand the trafficking lexicon." ] } ], "annotation_id": [ "7a121e16f4f5def4e5700dfc4d6f588f03ac00a1" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones" ], "yes_no": null, "free_form_answer": "", "evidence": [ "If we re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones, assisting anti-trafficking experts in expanding the lexicon of trafficking flags. This approach also works for acronyms and deliberate typos." ], "highlighted_evidence": [ "If we re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones, assisting anti-trafficking experts in expanding the lexicon of trafficking flags. This approach also works for acronyms and deliberate typos." ] } ], "annotation_id": [ "26f9aea7a6585b16f09cf6f41dfbf0a3f9f8db81" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: Overview of the ordinal regression neural network for text input. H represents a hidden state in a gated-feedback recurrent neural network.", "Figure 2: Ordinal regression layer with order penalty.", "Table 1: Description and distribution of labels in Trafficking-10K.", "Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted.", "Table 3: Ablation test. Except for models everything is the same as Table 2.", "Figure 3: Emoji map produced by applying t-SNE to the emojis’ vectors learned from escort ads using skip-gram model. For visual clarity, only the emojis that appeared most frequently in the escort ads we scraped are shown out of the total 968 emojis that appeared." ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "6-Table1-1.png", "7-Table2-1.png", "7-Table3-1.png", "8-Figure3-1.png" ] }
1612.05310
Modeling Trolling in Social Media Conversations
Social media websites, electronic newspapers and Internet forums allow visitors to leave comments for others to read and interact with. This exchange is not free from participants with malicious intentions, who troll others by posting messages that are intended to be provocative, offensive, or menacing. With the goal of facilitating the computational modeling of trolling, we propose a trolling categorization that is novel in the sense that it allows comment-based analysis from both the trolls' and the responders' perspectives, characterizing these two perspectives using four aspects, namely, the troll's intention and his intention disclosure, as well as the responder's interpretation of the troll's intention and her response strategy. Using this categorization, we annotate and release a dataset containing excerpts of Reddit conversations involving suspected trolls and their interactions with other users. Finally, we identify the difficult-to-classify cases in our corpus and suggest potential solutions for them.
{ "section_name": [ "Introduction", "Related Work", "Trolling Categorization", "Conversation Excerpts", "Corpus and Annotation", "Trolling Attempt Prediction", "Feature Sets", "Results", "Error Analysis", "Conclusion and Future Work" ], "paragraphs": [ [ "In contrast to traditional content distribution channels like television, radio and newspapers, the Internet opened the door for direct interaction between the content creator and their audience. Young people are now gaining more frequent access to online, networked media. Although most of the time their Internet use is harmless, there are some risks associated with these online activities, such as the use of social networking sites (e.g., Twitter, Facebook, Reddit). The anonymity and freedom provided by social networks make their users vulnerable to threatening situations on the Web, such as trolling.", "Trolling is “the activity of posting messages via a communications network that are intended to be provocative, offensive or menacing” BIBREF0 . People who post such comments are known as trolls. According to hardaker2010trolling, a troll's “real intention(s) is/are to cause disruption and/or trigger or exacerbate conflict for the purpose of their own amusement”. Worse still, the troll's comments may have a negative psychological impact on his target/victim and possibly others who participated in the same conversation. It is therefore imperative to identify such comments and perhaps even terminate the conversation before it evolves into something psychologically disruptive for the participants. Monitoring conversations is a labor-intensive task: it can potentially place a severe burden on the moderators, and it may not be an effective solution when traffic is heavy. This calls for the development of automatic methods for identifying malicious comments, which we will refer to as trolling attempts in this paper.", "In fact, there have recently been some attempts to automatically identify comments containing cyberbullying (e.g., van2015detection), which corresponds to the most severe cases of trolling BIBREF0 . However, we believe that it is important not only to identify trolling attempts, but also comments that could have a negative psychological impact on their recipients. As an example, consider the situation where a commenter posts a comment with the goal of amusing others. However, it is conceivable that not everybody would be aware of these playful intentions, and these people may disagree with or dislike the mocking comments and take them as inappropriate, prompting a negative reaction or psychological impact on themselves.", "In light of this discussion, we believe that there is a need to identify not only the trolling attempts, but also comments that could have a negative psychological impact on their recipients. To this end, we seek to achieve the following goals in this paper. First, we propose a comprehensive categorization of trolling that allows us to model not only the troll's intention given his trolling attempt, but also the recipients' perception of the troll's intention and subsequently their reaction to the trolling attempt. This categorization gives rise to very interesting problems in pragmatics that involve the computational modeling of intentions, perceived intentions, and reactions to perceived intentions. Second, we create a new annotated resource for computational modeling of trolling.
Each instance in this resource corresponds to a suspected trolling attempt taken from a Reddit conversation, its surrounding context, and its immediate responses, and will be manually coded with information such as the troll's intention and the recipients' reactions using our proposed categorization of trolling. Finally, we identify the instances that are difficult to classify with the help of a classifier trained with features taken from the state of the art, and subsequently present an analysis of these instances.", "To our knowledge, our annotated resource is the first one of its sort that allows computational modeling on both the troll's side and the recipients' side. By making it publicly available, we hope to stimulate further research on this task. We believe that it will be valuable to any NLP researcher who is interested in the computational modeling of trolling." ], [ "In this section, we discuss related work in the areas of trolling, bullying, abusive language detection and politeness, as they intersect in their scope and at least partially address the problem presented in this work.", "In the realm of psychology, bishop2013effect and bishop2014representations provide a detailed description of a troll's personality, motivations, effects on the community that trolls interfere in, and the criminal and psychological aspects of trolls. Their main focus is flaming (trolls), and hostile and aggressive interactions between users BIBREF1.", "On the computational side, mihaylov2015finding address the problem of identifying manipulation trolls in news community forums. Not only do they focus solely on troll identification, but the major difference with this work is that all their predictions are based on non-linguistic information such as number of votes, dates, number of comments and so on. In a network-related framework, kumar2014accurately and guha2004propagation present a methodology to identify malicious individuals in a network based solely on the network's properties rather than on the textual content of comments. cambria2010not propose a method that involves NLP components, but fail to provide an evaluation of their system.", "There is extensive work on detecting offensive and abusive language in social media BIBREF2 and BIBREF3. There are two clear differences between their work and ours. One is that trolling is concerned not only with abusive language but also with a much larger range of language styles, and it addresses the intentions and interpretations of the commenters, which goes beyond the linguistic dimension. The other is that we are additionally interested in the reactions to trolling attempts, real or perceived, because we argue that this is a phenomenon that occurs in pairs through the interaction of at least two individuals, which is different from abusive language detection. Also, xu2012learning, xu2012fast and xu2013examination address bullying traces. Bullying traces are self-reported events of individuals describing being part of bullying events, but we believe that the real impact of computational trolling research is not on analyzing retrospective incidents, but on analyzing real-time conversations. chen2012detecting use lexical and semantic features to determine sentence offensiveness levels to identify cyberbullying, offensive or abusive comments on Youtube. On Youtube as well, dinakar2012common identified sensitive topics for cyberbullying. dadvar2014experts used expert systems to classify posts as bullying or non-bullying.
van2015detection predict fine-grained categories for cyberbullying, distinguishing between insults and threats and identified user roles in the exchanges. Finally, hardaker2010trolling argues that trolling cannot be studied using established politeness research categories." ], [ "In this section, we describe our proposal of a comprehensive trolling categorization. While there have been attempts in the realm of psychology to provide a working definition of trolling (e.g., hardaker2010trolling, bishop2014representations), their focus is mostly on modeling the troll's behavior. For instance, bishop2014representations constructed a “trolling magnitude” scale focused on the severity of abuse and misuse of internet mediated communications. bishop2013effect also categorized trolls based on psychological characteristics focused on pathologies and possible criminal behaviors. In contrast, our trolling categorization seeks to model not only the troll's behavior but also the impact on the recipients, as described below.", "Since one of our goals is to identify trolling events, our datasets will be composed of suspected trolling attempts (i.e., comments that are suspected to be trolling attempts). In other words, some of these suspected trolling attempts will be real trolling attempts, and some of them won't. So, if a suspected trolling attempt is in fact not a trolling attempt, then its author will not be a troll.", "To cover both the troll and the recipients, we define a (suspected trolling attempt, responses) pair as the basic unit that we consider for the study of trolling, where “responses” are all the direct responses to the suspected trolling attempt. We characterize a (suspected trolling attempt, responses) pair using four aspects. Two aspects describe the trolling attempt: (1) Intention (I) (what is its author's purpose?), and (2) Intention Disclosure (D) (is its author trying to deceive its readers by hiding his real (i.e., malicious) intentions?). The remaining two aspects are defined on each of the (direct) responses to the trolling attempt: (1) Intention Interpretation (R) (what is the responder's perception of the troll's intention?), and (2) the Response strategy (B) (what is the responder's reaction?). Two points deserve mention. First, R can be different from I due to misunderstanding and the fact that the troll may be trying to hide his intention. Second, B is influenced by R, and the responder's comment can itself be a trolling attempt. We believe that these four aspects constitute interesting, under-studied pragmatics tasks for NLP researchers.", "The possible values of each aspect are described in Table TABREF1 . As noted before, since these are suspected trolling attempts, if an attempt turns out not to be a trolling attempt, its author will not be a troll.", "For a given (suspected trolling attempt, responses) pair, not all of the 189 (= INLINEFORM0 ) combinations of values of the four aspects are possible. There are logical constraints that limit plausible combinations: a) Trolling or Playing Intentions (I) must have Hidden or Exposed Intention Disclosure (D), b) Normal intentions (I) can only have None Intention disclosure (D) and c) Trolling or Playing interpretation (R) cannot have Normal response strategy (B)." ], [ "To enable the reader to better understand this categorization, we present two example excerpts taken from the original (Reddit) conversations. The first comment on each excerpt, generated by author C0, is given as a minimal piece of context. 
The second comment, written by the author C1 in italics, is the suspected trolling attempt. The rest of the comments comprise all direct responses to the suspected trolling comment.", "Example 1.", "", "Yeah, cause that's what usually happens. Also, quit following me around, I don't want a boyfriend.", "I wasn't aware you were the same person.... I've replied to a number of stupid people recently, my bad", "Trollname trollpost brotroll", "", "In this example, C1 is teasing C0, expecting to provoke or irritate him, and he is clearly disclosing his trolling intentions. In C0's response, we see that he clearly believes that C1 is trolling, since he is directly calling him a “brotroll”, and his response strategy is to frustrate the trolling attempt by denouncing C1's trolling intentions (“trollpost”) and true identity (“brotroll”).", "Example 2.", "", "Please post a video of your dog doing this. The way I'm imagining this is adorable.", "I hope the dog gets run over by a truck on the way out of the childrens playground.", "If you're going to troll, can you at least try to be a bit more convincing?", "Haha I hope the cancer kills you.", "", "In this example, we observe that C0's first comment is making a polite request (“Please”). In return, C1's answer is a mean-spirited comment whose intention is to disrupt and possibly hurt C0. Also, C1's comment is not subtle at all, so his intention is clearly disclosed. As for C2, she is clearly acknowledging C1's trolling intention, and her response strategy is a criticism, which we categorize as frustrate. Now, in C0's second comment, we observe that his interpretation is clear: he believes that C1 is trolling, and the negative effect is so tangible that his response strategy is to troll back or counter-troll by replying with a comparably mean comment." ], [ "Reddit is a popular website that allows registered users (without identity verification) to participate in fora grouped by topic or interest. Participation consists of posting stories that can be seen by other users, voting on stories and comments, and commenting in a story's comment section, in the form of a forum. The forums are arranged in the form of a tree, allowing nested conversations, where the replies to a comment are its direct responses. We collected all comments in the stories' conversations on Reddit that were posted in August 2015. Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. To do so, we used Lucene to create an inverted index from the comments and queried it for comments containing the word “troll” with an edit distance of 1 in order to include close variations of this word, hypothesizing that such comments would be reasonable candidates for real trolling attempts. We did observe, however, that sometimes people use the word troll to point out that another user is trolling. Other times, people use the term to express their frustration about a particular user, but there is no trolling attempt. Yet other times people simply discuss trolling and trolls without actually observing one. Nonetheless, we found that this search produced a dataset in which 44.3% of the comments are real trolling attempts.
Moreover, it is possible for commenters to believe that they are witnessing a trolling attempt and respond accordingly even where there is none due to misunderstanding. Therefore, the inclusion of comments that do not involve trolling would allow us to learn what triggers a user's interpretation of trolling when it is not present and what kind of response strategies are used.", "For each retrieved comment, we reconstructed the original conversation tree it appears in, from the original post (i.e., the root) to the leaves, so that its parent and children can be recovered. We consider a comment in our dataset a suspected trolling attempt if at least one of its immediate children contains the word troll. For annotation purposes, we created snippets of conversations exactly like the ones shown in Example 1 and Example 2, each of which consists of the parent of the suspected trolling attempt, the suspected trolling attempt, and all of the direct responses to the suspected trolling attempt.", "We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. We recognize that this limited amount of information is not always sufficient to recover the four aspects we are interested in, so we give the annotators the option to discard instances for which they couldn't determine the labels confidently. The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF19 in the column “Size”.", "Due to the subjective nature of the task we did not expect perfect agreement. However, on the 100 doubly-annotated snippets, we obtained substantial inter-annotator agreement according to Cohen's kappa statistic BIBREF4 for each of the four aspects: Intention: 0.788, Intention Disclosure: 0.780, Interpretation: 0.797 and Response 0.776. In the end, the annotators discussed their discrepancies and managed to resolve all of them." ], [ "In this section, we make predictions on the four aspects of our task, with the primary goal of identifying the errors our classifier makes (i.e., the hard-to-classify instances) and hence the directions for future work, and the secondary goal of estimating the state of the art on this new task using only shallow (i.e., lexical and wordlist-based) features." ], [ "For prediction we define two sets of features: (1) a basic feature set taken from Van Hee's van2015detection paper on cyberbullying prediction, and (2) an extended feature set that we designed using primarily information extracted from wordlists and dictionaries.", "N-gram features. We encode each lemmatized and unlemmatized unigram and bigram collected from the training comments as a binary feature. In a similar manner, we include the unigram and bigram along with their POS tag as in BIBREF5 . To extract these features we used Stanford CoreNLP BIBREF6 .", "Sentiment Polarity. The overall comment's emotion could be useful to identify the response and intention in a trolling attempt. So, we apply the Vader Sentiment Polarity Analyzer BIBREF7 and include four features, one per each measurement given by the analyzer: positive, neutral, negative and a composite metric, each as a real number value.", "Emoticons. Reddit's comments make extensive use of emoticons. 
We argue that some emoticons are specifically used in trolling attempts to express a variety of emotions, which we hypothesize would be useful to identify a comment's intention, interpretation and response. For that reason, we use the emoticon dictionary developed by hogenboom2015exploiting. We create a binary feature whose value is one if at least one of these emoticons is found in the comment.", "Harmful Vocabulary. In their research on bullying, nitta2013detecting identified a small set of words that are highly offensive. We create a binary feature whose value is one if the comment contains at least one of these words.", "Emotions Synsets. As in xu2012fast, we extracted all lemmas associated with each WordNet BIBREF8 synset involving seven emotions (anger, embarrassment, empathy, fear, pride, relief and sadness) as well as the synonyms of these emotion words extracted from the English merriam2004merriam dictionary. We create a binary feature whose value is one if any of these synsets or synonyms appears in the comment.", "Swearing Vocabulary. We manually collected 1061 swear words and short phrases from the internet, blogs, forums and smaller repositories. The informal nature of this dictionary resembles the type of language used by flaming trolls and agitated responses, so we encode a binary feature whose value is one when at least one such swear word is found in the comment.", "Swearing Vocabulary in Username. An interesting feature that is suggestive of the intention of a comment is the author's username. We found that abusive and annoying commenters often had cursing words in their usernames. So, we create a binary feature whose value is one if a swear word from the swearing vocabulary is found in the author's username.", "Framenet. We apply the SEMAFOR parser BIBREF9 to each sentence in every comment, and construct three different types of binary features: every frame name that is present in the sentence, the frame name and the target word associated with it, and the argument name along with the token or lexical unit in the sentence associated with it. We believe that some frames are especially interesting from the trolling perspective. We hypothesize that these features are useful for identifying trolling attempts in which semantic and not just syntactic information is required.", "Politeness cues. danescu2013computational identified cues that signal polite and impolite interactions among groups of people collaborating online. Based on our observations of trolling examples, it is clear that flaming, hostile and aggressive interactions between users BIBREF1 and engaged or emotional responses would use impolite cues. In contrast, neutralizing and frustrating responses to the troll avoid falling into confrontation, and their vocabulary tends to be more polite. So we create a binary feature whose value is one if at least one cue appears in the comment.", "GloVe Embeddings. All the aforementioned features constitute a high-dimensional bag of words (BOW). Word embeddings were created to overcome certain problems with the BOW representation, like sparsity, and to capture correlations between semantically similar words. For this reason, and following nobata2016abusive, we create a distributed representation of the comments by averaging the word vectors of the lowercased tokens in the comment that are found in the GloVe vectors pre-trained on the Twitter corpus BIBREF10. The resulting comment vector representation is a 200-dimensional array that is concatenated with the existing BOW."
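A minimal sketch of how such a comment representation could be assembled (this is not the authors' code; the "glove" lookup table of 200-dimensional Twitter vectors and the pre-computed "bow_vector" are assumed to be built elsewhere):

```python
# Illustrative sketch only: average the pre-trained 200-d GloVe Twitter vectors
# of the lowercased tokens of a comment and concatenate with the BOW features.
import numpy as np

def comment_embedding(tokens, glove, dim=200):
    """Mean of the GloVe vectors of all lowercased tokens present in `glove`."""
    vectors = [glove[t.lower()] for t in tokens if t.lower() in glove]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

def comment_features(tokens, bow_vector, glove, dim=200):
    """Dense averaged embedding concatenated with the sparse bag-of-words vector."""
    return np.concatenate([bow_vector, comment_embedding(tokens, glove, dim)])
```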
], [ "Using the features described in the previous subsection, we train four independent classifiers using logistic regression, one per each of the four prediction tasks. All the results are obtained using 5-fold cross-validation experiments. In each fold experiment, we use three folds for training, one fold for development, and one fold for testing. All learning parameters are set to their default values except for the regularization parameter, which we tuned on the development set. In Table TABREF19 the leftmost results column reports F1 score based on majority class prediction. The next section (Single Feature Group) reports F1 scores obtained by using one feature group at a time. The goal of the later set of experiments is to gain insights about feature predictive effectiveness. The right side section (All features) shows the system performance measured using recall, precision, and F-1 as shown when all features described in section SECREF13 are used.", "The majority class prediction experiment is simplest baseline to which we can can compare the rest of the experiments. In order to illustrate the prediction power of each feature group independent from all others, we perform the “Single Feature Group”, experiments. As we can observe in Table TABREF19 there are groups of features that independently are not better than the majority baseline, for example, the emoticons, politeness cues and polarity are not better disclosure predictors than the majority base. Also, we observe that only n-grams and GloVe features are the only group of features that contribute to more than a class type for the different tasks. Now, the “All Features” experiment shows how the interaction between feature sets perform than any of the other features groups in isolation. The accuracy metric for each trolling task is meant to provide an overall performance for all the classes within a particular task, and allow comparison between different experiments. In particular, we observe that GloVe vectors are the most powerful feature set, accuracy-wise, even better than the experiments with all features for all tasks except interpretation.", "The overall Total Accuracy score reported in table TABREF19 using the entire feature set is 549. This result is what makes this dataset interesting: there is still lots of room for research on this task. Again, the primary goal of this experiment is to help identify the difficult-to-classify instances for analysis in the next section." ], [ "In order to provide directions for future work, we analyze the errors made by the classifier trained on the extended features on the four prediction tasks.", "Errors on Intention (I) prediction: The lack of background is a major problem when identifying trolling comments. For example, “your comments fit well in Stormfront” seems inoffensive on the surface. However, people who know that Stormfront is a white supremacist website will realize that the author of this comment had an annoying or malicious intention. But our system had no knowledge about it and simply predicted it as non-trolling. These kind of errors reduces recall on the prediction of trolling comments. A solution would be to include additional knowledge from anthologies along with a sentiment or polarity. One could modify NELL BIBREF12 to broaden the understanding of entities in the comments.", "Non-cursing aggressions and insults This is a challenging problem, since the majority of abusive and insulting comments rely on profanity and swearing. 
The problem arises with subtler aggressions and insults that are equally or even more annoying, such as “Troll? How cute.” and “settle down drama queen”. The classifier has a harder time determining that these are indeed aggressions or insults. This error also decreases the recall of trolling intention. A solution would be to exploit all the comments made by the suspected troll in the entire conversation in order to increase the chances of finding curse words or other cues that lead the classifier to correctly classify the comment as trolling.", "Another source of error is the presence of controversial topic words such as “black”, “feminism”, “killing”, “racism”, “brown”, etc. that are commonly used by trolls. The classifier is too quick to classify a comment as trolling in the presence of these words, but in many cases such comments are not trolling attempts. In order to ameliorate this problem, one could create ad-hoc word embeddings by training GloVe or another type of distributed representation on a large corpus for the specific social media platform under consideration. From these vectors one could expect a better representation of controversial topics and their interactions with other words, which might help to reduce these errors.", "Errors on Disclosure (D) prediction: A major source of error that affects disclosure is the shallow meaning representation obtained from the BOW model, even when augmented with the distributional features given by the GloVe vectors. For example, the suspected troll's comment “how to deal with refugees? How about a bullet to the head” is clearly mean-spirited and is an example of disclosed trolling. However, to reach that conclusion the reader needs to infer the meaning of “bullet to the head” and that this action is directed at a vulnerable group like migrants or refugees. This problem produces low recall for the disclosed prediction task. A solution for this problem may be the use of deeper semantics, where we represent the comments and sentences in their logical form and infer from them the intended meaning.", "Errors on Interpretation (R) prediction: It is common practice for many users to directly ask the suspected troll if he/she is trolling or not. There are several variations of this question, such as “Are you a troll?” and “not sure if trolling or not”. While the presence of questions like these seems to give us a hint about the responder's interpretation, we cannot be sure of his interpretation without also considering the context. One way to improve interpretation is to exploit the response strategy, but the response strategy in our model is predicted independently of interpretation. So one solution could be similar to the one proposed above for the disclosure task problem: jointly learning classifiers that predict both variables simultaneously. Another possibility is to use the temporal sequence of response comments and make use of earlier response interpretations as input features for later comments. This could be useful since commenters seem to influence each other as they read through the conversation.", "Errors on Response Strategy (B) prediction: In some cases there is a blurry line between “Frustrate” and “Neutralize”. The key distinction between them is that there exists some criticism in the Frustrate responses towards the suspected troll's comment, while “Neutralizing” comments acknowledge that the suspected troll has trolling intentions but give no importance to them.
For example, the response comments “oh, you are a troll” and “you are just a lame troll” illustrate this subtle difference. The first is a case of “neutralize”, while the second is indeed criticizing the suspected troll's comment and is therefore a “frustrate” response strategy. This kind of error affects both precision and recall for these two classes. A possible solution could be to train a specialized classifier to disambiguate between “frustrate” and “neutralize” only.", "Another challenging problem is the distinction between the classes “Troll” and “Engage”. This happens when the direct responder is so inflamed by the suspected comment that his own comment becomes a trolling attempt. A useful indicator for distinguishing these cases is the presence of insults, and to detect them we look for swear words, but as we noted before, there is no guarantee that swear words are used for insulting. This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. It is typical in a troll-engaged comment scheme to observe longer than usual exchanges between two users, and the comments evolve into very agitated remarks. One may then use this information to disambiguate between the two classes." ], [ "We presented a new view on the computational modeling of trolling in Internet fora, in which we proposed a comprehensive categorization of trolling attempts that for the first time considers trolling from not only the troll's perspective but also the responders' perspectives. This categorization gives rise to four interesting pragmatics tasks that involve modeling intentions, perceived intentions, and reactions. Perhaps most importantly, we create an annotated dataset that we believe is the first of its sort. We intend to make it publicly available with the hope of stimulating research on trolling." ] ] }
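As referenced in the Results section above, the following is a minimal sketch of the per-aspect classification setup: one logistic-regression classifier per trolling aspect, evaluated with 5-fold cross-validation using three training folds, one development fold for tuning the regularization parameter, and one test fold. This is not the authors' code; the C grid and the macro-averaged F1 used for tuning are illustrative assumptions.

```python
# Illustrative sketch: one logistic-regression classifier per trolling aspect,
# with 5-fold cross-validation where three folds are used for training, one for
# development (tuning only the regularization parameter C) and one for testing.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def evaluate_aspect(X, y, n_folds=5, c_grid=(0.01, 0.1, 1.0, 10.0), seed=0):
    y = np.asarray(y)
    folds = np.array_split(np.random.RandomState(seed).permutation(len(y)), n_folds)
    test_scores = []
    for i in range(n_folds):
        test_idx, dev_idx = folds[i], folds[(i + 1) % n_folds]
        train_idx = np.concatenate(
            [folds[j] for j in range(n_folds) if j not in (i, (i + 1) % n_folds)])
        # Tune C on the development fold; keep every other parameter at its default.
        best_c = max(c_grid, key=lambda c: f1_score(
            y[dev_idx],
            LogisticRegression(C=c, max_iter=1000)
            .fit(X[train_idx], y[train_idx]).predict(X[dev_idx]),
            average="macro"))
        clf = LogisticRegression(C=best_c, max_iter=1000).fit(X[train_idx], y[train_idx])
        test_scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))
    return float(np.mean(test_scores))

# One independent classifier per aspect (Intention, Intention Disclosure,
# Interpretation, Response strategy), e.g.:
#   for aspect, labels in labels_by_aspect.items():
#       print(aspect, evaluate_aspect(X, labels))
```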
{ "question": [ "Do they experiment with the dataset?", "Do they use a crowdsourcing platform for annotation?", "What is an example of a difficult-to-classify case?", "What potential solutions are suggested?", "What is the size of the dataset?", "What Reddit communities do they look at?" ], "question_id": [ "394cf73c0aac8ccb45ce1b133f4e765e8e175403", "2c4003f25e8d95a3768204f52a7a5f5e17cb2102", "65e32f73357bb26a29a58596e1ac314f7e9c6c91", "46f175e1322d648ab2c0258a9609fe6f43d3b44e", "7cc22fd8c9d0e1ce5e86d0cbe90bf3a177f22a68", "3fa638e6167e1c7a931c8ee5c0e2e397ec1b6cda" ], "nlp_background": [ "", "", "", "", "", "" ], "topic_background": [ "", "", "", "", "", "" ], "paper_read": [ "", "", "", "", "", "" ], "search_query": [ "social media", "social media", "social media", "social media", "social media", "social media" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "The overall Total Accuracy score reported in table TABREF19 using the entire feature set is 549. This result is what makes this dataset interesting: there is still lots of room for research on this task. Again, the primary goal of this experiment is to help identify the difficult-to-classify instances for analysis in the next section." ], "highlighted_evidence": [ "The overall Total Accuracy score reported in table TABREF19 using the entire feature set is 549. " ] } ], "annotation_id": [ "ea5e04a335216985caf9fe97f2ce836a48a80650" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "Reddit is popular website that allows registered users (without identity verification) to participate in fora grouped by topic or interest. Participation consists of posting stories that can be seen by other users, voting stories and comments, and comments in the story's comment section, in the form of a forum. The forums are arranged in the form of a tree, allowing nested conversations, where the replies to a comment are its direct responses. We collected all comments in the stories' conversation in Reddit that were posted in August 2015. Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. To do so, we used Lucene to create an inverted index from the comments and queried it for comments containing the word “troll” with an edit distance of 1 in order to include close variations of this word, hypothesizing that such comments would be reasonable candidates of real trolling attempts. We did observe, however, that sometimes people use the word troll to point out that another user is trolling. Other times, people use the term to express their frustration about a particular user, but there is no trolling attempt. Yet other times people simply discuss trolling and trolls without actually observing one. Nonetheless, we found that this search produced a dataset in which 44.3% of the comments are real trolling attempts. 
Moreover, it is possible for commenters to believe that they are witnessing a trolling attempt and respond accordingly even where there is none due to misunderstanding. Therefore, the inclusion of comments that do not involve trolling would allow us to learn what triggers a user's interpretation of trolling when it is not present and what kind of response strategies are used.", "We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. We recognize that this limited amount of information is not always sufficient to recover the four aspects we are interested in, so we give the annotators the option to discard instances for which they couldn't determine the labels confidently. The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF19 in the column “Size”." ], "highlighted_evidence": [ "Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. ", "We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. " ] } ], "annotation_id": [ "76357c9c4f5a08b96237b1d71756118497627f4f" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "The lack of background", "Non-cursing aggressions and insults", "the presence of controversial topic words ", " shallow meaning representation", "directly ask the suspected troll if he/she is trolling or not", "a blurry line between “Frustrate” and “Neutralize”", "distinction between the classes “Troll” and “Engage”" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In order to provide directions for future work, we analyze the errors made by the classifier trained on the extended features on the four prediction tasks.", "Errors on Intention (I) prediction: The lack of background is a major problem when identifying trolling comments. For example, “your comments fit well in Stormfront” seems inoffensive on the surface. However, people who know that Stormfront is a white supremacist website will realize that the author of this comment had an annoying or malicious intention. But our system had no knowledge about it and simply predicted it as non-trolling. These kind of errors reduces recall on the prediction of trolling comments. A solution would be to include additional knowledge from anthologies along with a sentiment or polarity. One could modify NELL BIBREF12 to broaden the understanding of entities in the comments.", "Non-cursing aggressions and insults This is a challenging problem, since the majority of abusive and insulting comments rely on profanity and swearing. The problem arises with subtler aggressions and insults that are equally or even more annoying, such as “Troll? How cute.” and “settle down drama queen”. The classifier has a more difficult task of determining that these are indeed aggressions or insults. This error also decreases the recall of trolling intention. 
A solution would be to exploit all the comments made by the suspected troll in the entire conversation in order to increase the chances of finding curse words or other cues that lead the classifier to correctly classify the comment as trolling.", "Another source of error is the presence of controversial topic words such as “black”,“feminism”, “killing”, “racism”, “brown”, etc. that are commonly used by trolls. The classifier seems too confident to classify a comment as trolling in the presence of these words, but in many cases they do not. In order to ameliorate this problem, one could create ad-hoc word embeddings by training glove or other type of distributed representation on a large corpus for the specific social media platform in consideration. From these vectors one could expect a better representation of controversial topics and their interactions with other words so they might help to reduce these errors.", "Errors on Disclosure (D) prediction: A major source of error that affects disclosure is the shallow meaning representation obtained from the BOW model even when augmented with the distributional features given by the glove vectors. For example, the suspected troll's comment “how to deal with refugees? How about a bullet to the head” is clearly mean-spirited and is an example of disclosed trolling. However, to reach that conclusion the reader need to infer the meaning of “bullet to the head” and that this action is desirable for a vulnerable group like migrants or refugees. This problem produces low recall for the disclosed prediction task. A solution for this problem may be the use of deeper semantics, where we represent the comments and sentences in their logical form and infer from them the intended meaning.", "Errors on Interpretation (R) prediction: it is a common practice from many users to directly ask the suspected troll if he/she is trolling or not. There are several variations of this question, such as “Are you a troll?” and “not sure if trolling or not”. While the presence of a question like these seems to give us a hint of the responder's interpretation, we cannot be sure of his interpretation without also considering the context. One way to improve interpretation is to exploit the response strategy, but the response strategy in our model is predicted independently of interpretation. So one solution could be similar to the one proposed above for the disclosure task problem: jointly learning classifiers that predict both variables simultaneously. Another possibility is to use the temporal sequence of response comments and make use of older response interpretation as input features for later comments. This could be useful since commenters seem to influence each other as they read through the conversation.", "Errors on Response Strategy (B) prediction: In some cases there is a blurry line between “Frustrate” and “Neutralize”. The key distinction between them is that there exists some criticism in the Frustrate responses towards the suspected troll's comment, while “Neutralizing” comments acknowledge that the suspected troll has trolling intentions, but gives no importance to them. For example, response comments such as “oh, you are a troll” and “you are just a lame troll” are examples of this subtle difference. The first is a case of “neutralize” while the second is indeed criticizing the suspected troll's comment and therefore a “frustrate” response strategy. This kind of error affects both precision and recall for these two classes. 
A possible solution could be to train a specialized classifier to disambiguate between “frustrate” and “neutralize” only.", "Another challenging problem is the distinction between the classes “Troll” and “Engage”. This is true when the direct responder is intensely flared up with the suspected comment to the point that his own comment becomes a trolling attempt. A useful indicator for distinguishing these cases are the presence of insults, and to detect them we look for swear words, but as we noted before, there is no guarantee that swear words are used for insulting. This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. It is typical in a troll-engaged comment scheme to observe longer than usual exchanges between two users, and the comments evolve in very agitated remarks. One may then use this information to disambiguate between the two classes." ], "highlighted_evidence": [ "In order to provide directions for future work, we analyze the errors made by the classifier trained on the extended features on the four prediction tasks.\n\nErrors on Intention (I) prediction: The lack of background is a major problem when identifying trolling comments.", "Non-cursing aggressions and insults This is a challenging problem, since the majority of abusive and insulting comments rely on profanity and swearing. ", "Another source of error is the presence of controversial topic words such as “black”,“feminism”, “killing”, “racism”, “brown”, etc. that are commonly used by trolls.", "Errors on Disclosure (D) prediction: A major source of error that affects disclosure is the shallow meaning representation obtained from the BOW model even when augmented with the distributional features given by the glove vectors.", "Errors on Interpretation (R) prediction: it is a common practice from many users to directly ask the suspected troll if he/she is trolling or not. ", "Errors on Response Strategy (B) prediction: In some cases there is a blurry line between “Frustrate” and “Neutralize”. ", "Another challenging problem is the distinction between the classes “Troll” and “Engage”. " ] } ], "annotation_id": [ "29b2916971ecf070449e09aadfb6715f4cad53ec" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " inclusion of longer parts of the conversation" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Another challenging problem is the distinction between the classes “Troll” and “Engage”. This is true when the direct responder is intensely flared up with the suspected comment to the point that his own comment becomes a trolling attempt. A useful indicator for distinguishing these cases are the presence of insults, and to detect them we look for swear words, but as we noted before, there is no guarantee that swear words are used for insulting. This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. It is typical in a troll-engaged comment scheme to observe longer than usual exchanges between two users, and the comments evolve in very agitated remarks. One may then use this information to disambiguate between the two classes." ], "highlighted_evidence": [ "This kind of error affects the precision and recall for the “troll” and “engage” classes. 
A solution to this problem may be the inclusion of longer parts of the conversation. " ] } ], "annotation_id": [ "139f3d416ba32e78ad435ed102dc234b1c898cdd" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "1000 conversations composed of 6833 sentences and 88047 tokens" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. We recognize that this limited amount of information is not always sufficient to recover the four aspects we are interested in, so we give the annotators the option to discard instances for which they couldn't determine the labels confidently. The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF19 in the column “Size”." ], "highlighted_evidence": [ "The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. " ] } ], "annotation_id": [ "a01202588764d81374be8fb96d9c4e5a45aefdec" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "175130f8de4381c0aa9f17a799617e6d33036a28" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ] }
{ "caption": [ "Table 1: Classes for trolling aspects: Intention, Intention Disclosure, Intention Interpretation and Response Strategy. Size refers to the percentage per class, in parenthesis is the total number of instances in the dataset.", "Table 2: Experiments Results. Below the “mjr” header, we report F1 scores the the majority class prediction we report F1 scores for the four aspects of trolling: Intention, Intentions Disclosure, Interpretation, and Response strategy. Also, below the “Single Feature Group” header, we report F1 scores as before, when the feature group indicated in the column headers is the only feature group used for classifier. The column headers abbreviations stand for: Emoticons, Harmful Vocabulary, Emotion Synsets, Swearing Vocabulary, Swearing Vocabulary in Usernames, Framenet, Politeness cues, n-grams (actual n-grams and n-grams appended with their corresponding part of speech tag) and Glove embeddings in that order. Below the “All Features” header we report Recall, Precision and F1 score, respectively, when all features are use for prediction. All experiments are performed using a logistic regression classifier per task. The last column reports the class distribution in percentage per task. The last row of each trolling aspect reports accuracy (the percentage of instances correctly classified). The last row in the table reports total accuracy, the percentage of correctly classified instances considering all aspects." ], "file": [ "3-Table1-1.png", "7-Table2-1.png" ] }
1912.09713
Measuring Compositional Generalization: A Comprehensive Method on Realistic Data
State-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks. We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures. We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy. We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings.
{ "section_name": [ "Introduction", "Distribution-Based Compositionality Assessment (DBCA)", "Distribution-Based Compositionality Assessment (DBCA) ::: Principles for measuring compositionality", "The CFQ Dataset", "The CFQ Dataset ::: Automatic, rule-based generation", "The CFQ Dataset ::: Dataset details and statistics", "Compositionality Experiments for CFQ and scan", "Experimental Results and Analysis ::: Experiment Setup", "Experimental Results and Analysis ::: Results and analysis for CFQ", "Experimental Results and Analysis ::: Results and analysis for scan", "Related Work", "Conclusion and Outlook", "Example Dataset Item", "Data Quality Analysis", "Data Distribution Analysis ::: Answer frequencies", "Data Distribution Analysis ::: Impact of subsampling on the distribution of complexity levels", "Data Distribution Analysis ::: Impact of subsampling on the frequency of rules and rule combinations", "Divergence-Based Split Analysis ::: Qualitative analysis of MCD@!START@$_{1}$@!END@", "Divergence-Based Split Analysis ::: Quantitative analysis of MCD@!START@$_{1}$@!END@", "Hyperparameters", "Detailed error analysis ::: Breakdown of error types", "Detailed error analysis ::: Qualitative error analysis", "Additional experimental results on scan", "Analysis of relations between accuracy, compound divergence, and training size", "Logical Form", "Rule Format", "Rule Format ::: Grammar rule format", "Rule Format ::: Knowledge rule format", "Rule Format ::: Inference rule format", "Rule Format ::: Resolution rule format", "Generation Algorithm", "Generation Algorithm ::: Join by Logical Form", "Generation Algorithm ::: Relationship between Generation and Parsing", "Generation Algorithm ::: Selecting an appropriate sample set", "Example of a rule application DAG", "Example of a rule application DAG ::: DAG normalization", "Example of a rule application DAG ::: Concept abbreviations", "Example of a rule application DAG ::: Entity placeholders", "Example of a rule application DAG ::: Subgraphs and their weights", "Rules Index", "Rules Index ::: Grammar rules", "Rules Index ::: Inference rules", "Rules Index ::: Resolution rules", "Rules Index ::: Knowledge rules" ], "paragraphs": [ [ "Human intelligence exhibits systematic compositionality BIBREF0, the capacity to understand and produce a potentially infinite number of novel combinations of known components, i.e., to make “infinite use of finite means” BIBREF1. In the context of learning from a set of training examples, we can observe compositionality as compositional generalization, which we take to mean the ability to systematically generalize to composed test examples of a certain distribution after being exposed to the necessary components during training on a different distribution.", "Humans demonstrate this ability in many different domains, such as natural language understanding (NLU) and visual scene understanding. For example, we can learn the meaning of a new word and then apply it to other language contexts. 
As BIBREF2 put it: “Once a person learns the meaning of a new verb `dax', he or she can immediately understand the meaning of `dax twice' and `sing and dax'.” Similarly, we can learn a new object shape and then understand its compositions with previously learned colors or materials BIBREF3, BIBREF4.", "In contrast, state-of-the-art machine learning (ML) methods often fail to capture the compositional structure that is underlying the problem domain and thus fail to generalize compositionally BIBREF2, BIBREF5, BIBREF6, BIBREF7, BIBREF3. We believe that part of the reason for this shortcoming is a lack of realistic benchmarks that comprehensively measure this aspect of learning in realistic scenarios.", "As others have proposed, compositional generalization can be assessed using a train-test split based on observable properties of the examples that intuitively correlate with their underlying compositional structure. BIBREF8, for example, propose to test on different output patterns than are in the train set, while BIBREF2 propose, among others, to split examples by output length or to test on examples containing primitives that are rarely shown during training. In this paper, we formalize and generalize this intuition and make these contributions:", "We introduce distribution-based compositionality assessment (DBCA), which is a novel method to quantitatively assess the adequacy of a particular dataset split for measuring compositional generalization and to construct splits that are ideally suited for this purpose (Section SECREF2).", "We present the Compositional Freebase Questions (CFQ) , a simple yet realistic and large NLU dataset that is specifically designed to measure compositional generalization using the DBCA method, and we describe how to construct such a dataset (Section SECREF3).", "We use the DBCA method to construct a series of experiments for measuring compositionality on CFQ and scan BIBREF2 and to quantitatively compare these experiments to other compositionality experiments (Section SECREF4).", "We analyze the performance of three baseline ML architectures on these experiments and show that these architectures fail to generalize compositionally, and perhaps more surprisingly, that compound divergence between train and test sets is a good predictor of the test accuracy (Section SECREF5).", "" ], [ "", "Like other authors, we propose to measure a learner's ability to generalize compositionally by using a setup where the train and test sets come from different distributions. More specifically, we propose a setup where each example is obtained by composing primitive elements (atoms), and where these atoms are similarly represented in the train and test sets while the test set contains novel compounds, i.e., new ways of composing the atoms of the train set.", "As a simple illustrative scenario, consider the task of answering simple questions such as “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?”. In this scenario, the atoms intuitively correspond to the primitive elements that are used to compose those questions, such as the predicates “direct(ed)” and “produce(d)”, the question patterns “Who [predicate] [entity]” and “Did [entity1] [predicate] [entity2]”, and the entities “Inception”, “Christopher Nolan”, etc. 
The compounds on the other hand correspond to the combinations of these atoms that appear in the various examples: \"Who directed [entity]?\", \"Did Christopher Nolan [predicate] Inception?\", etc.", "To measure compositional generalization on such a task, one might therefore use the questions “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?” as training examples while testing on questions such as “Did Christopher Nolan direct Goldfinger?” and \"Who produced Inception?\" because the atoms are identically represented in the train and test sets while the compounds differ.", "To make this intuition more precise, we focus on datasets such as CFQ (introduced in Section SECREF3) and scan BIBREF2, where each example can be created from a formal set of rules by successively applying a number of these rules. In this case, the atoms are the individual rules, while the compounds are the subgraphs of the directed acyclic graphs (DAGs) that correspond to the rule applications. (See Sections SECREF3 and SECREF4 for more details.)", "" ], [ "We use the term compositionality experiment to mean a particular way of splitting the data into train and test sets with the goal of measuring compositional generalization. Based on the notions of atoms and compounds described above, we say that an ideal compositionality experiment should adhere to the following two principles:", "", "Similar atom distribution: All atoms present in the test set are also present in the train set, and the distribution of atoms in the train set is as similar as possible to their distribution in the test set.", "", "Different compound distribution: The distribution of compounds in the train set is as different as possible from the distribution in the test set.", "", "The second principle guarantees that the experiment is compositionally challenging in the sense that it tests the learner on compounds that are as different as possible from the compounds used during training. The first principle aims to guarantee that the experiment is exclusively measuring the effect of the difference in the way atoms are composed to form compounds (rather than some related but different property such as domain adaptation on the distribution of the atoms).", "To determine to which degree a certain experiment adheres to these principles, we use the following formalization. For a sample set $T$, we use $\\mathcal {F}_A(T)$ to denote the frequency distribution of atoms in $T$ and $\\mathcal {F}_C(T)$ for the weighted frequency distribution of compounds in $T$, which correspond to the subgraphs of the rule application DAGs. For practicality, we do not consider all subgraphs of rule application DAGs when computing the compound divergence. Instead, we first generate a large subset $\\mathbb {G}$ of subgraphs, then weight them in context of their occurrence, and keep only the ones with highest sum of weights. The purpose of the weighting is to avoid double-counting compounds that are highly correlated with some of their super-compounds. We achieve this by calculating the weight of $G \\in \\mathbb {G}$ in a sample as $w(G) = \\max _{g \\in \\text{occ}(G)} (1 - \\max _{G^{\\prime }: g \\prec g^{\\prime } \\in \\text{occ}(G^{\\prime })} P(G^{\\prime }| G))$, where $\\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\\prec $ denotes the strict subgraph relation, and $P(G^{\\prime }| G)$ is the empirical probability of $G^{\\prime }$ occurring as a supergraph of $G$ over the full sample set. 
See Appendix SECREF202 for example subgraphs and more details on the weighting.", "We measure divergence (or similarity) of the weighted distributions using the Chernoff coefficient $C_\\alpha (P \\Vert Q) = \\sum _{k} p_k^\\alpha \\, q_k^{1-\\alpha } \\in [0, 1]$ BIBREF9. For the atom divergence, we use $\\alpha =0.5$, which corresponds to the Bhattacharyya coefficient and reflects the desire to make the atom distributions in train and test as similar as possible. For the compound divergence, we use $\\alpha = 0.1$, which reflects the intuition that it is more important whether a certain compound occurs in $P$ (train) than whether the probabilities in $P$ (train) and $Q$ (test) match exactly. This allows us to formally define the notions of compound divergence $\\mathcal {D}_C$ and atom divergence $\\mathcal {D}_A$ of a compositionality experiment consisting of a train set $V$ and a test set $W$ as follows: $\\mathcal {D}_C(V \\Vert W) = 1 - C_{0.1}(\\mathcal {F}_C(V) \\Vert \\mathcal {F}_C(W))$ and $\\mathcal {D}_A(V \\Vert W) = 1 - C_{0.5}(\\mathcal {F}_A(V) \\Vert \\mathcal {F}_A(W))$ (a small illustrative sketch of this computation is given after the dataset statistics below).", "Based on these principles, we suggest using as a preferred compositionality benchmark for a given dataset the accuracy obtained by a learner on splits with maximum compound divergence and low atom divergence (we use $\\mathcal {D}_A \\le 0.02$). See Section SECREF4 for details about how to construct such splits." ], [ "We present the Compositional Freebase Questions (CFQ) as an example of how to construct a dataset that is specifically designed to measure compositional generalization using the DBCA method introduced above. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also provides for each question a corresponding sparql query against the Freebase knowledge base BIBREF10. This means that CFQ can be used for semantic parsing BIBREF11, BIBREF12, which is the task that we focus on in this paper." ], [ "BIBREF13 describe a number of benefits for automated rule-based dataset generation, including scalability, control of scope, and avoidance of human errors. Beyond these benefits, however, such an approach is particularly attractive in the context of measuring compositional generalization using the DBCA method, as it allows us to precisely track the atoms (rules) and compounds (rule applications) of each example by recording the sequence of rule applications used to generate it.", "Since the way we measure compositionality depends on how the examples can be broken down into atoms and compounds, we design the generation rules so as to have few and meaningful atoms. More precisely, we aim to have as few rules as possible so that the richness of the examples comes from composing them, which yields a large variety of compounds (enabling a large range of different compound divergences) while making it easy to obtain similar distributions of atoms. Also, we aim to make our rules truly “atomic” in the sense that the behavior of any rule is independent of the context where it is applied (e.g., rules may not contain “if-then-else” constructs).", "In order to minimize the number of rules, we use an intermediate logical form that serves as a uniform semantic representation with relatively direct mappings to natural language and sparql.
Our rules thus fall into the following four categories (a selection of rules is provided in Appendix SECREF20):", "Grammar rules that generate natural language constructs and corresponding logical forms.", "Inference rules that describe transformations on logical forms, allowing us to factor out transformations that are independent of specific linguistic and sparql constructs.", "Resolution rules that map constructs of the logical form to sparql constructs.", "Knowledge rules that supply logical form expressions that are universally applicable. Other rules can be kept more generic by parameterizing them on knowledge.", "These rules define a language of triples of the form $\\langle \\text{question, logical form, \\textsc {sparql}{} query} \\rangle $. Our generation algorithm produces such triples in a mixed top-down and bottom-up fashion. We first apply grammar rules and inference rules to produce the natural language questions and their semantics in our logical form. Then we apply resolution rules to obtain the sparql query. See Figure FIGREF14 for an illustration. In addition, the generator produces a normalized, directed acyclic graph (DAG) of rule applications that corresponds to the normalized program that generated the triple. (Appendix SECREF19 shows an example.) Edges of this DAG represent dependencies among the rule applications, and the normalization ensures that a certain rule combination is represented using the same DAG across all the examples where it occurs.", "The described approach can generate a potentially infinite set of questions, from which we first sample randomly and then subsample (to maximize the overall diversity of rule combinations while keeping a uniform distribution over complexity). We measure the diversity of rule combinations using the empirical entropy of a weighted subset of the rule application DAGs, and we use the number of rule applications as a measure of the complexity of an example. We also limit the maximum example complexity such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity. An example of a complete data item is shown in Appendix SECREF8, a more detailed data quality analysis is presented in Appendix SECREF9, and the generation algorithm is discussed in more detail in Appendix SECREF18." ], [ "Input and output. While the primary focus of the dataset is semantic parsing (natural language question to sparql query), we also provide natural language answers for each question. This allows the dataset to be used in a text-in-text-out scenario as well (see Appendix SECREF8).", "Ambiguity. We largely avoid ambiguity in the questions. In particular, we make sure each name is used to refer to exactly one entity, and we avoid different possible parse trees, different interpretations of plurals, and the need for disambiguation that requires semantic knowledge.", "Scope. We select the following language features as compositional building blocks: open questions and closed questions; subordinate clauses; active and passive voice; conjunctions of verb phrases and of noun phrases; possessives with roles (“X's parent”); adjectives; and type restrictions. For knowledge base features, we select roles, verbs, types, and adjectives from domains that are well-represented in Freebase and that can be combined easily. 
We start from the popular movie domain (e.g., directing, producing, editor, sequel) and extend this with personal relations (e.g., parent, spouse, sibling), companies (e.g., founding, employer), and adjectives (e.g., gender, nationality).", "Logical form and grammar. For the internal logical form, we adopt a variation of the description logic $\\mathcal {EL}$ BIBREF14, BIBREF15, augmented with additional constructors (see Appendix SECREF16) to more easily map to certain linguistic structures. For the grammar rules, we use a unification-based grammar syntax similar to that used in the Prolog extension GULP 3.1 BIBREF16, with addition of support for disjunction, negation, absence, and default inheritance of features for compactness of representation.", "Grounding in Freebase. Once an example is generated by the CFQ rules, it still contains entity placeholders instead of Freebase machine ids (MIDs). For the task of semantic parsing, the examples could theoretically be used as-is, as our avoidance of semantic ambiguity means that a learner should not need knowledge of the specific entity in order to parse the question. To make the questions natural, however, we apply an additional step of replacing the placeholders with appropriate specific entities. To do this we first execute the generated sparql query against Freebase. This returns a set of candidate MID combinations that satisfy the query and can be used as substitutes. If the set is empty, we abandon the generated question candidate as unnatural. Otherwise, we pick one combination at random to yield a question with positive answer. In the case of a closed question, we also generate a variation that yields the answer “No”, which we do by mixing in MIDs from another substitution (or a more generic replacement if that fails) to keep the question as plausible-sounding as possible. We then randomly choose either the question with positive or with negative answer, to avoid spurious correlations between question structure and yes/no answer.", "Semantic and structural filtering. Even among the questions that can be satisfied in Freebase, there are some that are meaningful but somewhat unnatural, such as “Was Strange Days directed by a female person whose gender is female?”. We automatically filter out such unnatural questions using semantic and structural rules. Note that since we do not require a learner to identify such questions, we do not track these filtering rules.", "Release and statistics.", "CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data. (The data URL is not yet provided for anonymous review.) We include a list of MIDs such that their English names map unambiguously to a MID. Table TABREF17(a) summarizes the overall statistics of CFQ. Table TABREF17(b) uses numbers from BIBREF8 and from an analysis of WebQuestionsSP BIBREF17 and ComplexWebQuestions BIBREF18 to compare three key statistics of CFQ to other semantic parsing datasets (none of which provide annotations of their compositional structure). CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. Note that it would be easy to boost the raw number of questions in CFQ almost arbitrarily by repeating the same question pattern with varying entities, but we use at most one entity substitution per question pattern. Appendix SECREF10 contains more detailed analyses of the data distribution." 
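Returning to the grounding step described earlier in this subsection, the sketch below substitutes entity placeholders with concrete entities. The candidate list is assumed to come from executing the generated sparql query against Freebase, and the function signature, the placeholder handling, and the mid_to_name mapping are illustrative assumptions rather than the actual pipeline; in particular, the real generator additionally verifies that the mixed-in combination indeed yields the answer “No”.

import random

def ground_example(question, query, candidates, is_closed_question, mid_to_name):
    # candidates: MID combinations (placeholder -> MID dicts) satisfying `query`.
    # mid_to_name maps each MID to its English entity name.
    # Returns (question, query, answer) or None if the example is abandoned.
    if not candidates:
        return None                         # unsatisfiable in Freebase -> drop as unnatural
    chosen, answer = random.choice(candidates), "Yes"
    if is_closed_question and len(candidates) > 1 and random.random() < 0.5:
        # Build a "No" variant by mixing in a MID from another substitution.
        other = random.choice([c for c in candidates if c != chosen])
        mixed = dict(chosen)
        key = random.choice(sorted(mixed))
        mixed[key] = other[key]
        chosen, answer = mixed, "No"
    for placeholder, mid in chosen.items():
        question = question.replace(placeholder, mid_to_name[mid])
        query = query.replace(placeholder, mid)
    return question, query, answer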
], [ "The DBCA principles described in Section SECREF6 enable a generic and task-independent method for constructing compositionality experiments. To construct such an experiment for a dataset $U$ and a desired combination of atom and compound divergences, we use an iterative greedy algorithm that starts with empty sets $V$ (train) and $W$ (test), and then alternates between adding an example $u \\in U$ to $V$ or $W$ (while maintaining the desired train/test ratio). At each iteration, the element $u$ is selected such that $\\mathcal {D}_C(V \\Vert W)$ and $\\mathcal {D}_A(V \\Vert W)$ are kept as closely as possible to the desired values. To reduce the risk of being stuck in a local optimum, we also allow removing examples at certain iterations.", "In general, there are many different splits that satisfy a desired compound and atom divergence. This reflects the fact that a certain compound may either occur exclusively in the train set or the test set, or it may occur in both of them because the split may have achieved the desired compound divergence by separating other (possibly orthogonal) compounds. Our greedy algorithm addresses this by making random choices along the way, starting with picking the first example randomly.", "For the goal of measuring compositional generalization as accurately as possible, it is particularly interesting to construct maximum compound divergence (MCD) splits, which aim for a maximum compound divergence at a low atom divergence (we use $\\mathcal {D}_A \\le 0.02$). Table TABREF18 compares the compound divergence $\\mathcal {D}_C$ and atom divergence $\\mathcal {D}_A$ of three MCD splits to a random split baseline as well as to several previously suggested compositionality experiments for both CFQ and the existing scan dataset (cf. Section SECREF30). The split methods (beyond random split) are the following:", "Output length: Variation of the setup described by BIBREF2 where the train set consists of examples with output (sparql query or action sequence) length $\\le \\hspace{-2.5pt} N$, while the test set consists of examples with output length $> \\hspace{-2.5pt} N$. For CFQ, we use $N = 7$ constraints. For scan, we use $N = 22$ actions.", "Input length: Variation of the above setup, in which the train set consists of examples with input (question or command) length $\\le N$, while test set consists of examples with input length $> N$. For CFQ, we use $N=19$ grammar leaves. For SCAN, we use $N=8$ tokens.", "Output pattern: Variation of setup described by BIBREF8, in which the split is based on randomly assigning clusters of examples sharing the same output (query or action sequence) pattern. Query patterns are determined by anonymizing entities and properties; action sequence patterns collapse primitive actions and directions.", "Input pattern: Variation of the previous setup in which the split is based on randomly assigning clusters of examples sharing the same input (question or command) pattern. Question patterns are determined by anonymizing entity and property names ; command patterns collapse verbs and the interchangeable pairs left/right, around/opposite, twice/thrice.", "All of these experiments are based on the same train and validation/test sizes of 40% and 10% of the whole set, respectively. For CFQ, this corresponds to about 96k train and 12k validation and test examples, whereas for scan, it corresponds to about 8k train and 1k validation and test examples. 
We chose to use half of the full dataset for the train-test splits, as it led to an appropriate balance between high compound divergence and high train set size in informal experiments.", "The MCD splits achieve a significantly higher compound divergence at a similar atom divergence when compared to the other experiments. The reason for this is that, instead of focusing on only one intuitive but rather arbitrary aspect of compositional generalization, the MCD splits aim to optimize divergence across all compounds directly.", "Interestingly, the MCD splits still correlate with the aspects of compositional generalization that are targeted by the other experiments in this table. As shown in the four right columns of Table TABREF18, for each MCD split, the train set $V$ contains on average shorter examples than the test set $W$ (measured by the ratio of average lengths), and $V$ also contains only a small fraction of the input and output patterns used in $W$ (measured by the fraction of patterns covered). However, these correlations are less pronounced than for the experiments that specifically target these aspects, and they vary significantly across the different MCD splits.", "This illustrates that MCD splits are comprehensive in the sense that they cover many different aspects of compositional generalization, especially when looking at multiple of them. It also means that whether a certain example ends up in train or test is not determined solely by a single criterion that is immediately observable when looking at the input and output (such as length). As we show in Appendix SECREF91, this generally makes the examples in train and test look fairly similar." ], [ "We use three encoder-decoder neural architectures as baselines: (1) LSTM+attention as an LSTM BIBREF19 with attention mechanism BIBREF20; (2) Transformer BIBREF21 and (3) Universal Transformer BIBREF22.", "We tune the hyperparameters using a CFQ random split, and we keep the hyperparameters fixed for both CFQ and scan (listed in Appendix SECREF12). In particular the number of training steps is kept constant to remove this factor of variation. We train a fresh model for each experiment, and we replicate each experiment 5 times and report the resulting mean accuracy with 95% confidence intervals.", "Note that while we construct test and validation sets from the same distribution, we suggest that hyperparameter tuning should be done on a random split (or random subset of the train set) if one wants to measure compositional generalization of a model with respect to an unknown test distribution as opposed to an architecture with respect to a known test distribution. Tuning on a validation set that has the same distribution as the test set would amount to optimizing for a particular type of compound divergence and thus measure the ability for a particular architecture to yield models that can be made to generalize in one particular way (through leaking information about the test set in the hyperparameters).", "Similarly to BIBREF8, we anonymize the Freebase names and MIDs in the textual input and the SPARQL output, respectively, by replacing them with a placeholder (e.g., “M0” for the first MID). This removes the need for two learning sub-tasks that are orthogonal to our focus: named entity recognition and learning that the MIDs are patterns that need to be copied. 
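The anonymization itself is straightforward; a minimal sketch is shown below, where the MID pattern in the query and the name_to_mid mapping are assumptions about the data format rather than the exact preprocessing code.

import re

def anonymize(question, query, name_to_mid):
    # name_to_mid maps the entity names occurring in the question to their MIDs.
    # Entities are replaced by shared placeholders M0, M1, ... in order of
    # appearance in the question, on both the question and the query side.
    placeholder_by_mid = {}
    for name, mid in sorted(name_to_mid.items(), key=lambda kv: question.find(kv[0])):
        placeholder_by_mid[mid] = "M%d" % len(placeholder_by_mid)
        question = question.replace(name, placeholder_by_mid[mid])
    return question, re.sub(
        r"ns:(m\.[0-9a-z_]+)",              # assumed MID pattern in the sparql query
        lambda m: placeholder_by_mid.get(m.group(1), m.group(0)),
        query)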
An example input-output (question-query) pair then looks like the following: `Was M0 a screenwriter' $\\mapsto $ `select count(*) where {M0 a ns:film.writer}'.", "The main relation we are interested in is the one between compound divergence of the data split and accuracy. Specifically, we compute the accuracy of each model configuration on a series of divergence-based splits that we produce with target compound divergences that span the range between zero and the maximum achievable in 0.1 increments (while ensuring that atom divergence does not exceed the value of 0.02). For each target divergence, we produce at least 3 different splits with different randomization parameters (compare Section SECREF4). For comparison, we also compute accuracies on the other splits shown in Table TABREF18." ], [ "The mean accuracies of the three architectures on CFQ are shown in Figure FIGREF28(a) and Table TABREF29. We make three main observations:", "All models achieve an accuracy larger than 95% on a random split, and this is true even if they are trained on 10 times fewer training instances (see Appendix SECREF15 for a more detailed analysis on the performance with varying training size).", "The mean accuracy on the MCD splits is below 20% for all architectures, which means that even a large train set (about 96k instances) with a similar distribution of atoms between train and test is not sufficient for these architectures to perform well on the test distribution.", "For all architectures, there is a strong negative correlation between the compound divergence and the mean accuracy.", "This suggests that the baseline models are able to capture the superficial structure of the dataset, but fail to capture the compositional structure. We find it surprising that varying the compound divergence gives direct control of the (mean) accuracy, even though the examples in train and test look similar (see Appendix SECREF91). This means that compound divergence seems to capture the core difficulty for these ML architectures to generalize compositionally.", "Note that the experiment based on output-length exhibits a worse accuracy than what we would expect based on its compositional divergence. One explanation for this is that the test distribution varies from the training distribution in other ways than compound divergence (namely in output length and a slightly higher atom divergence), which seems to make this split particularly difficult for the baseline architectures. To analyze the influence of the length ratio further, we compute the correlation between length ratios and accuracy of the baseline systems and compare it to the correlation between compound divergence and accuracy. We observe $R^2$ correlation coefficients between 0.11 and 0.22 for the input and output length ratios and between 0.81 and 0.88 for the compound divergence. This shows that despite the known phenomenon that the baseline systems struggle to generalize to longer lengths, the compound divergence seems to be a stronger explanation for the accuracy on different splits than the lengths ratios.", "Error analysis. We perform an analysis of the errors for the split MCD$_{1}$ (the first MCD split that we constructed, with more details provided in Appendix SECREF13). We observe accuracies between 29% and 37% on the test set of this particular split. Qualitatively, all three systems seem to make similar errors at this point (68% of errors are on the same samples). 
They make more errors on longer sequences and, when they err, tend to produce outputs that are about 20% too short. The most common category of error is the omission of a clause in the output (present in 43%-49% of the test samples), e.g.: (1) Omitted conjunctions: for the input “What spouse of a film producer executive produced and edited M0, M1, and M2?” the best system ignores “executive produced” in the output. (2) Omitted adjectives: for the input “Which female Spanish film producer was M3's spouse?” the best system ignores the adjective “female”."
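A rough way to quantify such omissions automatically is to compare the clause sets of the gold and predicted queries, for example as in the sketch below. It assumes the inline “ . ”-separated clause format used in the examples of the appendix and is only a proxy for the manual categorization above.

def clause_set(sparql):
    # Clauses of the query body, assuming a single { ... } block.
    body = sparql[sparql.index("{") + 1 : sparql.rindex("}")]
    return {clause.strip() for clause in body.split(" . ") if clause.strip()}

def omission_rate(gold_queries, predicted_queries):
    # Fraction of examples in which the prediction drops at least one gold clause.
    omitted = sum(bool(clause_set(gold) - clause_set(pred))
                  for gold, pred in zip(gold_queries, predicted_queries))
    return omitted / len(gold_queries)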
Being automatically generated also links our approach to datasets such as the bAbI tasks BIBREF23, which however do not focus on compositional generalization.", "A dataset related to CFQ is ComplexWebQuestions BIBREF18, which consists of complex questions that are automatically generated from simpler sub-questions in WebQuestionsSP BIBREF17 and then reworded manually. While these datasets can be used for semantic parsing, we did not find them suitable for a thorough compositionality analysis because a consistent annotation with the compositional structure would be hard to obtain. Other approaches to semi-automatic dataset creation also use paraphrasing BIBREF24, BIBREF25.", "BIBREF3 introduce the generated clevr dataset, which shares common goals with our work applied in the area of visual reasoning. The dataset's functional programs capture some of the structural information of the questions and are linked one-to-many to the 423 question patterns used. The authors specifically investigate generalization to new combinations of visual attributes in one experiment which uses a particular train-test split based on the colors used. BIBREF26 propose a neural-symbolic architecture and discuss promising results on additional specific splits of the clevr data, e.g. based on object counts and program depth. BIBREF27 describe how the application of compositional attention networks to the clevr data leads to structured and data-efficient learning. BIBREF28 present a large, compositional, generated visual question answering data set with functional programs, on which neural state machines achieve good performance BIBREF29. The use of specific splits between train and test data also occurs in the context of visual data. E.g., BIBREF30 propose a greedy split algorithm to maximize the coverage of test concepts in the train set while keeping question-type/answer pairs disjoint and observe performance degradation of existing approaches. BIBREF31 introduce a synthetic visual question answering dataset called sqoop, which is used to test whether a learner can answer questions about all possible object pairs after being trained on a subset.", "While these datasets are very interesting, the additional annotation that we provide in CFQ indicating the exact rule trees needed to link input and output makes additional analyses regarding compositionality possible. Our analyses go beyond many of the presented discussions (that mostly focus on accuracy regarding particular holdouts) in formalizing an approach that uses the atom and compound divergences to measure compositionality.", "A number of ML approaches have been developed for semantic parsing. BIBREF32 propose Key-Value Memory Networks – neural network-based architectures that internalize a knowledge base into the network – and introduce the WikiMovies dataset. BIBREF33 develop an end-to-end architecture that can handle noise in questions and learn multi-hop reasoning simultaneously. They introduce the MetaQA benchmark that is based on WikiMovies but uses a set of only 511 question patterns (mod entities) shared between train and test.", "With regards to studying compositionality in ML, BIBREF34 argue that combinatorial generalization should be a top priority to achieve human-like abilities. BIBREF35 discusses measuring the compositionality of a trained representation, e.g. of a learned embedding. 
The author suggests to use a tree reconstruction error that is based on how well the oracle derivation of the input matches the structure that can be derived on the representations. BIBREF4 discuss an architecture that enables the learning of compositional concept operators on top of learned visual abstractions. BIBREF36 introduce the compositional recursive learner that “can generalize to more complex problems than the learner has previously encountered”." ], [ "In this paper we presented what is (to the best of our knowledge) the largest and most comprehensive benchmark for compositional generalization on a realistic NLU task. It is based on a new dataset generated via a principled rule-based approach and a new method of splitting the dataset by optimizing the divergence of atom and compound distributions between train and test sets. The performance of three baselines indicates that in a simple but realistic NLU scenario, state-of-the-art learning systems fail to generalize compositionally even if they are provided with large amounts of training data and that the mean accuracy is strongly correlated with the compound divergence.", "We hope our work will inspire others to use this benchmark as a yardstick to advance the compositional generalization capabilities of learning systems and achieve high accuracy at high compound divergence. Some specific directions that we consider promising include applying unsupervised pretraining on the input language or output queries and the use of more diverse or more targeted learning architectures, such as syntactic attention BIBREF7. We also believe it would be interesting to apply the DBCA approach to other domains such as visual reasoning, e.g. based on clevr BIBREF3.", "In the area of compositionality benchmarks, we are interested in determining the performance of current architectures on the end-to-end task that expects a natural language answer given a natural language question in CFQ. We would like also to extend our approach to broader subsets of language understanding, including use of ambiguous constructs, negations, quantification, comparatives, additional languages, and other vertical domains." ], [ "The following shows an example data item including the question text in various forms, the answer, the sparql query in various forms, some tracked statistics, and the set of used rules (atoms) and the applied rule tree (compound). Some details are omitted, indicated by ellipses (`...')." ], [ "During the development of our data generation pipeline, we manually checked the generated examples for quality. Below is a random selection of 50 examples of the final CFQ dataset (no cherry-picking was used). Brackets around [entity names] are provided just for ease of human reading. Manual checking also indicated that all questions are associated with the semantically correct sparql queries. 
However, because we rely on the data present in Freebase, there are three debatable questions which sound somewhat unnatural (UNKREF33, UNKREF51, and UNKREF59, see further discussion below the list).", "Who was a writer, star, and cinematographer of [Tetsuo: The Bullet Man], [Nightmare Detective], and [Bullet Ballet]?", "Which male person was a sibling of [Andrew Klavan]?", "Did [Wallace Stevens] influence [Levi Seeley]'s spouse and parent?", "Did a producer, writer, and art director of [Thelma & Luis] produce, direct, and write [Light Girls]?", "Were [Hangover Square], [Zack and Miri Make a Porno], and [Clerks II] edited by a founder and employee of a film producer?", "What American parent of [Charlie Sistovaris] was a British screenwriter's sibling?", "Did [Anne Williams Rubinstein] marry a person that influenced a screenwriter and influenced [John Most]?", "Was [Cachún cachún ra ra!]'s director a film director's American child?", "Did [Maisy's Garden]'s executive producer write, edit, and executive produce [Pakalppooram], [It's Not About the Shawerma], [Rick's Canoe], and [The Fifth Wall]?", "Was [Holly Ellenson]'s child [Wally Ellenson]?", "Did [Emerald Cities]'s cinematographer, writer, and editor edit, executive produce, and direct [Blues for the Avatar] and [White Stork Is Coming]?", "Was a film producer [Lilies of the Ghetto]'s distributor and producer?", "Which child of [Mimi Iger] did a film producer employ and [The Walt Disney Company] employ?", "What Japanese spouse of [Hong Kong Paradise]'s star did [Ineko Arima] and [Nishiki Kô] marry?", "Who influenced and was influenced by [Black Dynamite]'s star?", "What was written by, edited by, directed by, produced by, and executive produced by [Pauline Collins]'s child's sibling?", "Which Swedish film director that [Théo Van Horn]'s actor influenced did [Egen ingȧng] star?", "Who was influenced by [Golden Yeggs]'s star, was influenced by [Richard Pryor], was influenced by [Bill Murray], and married [Elaine Chappelle]?", "What did [This Is My Show]'s director, cinematographer, and star direct, edit, produce, and executive produce?", "Who was a male costume designer and director of [Ene... due... like... fake...] 
and [The Windmill Bar]?", "Was [Kumudu Munasinghe] a Dutch film producer's country of nationality's employee?", "Did an art director, editor, director, writer, cinematographer, and star of [Tetsuo II: Body Hammer] produce [Nightmare Detective], [Tetsuo: The Iron Man], and [A Snake of June]?", "Was [Alexandra Naoum] [Monsieur Verdoux]'s producer, writer, and star?", "What film director founded [THX], was employed by [American Zoetrope], [LucasArts], [Skywalker Sound], and [Lucasfilm], and founded [Industrial Light & Magic]?", "What male employee of [Weta Workshop] was [Bad Taste]'s editor?", "Were [Weta Digital] and [Weta Workshop] founded by a cinematographer and founded by a film editor?", "What art director influenced [DreamWorks Animation]'s founder?", "Did [Daisies] star [Fruit of Paradise]'s costume designer and writer, star [Jaromír Vomácka], and star [Jirina Myskova]?", "What character was influenced by a costume designer, influenced by [Pedro Calderón de la Barca], influenced by [William Shakespeare] and [Luis Buñuel], and influenced by [Miguel de Unamuno]?", "What British costume designer of [The Love Letter] and [The Chamber] was a screenwriter's child?", "Was [Eric Massa] a cinematographer's parent's sibling's American sibling?", "What art director of [Stepping Sisters 1932] was a parent of [Imre Sándorházi]?", "What was executive produced by, written by, produced by, and edited by a director of [V/H/S/2]'s sequel?", "What did an editor and cinematographer of [Tongue Twister Variations] direct?", "Who was a Canadian screenwriter that produced [Her Painted Hero] and [The Nick of Time Baby]?", "Which American parent of [Janet Friedman] did [Rose Friedman] influence and marry?", "Did [George Carlin] influence [Louis C.K.: Shameless]'s executive producer and influence [Joan Rivers]?", "Who was a male writer, star, director, and costume designer of [The Wizard of Speed and Time]?", "Who was [Lost Boys: The Thirst]'s prequel's sequel's art director?", "Did a cinematographer's female parent executive produce, direct, and write [Hit Dat Shit 5]?", "Who married [Siri von Essen], influenced [A Lesson in Love]'s director and art director, influenced [Tennessee Williams], and influenced [Maxim Gorky]?", "What Italian film director directed [Children of Hannibal]?", "What film producer directed, wrote, edited, and produced [la estrella], [la ardilla], and [el valiente]?", "Were [Flames: The Movie] and [Soltera] directed by a male person and executive produced by [Hilda Russoff]'s spouse?", "Was a sibling of [Fawwaz bin Abdulaziz Al Saud] [Badr bin Abdulaziz Al Saud]'s sibling?", "What did a sibling of [Louise Rohr] executive produce, produce, and edit?", "Did a French cinematographer of [Le Volcan interdit] edit [The Last Bolshevik] and direct [A.K.] and [Statues Also Die]?", "Was [Mannai Thottu Kumbidanum] directed by and written by a Dutch male cinematographer?", "Was a director, art director, executive producer, and costume designer of [But I'm a Genderqueer] [Lauren Soldano]?", "Was [When We Were Kings] produced by a film editor whose spouse was employed by [Royal Academy of Dramatic Art] and distributed by [PolyGram Filmed Entertainment]?", "Further discussion of the debatable questions:", "Did [Wallace Stevens] influence [Levi Seeley]'s spouse and parent?", "The occurrence of the seemingly implausible combination of roles “spouse and parent” is due to incorrect data in Freebase, in which there are 502 entities asserted to be both the spouse and parent of other entities. 
For instance, “Anne Dacre” is both the spouse and parent of “Christopher Conyers”. We can also find occasional occurrences in CFQ of other implausible role combinations, such as “parent and child”, “spouse and sibling” etc., triggered by similar Freebase data issues.", "Was [Kumudu Munasinghe] a Dutch film producer's country of nationality's employee?", "The somewhat unnatural phrasing of “country's employee” occurs due to a modeling choice in Freebase, in which the same entity is used to represent both a country and the government of that country. This makes it possible for a country to employ a person.", "What character was influenced by a costume designer, influenced by [Pedro Calderón de la Barca], influenced by [William Shakespeare] and [Luis Buñuel], and influenced by [Miguel de Unamuno]?", "The somewhat unnatural phrasing of “a character was influenced by” occurs due to a modeling choice in Freebase, in which when a film character is based on a real person, Freebase commonly uses the same entity to represent both. This makes “person” and “character” exchangeable in the questions where the person is also a film character." ], [ "Table TABREF85 shows the most frequently occurring answers in CFQ. Not surprisingly, after the answers “Yes” and “No”, entities related in Freebase to the domain of movies have highest frequency." ], [ "Figure FIGREF87 illustrates how subsampling changes the distribution of questions in CFQ with different levels of complexity to become more even." ], [ "Subsampling increases the frequency of rarely used rules and rule combinations and decreases the frequency of commonly used ones. For rules, this is illustrated by Figure FIGREF89 which shows the ratio of examples each rule appears in, before and after subsampling, in the order of their frequency. Figure FIGREF90 shows the same comparison for rule combinations." ], [ "Traditional compositionality experiments often use train-test splits based on observable properties of the input and output (e.g., input/output complexity, input/output patterns, and input/output feature holdouts). One consequence of this is that the difference between train and test examples is relatively easily observable “with the naked eye”. The lists below illustrate that this is not usually the case for divergence-based splits. Similar to the random sample of the general data in Appendix SECREF9 we provide a random sample of size 20 from both the train and test set here. Indeed, even for the MCD$_{1}$ split with a high divergence of 0.694, the 20 random samples of train and test questions shown below cannot easily be distinguished as they both contain the same kind of questions of different sizes.", "Train samples from MCD$_{1}$:", "What was founded by a costume designer, founded by [Forgotten Silver]'s star, and founded by [Jamie Selkirk]?", "Which male person influenced and was influenced by [William Dean Howells]?", "Did [Marco Bellocchio] produce, write, and direct [Greek Pete]?", "What did [Rick Schmidt] edit, [Philip Rashkovetsky] edit, and a cinematographer edit?", "Were [The Living Playing Cards] and [The Haunted Castle] edited by, directed by, and produced by a French writer of [Le cauchemar de Méliès]?", "What did a spouse of [Shorts]'s producer's spouse executive produce and direct?", "Did [P. G. 
Wodehouse], [Raymond Chandler], [Edward Bunker], [Pauline Kael], and [Michael Cimino] influence [Grindhouse]'s cinematographer and star?", "What Mexican person did a film producer employ?", "Did [The Midnight After]'s Chinese executive producer edit [Perfect Life] and [Dumplings]?", "Who did [For the Secret Service]'s director's female spouse influence?", "Who married, was influenced by, and influenced a company's founder?", "Was [MAN SE]'s French male German employee's employer [Sulzer]?", "Who influenced an actor that [Robin Santana] was influenced by and [K. J. Stevens] was influenced by and was influenced by [Virgil]?", "Did [Pirates of Malaysia] star [Giuseppe Addobbati] and star a Spanish screenwriter?", "Was [The Silence of the Sea] written by, produced by, executive produced by, directed by, and edited by [The Red Circle]'s French editor?", "Did [Chanel] employ a German costume designer, employ [Gaspard Ulliel] and [Maureen Chiquet], and employ [Jacques Polge]?", "Who was influenced by [Adam Sandler] and married a film producer?", "Did a Spanish screenwriter's child direct and edit [Bakuchi-uchi: Nagaremono]?", "Was a founder of [IG Port] employed by a film producer?", "Was [Orizzonti Orizzonti!] executive produced by and written by an art director's sibling?", "Test samples from MCD$_{1}$:", "What sequel of [Paranormal Activity 2] was edited by and written by a film director?", "What spouse of a film producer founded [Grand Hustle Records] and was employed by [40/40 Club], [Roc-A-Fella Records], and [Def Jam Recordings]?", "Did [Pixar] employ an art director and employ [Susham Bedi]?", "Was a sibling of [David Lindbland] [Dynamit Nobel]'s Swedish founder?", "What prequel of [Charlie the Unicorn 2] starred, was edited by, was produced by, was written by, and was directed by [Jason Steele]?", "Did [Rick Schmidt] direct, produce, executive produce, and edit [Blues for the Avatar], [White Stork Is Coming], [The Fifth Wall], and [It's Not About the Shawerma]?", "Was [Luke Larkin Music] an art director's employer?", "What prequel of [Goat Story 2] was executive produced, written, directed, edited, and produced by [Jan Tománek]?", "Was [Bullet Ballet]'s editor, star, director, and cinematographer [Promises Written in Water]'s star, director, writer, executive producer, and art director?", "What was edited by, produced by, directed by, and written by [Ellis Kaan Ozen], [Thaw Bwe], [Jeffrey Malkofsky-Berger], and [Leslie Berkley]?", "Was a person's female sibling [Reggae in a Babylon]'s producer?", "Who was a director, cinematographer, executive producer, art director, producer, star, and writer of [The Man Who Killed God]?", "Was [My Sweet Home]'s director, editor, writer, art director, producer, cinematographer, and costume designer a person?", "Which art director, star, and editor of [The Brown Bunny] and [Promises Written in Water] did [Cord] star?", "Did an employee and founder of [Virgin Mobile Australia], [Virgin Mobile USA], and [Virgin Mobile France] found [Virgin America] and found [V2 Records]?", "Was a Chinese executive producer and star of [Happy Ghost II] and [All's Well, Ends Well 2010] a film director?", "Was [The Voyeur]'s executive producer an actor's parent?", "Did [Erasable Cities]'s writer, producer, editor, art director, cinematographer, and director produce and executive produce [Promises Written in Water]?", "Who was an editor, star, and cinematographer of [Tetsuo: The Iron Man], [A Snake of June], and [Bullet Ballet]?", "Was a costume designer's employer [Philips 
High School]?" ], [ "Figure FIGREF133 shows the frequency of atoms (upper graph) and compounds (lower graph) in the train and test sets of the maximum compound divergence split for the CFQ data. As the frequency of an atom resp. compound we use the fraction of examples it appears in. Both atoms and compounds are indexed primarily by their frequency in the train set, secondarily by their frequency in the test set, in decreasing order. For practical reasons we only look at a small subset of compounds here but we believe the analysis is representative.", "We can see that the frequency of atoms in the two sets is very aligned and that all atoms from the test set appear in the train set. The frequency of compounds however is wildly different: While some invariably occur in both sets, the frequencies are often not aligned and most compounds appear only in either the train or the test set." ], [ "The experiments were run using the tensor2tensor framework BIBREF39 with some of the hyperparameters tuned using a random split of a previous, smaller version of the data set during development. We use the default hyperparameter sets publicly available in the tensor2tensor implementation (obtained from https://github.com/tensorflow/tensor2tensor) and override the tuned hyperparameters. The hyperparameters used are summarized in Table TABREF134." ], [ "Table TABREF136 shows a more detailed analysis of the errors that the baseline models make on CFQ for MCD$_{1}$ (compare Section SECREF24). The reported errors are bucketized into three main types: sparql property clause error, sparql filter clause error and malformed sparql query in the model's output. The total number of test set examples exhibiting any clause or filter error is reported (sum column), as well as the number of insertions (ins), deletions (del), and substitutions (sub) in the model's output with respect to the correct query. Property clause substitution errors are further subdivided into those where only the property itself is wrong while subject and object are correct (prop), those where the property is correct but either subject or object is wrong (node) and those where both the property and the subject or the object are wrong (both).", "The accuracy metric requires the model response and the golden (correct) answer to be exactly equal to each other. Thus, a sparql query with the same clauses as the golden answer but in a different order or with some of the clauses appearing multiple times is also considered to be an error despite being equivalent to the golden answer in its meaning. The amount of such errors is relatively small though, accounting for 1.8%, 0.6% and 1.5% of total test set size for LSTM+Attention, Transformer and Universal Transformer respectively." ], [ "Below we qualitatively analyze a number of instances the models fail on. We anonymize the MIDs in the same way as the data is provided to the models (see Section SECREF5). We first select queries on which all machine learning systems fail in all replicated runs (about 5k instances out of a total of about 12k), and then randomly select queries from this list. In the following we discuss a few cases in more detail. 
Note that, for readability, we use the following abbreviations for the sparql properties in Query 1:", "ns:people.person.child = ns:people.person.children|", "ns:fictional_universe.fictional_character.children|", "ns:organization.organization.child/", "ns:organization.organization_relationship.child", "ns:people.person.sibling = ns:people.person.siblings/", "ns:people.siblingrelationship.sibling|", "ns:fictionaluniverse.fictionalcharacter.siblings/", "ns:fictionaluniverse.", "siblingrelationshipoffictionalcharacters.siblings", "Query 1: “What sibling of M0 was M1' s parent?”", "Golden (correct) sparql query:", "SELECT DISTINCT ?x0 WHERE {", "?x0 ns:people.person.child M1 .", "?x0 ns:people.person.sibling M0 .", "FILTER ( ?x0 != M0 )", "}", "Inferred (system) sparql query:", "SELECT DISTINCT ?x0 WHERE {", "?x0 ns:people.person.sibling ?x1 .", "?x0 ns:people.person.sibling M0 .", "?x1 ns:people.person.child M1 .", "FILTER ( ?x0 != ?x1 )", "}", "Analysis. The meaning of the sparql query generated by the system is “What sibling of M0 was a sibling of M1's parent?”, which is incorrect. We next analyze the train set, in order to show that we believe enough information has been provided in the train set for the question to be answered correctly.", "Some subqueries of the query and their occurrences are shown in Table TABREF140. While the exact subquery “What sibling” does not occur at training, the two words have been shown separately in many instances: the subqueries “sibling of Mx”, and “Mx's parent” occur 2,331 and 1,222 times, respectively. We can analyze this example in more detail by comparing parts of the rule tree of this example with those shown at training. As can be read from the table, similar sentences have been shown during training. Some examples are:", "What was executive produced by and written by a sibling of M0?", "What costume designer did M1's parent employ?", "What cinematographer was a film editor that M2 and M3 married?", "What film director was a character influenced by M2?", "Query 2: “Did a male film director edit and direct M0 and M1?”", "Golden (correct) sparql query:", "SELECT count ( * ) WHERE {", "?x0 ns:film.director.film M0 .", "?x0 ns:film.director.film M1 .", "?x0 ns:film.editor.film M0 .", "?x0 ns:film.editor.film M1 .", "?x0 ns:people.person.gender m_05zppz", "}", "Inferred (system) sparql query:", "SELECT count ( * ) WHERE {", "?x0 ns:film.director.film M0 .", "?x0 ns:film.director.film M1 .", "?x0 ns:film.editor.film M0 .", "?x0 ns:people.person.gender m_05zppz", "}", "Analysis. The meaning of the inferred sparql query is “Did a male film director edit M0 and direct M0 and M1?”. It thus seems the model `forgets' to include the relation between the director and movie M1.", "Looking at subqueries and their occurrence count (Table TABREF145), we see again that various subqueries occur often during training. However, “edit and direct” have not been shown often together. When looking at the rule trees, we see that both conjunctions in the query occur often at training separately: “Did [DetNP] [VP] and [VP] [DetNP]” occurs 1,432 times, and “Did [DetNP] [VP] [Entity] and [Entity]” occurs 909 times. However, they never occur together: “Did [DetNP] [VP] and [VP] [DetNP] and [DetNP]” does not occur at training. This may be the reason why all systems fail on this example, but at the same time we believe a compositional learner should be able to generalize correctly given the training instances. 
Some examples are:", "Did a male film director that M3's parent married influence an art director?", "Did a film producer that played M2 edit and direct M1?", "Did a screenwriter edit and direct a sequel of M1", "Did a Chinese male film director edit M1 and M2?" ], [ "", "Figure FIGREF150 shows a scatter plot of accuracy vs. compound divergence for the three baseline architectures (see Section SECREF5) on existing splits of the scan data. These splits are discussed in BIBREF2 and BIBREF6, and the exact split data is available. (Data splits obtained from https://github.com/brendenlake/SCAN). We map these splits onto the re-created scan data, which enables us to measure the atom and compound divergences. The authors present a total of six split experiments (some with several sub-experiments):", "", "BIBREF2:", "simple (random)", "by action sequence length", "adding a primitive and adding a primitive along with complex combinations", "BIBREF6:", "adding a template", "adding template fillers", "adding more training examples of fillers (fewshot)", "", "In the plot, we omit some data points that are too close to be distinguished easily. The point labels have the form `(abbreviated experiment name)<(parameter)>@(number of samples) (baseline system abbreviation) [(train set size fraction), (split atom divergence)]'. The train set size fraction is given as a percentage of the overall data size. The baseline system abbreviations are LSTM, T for Transformer, UT for Universal Transformer, T/UT where both transformer models are indistinguishable, and empty where all three systems perform indistinguishably. The abbreviated experiment name is one of the names in italics above.", "We can observe a strong dependency of the accuracies on the compound divergence of the data split. Again, this seems to indicate that the compound divergence is correlated with accuracy for these baseline architectures. One difference to the data shown in Figure FIGREF28(b) is that for this set of experiments the accuracy drops faster with increasing compound divergence. One explanation for this effect is that the experiments are directly aimed at highlighting one specific potentially problematic scenario for learning. E.g. in the experiment `primitive<jump>' (with very low accuracies for all three systems) the jump command is shown exactly in one combination (namely alone) in the training data while it occurs in all test examples in arbitrary combinations.", "This is reflected in the higher atom divergence value of 0.08 for this split, as well as in all other splits that exhibit a low accuracy at a low compound divergence in Figure FIGREF150. Note that BIBREF2 already compare the experiment `primitive<jump>' to the experiment `primitive<turn left>' for which all three systems achieve a much higher accuracy. In their interpretation of this phenomenon, they mainly focus on the fact that in contrast to 'jump', the action 'turn left' is also generated by other inputs. We additionally observe that the latter experiment also has a slightly lower atom divergence of 0.07, a lower compound divergence, and it covers a much larger part of the data in the train set (94% vs. 63%).", "While the accuracies we observe for the `primitive' experiments are very much in line with the results reported by BIBREF2, we noticed a few interesting differences for other experiments: All three systems go to 100% accuracy on the fewshot task even for one example (while BIBREF6 report a slowly increasing accuracy for the architecture they evaluate). 
On the other hand, both transformer models only reach 0% accuracy on the length split, while the LSTM obtains around 14% (which is in line with what previous work reports)." ], [ "Figure FIGREF28 shows for all baseline systems a strong correlation between accuracy and compound divergence for the chosen training sizes (96k for CFQ and 8k for scan). One interesting question is whether and how this correlation is changed for different training sizes. Figures FIGREF159 and FIGREF159 show that this correlation holds also for smaller training sizes but that the accuracy is generally somewhat lower for smaller training sizes.", "At the same time, we observe that the difference between accuracies of various training sizes gets smaller as the training size increases. This can be seen even more clearly in Figures FIGREF159 and FIGREF159, which plot the training size rather than the compound divergence on the x-axis. These figures show that the increase in accuracy flattens out significantly as we reach training size of about 80k for CFQ and about 6k for SCAN. This indicates that further increasing train set size may not be sufficient to do well on these compositionality experiments." ], [ "To represent our logical form we use syntax of the description logic $\\mathcal {EL}$ BIBREF14, BIBREF15 with additional concept and role constructors. These constructors do not have description logic semantics; instead, their meaning is completely determined by the set of generation rules of the CFQ dataset.", "Let $A$ be a concept name, $C, C_1, C_2$ be concepts, $R, R_1, R_2$ be roles, and $v$ be a raw string. Then the following would be concepts:", "and the following would be roles:", "Note that our logical form does not have roles other than those in a form of RolePair($C_1$, $C_2$).", "New strings are generated by using a special function new_var($\\$S$). This function generates a unique string of the form ?x<N>, where N is a unique number, and assigns that string to variable $\\$S$. This string can later be used as a variable in a sparql constraint." ], [ "This section describes the format of each of the rule types we use for generating the CFQ dataset, in the form in which they appear in the rules index in Appendix SECREF20.", "General formatting conventions shared across all rule types:", "Variable names are prefixed by `$'. Example: $X.", "(Exception: In grammar rules, while variables standing for constants are prefixed by `$', variables standing for logical forms are prefixed by `_'. Example: _action.)", "Concept names are written in camel case. Example: FilmProducer.", "Names of functions that output logical forms (concepts, roles, or knowledge) are also written in camel case. Examples: DropDependency, BoundRolePairs, RolePair.", "Names of functions that output string literals or which are used for converting logical forms to sparql are written in lowercase with underscores. Examples: def2sparql, get_specializations, new_var.", "String literals are enclosed in single quotes. Example: 'ns:film:director'." ], [ "The CFQ grammar is a unification-based grammar of recursive rewriting rules used to generate pairs of strings and their corresponding logical form. For an introductory overview of unification-based grammars including several popular variations, see BIBREF38. 
The rules in the CFQ grammar follow a similar syntax in particular to that used in the Prolog extension GULP 3.1 BIBREF16, with the addition of support for disjunction, negation, absence, and default inheritance of features, and with minor differences in formatting described below.", "Properties shared between the CFQ grammar syntax and that of BIBREF16 include the following:", "Grammar rules are notated as variations of context-free phrase-structure rules of the form $T_{0} \\rightarrow T_{1}$ ... $T_{n}$, where each of the syntactic non-terminals and terminals $T_{0}$ ... $T_{n}$ are augmented with feature lists in parentheses.", "Each grammar rule can be interpreted as specifying how a feature structure (with logical form) that is unifiable with the lefthand side can be re-written to the sequence of features structures (with logical form) indicated on the righthand side.", "Features are represented as attribute-value pairs separated by a colon (i.e., $attribute$:$value$).", "Shared values in feature structures are represented through the use of variables.", "Specifically, in the rules index, CFQ grammar rules are described in the format", "$T_{0}(F_{0})[H]/L_{0} \\rightarrow T_{1}(F_{1})/L_{1}$ ... $T_{n}(F_{n})/L_{n}$", "where:", "Each $T_{i}$ is a syntactic category (syntactic nonterminal) or a string literal (syntactic terminal).", "Each $L_{i}$ for $i \\in [1, n]$ is either a variable representing a logical form or an empty string. In the case when $L_{i}$ is an empty string, we allow dropping the trailing slash from the $T_{i}(F_{i})/L_{i}$ expression, resulting in just $T_{i}(F_{i})$.", "$L_{0}$ is a logical form expressed in terms of $L_{1}...L_{n}$.", "Each $F_{i}$ is a comma-separated feature list of the form $(attribute_{1}$:$value_{1}$, ..., $attribute_{k}$:$value_{k})$. In the case where $F_{i}$ is empty, we allow dropping the parentheses from the $T_{i}(F_{i})$ expression, resulting in just $T_{i}$.", "$H$ is either an empty string or one of the variables $L_{i}$ for $i \\in [1, n]$, indicating that $F_{0}$ default inherits the features of $F_{i}$ (the syntactic “head”). In the case where $H$ is an empty string, we allow dropping the brackets from the $T_{0}(F_{0})[H]$ expression, resulting in just $T_{0}(F_{0})$.", "Note that while the above notation adopts the convention of splitting out the syntactic category and logical form from the feature list for visual prominence and to highlight the relationship to its context-free phrase-structure rule core, behaviorally it is identical to adding two more features to the feature list (we can call them, for example, $cat$ and $sem$) to represent the syntactic category and logical form.", "This means that, for example, the rule", "ACTIVE_VP[_head]/_head", "$\\rightarrow $ VP_SIMPLE(form:infinitive)/_head", "can be considered a notational shorthand for the following rule expressed purely using feature lists:", "(cat:ACTIVE_VP, sem:_head)[_head]", "$\\rightarrow $ (cat:VP_SIMPLE, sem:_head, form:infinitive)", "Disjunction of features. Similarly to BIBREF37, we allow disjunctive feature specifications, which we denote by separating the alternative values with a pipe (`$|$'). The feature specification (form:gerund|infinitive) would thus unify with either (form:gerund) or (form:infinitive), but not with (form:pastparticiple).", "Absence of features. We use a special atomic value _none_ to indicate that a given feature must either be absent or else explicitly set to the value _none_. 
The feature specification (subject:_none_, object:yes) would thus unify with either (object:yes) or (subject:_none_, object:yes), but not with (subject:yes, object:yes).", "Negation of features. Similarly to BIBREF37, we allow negated feature specifications, which we denote by prefixing the attribute with a minus sign (`-'). The feature specification (-form:gerund|infinitive) would thus unify with (form:pastparticiple) or (form:_none_), but not with (form:gerund) or (form:infinitive). In general, a feature specification of the form (-attribute:v$_{1}$|...|v$_{j}$) can be considered a notational shorthand for (attribute:v$_{j+1}$|...|v$_{k}$|_none_), where v$_{j+1}$|...|v$_{k}$ is an enumeration of all possible values of the feature attribute other than v$_{1}$|...|v$_{j}$.", "Default inheritance of features. If the lefthand side term is notated as $T_{0}(F_{0})[H]$, with $H$ equal to one of the variables $L_{i}$ for $i \\in [1, n]$, then this is interpreted as a notational shorthand for augmenting both $F_{0}$ and $F_{i}$ with an additional list of attribute-value pairs $(a_{1}$:$\\$v_{1}, ..., a_{k}$:$\\$v_{k})$, where $a_{1} ... a_{k}$ are all of the attributes listed in $F_{i}$ that were not originally listed in $F_{0}$.", "Unification of logical forms. As described in Appendix SECREF16, we represent logical forms using a variation of description logic, rather than using feature structures. In the context of unification, we consider logical forms to unify if and only they achieve structural concept equality after variable replacement (using the same variable replacements applied during unification of the corresponding feature lists), while taking into account the commutativity and associativity of $\\sqcap $. For example, under this criterion, the logical form GenderRel $\\sqcap $ $\\exists $RolePair(Predicate, Gender)._head would unify with either GenderRel $\\sqcap $ $\\exists $RolePair(Predicate, Gender).Male or with ($\\exists $RolePair(Predicate, Gender).Male) $\\sqcap $ GenderRel under a variable replacement mapping _head to Male, but would not unify with GenderRel $\\sqcap $ $\\exists $RolePair(Predicate, Gender).Male $\\sqcap $ $\\exists $RolePair(Predicate, GenderHaver).FilmProducer." ], [ "CFQ knowledge rules output expressions representing facts that are known to be true. They have no direct effect on text, logical forms, or sparql, but the generated knowledge can be used as preconditions to other rules. In the rules index, they are described in the following format:", "$\\rightarrow K$, where $K$ is knowledge that is output.", "By convention, we define the rule name of a knowledge rule to be simply the string representing the knowledge that the rule outputs, and we omit the rule name in the rules index for brevity.", "The union of those rules defines a knowledge base which we denote with $KB^{CFQ}$.", "All knowledge in CFQ is represented in the form $P(X_1,...,X_n)$, where $P$ is a predicate from the list below, and $X_1, ..., X_n$ are either logical forms or else raw strings. Knowledge rules do not use variable-based expressions.", "Supported knowledge predicates:", "BoundRolePairs", "ExclusiveRolePair", "FreebaseEntityMapping", "FreebasePropertyMapping", "FreebaseTypeMapping", "NonExclusiveRolePair", "Role" ], [ "CFQ inference rules transform logical forms and may be conditioned on knowledge. 
In the rules index, they are described in the following format:", "$K: L_0 \\rightarrow L_1$", "where $K$ represents a comma-separated list of knowledge preconditions, and $L_0$ and $L_1$ represent the input and output logical forms, all expressed in terms of a shared set of variables $v_1,...,v_m$.", "These rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms $l_1,...,l_m$ respectively, such that $r(K) \\subseteq KB^{CFQ}$, then we can apply the inference rule by rewriting $r(L_0)$ to $r(L_1)$." ], [ "CFQ resolution rules transform sparql expressions and may be conditioned on knowledge. They do not affect text or logical forms.", "In the rules index, they are described in the following format:", "$K: S_0 \\rightarrow S_1~...~S_n$", "where $K$ represents a comma-separated list of knowledge preconditions, $S_0$ is a variable-based expression and $S_1~...~S_n$ are either raw sparql strings or else expressions described in terms of the same variables used in $S_0$ and $K$.", "These rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms, strings, or expressions $l_1,...,l_m$ respectively, such that $r(K) \\subseteq KB^{CFQ}$, then we can apply the resolution rule by rewriting $r(S_0)$ to the sequence of terms $r(S_1)~...~r(S_n)$." ], [ "Our generation algorithm produces triples of the form $\\langle \\text{question, logical form, \\textsc {sparql}{} query} \\rangle $ in a mixed top-down and bottom-up fashion, with the final program of rule applications output alongside each triple in the form of a rule application DAG. The top-down portion of generation is responsible for efficiently searching for rules that can be applied to produce a meaningful example, while the bottom-up portion is responsible for actually applying the rules (i.e., performing the composition) and for producing the DAG.", "The generation process proceeds in two phases, each involving a top-down as well as bottom-up aspect. In the first phase, we apply grammar rules interleaved with inference rules to produce a pair of $\\langle \\text{question, logical form} \\rangle $. Specifically, we apply a recursive top-down algorithm which starts with the $S$ nonterminal and at every step performs a random search over the rules in the grammar which could produce the target nonterminal with accompanying feature structure. This top-down process proceeds until a candidate syntactic parse tree is attained whose leaves consist purely of syntactic terminals (i.e., string literals or entity placeholders). The grammar rules from this candidate parse tree are then applied in a bottom-up fashion beginning with the syntactic terminals to yield a tree of $\\langle \\text{text, logical form} \\rangle $ pairs. After each such bottom-up grammar rule application, we then greedily apply all possible inference rules on the resulting logical forms, applying an arbitrary deterministic ordering to the inference rules in cases where rules could be applied in multiple valid orderings. 
This ensures that inference rules and grammar rules are executed in an interleaved manner and each inference rule is applied at the earliest possible occasion.", "When a $\\langle \\text{question, logical form} \\rangle $ pair is generated for the $S$ nonterminal, we proceed to the second phase of the algorithm, in which resolution rules are applied to generate a corresponding sparql query to make up the third element of the desired $\\langle \\text{question, logical form, \\textsc {sparql}{} query} \\rangle $ triple. In practice, the bulk of the work in this phase is performed in a top-down fashion, in which resolution rules are recursively applied to transform a starting expression of the form get_specializations($L) (where $L represents the logical form output from the grammar phase) into a sequence of text literals representing the sparql query. This is followed nominally by a bottom-up process to construct the rule application DAG, yielding a tree of resolution rule applications of a similar form to the tree of interleaved grammar and inference rules output from the grammar phase. Note that while the grammar phase involves a large degree of random choice, the resolution phase proceeds much more deterministically, as the CFQ resolution rules have been designed such that any given question can yield only one possible sparql query, modulo commutativity and associativity of $\\sqcap $. In cases where resolution rules could be applied in multiple valid orderings, we again apply an arbitrary deterministic ordering to the resolution rules so as to yield a rule application DAG and $\\langle \\text{question, logical form, \\textsc {sparql}{} query} \\rangle $ triple that are as consistent as possible for any given question.", "Finally, to ease the task of tracking unique query patterns and to minimize the impact on the learning task of implementation details regarding choice of variable names or ordering of clauses, we normalize the final sparql query by alphabetically sorting the query clauses and re-numbering the variables to follow a standard increasing order.", "The resulting $\\langle \\text{question, logical form, \\textsc {sparql}{} query} \\rangle $ triple is then appended to the CFQ dataset." ], [ "In general, we do not explicitly track rules to represent the example-independent behaviors of the generation algorithm, as the universal applicability of these rules means that the complete behavior of the generator should be observable on any reasonably-sized train set. The same applies to certain core behaviors of the description logic $\\mathcal {EL}$, such as commutativity and associativity of $\\sqcap $, which we omit tracking as explicit rules due to their similar ubiquity of application.", "One example-independent rule, however, that we do explicitly track is the rule that describes the handover process between the grammar phase and the resolution phase – or in terms of the rule application DAG, the rule that joins the tree of interleaved grammar and inference rule applications with the tree of resolution rule applications. We call this rule JOIN_BY_LOGICAL_FORM. It is included in the rules list for every example in CFQ and appears as the head of the rule application tree for each example." ], [ "Note that conceptually a similar approach for combining the different rule types could be applied to the semantic parsing task.
The main difference would be that, instead of performing random search over the grammar, the semantic parsing task would need to find the set of rules which produce the desired input text." ], [ "For many domains, the set of examples generated by exhaustively combining rules is infinite or prohibitively large. For example, the CFQ grammar generates an infinite set of questions, and even when restricted to a reasonable complexity, the set is still too large for practical use. This means that we need to choose which subset of examples we want to include in our dataset. Given our goal of comprehensively measuring compositional generalization, we do this by:", "maximizing the overall diversity of rule combinations (allowing us to test as many rule combinations as possible)", "while using a uniform distribution from simple examples to increasingly more complex examples.", "We measure the diversity of rule combinations of a dataset using the empirical entropy over the frequency distribution of the subgraphs of the rule application DAGs, and we measure the complexity of an example using the number of rule applications used to generate it.", "For CFQ, we choose the following practical trade-off between these two criteria. We first generate a sufficiently large sample set by performing random rule applications. We then subsample from it to select a subset that maximizes the entropy of the subgraph distribution (while only taking into account subgraphs with a limited number of nodes for practicality). We use a greedy algorithm that incrementally assigns elements to the subsampled set while maximizing entropy at each step.", "The subsampling is initially limited to examples with the smallest complexity level and continues with increasingly larger complexity levels. We cap the maximum number of examples per level to achieve a uniform distribution across levels, and we limit the maximum complexity level such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity." ], [ "Figures FIGREF190 through FIGREF192 show the rule application DAG that was produced when generating the question “Who directed [entity]?”. They illustrate how grammar, inference, and knowledge rules are combined to generate a pair of text and logical form, and how resolution rules are used to generate the sparql query for the resulting logical form." ], [ "As discussed in Section SECREF3, nodes of this DAG represent rule applications while edges represent dependencies among the rules; i.e., an edge $A \\rightarrow B$ means that rule $B$ strictly depends on rule $A$ in the sense that the generator cannot apply rule $B$ before applying rule $A$. The DAG is normalized to ensure that a certain rule combination is represented using the same DAG across all the examples where it occurs. This is important for meaningfully comparing measures such as entropy and divergence across subgraphs of different examples.", "Specifically, together with adopting the measures described above to ensure that rules are applied in a deterministic order, we achieve the normalization of the DAG by only producing edges that represent “minimal dependencies”. This means that if a rule $A$ can be applied after rule $B$, but it could also be applied after rule $B^{\\prime }$ with $B \\rightarrow B^{\\prime }$ (i.e., $B^{\\prime }$ depends on $B$), we don't produce the edge $B^{\\prime } \\rightarrow A$." 
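As a rough illustration of the greedy, entropy-maximizing subsampling described above, here is a minimal Python sketch. It assumes each candidate example is summarized by a multiset of subgraph identifiers and that candidates are grouped by complexity level; the names (greedy_subsample, cap_per_level) and the brute-force entropy recomputation are illustrative assumptions rather than the authors' implementation, which presumably uses incremental updates and, as noted above, only considers subgraphs with a limited number of nodes.

```python
from collections import Counter
from math import log


def entropy(counts: Counter) -> float:
    """Empirical entropy of a frequency distribution over subgraph ids."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * log(c / total) for c in counts.values() if c > 0)


def greedy_subsample(candidates_by_level, cap_per_level):
    """candidates_by_level: {complexity_level: [Counter of subgraph ids, ...]}.

    Processes levels from simplest to most complex and, within each level,
    greedily adds the candidate whose subgraphs maximize the entropy of the
    accumulated subgraph distribution, up to the per-level cap.
    """
    selected = []            # (level, index-within-level) of chosen examples
    accumulated = Counter()  # subgraph frequencies of the selected set so far
    for level in sorted(candidates_by_level):
        pool = list(enumerate(candidates_by_level[level]))
        for _ in range(min(cap_per_level, len(pool))):
            best = max(pool, key=lambda item: entropy(accumulated + item[1]))
            pool.remove(best)
            accumulated += best[1]
            selected.append((level, best[0]))
    return selected
```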
], [ "For brevity, in the rule application DAG figures we have applied the following abbreviations for several lengthy concept names:", "Director = FilmDirector", "Directee = DirectedFilm", "Directing = DirectingAFilm", "SubjectAgentVerb = PredicateWithBoundRolePairs(RolePair( SubjectHaver, Subject), RolePair(Predicate, Agent))", "ObjectUndergoerVerb = PredicateWithBoundRolePairs(RolePair( ObjectHaver, Object), RolePair(Predicate, Undergoer))", "E1 = Entity('?E1')" ], [ "As described in Section SECREF16, during generation we initially generate a $\\langle \\text{question, logical form, \\textsc {sparql}{} query} \\rangle $ triple containing entity placeholders, and then replace those placeholders with specific entities as a post-processing step. Conceptually, one could construct a rule application DAG describing either the process by which the original $\\langle \\text{question, logical form, \\textsc {sparql}{} query} \\rangle $ triple with entity placeholders was generated, or alternatively the rules that would need to be applied if constructing the $\\langle \\text{question, logical form, \\textsc {sparql}{} query} \\rangle $ triple containing the final entity MIDs directly. Structurally, these two DAGs are identical, differing only in the definition of two entity-related rules described below. The rule application DAG shown in the accompanying figures is the version using entity placeholders.", "Versions of entity rules applicable when using entity placeholders:", "ENTITY=[ENTITY]_HSz7QrdGdsX:", "ENTITY(number:singular)/Entity(new_var(V1))", "$\\rightarrow $ '[entity]'", "ENTITY_MID:", "ent2sparql(Entity($X)) $\\rightarrow $ $X", "Versions of entity rules applicable when using actual entity MIDs:", "ENTITY=[ENTITY]_HSz7QrdGdsX:", "ENTITY(number:singular)/'m.'$X", "$\\rightarrow $ 'm.'$X", "ENTITY_MID:", "ent2sparql('m.'$X) $\\rightarrow $ 'ns:m.'$X" ], [ "Figure FIGREF203 shows an example of subgraphs in order to provide more details on the sampling and weighting of compounds. An example non-linear subgraph is highlighted by the red area, and two linear subgraphs are highlighted by the blue and the yellow areas, respectively.", "As described in Section SECREF6, given a large subset $\\mathbb {G}$ of subgraphs from the sample set as a whole, we calculate for each sample the weight of each subgraph $G \\in \\mathbb {G}$ that occurs in that sample as:", "where $\\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\\prec $ denotes the strict subgraph relation, and $P(G^{\\prime }| G)$ is the empirical probability of $G^{\\prime }$ occurring as a supergraph of $G$ over the full sample set.", "Intuitively, we are trying to estimate how interesting the subgraph $G$ is in the sample. First, for every occurrence $g$ of a subgraph $G$, we look for the supergraph $G^{\\prime }$ of $g$ that co-occurs most often with $G$ in the full sample set. The empirical probability of having $G^{\\prime }$ as a supergraph of $G$ determines how interesting the occurrence $g$ is – the higher this probability, the less interesting the occurrence. Thus we compute the weight of the occurrence as the complement of this maximum empirical probability. Then we take the weight of $G$ to be the weight of the most interesting occurrence $g$ of $G$ in the sample.", "E.g. in the extreme case that $G$ only occurs within the context $G^{\\prime }$, the weight of $G$ will be 0 in all samples. 
Conversely, if $G$ occurs in many different contexts, such that there is no single other subgraph $G^{\\prime }$ that subsumes it in many cases, then $w(G)$ will be high in all samples in which it occurs. This ensures that when calculating compound divergence based on a weighted subset of compounds, the most representative compounds are taken into account, while avoiding double-counting compounds whose frequency of occurrence is already largely explainable by the frequency of occurrence of one of its super-compounds.", "Returning to our example in Figure FIGREF203, suppose that $G$ represents the smallest linear subgraph (yellow area), and suppose that the weight of $G$ in this sample is 0.4. Then this means that there exists some other subgraph $G^{\\prime }$ (for instance, the linear subgraph highlighted by the blue area) that is a supergraph of $G$ in 60% of the occurrences of $G$ across the sample set." ], [ "Below is a selection of the rules used in the generation of CFQ. Specifically, this includes all rules involved in generating the question “Who directed [entity]?” (the same example illustrated in the rule application DAG in Appendix SECREF19). The format of the rules is discussed in Appendix SECREF17." ], [ "S=WHQ_F6E9egkQqxj:", "S/_x", "$\\rightarrow $ WHQ/_x", "WHQ=NPQ_INDIRECT_VP_INDIRECT_TXCca9URgVm:", "WHQ[_subject]/DropDependency(_subject) $\\sqcap $ DropDependency($\\exists $RolePair(Subject, SubjectHaver)._action)", "$\\rightarrow $ NPQ_INDIRECT(is_what:_none_, number:$n)/_subject", "VP_INDIRECT(form:past, number:$n, object:yes, subject:_none_)/_action", "NPQ_INDIRECT=WHO_5ptbPXXbuLZ:", "NPQ_INDIRECT(number:singular)/Person", "$\\rightarrow $ 'who'", "VP_INDIRECT=VP_INDIRECT_DP_ZJH4NhRkByc:", "VP_INDIRECT(object:yes)[_action]/_action $\\sqcap $ $\\exists $RolePair(ObjectHaver, Object)._object", "$\\rightarrow $ VP_INDIRECT(object:_none_, subject:_none_)/_action", "DP/_object", "VP_INDIRECT=ACTIVE_VP_RX51Tm7RXPe:", "VP_INDIRECT(object_type:$ut, subject_type:$at)[_head]/_head $\\sqcap $ PredicateWithBoundRolePairs(RolePair(SubjectHaver, Subject), RolePair(Predicate, Agent)) $\\sqcap $ PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))", "$\\rightarrow $ ACTIVE_VP(agent_type:$at, undergoer_type:$ut)/_head", "ACTIVE_VP=VP_SIMPLE_hJqAyjRUYJp:", "ACTIVE_VP(number:singular)[_head]/_head", "$\\rightarrow $ VP_SIMPLE(form:past)/_head", "VP_SIMPLE=VP_GHWf3fcVRZg:", "VP_SIMPLE(agent_type:person, undergoer_type:movie)[_head]/_head", "$\\rightarrow $ VP(concept_id:DirectingAFilm)/_head", "VP=DIRECTED_JkYzNbQyXtv:", "VP(concept_id:DirectingAFilm, form:past)/DirectingAFilm", "$\\rightarrow $ 'directed'", "DP=ENTITY_M6fSP5GvRaN:", "DP(is_proper_noun:yes, number:singular)[_head]/_head", "$\\rightarrow $ ENTITY/_head", "ENTITY=[ENTITY]_HSz7QrdGdsX:", "ENTITY(number:singular)/Entity(new_var(V1))", "$\\rightarrow $ '[entity]'", "... 
(211 grammar rules total)" ], [ "BOUND_ROLES_WITH_PREDICATE_OBJECT:", "BoundRolePairs($A, RolePair($R, $Q), RolePair($T, $S)):", "$\\exists $RolePair($Q, $R).($A $\\sqcap $ $B) $\\rightarrow $ $\\exists $RolePair($S, $T).($A $\\sqcap $ $B)", "BOUND_ROLES_WITH_PREDICATE_SUBJECT:", "BoundRolePairs($B, RolePair($Q, $R), RolePair($S, $T)):", "$B $\\sqcap $ $\\exists $RolePair($Q, $R).$A $\\rightarrow $ $B $\\sqcap $ $\\exists $RolePair($S, $T).$A", "IGNORE_BOUND_ROLE_PAIRS:", "$A $\\sqcap $ PredicateWithBoundRolePairs($X, $Y) $\\rightarrow $ $A", "IGNORE_DEPENDENCY_DROPPING:", "DropDependency($X) $\\rightarrow $ $X", "PREDICATE_UNREIFICATION:", "Role($Q, $P), Role($R, $P):", "$\\exists $RolePair($Q, Predicate).($P $\\sqcap $ $\\exists $RolePair(Predicate, $R).$A) $\\rightarrow $ $\\exists $RolePair($Q, $R).$A", "... (17 inference rules total)" ], [ "CONJUNCTION_WITHOUT_ENTITY:", "def2sparql($X $\\sqcap $ $Y, $V1) $\\rightarrow $ def2sparql($X, $V1) ' . ' def2sparql($Y, $V1)", "ENTITY_MID:", "ent2sparql(Entity($X)) $\\rightarrow $ $X", "GET_SPECIALIZATIONS:", "get_specializations($X) $\\rightarrow $ 'SELECT DISTINCT ' get_var($X, new_var($V0)) ' WHERE { ' def2sparql($X, get_var($X, $V0)) '}'", "GET_VAR_CONJUNCTION:", "get_var($X $\\sqcap $ $Y, $V1) $\\rightarrow $ shared_var(get_var($X, get_var($Y, $V1)), get_var($Y, get_var($X, $V1)))", "GET_VAR_RELATION:", "get_var($\\exists $$R.$X, $V1) $\\rightarrow $ $V1", "GET_VAR_TYPE:", "FreebaseTypeMapping($X, $F):", "get_var($X, $V1) $\\rightarrow $ $V1", "PROPERTY_MAPPING:", "FreebasePropertyMapping($R, $F):", "role2sparql($R) $\\rightarrow $ $F", "RELATION_MAPPING_WITHOUT_EXCLUSION:", "NonExclusiveRolePair($R):", "rel2sparql($X, $R, $Y) $\\rightarrow $ $X role2sparql($R) $Y", "RELATION_TO_ENTITY:", "def2sparql($\\exists $$R.$X, $V1) $\\rightarrow $ rel2sparql($V1, $R, ent2sparql($X))", "SHARED_VAR:", "shared_var($X, $X) $\\rightarrow $ $X", "SPECIALIZATION_OF_TYPE:", "def2sparql($X, $V1) $\\rightarrow $ $V1 ' a ' type2sparql($X)", "TYPE_MAPPING:", "FreebaseTypeMapping($X, $F):", "type2sparql($X) $\\rightarrow $ $F", "... (21 resolution rules total)" ], [ "$\\rightarrow $ BoundRolePairs(DirectingFilm, RolePair(Predicate, Agent), RolePair(Predicate, FilmDirector))", "$\\rightarrow $ BoundRolePairs(DirectingFilm, RolePair(Predicate, Undergoer), RolePair(Predicate, DirectedFilm))", "$\\rightarrow $ BoundRolePairs(PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer)), RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))", "$\\rightarrow $ BoundRolePairs(PredicateWithBoundRolePairs(RolePair(Subject, SubjectHaver), RolePair(Agent, Predicate)), RolePair(Subject, SubjectHaver), RolePair(Agent, Predicate))", "$\\rightarrow $ FreebasePropertyMapping(RolePair(FilmDirector, DirectedFilm), 'ns:film.director.film')", "$\\rightarrow $ FreebaseTypeMapping(Person, 'ns:people.person')", "$\\rightarrow $ NonExclusiveRolePair(FilmDirector, DirectedFilm)", "$\\rightarrow $ Role(DirectedFilm, DirectingFilm)", "$\\rightarrow $ Role(FilmDirector, DirectingFilm)", "", "... (194 knowledge rules total)" ] ] }
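As a hedged aside on the subgraph weighting discussed above (whose display equation is omitted in this rendering), the prose is consistent with a per-sample weight of the form $w(G) = \max_{g \in \text{occ}(G)} \big(1 - \max_{G': g \prec G'} P(G' \mid G)\big)$. A minimal sketch of that computation follows; the helper names (occurrences, supergraphs_of, p_supergraph) are assumptions used only for illustration.

```python
def compound_weight(G, occurrences, supergraphs_of, p_supergraph):
    """Per-sample weight of subgraph G.

    occurrences:    occurrences g of G in this one sample.
    supergraphs_of: g -> iterable of subgraphs G' with g a strict subgraph of G'.
    p_supergraph:   (G', G) -> empirical probability P(G' | G) over all samples.
    """
    best = 0.0
    for g in occurrences:
        p_max = max((p_supergraph(G_prime, G) for G_prime in supergraphs_of(g)),
                    default=0.0)
        # Weight of this occurrence: complement of its most likely supergraph.
        best = max(best, 1.0 - p_max)
    return best
```

Under this reading, a subgraph that only ever occurs inside one particular supergraph receives weight 0 in every sample, matching the extreme case described above.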
{ "question": [ "How strong is negative correlation between compound divergence and accuracy in performed experiment?", "What are results of comparison between novel method to other approaches for creating compositional generalization benchmarks?", "How authors justify that question answering dataset presented is realistic?", "What three machine architectures are analyzed?", "How big is new question answering dataset?", "What are other approaches into creating compositional generalization benchmarks?" ], "question_id": [ "d2b3f2178a177183b1aeb88784e48ff7e3e5070c", "d5ff8fc4d3996db2c96cb8af5a6d215484991e62", "d9c6493e1c3d8d429d4ca608f5acf29e4e7c4c9b", "0427ca83d6bf4ec113bc6fec484b2578714ae8ec", "f1c70baee0fd02b8ecb0af4b2daa5a56f3e9ccc3", "8db45a8217f6be30c31f9b9a3146bf267de68389" ], "nlp_background": [ "zero", "zero", "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ " between 0.81 and 0.88" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Note that the experiment based on output-length exhibits a worse accuracy than what we would expect based on its compositional divergence. One explanation for this is that the test distribution varies from the training distribution in other ways than compound divergence (namely in output length and a slightly higher atom divergence), which seems to make this split particularly difficult for the baseline architectures. To analyze the influence of the length ratio further, we compute the correlation between length ratios and accuracy of the baseline systems and compare it to the correlation between compound divergence and accuracy. We observe $R^2$ correlation coefficients between 0.11 and 0.22 for the input and output length ratios and between 0.81 and 0.88 for the compound divergence. This shows that despite the known phenomenon that the baseline systems struggle to generalize to longer lengths, the compound divergence seems to be a stronger explanation for the accuracy on different splits than the lengths ratios." ], "highlighted_evidence": [ "We observe $R^2$ correlation coefficients between 0.11 and 0.22 for the input and output length ratios and between 0.81 and 0.88 for the compound divergence. " ] } ], "annotation_id": [ "e901d0fbf87e192489157f04553b531ae611ff31" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "The MCD splits achieve a significantly higher compound divergence at a similar atom divergence when compared to the other experiments" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The MCD splits achieve a significantly higher compound divergence at a similar atom divergence when compared to the other experiments. The reason for this is that, instead of focusing on only one intuitive but rather arbitrary aspect of compositional generalization, the MCD splits aim to optimize divergence across all compounds directly." 
], "highlighted_evidence": [ "The MCD splits achieve a significantly higher compound divergence at a similar atom divergence when compared to the other experiments. " ] } ], "annotation_id": [ "c3a86fd975d3d0c4a9e803ab681c020d00e843ce" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets" ], "yes_no": null, "free_form_answer": "", "evidence": [ "CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data. (The data URL is not yet provided for anonymous review.) We include a list of MIDs such that their English names map unambiguously to a MID. Table TABREF17(a) summarizes the overall statistics of CFQ. Table TABREF17(b) uses numbers from BIBREF8 and from an analysis of WebQuestionsSP BIBREF17 and ComplexWebQuestions BIBREF18 to compare three key statistics of CFQ to other semantic parsing datasets (none of which provide annotations of their compositional structure). CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. Note that it would be easy to boost the raw number of questions in CFQ almost arbitrarily by repeating the same question pattern with varying entities, but we use at most one entity substitution per question pattern. Appendix SECREF10 contains more detailed analyses of the data distribution." ], "highlighted_evidence": [ "CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. " ] } ], "annotation_id": [ "3f40c0b625e69e4fa77d7dc219219e70dd069aaa" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "LSTM+attention", "Transformer ", "Universal Transformer" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use three encoder-decoder neural architectures as baselines: (1) LSTM+attention as an LSTM BIBREF19 with attention mechanism BIBREF20; (2) Transformer BIBREF21 and (3) Universal Transformer BIBREF22." ], "highlighted_evidence": [ "We use three encoder-decoder neural architectures as baselines: (1) LSTM+attention as an LSTM BIBREF19 with attention mechanism BIBREF20; (2) Transformer BIBREF21 and (3) Universal Transformer BIBREF22." ] } ], "annotation_id": [ "78896dceed3658b8da2f10e52f46a35e1b9a9179" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "239,357 English question-answer pairs" ], "yes_no": null, "free_form_answer": "", "evidence": [ "CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data. (The data URL is not yet provided for anonymous review.) We include a list of MIDs such that their English names map unambiguously to a MID. Table TABREF17(a) summarizes the overall statistics of CFQ. Table TABREF17(b) uses numbers from BIBREF8 and from an analysis of WebQuestionsSP BIBREF17 and ComplexWebQuestions BIBREF18 to compare three key statistics of CFQ to other semantic parsing datasets (none of which provide annotations of their compositional structure). CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. 
Note that it would be easy to boost the raw number of questions in CFQ almost arbitrarily by repeating the same question pattern with varying entities, but we use at most one entity substitution per question pattern. Appendix SECREF10 contains more detailed analyses of the data distribution." ], "highlighted_evidence": [ "CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data." ] } ], "annotation_id": [ "13ccd7460430645d55ded77fd3d46fbf4d1e0abb" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "random ", "Output length", "Input length", "Output pattern", "Input pattern" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For the goal of measuring compositional generalization as accurately as possible, it is particularly interesting to construct maximum compound divergence (MCD) splits, which aim for a maximum compound divergence at a low atom divergence (we use $\\mathcal {D}_A \\le 0.02$). Table TABREF18 compares the compound divergence $\\mathcal {D}_C$ and atom divergence $\\mathcal {D}_A$ of three MCD splits to a random split baseline as well as to several previously suggested compositionality experiments for both CFQ and the existing scan dataset (cf. Section SECREF30). The split methods (beyond random split) are the following:", "Output length: Variation of the setup described by BIBREF2 where the train set consists of examples with output (sparql query or action sequence) length $\\le \\hspace{-2.5pt} N$, while the test set consists of examples with output length $> \\hspace{-2.5pt} N$. For CFQ, we use $N = 7$ constraints. For scan, we use $N = 22$ actions.", "Input length: Variation of the above setup, in which the train set consists of examples with input (question or command) length $\\le N$, while test set consists of examples with input length $> N$. For CFQ, we use $N=19$ grammar leaves. For SCAN, we use $N=8$ tokens.", "Output pattern: Variation of setup described by BIBREF8, in which the split is based on randomly assigning clusters of examples sharing the same output (query or action sequence) pattern. Query patterns are determined by anonymizing entities and properties; action sequence patterns collapse primitive actions and directions.", "Input pattern: Variation of the previous setup in which the split is based on randomly assigning clusters of examples sharing the same input (question or command) pattern. Question patterns are determined by anonymizing entity and property names ; command patterns collapse verbs and the interchangeable pairs left/right, around/opposite, twice/thrice." ], "highlighted_evidence": [ "The split methods (beyond random split) are the following:\n\nOutput length: Variation of the setup described by BIBREF2 where the train set consists of examples with output (sparql query or action sequence) length $\\le \\hspace{-2.5pt} N$, while the test set consists of examples with output length $> \\hspace{-2.5pt} N$. For CFQ, we use $N = 7$ constraints. For scan, we use $N = 22$ actions.\n\nInput length: Variation of the above setup, in which the train set consists of examples with input (question or command) length $\\le N$, while test set consists of examples with input length $> N$. For CFQ, we use $N=19$ grammar leaves. 
For SCAN, we use $N=8$ tokens.\n\nOutput pattern: Variation of setup described by BIBREF8, in which the split is based on randomly assigning clusters of examples sharing the same output (query or action sequence) pattern. Query patterns are determined by anonymizing entities and properties; action sequence patterns collapse primitive actions and directions.\n\nInput pattern: Variation of the previous setup in which the split is based on randomly assigning clusters of examples sharing the same input (question or command) pattern. Question patterns are determined by anonymizing entity and property names ; command patterns collapse verbs and the interchangeable pairs left/right, around/opposite, twice/thrice." ] } ], "annotation_id": [ "c4b7c4f2bd001ebecda536db986a3fbfc1607980" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ] }
{ "caption": [ "Figure 1: Generating a natural language question together with its SPARQL query using four types of rules. One (of potentially many) intermediate logical forms is also shown.", "Table 1: Examples of generated questions at varying levels (L) of complexity.", "Table 2: (a) CFQ dataset statistics. (b) CFQ complexity statistics in comparison to other semantic parsing datasets. Datasets in the first section map text to SQL for various DBs, with numbers as reported by Finegan-Dollak et al. (2018). Datasets in the second section map text to SPARQL for Freebase. The number of query patterns is determined by anonymizing entities and properties.", "Table 3: Comparison of relevant measurements for different split methods on CFQ / SCAN.", "Figure 2: Accuracies of the three baseline systems on (a) CFQ and (b) SCAN vs. compound divergence for different split methods and for different target compound divergences.", "Table 4: Mean accuracies of the three baseline systems on CFQ and SCAN (in %).", "Table 5: Most frequent answers in CFQ.", "Figure 4: Ratio of examples in which a given rule appears, before (blue) and after (red) subsampling.", "Figure 5: Ratio of examples in which a given rule combination appears, before (blue) and after (red) subsampling.", "Figure 6: Frequency of atoms resp. compounds in the train vs. test set", "Table 6: Summary of hyperparameters that deviate from the defaults. Default hyperparameter sets are: lstm_bahdanau_attention_multi, transformer_base, and universal_transformer_tiny, respectively.", "Table 7: Examples with a given error (in %) of total test set size. See text for details.", "Table 8: Subqueries of “What sibling of M0 was M1’ s parent?” and their occurrences in training.", "Table 9: Subqueries of “Did a male film director edit and direct M0 and M1?” and their occurrences in training.", "Figure 7: Accuracy and divergence measurements for splits of SCAN as used in other work (see text for details). The numbers in brackets show the train / full data-set ratio, and the atom divergence.", "Figure 8: Accuracies of the three baseline systems on CFQ as a function of compound divergence at different training sizes.", "Figure 9: Accuracies of the three baseline systems on SCAN as a function of compound divergence at different training sizes.", "Figure 10: Accuracies of the three baseline systems on CFQ at different divergence levels as a function of training size.", "Figure 11: Accuracies of the three baseline systems on SCAN at different divergence levels as a function of training size.", "Figure 12: The normalized rule application DAG that was produced for “Who directed [entity]?” (grammar/inference rules portion, continued in Figures 13 and 14).", "Figure 13: The normalized rule application DAG that was produced for “Who directed [entity]?” (resolution rules portion, continued from Figure 12).", "Figure 14: The normalized rule application DAG that was produced for “Who directed [entity]?” (inference rules portion, continued from Figure 12).", "Figure 15: Examples subgraphs in the grammar/inference rules portion for “Who directed [entity]?” (from Figure 12): non-linear subgraph (red area), and two linear subgraphs (yellow and blue areas), of which one (yellow area) is a subgraph of the other (blue area)." 
], "file": [ "4-Figure1-1.png", "4-Table1-1.png", "5-Table2-1.png", "6-Table3-1.png", "8-Figure2-1.png", "8-Table4-1.png", "17-Table5-1.png", "18-Figure4-1.png", "18-Figure5-1.png", "20-Figure6-1.png", "21-Table6-1.png", "21-Table7-1.png", "22-Table8-1.png", "23-Table9-1.png", "24-Figure7-1.png", "26-Figure8-1.png", "26-Figure9-1.png", "26-Figure10-1.png", "26-Figure11-1.png", "32-Figure12-1.png", "33-Figure13-1.png", "34-Figure14-1.png", "36-Figure15-1.png" ] }
1901.03860
Prototypical Metric Transfer Learning for Continuous Speech Keyword Spotting With Limited Training Data
Continuous Speech Keyword Spotting (CSKS) is the problem of spotting keywords in recorded conversations, when a small number of instances of keywords are available in training data. Unlike the more common Keyword Spotting, where an algorithm needs to detect lone keywords or short phrases like "Alexa", "Cortana", "Hi Alexa!", "Whatsup Octavia?" etc. in speech, CSKS needs to filter out embedded words from a continuous flow of speech, i.e. spot "Anna" and "github" in "I know a developer named Anna who can look into this github issue." Apart from the issue of limited training data availability, CSKS is an extremely imbalanced classification problem. We address the limitations of simple keyword spotting baselines for both aforementioned challenges by using a novel combination of loss functions (Prototypical networks' loss and metric loss) and transfer learning. Our method improves F1 score by over 10%.
{ "section_name": [ "Introduction", "Related work", "Dataset", "Data Preprocessing", "Feature Engineering", "Deep Learning Architectures", "Experiments, Results and Discussion" ], "paragraphs": [ [ "Continuous Speech Keyword Spotting (CSKS) aims to detect embedded keywords in audio recordings. These spotted keyword frequencies can then be used to analyze theme of communication, creating temporal visualizations and word clouds BIBREF0 . Another use case is to detect domain specific keywords which ASR (Automatic Speech Recognition) systems trained on public data cannot detect. For example, to detect a TV model number “W884” being mentioned in a recording, we might not have a large number of training sentences containing the model number of a newly launched TV to finetune a speech recognition (ASR) algorithm. A trained CSKS algorithm can be used to quickly extract out all instances of such keywords.", "We train CSKS algorithms like other Keyword Spotting algorithms by classifying small fragments of audio in running speech. This requires the classifier model to have a formalized process to reject unseen instances (everything not a keyword, henceforth referred to as background) apart from ability to differentiate between classes (keywords). Another real world constraint that needs to be addressed while training such an algorithm is the availability of small amount of labeled keyword instances. We combine practices from fields of transfer learning, few-shot learning and metric learning to get better performance on this low training data imbalanced classification task.", "Our work involves :", "Our baselines, Honk( UID9 ), DeepSpeech-finetune( UID10 ), had comparatively both lower recall and precision. We noticed an improvement when fine tuning DeepSpeech model with prototypical loss (DeepSpeech-finetune-prototypical ( UID11 )). While analysing the false positives of this model, it was observed that the model gets confused between the keywords and it also wrongly classifies background noise as a keyword. To improve this, we combined prototypical loss with a metric loss to reject background (DeepSpeech-finetune-prototypical+metric( UID14 )). This model gave us the best results." ], [ "In the past, Hidden Markov Models (HMM) BIBREF6 , BIBREF7 , BIBREF8 have been used to solve the CSKS problem. But since the HMM techniques use Viterbi algorithms(computationally expensive) a faster approach is required.", "Owning to the popularity of deep learning, many recent works such as BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 have used deep learning techniques for many speech processing tasks. In tasks such as ASR, Hannun et al. BIBREF3 proposed a RNN based model to transcribe speech into text. Even for plain keyword spotting, BIBREF1 , BIBREF2 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 have proposed various deep learning architectures to solve the task. But to the best of our knowledge, no past work has deployed deep learning for spotting keywords in continuous speech.", "Recently, a lot of work is being done on training deep learning models with limited training data. Out of them, few-shot techniques as proposed by BIBREF18 , BIBREF4 have become really popular. Pons et al. BIBREF16 proposed a few-shot technique using prototypical networks BIBREF4 and transfer leaning BIBREF19 , BIBREF20 to solve a different audio task.", "We took inspiration from these works to design our experiments to solve the CSKS task." 
], [ "Our learning data, which was created in-house, has 20 keywords to be spotted about television models of a consumer electronics brand. It was collected by making 40 participants utter each keyword 3 times. Each participant recorded in normal ambient noise conditions. As a result, after collection of learning data we have 120 (3 x 40) instances of each of the 20 keywords. We split the learning data 80:20 into train and validation sets. Train/Validation split was done on speaker level, so as to make sure that all occurrences of a particular speaker is present only on either of two sets. For testing, we used 10 different 5 minutes long simulated conversational recordings of television salesmen and customers from a shopping mall in India. These recordings contain background noise (as is expected in a mall) and have different languages (Indians speak a mixture of English and Hindi). The CSKS algorithm trained on instances of keywords in learning data is supposed to detect keywords embedded in conversations of test set." ], [ "Our dataset consisted of keyword instances but the algorithm trained using this data needs to classify keywords in fragments of running conversations. To address this, we simulate the continuous speech scenario, both for keyword containing audio and background fragments, by using publicly available audio data which consisted of podcasts audio, songs, and audio narration files. For simulating fragments with keywords, we extract two random contiguous chunks from these publicly available audio files and insert the keyword either in the beginning, in the middle or in the end of the chunks, thus creating an audio segment of 2 seconds. Random 2 second segments taken from publicly available audio are used to simulate segments with no keywords(also referred to as background elsewhere in the paper). These artificially simulated audio chunks from train/validation set of pure keyword utterances were used to train/validate the model. Since the test data is quite noisy, we further used various kinds of techniques such as time-shift, pitch-shift and intensity variation to augment the data. Furthermore we used the same strategy as Tang et al. BIBREF2 of caching the data while training deep neural network on batches and artificially generating only 30% data which goes into a batch. By following these techniques, we could increase the data by many folds which not only helped the model to generalise better but also helped reduce the data preparation time during every epoch." ], [ "For all the experiments using Honk architecture, MFCC features were used. To extract these features, 20Hz/4kHz band pass filters was used to reduce the random noise. Mel-Frequency Cepstrum Coefficient (MFCC) of forty dimension were constructed and stacked using 20 milliseconds window size with 10 miliseconds overlap. For all the experiments using deep speech architecture, we have extracted spectrograms of audio files using 20 milliseconds window size with 10 milliseconds overlap and 480 nfft value." ], [ "Honk is a baseline Neural Network architecture we used to address the problem. Honk has shown good performance on normal Keyword Spotting and thus was our choice as the first baseline. The neural network is a Deep Residual Convolutional Neural Network BIBREF21 which has number of feature maps fixed for all residual blocks. The python code of the model was taken from the open source repository BIBREF22 . 
We tried changing the training strategies of the Honk architecture by the methods we will describe later for DeepSpeech, but this did not improve the accuracy.", "DeepSpeech-finetune fine-tunes the weights of the openly available DeepSpeech BIBREF3 model (the initial feature extraction layers and not the final ASR layer) for the CSKS task. The architecture consists of pretrained initial layers of DeepSpeech followed by a set of LSTM layers and a Fully Connected layer (initialized randomly) for classification. Pretrained layers taken from DeepSpeech are the initial 2D convolution layers and the GRU layers which process the output of the 2D convolutions. The output of the Fully Connected layer is fed into a softmax, and then a cross entropy loss for classification is used to train the algorithm. Please note that the finetune trains for 21 classes (20 keywords + 1 background) as in the aforementioned Honk model. The architecture can be seen in Fig. FIGREF6 .", "The next model we try is fine-tuning the DeepSpeech model but with a different loss function. This loss function is taken from BIBREF4 . Prototypical loss works by concentrating embeddings of all data points of a class around the class prototype. This is done by putting a softmax on the negative distances from different prototypes to determine the probability of belonging to the corresponding classes. The architecture FIGREF7 is the same as DeepSpeech-finetune, except that the output of the pre-final layer is taken as an embedding rather than applying a Fully Connected layer for classification. These embeddings are then used to calculate Euclidean distances between datapoints and prototypes, represented as INLINEFORM0 in formulae. The softmax over negative distances from prototypes is used to train the cross-entropy loss. During training, examples of each class are divided into support and query embeddings. The support embeddings are used to determine the prototype of each class. Equation EQREF12 shows the derivation of the prototype of the INLINEFORM1 class, where INLINEFORM2 is the neural network yielding the embedding and INLINEFORM3 is the set of support vectors for the class. When training the prototypical loss, the distances of query vectors from the prototype of the class they belong to are minimized and their distances from the prototypes of other classes are maximized. The negative distances from the prototypes of each class are passed into a softmax to get the probability of belonging to a class, as shown in equation EQREF13 . We see better results when we train the algorithm using the prototypical loss than with normal cross entropy. On qualitatively observing the output from DeepSpeech-finetune-prototypical, we see that mistakes involving confusion between keywords are far fewer compared to datapoints of the background class being classified as one of the keywords. We hypothesize that this might be due to treating the entire background data as one class. The variance of background is very high and treating it as one class (a unimodal class in the case of prototypes) might not be the best approach. To address this, we propose the next method where we use prototypes for classification within keywords and an additional metric loss component to keep distances of background datapoints from each prototype high. DISPLAYFORM0 DISPLAYFORM1 ", "We motivate the components of the loss function of this variant from the failures of the prototypical loss stated earlier. The architecture is the same as in FIGREF7 , but the loss function is different from DeepSpeech-finetune-prototypical.
While in DeepSpeech-finetune-prototypical we trained the prototype loss with 21 classes (20 keywords + 1 background), in DeepSpeech-finetune-prototypical+metric the prototype loss is trained only amongst the 20 keywords and an additional metric loss component inspired by BIBREF5 is added to the loss function. This metric loss component aims to bring datapoints of the same class closer together and push datapoints of different classes further apart. Datapoints belonging to the background are treated as different-class objects for all other datapoints in a batch. So for each object in a batch, we add a loss component like equation EQREF15 to the prototypical loss. INLINEFORM0 is all datapoints in the batch belonging to the same class as INLINEFORM1 and INLINEFORM2 is all datapoints belonging to different classes than INLINEFORM3 (including background). This architecture gets the best results. DISPLAYFORM0 " ], [ "While testing, the distance of a datapoint from all the prototypes is checked to determine its predicted class. Overlapping chunks of running audio are sent to the classifier to be classified for the presence of a keyword.", "Train set numbers corresponding to all the models are shown in Table TABREF16 . DeepSpeech-finetune-prototypical+metric clearly beats the baselines in terms of both precision and recall. Honk is a respectable baseline and gets the second best results after DeepSpeech-finetune-prototypical+metric; however, attempts to better Honk's performance using the prototype loss and metric loss did not work at all.", "Our method to combine prototypical loss with metric learning can be used for any classification problem which has a set of classes and a large background class, but its effectiveness needs to be tested on other datasets." ] ] }
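To make the combined objective described above concrete, here is a minimal PyTorch sketch. It assumes keyword labels are integers in [0, K) with every class present in the support set, and it uses a generic contrastive hinge as a stand-in for the metric component (the paper follows its cited formulation BIBREF5, whose exact form is not reproduced here); the function name and the margin value are likewise assumptions.

```python
import torch
import torch.nn.functional as F


def prototypical_metric_loss(support, support_labels, query, query_labels,
                             background, margin=1.0):
    """support/query: [Ns, d] / [Nq, d] keyword embeddings, labels in [0, K);
    background: [Nb, d] embeddings of non-keyword segments.
    Assumes every keyword class appears in the support set and the batch has
    at least one same-class and one different-class pair."""
    num_classes = int(support_labels.max().item()) + 1
    # Prototypes: mean of the support embeddings of each keyword class.
    prototypes = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(num_classes)])

    # Prototypical term: cross-entropy over softmax of negative distances.
    proto_loss = F.cross_entropy(-torch.cdist(query, prototypes), query_labels)

    # Metric term: pull same-class pairs together, push apart different-class
    # pairs; any pair involving a background point counts as different-class.
    points = torch.cat([query, background], dim=0)
    labels = torch.cat([query_labels,
                        torch.full((background.size(0),), -1,
                                   dtype=query_labels.dtype,
                                   device=query_labels.device)])
    pdist = torch.cdist(points, points)
    is_bg = labels < 0
    same = (labels[:, None] == labels[None, :]) & ~is_bg[:, None] & ~is_bg[None, :]
    same.fill_diagonal_(False)
    diff = (labels[:, None] != labels[None, :]) | is_bg[:, None] | is_bg[None, :]
    diff.fill_diagonal_(False)
    metric_loss = pdist[same].mean() + F.relu(margin - pdist[diff]).mean()

    return proto_loss + metric_loss
```

In this sketch the background embeddings contribute only to the metric term and are excluded from the prototypes, mirroring the description above of training the prototype loss amongst the 20 keywords only.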
{ "question": [ "What problem do they apply transfer learning to?", "What are the baselines?", "What languages are considered?" ], "question_id": [ "4e379d6d5f87554fabf6f7f7b6ed92d2025e7280", "518d0847b02b4f23a8f441faa38b935c9b892e1e", "8112d18681e266426cf7432ac4928b87f5ce8311" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "CSKS task" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We took inspiration from these works to design our experiments to solve the CSKS task." ], "highlighted_evidence": [ "We took inspiration from these works to design our experiments to solve the CSKS task." ] } ], "annotation_id": [ "1499b713a18921a7039b3a2d4d665193768e295d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Honk", "DeepSpeech-finetune" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our baselines, Honk( UID9 ), DeepSpeech-finetune( UID10 ), had comparatively both lower recall and precision. We noticed an improvement when fine tuning DeepSpeech model with prototypical loss (DeepSpeech-finetune-prototypical ( UID11 )). While analysing the false positives of this model, it was observed that the model gets confused between the keywords and it also wrongly classifies background noise as a keyword. To improve this, we combined prototypical loss with a metric loss to reject background (DeepSpeech-finetune-prototypical+metric( UID14 )). This model gave us the best results." ], "highlighted_evidence": [ "Our baselines, Honk( UID9 ), DeepSpeech-finetune( UID10 ), had comparatively both lower recall and precision." ] } ], "annotation_id": [ "9883764527bbd34e451ab4b14027fd0e9bdaaf5c" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "English", "Hindi" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our learning data, which was created in-house, has 20 keywords to be spotted about television models of a consumer electronics brand. It was collected by making 40 participants utter each keyword 3 times. Each participant recorded in normal ambient noise conditions. As a result, after collection of learning data we have 120 (3 x 40) instances of each of the 20 keywords. We split the learning data 80:20 into train and validation sets. Train/Validation split was done on speaker level, so as to make sure that all occurrences of a particular speaker is present only on either of two sets. For testing, we used 10 different 5 minutes long simulated conversational recordings of television salesmen and customers from a shopping mall in India. These recordings contain background noise (as is expected in a mall) and have different languages (Indians speak a mixture of English and Hindi). The CSKS algorithm trained on instances of keywords in learning data is supposed to detect keywords embedded in conversations of test set." ], "highlighted_evidence": [ "These recordings contain background noise (as is expected in a mall) and have different languages (Indians speak a mixture of English and Hindi)." 
] } ], "annotation_id": [ "fdc9d9b76432e95dae04236f6db890f02394e1ed" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Fig. 1. Architecture for DeepSpeech-finetune", "Table 1. Results of all experiments" ], "file": [ "3-Figure1-1.png", "4-Table1-1.png" ] }
1909.02480
FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow
Most sequence-to-sequence (seq2seq) models are autoregressive; they generate each token by conditioning on previously generated tokens. In contrast, non-autoregressive seq2seq models generate all tokens in one pass, which leads to increased efficiency through parallel processing on hardware such as GPUs. However, directly modeling the joint distribution of all tokens simultaneously is challenging, and even with increasingly complex model structures accuracy lags significantly behind autoregressive models. In this paper, we propose a simple, efficient, and effective model for non-autoregressive sequence generation using latent variable models. Specifically, we turn to generative flow, an elegant technique to model complex distributions using neural networks, and design several layers of flow tailored for modeling the conditional density of sequential latent variables. We evaluate this model on three neural machine translation (NMT) benchmark datasets, achieving comparable performance with state-of-the-art non-autoregressive NMT models and almost constant decoding time w.r.t the sequence length.
{ "section_name": [ "Introduction", "Background", "Background ::: Flow-based Generative Models", "Background ::: Variational Inference and Training", "FlowSeq", "FlowSeq ::: Source Encoder", "FlowSeq ::: Posterior ::: Generation of Latent Variables.", "FlowSeq ::: Posterior ::: Zero initialization.", "FlowSeq ::: Posterior ::: Token Dropout.", "FlowSeq ::: Decoder", "FlowSeq ::: Flow Architecture for Prior", "FlowSeq ::: Flow Architecture for Prior ::: Actnorm.", "FlowSeq ::: Flow Architecture for Prior ::: Invertible Multi-head Linear Layers.", "FlowSeq ::: Flow Architecture for Prior ::: Affine Coupling Layers.", "FlowSeq ::: Flow Architecture for Prior ::: Multi-scale Architecture.", "FlowSeq ::: Predicting Target Sequence Length", "FlowSeq ::: Decoding Process", "FlowSeq ::: Decoding Process ::: Argmax Decoding.", "FlowSeq ::: Decoding Process ::: Noisy Parallel Decoding (NPD).", "FlowSeq ::: Decoding Process ::: Importance Weighted Decoding (IWD)", "FlowSeq ::: Discussion", "Experiments ::: Experimental Setups ::: Translation Datasets", "Experiments ::: Experimental Setups ::: Modules and Hyperparameters", "Experiments ::: Experimental Setups ::: Optimization", "Experiments ::: Experimental Setups ::: Knowledge Distillation", "Experiments ::: Main Results", "Experiments ::: Analysis on Decoding Speed", "Experiments ::: Analysis on Decoding Speed ::: How does batch size affect the decoding speed?", "Experiments ::: Analysis on Decoding Speed ::: How does sentence length affect the decoding speed?", "Experiments ::: Analysis of Rescoring Candidates", "Experiments ::: Analysis of Translation Diversity", "Conclusion", "Acknowledgments", "Flow Layers ::: ActNorm", "Flow Layers ::: Invertible Linear", "Flow Layers ::: Affine Coupling", "Analysis of training dynamics", "Analysis of Translation Results", "Results of Translation Diversity" ], "paragraphs": [ [ "Neural sequence-to-sequence (seq2seq) models BIBREF0, BIBREF1, BIBREF2, BIBREF3 generate an output sequence $\\mathbf {y} = \\lbrace y_1, \\ldots , y_T\\rbrace $ given an input sequence $\\mathbf {x} = \\lbrace x_1, \\ldots , x_{T^{\\prime }}\\rbrace $ using conditional probabilities $P_\\theta (\\mathbf {y}|\\mathbf {x})$ predicted by neural networks (parameterized by $\\theta $).", "Most seq2seq models are autoregressive, meaning that they factorize the joint probability of the output sequence given the input sequence $P_\\theta (\\mathbf {y}|\\mathbf {x})$ into the product of probabilities over the next token in the sequence given the input sequence and previously generated tokens:", "Each factor, $P_\\theta (y_{t} | y_{<t}, \\mathbf {x})$, can be implemented by function approximators such as RNNs BIBREF0 and Transformers BIBREF3. This factorization takes the complicated problem of joint estimation over an exponentially large output space of outputs $\\mathbf {y}$, and turns it into a sequence of tractable multi-class classification problems predicting $y_t$ given the previous words, allowing for simple maximum log-likelihood training. However, this assumption of left-to-right factorization may be sub-optimal from a modeling perspective BIBREF4, BIBREF5, and generation of outputs must be done through a linear left-to-right pass through the output tokens using beam search, which is not easily parallelizable on hardware such as GPUs.", "Recently, there has been work on non-autoregressive sequence generation for neural machine translation (NMT; BIBREF6, BIBREF7, BIBREF8) and language modeling BIBREF9. 
Non-autoregressive models attempt to model the joint distribution $P_\\theta (\\mathbf {y}|\\mathbf {x})$ directly, decoupling the dependencies of decoding history during generation. A naïve solution is to assume that each token of the target sequence is independent given the input:", "Unfortunately, the performance of this simple model falls far behind autoregressive models, as seq2seq tasks usually do have strong conditional dependencies between output variables BIBREF6. This problem can be mitigated by introducing a latent variable $\\mathbf {z}$ to model these conditional dependencies:", "where $p_{\\theta }(\\mathbf {z}|\\mathbf {x})$ is the prior distribution over latent $\\mathbf {z}$ and $P_{\\theta }(\\mathbf {y}|\\mathbf {z}, \\mathbf {x})$ is the “generative” distribution (a.k.a decoder). Non-autoregressive generation can be achieved by the following independence assumption in the decoding process:", "BIBREF6 proposed a $\\mathbf {z}$ representing fertility scores specifying the number of output words each input word generates, significantly improving the performance over Eq. (DISPLAY_FORM4). But the performance still falls behind state-of-the-art autoregressive models due to the limited expressiveness of fertility to model the interdependence between words in $\\textbf {y}$.", "In this paper, we propose a simple, effective, and efficient model, FlowSeq, which models expressive prior distribution $p_{\\theta }(\\mathbf {z}|\\mathbf {x})$ using a powerful mathematical framework called generative flow BIBREF10. This framework can elegantly model complex distributions, and has obtained remarkable success in modeling continuous data such as images and speech through efficient density estimation and sampling BIBREF11, BIBREF12, BIBREF13. Based on this, we posit that generative flow also has potential to introduce more meaningful latent variables $\\mathbf {z}$ in the non-autoregressive generation in Eq. (DISPLAY_FORM5).", "FlowSeq is a flow-based sequence-to-sequence model, which is (to our knowledge) the first non-autoregressive seq2seq model utilizing generative flows. It allows for efficient parallel decoding while modeling the joint distribution of the output sequence. Experimentally, on three benchmark datasets for machine translation – WMT2014, WMT2016 and IWSLT-2014, FlowSeq achieves comparable performance with state-of-the-art non-autoregressive models, and almost constant decoding time w.r.t. the sequence length compared to a typical left-to-right Transformer model, which is super-linear." ], [ "As noted above, incorporating expressive latent variables $\\mathbf {z}$ is essential to decouple the dependencies between tokens in the target sequence in non-autoregressive models. However, in order to model all of the complexities of sequence generation to the point that we can read off all of the words in the output in an independent fashion (as in Eq. (DISPLAY_FORM6)), the prior distribution $p_{\\theta }(\\mathbf {z}|\\mathbf {x})$ will necessarily be quite complex. In this section, we describe generative flows BIBREF10, an effective method for arbitrary modeling of complicated distributions, before describing how we apply them to sequence-to-sequence generation in §SECREF3." ], [ "Put simply, flow-based generative models work by transforming a simple distribution (e.g. a simple Gaussian) into a complex one (e.g. 
the complex prior distribution over $\\mathbf {z}$ that we want to model) through a chain of invertible transformations.", "Formally, a set of latent variables $\\mathbf {\\upsilon } \\in \\Upsilon $ are introduced with a simple prior distribution $p_{\\Upsilon }(\\upsilon )$. We then define a bijection function $f: \\mathcal {Z} \\rightarrow \\Upsilon $ (with $g = f^{-1}$), whereby we can define a generative process over variables $\\mathbf {z}$:", "An important insight behind flow-based models is that given this bijection function, the change of variable formula defines the model distribution on $\\mathbf {z}\\in \\mathcal {Z}$ by:", "Here $\\frac{\\partial f_{\\theta }(\\mathbf {z})}{\\partial \\mathbf {z}}$ is the Jacobian matrix of $f_{\\theta }$ at $\\mathbf {z}$.", "Eq. (DISPLAY_FORM9) provides a way to calculate the (complex) density of $\\mathbf {z}$ by calculating the (simple) density of $\\upsilon $ and the Jacobian of the transformation from $\\mathbf {z}$ to $\\upsilon $. For efficiency purposes, flow-based models generally use certain types of transformations $f_{\\theta }$ where both the inverse functions $g_{\\theta }$ and the Jacobian determinants are tractable to compute. A stacked sequence of such invertible transformations is also called a (normalizing) flow BIBREF10:", "where $f = f_1 \\circ f_2 \\circ \\cdots \\circ f_K$ is a flow of $K$ transformations (omitting $\\theta $s for brevity)." ], [ "In the context of maximal likelihood estimation (MLE), we wish to minimize the negative log-likelihood of the parameters:", "where $D=\\lbrace (\\mathbf {x}^i, \\mathbf {y}^i)\\rbrace _{i=1}^{N}$ is the set of training data. However, the likelihood $P_{\\theta }(\\mathbf {y}| \\mathbf {x})$ after marginalizing out latent variables $\\mathbf {z}$ (LHS in Eq. (DISPLAY_FORM5)) is intractable to compute or differentiate directly. Variational inference BIBREF14 provides a solution by introducing a parametric inference model $q_{\\phi }(\\mathbf {z}|\\mathbf {y}, \\mathbf {x})$ (a.k.a posterior) which is then used to approximate this integral by sampling individual examples of $\\mathbf {z}$. These models then optimize the evidence lower bound (ELBO), which considers both the “reconstruction error” $\\log P_\\theta (\\mathbf {y}|\\mathbf {z},\\mathbf {x})$ and KL-divergence between the posterior and the prior:", "Both inference model $\\phi $ and decoder $\\theta $ parameters are optimized according to this objective." ], [ "We first overview FlowSeq's architecture (shown in Figure FIGREF13) and training process here before detailing each component in following sections. Similarly to classic seq2seq models, at both training and test time FlowSeq first reads the whole input sequence $\\mathbf {x}$ and calculates a vector for each word in the sequence, the source encoding.", "At training time, FlowSeq's parameters are learned using a variational training paradigm overviewed in §SECREF10. First, we draw samples of latent codes $\\mathbf {z}$ from the current posterior $q_{\\phi } (\\mathbf {z}|\\mathbf {y}, \\mathbf {x})$. Next, we feed $\\mathbf {z}$ together with source encodings into the decoder network and the prior flow to compute the probabilities of $P_{\\theta }(\\mathbf {y}|\\mathbf {z}, \\mathbf {x})$ and $p_{\\theta }(\\mathbf {z}|\\mathbf {x})$ for optimizing the ELBO (Eq. (DISPLAY_FORM12)).", "At test time, generation is performed by first sampling a latent code $\\mathbf {z}$ from the prior flow by executing the generative process defined in Eq. (DISPLAY_FORM8). 
In this step, the source encodings produced from the encoder are used as conditional inputs. Then the decoder receives both the sampled latent code $\mathbf {z}$ and the source encoder outputs to generate the target sequence $\mathbf {y}$ from $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$." ], [ "The source encoder encodes the source sequences into hidden representations, which are used in computing attention when generating latent variables in the posterior network and prior network as well as the cross-attention with the decoder. Any standard neural sequence model can be used as its encoder, including RNNs BIBREF0 or Transformers BIBREF3." ], [ "The latent variables $\mathbf {z}$ are represented as a sequence of continuous random vectors $\mathbf {z}=\lbrace \mathbf {z}_1, \ldots , \mathbf {z}_T\rbrace $ with the same length as the target sequence $\mathbf {y}$. Each $\mathbf {z}_t$ is a $d_{\mathrm {z}}$-dimensional vector, where $d_{\mathrm {z}}$ is the dimension of the latent space. The posterior distribution $q_{\phi } (\mathbf {z}|\mathbf {y}, \mathbf {x})$ models each $\mathbf {z}_t$ as a diagonal Gaussian with learned mean and variance:", "where $\mu _{t}(\cdot )$ and $\sigma _{t}(\cdot )$ are neural networks such as RNNs or Transformers." ], [ "While we perform standard random initialization for most layers of the network, we initialize the last linear transforms that generate the $\mu $ and $\log \sigma ^2$ values with zeros. This ensures that the posterior distribution starts as a simple normal distribution, which we found helps train very deep generative flows more stably." ], [ "The motivation for introducing the latent variable $\mathbf {z}$ into the model is to model the uncertainty in the generative process. Thus, it is preferable that $\mathbf {z}$ capture contextual interdependence between tokens in $\mathbf {y}$. However, there is an obvious local optimum where the posterior network generates a latent vector $\mathbf {z}_t$ that only encodes the information about the corresponding target token $y_t$, and the decoder simply generates the “correct” token at each step $t$ with $\mathbf {z}_t$ as input. In this case, FlowSeq reduces to the baseline model in Eq. (DISPLAY_FORM4). To escape this undesired local optimum, we apply token-level dropout to randomly drop an entire token when calculating the posterior, to ensure the model also has to learn how to use contextual information. This technique is similar to the “masked language model” in previous studies BIBREF15, BIBREF16, BIBREF17.", "" ], [ "As the decoder, we take the latent sequence $\mathbf {z}$ as input, run it through several layers of a neural sequence model such as a Transformer, then directly predict the output tokens in $\mathbf {y}$ individually and independently. Notably, unlike standard seq2seq decoders, we do not perform causal masking to prevent attending to future tokens, making the model fully non-autoregressive." ], [ "The flow architecture is based on Glow BIBREF11. It consists of a series of steps of flow, combined in a multi-scale architecture (see Figure FIGREF13). Each step of flow consists of three types of elementary flows – actnorm, invertible multi-head linear, and coupling. Note that all three functions are invertible and conducive to calculation of log determinants (details in Appendix SECREF6)." 

], [ "The activation normalization layer (actnorm; BIBREF11) is an alternative for batch normalization BIBREF18, that has mainly been used in the context of image data to alleviate problems in model training. Actnorm performs an affine transformation of the activations using a scale and bias parameter per feature for sequences:", "Both $\\mathbf {z}$ and $\\mathbf {z}^{\\prime }$ are tensors of shape $[T\\times d_{\\mathrm {z}}]$ with time dimension $t$ and feature dimension $d_{\\mathrm {z}}$. The parameters are initialized such that over each feature $\\mathbf {z}_{t}^{\\prime }$ has zero mean and unit variance given an initial mini-batch of data." ], [ "To incorporate general permutations of variables along the feature dimension to ensure that each dimension can affect every other ones after a sufficient number of steps of flow, BIBREF11 proposed a trainable invertible $1\\times 1$ convolution layer for 2D images. It is straightforward to apply similar transformations to sequential data:", "", "where $\\mathbf {W}$ is the weight matrix of shape $[d_{\\mathrm {z}} \\times d_{\\mathrm {z}}]$. The log-determinant of this transformation is:", "", "The cost of computing $\\mathrm {det}(\\mathbf {W})$ is $O(d_{\\mathrm {z}}^3)$.", "Unfortunately, $d_{\\mathrm {z}}$ in Seq2Seq generation is commonly large, e.g. 512, significantly slowing down the model for computing $\\mathrm {det}(\\mathbf {W})$. To apply this to sequence generation, we propose a multi-head invertible linear layer, which first splits each $d_{\\mathrm {z}}$-dimensional feature vector into $h$ heads with dimension $d_h = d_{\\mathrm {z}}/h$. Then the linear transformation in (DISPLAY_FORM26) is applied to each head, with $d_h\\times d_h$ weight matrix $\\mathbf {W}$, significantly reducing the dimension. For splitting of heads, one step of flow contains one linear layer with either row-major or column-major splitting format, and these steps with different linear layers are composed in an alternating pattern." ], [ "To model interdependence across time steps, we use affine coupling layers BIBREF19:", "where $\\mathrm {s}(\\mathbf {z}_a, \\mathbf {x})$ and $\\mathrm {b}(\\mathbf {z}_a, \\mathbf {x})$ are outputs of two neural networks with $\\mathbf {z}_a$ and $\\mathbf {x}$ as input. These are shown in Figure FIGREF21 (c). In experiments, we implement $\\mathrm {s}(\\cdot )$ and $\\mathrm {b}(\\cdot )$ with one Transformer decoder layer BIBREF3: multi-head self-attention over $\\mathbf {z}_a$, followed by multi-head inter-attention over $\\mathbf {x}$, followed by a position-wise feed-forward network. The input $\\mathbf {z}_a$ is fed into this layer in one pass, without causal masking.", "As in BIBREF19, the $\\mathrm {split}()$ function splits $\\mathbf {z}$ the input tensor into two halves, while the $\\mathrm {concat}$ operation performs the corresponding reverse concatenation operation. In our architecture, three types of split functions are used, based on the split dimension and pattern. Figure FIGREF21 (b) illustrates the three splitting types. The first type of split groups $\\mathbf {z}$ along the time dimension on alternate indices. In this case, FlowSeq mainly models the interactions between time-steps. The second and third types of splits perform on the feature dimension, with continuous and alternate patterns, respectively. For each type of split, we alternate $\\mathbf {z}_a$ and $\\mathbf {z}_b$ to increase the flexibility of the split function. 
Different types of affine coupling layers alternate in the flow, similar to the linear layers." ], [ "We follow BIBREF19 in implementing a multi-scale architecture using the squeezing operation on the feature dimension, which has been demonstrated helpful for training deep flows. Formally, each scale is a combination of several steps of the flow (see Figure FIGREF21 (a)). After each scale, the model drops half of the dimensions with the third type of split in Figure FIGREF21 (b) to reduce computational and memory cost, outputting the tensor with shape $[T \\times \\frac{d}{2}]$. Then the squeezing operation transforms the $T \\times \\frac{d}{2}$ tensor into an $\\frac{T}{2} \\times d$ one as the input of the next scale. We pad each sentence with EOS tokens to ensure $T$ is divisible by 2. The right component of Figure FIGREF13 illustrates the multi-scale architecture." ], [ "In autoregressive seq2seq models, it is natural to determine the length of the sequence dynamically by simply predicting a special EOS token. However, for FlowSeq to predict the entire sequence in parallel, it needs to know its length in advance to generate the latent sequence $\\mathbf {z}$. Instead of predicting the absolute length of the target sequence, we predict the length difference between source and target sequences using a classifier with a range of $[-20, 20]$. Numbers in this range are predicted by max-pooling the source encodings into a single vector, running this through a linear layer, and taking a softmax. This classifier is learned jointly with the rest of the model.", "" ], [ "At inference time, the model needs to identify the sequence with the highest conditional probability by marginalizing over all possible latent variables (see Eq. (DISPLAY_FORM5)), which is intractable in practice. We propose three approximating decoding algorithms to reduce the search space.", "" ], [ "Following BIBREF6, one simple and effective method is to select the best sequence by choosing the highest-probability latent sequence $\\mathbf {z}$:", "where identifying $\\mathbf {y}^*$ only requires independently maximizing the local probability for each output position (see Eq. DISPLAY_FORM6)." ], [ "A more accurate approximation of decoding, proposed in BIBREF6, is to draw samples from the latent space and compute the best output for each latent sequence. Then, a pre-trained autoregressive model is adopted to rank these sequences. In FlowSeq, different candidates can be generated by sampling different target lengths or different samples from the prior, and both of the strategies can be batched via masks during decoding. In our experiments, we first select the top $l$ length candidates from the length predictor in §SECREF29. Then, for each length candidate we use $r$ random samples from the prior network to generate output sequences, yielding a total of $l\\times r$ candidates." ], [ "The third approximating method is based on the lower bound of importance weighted estimation BIBREF20. Similarly to NPD, IWD first draws samples from the latent space and computes the best output for each latent sequence. Then, IWD ranks these candidate sequences with $K$ importance samples:", "IWD does not rely on a separate pre-trained model, though it significantly slows down the decoding speed. The detailed comparison of these three decoding methods is provided in §SECREF45." 
], [ "Different from the architecture proposed in BIBREF9, the architecture of FlowSeq is not using any autoregressive flow BIBREF21, BIBREF22, yielding a truly non-autoregressive model with efficient generation. Note that the FlowSeq remains non-autoregressive even if we use an RNN in the architecture because RNN is only used to encode a complete sequence of codes and all the input tokens can be fed into the RNN in parallel. This makes it possible to use highly-optimized implementations of RNNs such as those provided by cuDNN. Thus while RNNs do experience some drop in speed, it is less extreme than that experienced when using autoregressive models." ], [ "We evaluate FlowSeq on three machine translation benchmark datasets: WMT2014 DE-EN (around 4.5M sentence pairs), WMT2016 RO-EN (around 610K sentence pairs) and a smaller dataset IWSLT2014 DE-EN (around 150K sentence pairs). We use scripts from fairseq BIBREF23 to preprocess WMT2014 and IWSLT2014, where the preprocessing steps follow BIBREF3 for WMT2014. We use the data provided in BIBREF7 for WMT2016. For both WMT datasets, the source and target languages share the same set of BPE embeddings while for IWSLT2014 we use separate embeddings. During training, we filter out sentences longer than 80 for WMT dataset and 60 for IWSLT, respectively." ], [ "We implement the encoder, decoder and posterior networks with standard (unmasked) Transformer layers BIBREF3. For WMT datasets, the encoder consists of 6 layers, and the decoder and posterior are composed of 4 layers, and 8 attention heads. and for IWSLT, the encoder has 5 layers, and decoder and posterior have 3 layers, and 4 attention heads. The prior flow consists of 3 scales with the number of steps $[48, 48, 16]$ from bottom to top. To dissect the impact of model dimension on translation quality and speed, we perform experiments on two versions of FlowSeq with $d_{model}/d_{hidden} = 256/512$ (base) and $d_{model}/d_{hidden} = 512/1024$ (large). More model details are provided in Appendix SECREF7." ], [ "Parameter optimization is performed with the Adam optimizer BIBREF24 with $\\beta =(0.9, 0.999)$ and $\\epsilon =1e-6$. Each mini-batch consist of 2048 sentences. The learning rate is initialized to $5e-4$, and exponentially decays with rate $0.999995$. The gradient clipping cutoff is $1.0$. For all the FlowSeq models, we apply $0.1$ label smoothing and averaged the 5 best checkpoints to create the final model.", "At the beginning of training, the posterior network is randomly initialized, producing noisy supervision to the prior. To mitigate this issue, we first set the weight of the $\\mathrm {KL}$ term in ELBO to zero for 30,000 updates to train the encoder, decoder and posterior networks. Then the $\\mathrm {KL}$ weight linearly increases to one for another 10,000 updates, which we found essential to accelerate training and achieve stable performance." ], [ "Previous work on non-autoregressive generation BIBREF6, BIBREF8 has used translations produced by a pre-trained autoregressive NMT model as the training data, noting that this can significantly improve the performance. We analyze the impact of distillation in § SECREF45." 
], [ "We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8.", "Table TABREF39 provides the BLEU scores of FlowSeq with argmax decoding, together with baselines with purely non-autoregressive decoding methods that generate output sequence in one parallel pass. The first block lists results of models trained on raw data, while the second block are results using knowledge distillation. Without using knowledge distillation, FlowSeq base model achieves significant improvement (more than 9 BLEU points) over CMLM-base and LV NAR. It demonstrates the effectiveness of FlowSeq on modeling the complex interdependence in target languages.", "Towards the effect of knowledge distillation, we can mainly obtain two observations: i) Similar to the findings in previous work, knowledge distillation still benefits the translation quality of FlowSeq. ii) Compared to previous models, the benefit of knowledge distillation on FlowSeq is less significant, yielding less than 3 BLEU improvement on WMT2014 DE-EN corpus, and even no improvement on WMT2016 RO-EN corpus. The reason might be that FlowSeq does not rely much on knowledge distillation to alleviate the multi-modality problem.", "Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring. The first block in Table TABREF40 includes the baseline results from autoregressive Transformer. For the sampling procedure in IWD and NPD, we sampled from a reduced-temperature model BIBREF11 to obtain high-quality samples. We vary the temperature within $\\lbrace 0.1, 0.2, 0.3, 0.4, 0.5, 1.0\\rbrace $ and select the best temperature based on the performance on development sets. The analysis of the impact of sampling temperature and other hyper-parameters on samples is in § SECREF50. For FlowSeq, NPD obtains better results than IWD, showing that FlowSeq still falls behind auto-regressive Transformer on model data distributions. Comparing with CMLM BIBREF8 with 10 iterations of refinement, which is a contemporaneous work that achieves state-of-the-art translation performance, FlowSeq obtains competitive performance on both WMT2014 and WMT2016 corpora, with only slight degradation in translation quality. Leveraging iterative refinement to further improve the performance of FlowSeq has been left to future work." ], [ "In this section, we compare the decoding speed (measured in average time in seconds required to decode one sentence) of FlowSeq at test time with that of the autoregressive Transformer model. We use the test set of WMT14 EN-DE for evaluation and all experiments are conducted on a single NVIDIA TITAN X GPU." ], [ "First, we investigate how different decoding batch size can affect the decoding speed. We vary the decoding batch size within $\\lbrace 1, 4, 8, 32, 64, 128\\rbrace $. Figure. FIGREF44 shows that for both FlowSeq and Transformer decoding is faster when using a larger batch size. However, FlowSeq has much larger gains in the decoding speed w.r.t. the increase in batch size, gaining a speed up of 594% of base model and 403% of large model when using a batch size of 128. We hypothesize that this is because the operations in FlowSeq are more friendly to batching while the Transformer model with beam search at test time is less efficient in benefiting from batching." 
], [ "Next, we examine if sentence length is a major factor affecting the decoding speed. We bucket the test data by the target sentence length. From Fig. FIGREF44, we can see that as the sentence length increases, FlowSeq achieves almost constant decoding time while Transformer has a linearly increasing decoding time. The relative decoding speed up of FlowSeq versus Transformer linearly increases as the sequence length increases. The potential of decoding long sequences with constant time is an attractive property of FlowSeq." ], [ "In Fig. FIGREF49, we analyze how different sampling hyperparameters affect the performance of rescoring. First, we observe that the number of samples $r$ for each length is the most important factor. The performance is always improved with a larger sample size. Second, a larger number of length candidates does not necessarily increase the rescoring performance. Third, we find that a larger sampling temperature (0.3 - 0.5) can increase the diversity of translations and leads to better rescoring BLEU. However, the latent samples become noisy when a large temperature (1.0) is used." ], [ "Following BIBREF28, we analyze the output diversity of FlowSeq. BIBREF28 proposed pairwise-BLEU and BLEU computed in a leave-one-out manner to calibrate the diversity and quality of translation hypotheses. A lower pairwise-BLEU score implies a more diverse hypothesis set. And a higher BLEU score implies a better translation quality. We experiment on a subset of test set of WMT14-ENDE with ten references each sentence BIBREF29. In Fig. FIGREF52, we compare FlowSeq with other multi-hypothesis generation methods (ten hypotheses each sentence) to analyze how well the generation outputs of FlowSeq are in terms of diversity and quality. The right corner area of the figure indicates the ideal generations: high diversity and high quality. While FlowSeq still lags behind the autoregressive generations, by increasing the sampling temperature it provides a way of generating more diverse outputs while keeping the translation quality almost unchanged. More analysis of translation outputs and detailed results are provided in the Appendix SECREF9 and SECREF10.", "" ], [ "We propose FlowSeq, an efficient and effective model for non-autoregressive sequence generation by using generative flows. One potential direction for future work is to leverage iterative refinement techniques such as masked language models to further improve translation quality. Another exciting direction is to, theoretically and empirically, investigate the latent space in FlowSeq, hence providing deep insights of the model, even enhancing controllable text generation.", "" ], [ "This work was supported in part by DARPA grant FA8750-18-2-0018 funded under the AIDA program and grant HR0011-15-C-0114 funded under the LORELEI program. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. The authors thank Amazon for their gift of AWS cloud credits and anonymous reviewers for their helpful suggestions.", "Appendix: FlowSeq" ], [ "Log-determinant:" ], [ "Log-determinant:", "where $h$ is the number of heads." ], [ "Log-determinant:" ], [ "In Fig. FIGREF57, we plot the train and dev loss together with dev BLEU scores for the first 50 epochs. We can see that the reconstruction loss is increasing at the initial stage of training, then start to decrease when training with full KL loss. 
In addition, we observed that FlowSeq does not suffer from the KL collapse problem BIBREF30, BIBREF31. This is because the decoder of FlowSeq is non-autoregressive, with the latent variable $\mathbf {z}$ as the only input." ], [ "In Tab. TABREF58, we present randomly picked translation outputs from the test set of WMT14-DEEN. For each German input sentence, we pick three hypotheses from 30 samples. We have the following observations: First, in most cases, FlowSeq can accurately express the meaning of the source sentence, sometimes in a different way from the reference sentence, which cannot be precisely reflected by the BLEU score. Second, by controlling the sampling hyper-parameters such as the length candidates $l$, the sampling temperature $\tau $ and the number of samples $r$ under each length, FlowSeq is able to generate diverse translations expressing the same meaning. Third, repetition and broken translations also exist in some cases due to the lack of language model dependencies in the decoder." ], [ "Table TABREF59 shows the detailed results of translation diversity." ] ] }
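As a companion to the flow-layer descriptions in the FlowSeq record above (actnorm, invertible linear, and affine coupling, each with a tractable log-determinant), the following is a minimal PyTorch-style sketch of the three elementary transformations. It is an illustrative re-implementation under simplifying assumptions — fixed sequence length, no padding masks, a single feature-dimension split, a generic feed-forward conditioning network with the conditioning on the source $\mathbf{x}$ omitted — and is not the authors' released code; all class and variable names are our own.

```python
import torch
import torch.nn as nn


class ActNorm(nn.Module):
    """Per-feature affine transform z' = z * exp(log_scale) + bias.
    Log-determinant is T * sum_d log|scale_d| (shared across the batch,
    assuming all sequences in the batch have length T)."""
    def __init__(self, d_z):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(d_z))
        self.bias = nn.Parameter(torch.zeros(d_z))

    def forward(self, z):                                  # z: [B, T, d_z]
        z_out = z * self.log_scale.exp() + self.bias
        log_det = z.size(1) * self.log_scale.sum()
        return z_out, log_det

    def inverse(self, z_out):
        return (z_out - self.bias) * (-self.log_scale).exp()


class InvertibleLinear(nn.Module):
    """1x1-convolution-style invertible linear map along the feature dimension
    (single-head case; the multi-head variant applies a d_h x d_h matrix per head)."""
    def __init__(self, d_z):
        super().__init__()
        q, _ = torch.linalg.qr(torch.randn(d_z, d_z))      # random orthogonal init
        self.W = nn.Parameter(q)

    def forward(self, z):                                  # z: [B, T, d_z]
        z_out = z @ self.W
        log_det = z.size(1) * torch.slogdet(self.W)[1]     # T * log|det W|
        return z_out, log_det

    def inverse(self, z_out):
        return z_out @ torch.inverse(self.W)


class AffineCoupling(nn.Module):
    """Split z into (z_a, z_b) on the feature dimension and transform z_b with a
    scale and bias predicted from z_a. The paper implements the s(.) and b(.)
    networks with a Transformer decoder layer conditioned on x; a plain MLP is
    used here for brevity."""
    def __init__(self, d_z, d_hidden):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_z // 2, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_z)
        )

    def forward(self, z):                                  # z: [B, T, d_z]
        z_a, z_b = z.chunk(2, dim=-1)
        log_s, b = self.net(z_a).chunk(2, dim=-1)
        s = torch.sigmoid(log_s + 2.0)                     # keep scales positive and stable
        z_b = s * z_b + b
        log_det = s.log().sum(dim=(1, 2))                  # per-example log-determinant
        return torch.cat([z_a, z_b], dim=-1), log_det

    def inverse(self, z):
        z_a, z_b = z.chunk(2, dim=-1)
        log_s, b = self.net(z_a).chunk(2, dim=-1)
        s = torch.sigmoid(log_s + 2.0)
        return torch.cat([z_a, (z_b - b) / s], dim=-1)
```

Stacking such layers (with the multi-scale squeezing described in the record above) would give the prior flow; summing the returned log-determinants with the base-distribution log-density yields the change-of-variables log-likelihood of $\mathbf{z}$.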
{ "question": [ "Does this model train faster than state of the art models?", "What is the performance difference between proposed method and state-of-the-arts on these datasets?", "What non autoregressive NMT models are used for comparison?", "What are three neural machine translation (NMT) benchmark datasets used for evaluation?" ], "question_id": [ "b14f13f2a3a316e5a5de9e707e1e6ed55e235f6f", "ba6422e22297c7eb0baa381225a2f146b9621791", "65e72ad72a9cbfc379f126b10b0ce80cfe44579b", "cf8edc6e8c4d578e2bd9965579f0ee81f4bf35a9" ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "e452412e9567ff9c42bc5c5df5aa2294ce83ef7a" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Difference is around 1 BLEU score lower on average than state of the art methods.", "evidence": [ "Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring. The first block in Table TABREF40 includes the baseline results from autoregressive Transformer. For the sampling procedure in IWD and NPD, we sampled from a reduced-temperature model BIBREF11 to obtain high-quality samples. We vary the temperature within $\\lbrace 0.1, 0.2, 0.3, 0.4, 0.5, 1.0\\rbrace $ and select the best temperature based on the performance on development sets. The analysis of the impact of sampling temperature and other hyper-parameters on samples is in § SECREF50. For FlowSeq, NPD obtains better results than IWD, showing that FlowSeq still falls behind auto-regressive Transformer on model data distributions. Comparing with CMLM BIBREF8 with 10 iterations of refinement, which is a contemporaneous work that achieves state-of-the-art translation performance, FlowSeq obtains competitive performance on both WMT2014 and WMT2016 corpora, with only slight degradation in translation quality. Leveraging iterative refinement to further improve the performance of FlowSeq has been left to future work.", "FLOAT SELECTED: Table 2: BLEU scores on two WMT datasets of models using advanced decoding methods. The first block are Transformer-base (Vaswani et al., 2017). The second and the third block are results of models trained w/w.o. knowledge distillation, respectively. n = l × r is the total number of candidates for rescoring." ], "highlighted_evidence": [ "Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring.", "FLOAT SELECTED: Table 2: BLEU scores on two WMT datasets of models using advanced decoding methods. The first block are Transformer-base (Vaswani et al., 2017). The second and the third block are results of models trained w/w.o. knowledge distillation, respectively. n = l × r is the total number of candidates for rescoring." 
] } ], "annotation_id": [ "6438cbf42d18946a235a5140bfe434a96e788572" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "NAT w/ Fertility", "NAT-IR", "NAT-REG", "LV NAR", "CTC Loss", "CMLM" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8." ], "highlighted_evidence": [ "We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8." ] } ], "annotation_id": [ "14b4ca92daf3064f129800c1500a3de17129d73a" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "WMT2014, WMT2016 and IWSLT-2014" ], "yes_no": null, "free_form_answer": "", "evidence": [ "FlowSeq is a flow-based sequence-to-sequence model, which is (to our knowledge) the first non-autoregressive seq2seq model utilizing generative flows. It allows for efficient parallel decoding while modeling the joint distribution of the output sequence. Experimentally, on three benchmark datasets for machine translation – WMT2014, WMT2016 and IWSLT-2014, FlowSeq achieves comparable performance with state-of-the-art non-autoregressive models, and almost constant decoding time w.r.t. the sequence length compared to a typical left-to-right Transformer model, which is super-linear." ], "highlighted_evidence": [ " Experimentally, on three benchmark datasets for machine translation – WMT2014, WMT2016 and IWSLT-2014, FlowSeq achieves comparable performance with state-of-the-art non-autoregressive models, and almost constant decoding time w.r.t. the sequence length compared to a typical left-to-right Transformer model, which is super-linear." ] } ], "annotation_id": [ "dd4d47430c50b42e096f62ab94e8ba98175a1935" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: (a) Autoregressive (b) non-autoregressive and (c) our proposed sequence generation models. x is the source, y is the target, and z are latent variables.", "Figure 2: Neural architecture of FlowSeq, including the encoder, the decoder and the posterior networks, together with the multi-scale architecture of the prior flow. The architecture of each flow step is in Figure 3.", "Figure 3: (a) The architecture of one step of our flow. (b) The visualization of three split pattern for coupling layers, where the red color denotes za and the blue color denotes zvb. (c) The attention-based architecture of the NN function in coupling layers.", "Table 1: BLEU scores on three MT benchmark datasets for FlowSeq with argmax decoding and baselines with purely non-autoregressive decoding method. The first and second block are results of models trained w/w.o. knowledge distillation, respectively.", "Table 2: BLEU scores on two WMT datasets of models using advanced decoding methods. The first block are Transformer-base (Vaswani et al., 2017). The second and the third block are results of models trained w/w.o. knowledge distillation, respectively. n = l × r is the total number of candidates for rescoring.", "Figure 4: The decoding speed of Transformer (batched, beam size 5) and FlowSeq on WMT14 EN-DE test set (a) w.r.t different batch sizes (b) bucketed by different target sentence lengths (batch size 32).", "Figure 5: Impact of sampling hyperparameters on the rescoring BLEU on the dev set of WMT14 DE-EN. Experiments are performed with FlowSeq-base trained with distillation data. l is the number of length candidates. r is the number of samples for each length.", "Figure 6: Comparisons of FlowSeq with human translations, beam search and sampling results of Transformer-base, and mixture-of-experts model (Hard MoE (Shen et al., 2019)) on the averaged leave-one-out BLEU score v.s pairwise-BLEU in descending order.", "Table 3: Comparison of model size in our experiments.", "Figure 7: Training dynamics.", "Table 4: Examples of translation outputs from FlowSeq-base with sampling hyperparameters l = 3, r = 10, τ = 0.4 on WMT14-DEEN.", "Table 5: Translation diversity results of FlowSeq-large model on WMT14 EN-DE with knowledge distillation." ], "file": [ "1-Figure1-1.png", "4-Figure2-1.png", "5-Figure3-1.png", "7-Table1-1.png", "7-Table2-1.png", "8-Figure4-1.png", "9-Figure5-1.png", "9-Figure6-1.png", "12-Table3-1.png", "13-Figure7-1.png", "14-Table4-1.png", "15-Table5-1.png" ] }
1910.02754
On Leveraging the Visual Modality for Neural Machine Translation
Leveraging the visual modality effectively for Neural Machine Translation (NMT) remains an open problem in computational linguistics. Recently, Caglayan et al. posit that the observed gains are limited mainly due to the very simple, short, repetitive sentences of the Multi30k dataset (the only multimodal MT dataset available at the time), which renders the source text sufficient for context. In this work, we further investigate this hypothesis on a new large scale multimodal Machine Translation (MMT) dataset, How2, which has 1.57 times longer mean sentence length than Multi30k and no repetition. We propose and evaluate three novel fusion techniques, each of which is designed to ensure the utilization of visual context at different stages of the Sequence-to-Sequence transduction pipeline, even under full linguistic context. However, we still obtain only marginal gains under full linguistic context and posit that visual embeddings extracted from deep vision models (ResNet for Multi30k, ResNext for How2) do not lend themselves to increasing the discriminativeness between the vocabulary elements at token level prediction in NMT. We demonstrate this qualitatively by analyzing attention distribution and quantitatively through Principal Component Analysis, arriving at the conclusion that it is the quality of the visual embeddings rather than the length of sentences, which need to be improved in existing MMT datasets.
{ "section_name": [ "Introduction", "Proposed Fusion Techniques", "Proposed Fusion Techniques ::: Step-Wise Decoder Fusion", "Proposed Fusion Techniques ::: Multimodal Attention Modulation", "Proposed Fusion Techniques ::: Visual-Semantic (VS) Regularizer", "Results and Analysis", "Results and Analysis ::: Experimental Results", "Results and Analysis ::: Discussion", "Results and Analysis ::: Discussion ::: PCA of Visual Features", "Results and Analysis ::: Discussion ::: Comparison of Attention Components", "Conclusions and Future Work" ], "paragraphs": [ [ "A number of works have explored integrating the visual modality for Neural Machine Translation (NMT) models, though, there has been relatively modest gains or no gains at all by incorporating the visual modality in the translation pipeline BIBREF0. In particular, BIBREF1 leverage multi-task learning, BIBREF2 use visual adaptive training, while BIBREF3, BIBREF4, BIBREF5 use a number of fusion techniques to incorporate features obtained from the visual modality.", "Regarding the seemingly low utility of visual modality in machine translation, BIBREF6 hypothesize that the highly relevant visual properties are often not represented by linguistic models because they are too obvious to be explicitly mentioned in text (e.g., birds have wings, violins are brown). Similarly, BIBREF7 argue that perceptual information is already sufficiently encoded in textual cues. However, recently BIBREF0 have demonstrated that neural models are capable of leveraging the visual modality for translations, and posit that it is the nature of the Multi30k dataset (the only multimodal machine translation dataset at the time) which is inhibiting gains from the visual modality to emerge, due to the presence of short, simple and repetitive sentences, which renders the source text as sufficient context for translation. In this work, we further investigate this hypothesis on a large-scale multimodal machine translation (MMT) dataset, named How2 BIBREF2, which has 1.57 times longer sentences, in terms of the mean sentence length, when compared to Multi30k .", "To this end, we restrict ourselves to the Sequence-to-Sequence (Seq2Seq) framework and propose three simple but novel fusion techniques to ensure the utilization of visual context during different stages (Input Context Encoding, Attention and Supervision) of the Sequence-to-Sequence transduction pipeline. We then evaluate and analyze the results for further insights, with the goal of testing the utility of visual modality for NMT under full source-side linguistic context." ], [ "In this section, we describe three additions to the Seq2Seq model to ensure that the visual context is utilized at different stages, namely when computing context during each step of the decoder, during attention as well as when computing the supervision signal in the Sequence-to-Sequence pipeline. This is done to encourage the Seq2Seq NMT model to make use of the visual features under full linguistic context. In each case, we assume that the visual features are fine-tuned using a visual encoder, which is trained jointly alongside the Seq2Seq model." ], [ "Our first proposed technique is the step-wise decoder fusion of visual features during every prediction step i.e. we concatenate the visual encoding as context at each step of the decoding process. This differs from the usual practice of passing the visual feature only at the beginning of the decoding process BIBREF5." 
], [ "Similar to general attention BIBREF8, wherein a variable-length alignment vector $a_{th}(s)$, whose size equals the number of time steps on the source side, is derived by comparing the current target hidden state $h_{t}$ with each source hidden state $\\overline{h_{s}}$; we consider a variant wherein the visual encoding $v_{t}$ is used to calculate an attention distribution $a_{tv}(s)$ over the source encodings as well. Then, the true attention distribution $a_{t}(s)$ is computed as an interpolation between the visual and text based attention scores. The score function is a content based scoring mechanism as usual.", "This formulation differs from BIBREF3 in that we use both the natural language as well as the visual modality to compute attention over the source sentence, rather than having attention over images. Since attention is computed over the same source embeddings (arising from a single encoder) using two different modalities, our approach also differs from BIBREF4, which focuses on combining the attention scores of multiple source encoders." ], [ "In terms of leveraging the visual modality for supervision, BIBREF1 use multi-task learning to learn grounded representations through image representation prediction. However, to our knowledge, visual-semantic supervision hasn't been much explored for multimodal translation in terms of loss functions.", "Our proposed technique is the inclusion of visual-semantic supervision to the machine translation model. Recently, BIBREF9 proposed an optimal transport based loss function which computes the distance between the word embeddings of the predicted sentence and the target sentence and uses it as a regularizer $L_{\\text{ot}}^{\\text{tgt}}$. The purpose of this term is to provide the model with sequence level supervision. We leverage this idea by including a Cosine distance term, $L_{\\text{cosine}}^{\\text{visual}}$, between the visual encoding (which is at the sentence level) and the target/predicted sentence embeddings (computed as the average of the target/predicted word embeddings). The purpose of this distance term is to provide sequence level supervision by aligning the visual and text embeddings. In practice, as in BIBREF9, we introduce a hyperparameter in the loss function:", "where $\\gamma $ is a hyper-parameter balancing the effect of loss components (a separate hyperparameter than in Section 2.2)." ], [ "Throughout our experiments, we use the 300 hours subset of How2 dataset BIBREF10, which contains 300 hours of videos, sentence-level time alignments to the ground-truth English subtitles, and Portuguese translations of English subtitles. The How2 dataset has 2048 dimensional pre-trained ResNeXt embeddings BIBREF11 available for each of the video clips aligned to the sentences.", "Further, our baseline model is the canonical Seq2Seq model BIBREF12 consisting of bidirectional LSTM as encoder and decoder, general attention BIBREF8 and length normalization BIBREF13. In all cases, we use the embedding size of 300 and the hidden size of 512. Whenever the visual modality is used, we encode each of the visual features to 300 dimensional vectors through an encoder (consisting of a Linear layer followed by Batch Normalization and ReLU non-linearity) which is also trained end-to-end with the Seq2Seq model. Further, to integrate sequence level supervision as in BIBREF9, we utilize the Geomloss library , which provides a batched implementation of the Sinkhorn algorithm for the Optimal Transport computation. 
For all the translation experiments, we preprocess the data by lowercasing and removing punctuation BIBREF2, and construct the vocabulary at the word level. The Adam optimizer with a learning rate of 0.001 and a learning rate decay of 0.5 is used throughout to train our models." ], [ "The performances of the models are summarized in Table TABREF9, along with the gains in BLEU points. From Table TABREF9, we can make a few observations:", "The visual modality leads to modest gains in BLEU scores. The proposed VS regularizer leads to a slightly higher gain when compared to the Decoder-Fusion and Attention Modulation techniques for the En-Pt language pair.", "Further, the gains from incorporating the visual modality are smaller for Multimodal Attention and VS Regularization in the case of the reversed language pair of Pt-En (Table TABREF10), even though the visual modality is common to both languages. This can possibly be attributed to the How2 dataset creation process, wherein the videos were first aligned with English sentences and then the Portuguese translations were created, implying a reduction in correspondence with the visual modality due to errors introduced in the translation process." ], [ "To analyze the reasons for the modest gains, despite incorporating multiple techniques to effectively leverage the visual modality for machine translation, we inspect the dataset as well as the proposed mechanisms." ], [ "We first investigate and compare the visual feature quality of the How2 dataset with respect to that of the Multi30k dataset. To analyze the discriminativeness of the visual features for both of these datasets, we leverage an analysis mechanism used in BIBREF14 in the context of analyzing word embedding discriminativeness. We analyze the variance of the visual features corresponding to each sentence in the training set. Since the visual features semantically represent the sentence as well, we could analyze how well the features are able to discriminate between the sentences, and consequently between the individual words, as a measure of their utility for NMT.", "Figure FIGREF14 (Top) shows the variance explained by the Top 100 principal components, obtained by applying PCA on the How2 and Multi30k training set visual features. The original feature dimensions are 2048 in both cases. It is clear from Figure FIGREF14 that most of the energy of the visual feature space resides in a low-dimensional subspace BIBREF14. In other words, there exist a few directions in the embedding space which disproportionately explain the variance. These \"common\" directions affect all of the embeddings in the same way, rendering them less discriminative. Figure FIGREF14 also shows the cumulative variance explained by the Top 10, 20, 50 and 100 principal components, respectively. It is clear that the visual features in the case of the How2 dataset are much more dominated by the \"common\" dimensions when compared to the Multi30k dataset. Further, this analysis is still at the sentence level, i.e. the visual features are much less discriminative among individual sentences, further aggravating the problem at the token level. This suggests that the existing visual features aren't sufficient to expect benefits from the visual modality in NMT, since they won't provide discriminativeness among the vocabulary elements at the token level during prediction. 

Further, this also indicates that under a subword vocabulary such as BPE BIBREF15 or Sentence-Piece BIBREF16, this problem will only be aggravated, further limiting the utility of such visual embeddings." ], [ "In this section, we analyze the visual and text-based attention mechanisms. We find that the visual attention is very sparse, in that just one source encoding is attended to (the maximum visual attention over source encodings, across the test set, has mean 0.99 and standard deviation 0.015), thereby limiting the use of modulation. Thus, in practice, we find that a small weight ($\gamma =0.1$) is necessary to prevent degradation due to this sparse visual attention component. Figures FIGREF18 & FIGREF19 show the comparison of visual and text-based attention for two sentences, one long source sentence of length 21 and one short source sentence of length 7. In both cases, we find that the visual component of the attention hasn't learnt any variation over the source encodings, again suggesting that the visual embeddings do not lend themselves to enhancing token-level discriminativeness during prediction. We find this to be consistent across sentences of different lengths." ], [ "To conclude, we investigated the utility of the visual modality for NMT under full linguistic context, on a new large-scale MMT dataset named How2. Our results on the How2 dataset confirm the general consensus that the visual modality does not lead to any significant gains for NMT; however, unlike BIBREF0, we attribute the relatively modest gains to the limited discriminativeness offered by the existing visual features, rather than the length of the sentences in the dataset. We validate this hypothesis quantitatively through a PCA-based analysis of the visual features as well as qualitatively by analyzing attention components. We hope that our work will lead to more useful techniques and better visual features for MMT. An immediate future direction to explore would be to construct more discriminative features for utilizing the visual modality in NMT." ] ] }
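The multimodal attention modulation described in the record above computes two attention distributions over the same source encodings — one from the decoder state and one from the visual encoding — and interpolates them before forming the context vector. The sketch below is a minimal PyTorch illustration under assumptions (a "general" bilinear scoring function and a fixed interpolation weight $\gamma$, here placed on the visual component); it is not the authors' code, and all names are hypothetical.

```python
import torch
import torch.nn as nn


class MultimodalAttention(nn.Module):
    def __init__(self, d_hidden, d_visual, gamma=0.1):
        super().__init__()
        self.gamma = gamma
        self.score_h = nn.Linear(d_hidden, d_hidden, bias=False)  # text-based scoring
        self.score_v = nn.Linear(d_visual, d_hidden, bias=False)  # visual-based scoring

    def forward(self, h_t, v, src_enc):
        # h_t: [B, d_hidden] decoder state; v: [B, d_visual] visual encoding;
        # src_enc: [B, T_src, d_hidden] encoder outputs.
        a_text = torch.softmax(
            torch.bmm(src_enc, self.score_h(h_t).unsqueeze(-1)).squeeze(-1), dim=-1)
        a_vis = torch.softmax(
            torch.bmm(src_enc, self.score_v(v).unsqueeze(-1)).squeeze(-1), dim=-1)
        a = (1.0 - self.gamma) * a_text + self.gamma * a_vis      # interpolated distribution
        context = torch.bmm(a.unsqueeze(1), src_enc).squeeze(1)   # [B, d_hidden]
        return context, a
```

A small $\gamma$ matches the observation in the paper that the visual attention component is nearly one-hot, so its influence must be kept limited to avoid degradation.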
{ "question": [ "What is result of their attention distribution analysis?", "What is result of their Principal Component Analysis?", "What are 3 novel fusion techniques that are proposed?" ], "question_id": [ "04aff4add28e6343634d342db92b3ac36aa8c255", "a8e4522ce2ce7336e731286654d6ad0931927a4e", "f6202100cfb83286dc51f57c68cffdbf5cf50a3f" ], "nlp_background": [ "zero", "zero", "zero" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "computer vision", "computer vision", "computer vision" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "visual attention is very sparse", " visual component of the attention hasn't learnt any variation over the source encodings" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this section, we analyze the visual and text based attention mechanisms. We find that the visual attention is very sparse, in that just one source encoding is attended to (the maximum visual attention over source encodings, across the test set, has mean 0.99 and standard deviation 0.015), thereby limiting the use of modulation. Thus, in practice, we find that a small weight ($\\gamma =0.1$) is necessary to prevent degradation due to this sparse visual attention component. Figure FIGREF18 & FIGREF19 shows the comparison of visual and text based attention for two sentences, one long source sentence of length 21 and one short source sentence of length 7. In both cases, we find that the visual component of the attention hasn't learnt any variation over the source encodings, again suggesting that the visual embeddings do not lend themselves to enhancing token-level discriminativess during prediction. We find this to be consistent across sentences of different lengths." ], "highlighted_evidence": [ "We find that the visual attention is very sparse, in that just one source encoding is attended to (the maximum visual attention over source encodings, across the test set, has mean 0.99 and standard deviation 0.015), thereby limiting the use of modulation.", "In both cases, we find that the visual component of the attention hasn't learnt any variation over the source encodings, again suggesting that the visual embeddings do not lend themselves to enhancing token-level discriminativess during prediction. We find this to be consistent across sentences of different lengths." ] } ], "annotation_id": [ "14c58ddf4c93d8cd9ecbcccc3992b0db3023a2c1" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "existing visual features aren't sufficient enough to expect benefits from the visual modality in NMT" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Figure FIGREF14 (Top) shows the variance explained by the Top 100 principal components, obtained by applying PCA on the How2 and Multi30k training set visual features. The original feature dimensions are 2048 in both the cases. It is clear from the Figure FIGREF14 that most of the energy of the visual feature space resides in a low-dimensional subspace BIBREF14. In other words, there exist a few directions in the embedding space which disproportionately explain the variance. These \"common\" directions affect all of the embeddings in the same way, rendering them less discriminative. 
Figure FIGREF14 also shows the cumulative variance explained by Top 10, 20, 50 and 100 principal components respectively. It is clear that the visual features in the case of How2 dataset are much more dominated by the \"common\" dimensions, when compared to the Multi30k dataset. Further, this analysis is still at the sentence level, i.e. the visual features are much less discriminative among individual sentences, further aggravating the problem at the token level. This suggests that the existing visual features aren't sufficient enough to expect benefits from the visual modality in NMT, since they won't provide discriminativeness among the vocabulary elements at the token level during prediction. Further, this also indicates that under subword vocabulary such as BPE BIBREF15 or Sentence-Piece BIBREF16, the utility of such visual embeddings will only aggravate." ], "highlighted_evidence": [ "In other words, there exist a few directions in the embedding space which disproportionately explain the variance.", "It is clear that the visual features in the case of How2 dataset are much more dominated by the \"common\" dimensions, when compared to the Multi30k dataset. Further, this analysis is still at the sentence level, i.e. the visual features are much less discriminative among individual sentences, further aggravating the problem at the token level. This suggests that the existing visual features aren't sufficient enough to expect benefits from the visual modality in NMT, since they won't provide discriminativeness among the vocabulary elements at the token level during prediction." ] } ], "annotation_id": [ "95adf536327f0fe38835241afa8c84662f3d6e04" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Step-Wise Decoder Fusion", "Multimodal Attention Modulation", "Visual-Semantic (VS) Regularizer" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Proposed Fusion Techniques ::: Step-Wise Decoder Fusion", "Our first proposed technique is the step-wise decoder fusion of visual features during every prediction step i.e. we concatenate the visual encoding as context at each step of the decoding process. This differs from the usual practice of passing the visual feature only at the beginning of the decoding process BIBREF5.", "Proposed Fusion Techniques ::: Multimodal Attention Modulation", "Similar to general attention BIBREF8, wherein a variable-length alignment vector $a_{th}(s)$, whose size equals the number of time steps on the source side, is derived by comparing the current target hidden state $h_{t}$ with each source hidden state $\\overline{h_{s}}$; we consider a variant wherein the visual encoding $v_{t}$ is used to calculate an attention distribution $a_{tv}(s)$ over the source encodings as well. Then, the true attention distribution $a_{t}(s)$ is computed as an interpolation between the visual and text based attention scores. The score function is a content based scoring mechanism as usual.", "Proposed Fusion Techniques ::: Visual-Semantic (VS) Regularizer", "In terms of leveraging the visual modality for supervision, BIBREF1 use multi-task learning to learn grounded representations through image representation prediction. However, to our knowledge, visual-semantic supervision hasn't been much explored for multimodal translation in terms of loss functions." 
], "highlighted_evidence": [ "Proposed Fusion Techniques ::: Step-Wise Decoder Fusion\nOur first proposed technique is the step-wise decoder fusion of visual features during every prediction step i.e. we concatenate the visual encoding as context at each step of the decoding process.", "Proposed Fusion Techniques ::: Multimodal Attention Modulation\nSimilar to general attention BIBREF8, wherein a variable-length alignment vector $a_{th}(s)$, whose size equals the number of time steps on the source side, is derived by comparing the current target hidden state $h_{t}$ with each source hidden state $\\overline{h_{s}}$; we consider a variant wherein the visual encoding $v_{t}$ is used to calculate an attention distribution $a_{tv}(s)$ over the source encodings as well.", "Proposed Fusion Techniques ::: Visual-Semantic (VS) Regularizer\nIn terms of leveraging the visual modality for supervision, BIBREF1 use multi-task learning to learn grounded representations through image representation prediction." ] } ], "annotation_id": [ "c74a6ef39dcde4b0a0166cd3bcf2cfee71afd7ef" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Table 2: BLEU Score Comparison of the proposed methods", "Figure 1: Top: Variance Explained by the Top 100 Components. Bottom: Cumulative Variance Explained by the Top Components.", "Table 1: BLEU Score Comparison of the proposed methods", "Figure 2: Left: Text Based Attention (Horizontal Direction Represents the Source Sentence) Right: Visual Attention for a 21 word Source Sentence (Labels omitted to avoid cluttering).", "Figure 3: Left: Text Based Attention (Horizontal Direction Represents the Source Sentence) Right: Visual Attention for a 7 word Source Sentence." ], "file": [ "3-Table2-1.png", "3-Figure1-1.png", "3-Table1-1.png", "4-Figure2-1.png", "4-Figure3-1.png" ] }
2004.02393
Learning to Recover Reasoning Chains for Multi-Hop Question Answering via Cooperative Games
We propose the new problem of learning to recover reasoning chains from weakly supervised signals, i.e., the question-answer pairs. We propose a cooperative game approach to deal with this problem, in which how the evidence passages are selected and how the selected passages are connected are handled by two models that cooperate to select the most confident chains from a large set of candidates (from distant supervision). For evaluation, we created benchmarks based on two multi-hop QA datasets, HotpotQA and MedHop; and hand-labeled reasoning chains for the latter. The experimental results demonstrate the effectiveness of our proposed approach.
{ "section_name": [ "Introduction", "Task Definition", "Method", "Method ::: Passage Ranking Model", "Method ::: Passage Ranking Model ::: Passage Scoring", "Method ::: Passage Ranking Model ::: Conditional Selection", "Method ::: Passage Ranking Model ::: Reward via Distant Supervision", "Method ::: Cooperative Reasoner", "Experiments ::: Settings ::: Datasets", "Experiments ::: Settings ::: Baselines and Evaluation Metric", "Experiments ::: Results ::: HotpotQA", "Experiments ::: Results ::: MedHop", "Conclusions", "Details of MatchLSTMs for Passage Scoring and Reasoner ::: MatchLSTM for Passage Scoring", "Details of MatchLSTMs for Passage Scoring and Reasoner ::: MatchLSTM for Reasoner", "Definition of Chain Accuracy" ], "paragraphs": [ [ "NLP tasks that require multi-hop reasoning have recently enjoyed rapid progress, especially on multi-hop question answering BIBREF0, BIBREF1, BIBREF2. Advances have benefited from rich annotations of supporting evidence, as in the popular multi-hop QA and relation extraction benchmarks, e.g., HotpotQA BIBREF3 and DocRED BIBREF4, where the evidence sentences for the reasoning process were labeled by human annotators.", "Such evidence annotations are crucial for modern model training, since they provide finer-grained supervision for better guiding the model learning. Furthermore, they allow a pipeline fashion of model training, with each step, such as passage ranking and answer extraction, trained as a supervised learning sub-task. This is crucial from a practical perspective, in order to reduce the memory usage when handling a large amount of inputs with advanced, large pre-trained models BIBREF5, BIBREF6, BIBREF7.", "Manual evidence annotation is expensive, so there are only a few benchmarks with supporting evidence annotated. Even for these datasets, the structures of the annotations are still limited, as new model designs keep emerging and they may require different forms of evidence annotations. As a result, the supervision from these datasets can still be insufficient for training accurate models.", "Taking question answering with multi-hop reasoning as an example, annotating only supporting passages is not sufficient to show the reasoning processes due to the lack of necessary structural information (Figure FIGREF1). One example is the order of annotated evidence, which is crucial in logic reasoning and the importance of which has also been demonstrated in text-based QA BIBREF8. The other example is how the annotated evidence pieces are connected, which requires at least the definition of arguments, such as a linking entity, concept, or event. Such information has proved useful by the recently popular entity-centric methods BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF0, BIBREF2 and intuitively will be a benefit to these methods if available.", "We propose a cooperative game approach to recovering the reasoning chains with the aforementioned necessary structural information for multi-hop QA. Each recovered chain corresponds to a list of ordered passages and each pair of adjacent passages is connected with a linking entity. Specifically, we start with a model, the Ranker, which selects a sequence of passages arriving at the answers, with the restriction that each adjacent passage pair shares at least an entity. This is essentially an unsupervised task and the selection suffers from noise and ambiguity. Therefore we introduce another model, the Reasoner, which predicts the exact linking entity that points to the next passage. 
The two models play a cooperative game and are rewarded when they find a consistent chain. In this way, we restrict the selection to satisfy not only the format constraints (i.e., ordered passages with connected adjacencies) but also the semantic constraints (i.e., finding the next passage given that the partial selection can be effectively modeled by a Reasoner). Therefore, the selection can be less noisy.", "We evaluate the proposed method on datasets with different properties, i.e., HotpotQA and MedHop BIBREF13, to cover cases with both 2-hop and 3-hop reasoning. We created labeled reasoning chains for both datasets. Experimental results demonstrate the significant advantage of our proposed approach." ], [ "Reasoning Chains Examples of reasoning chains in HotpotQA and MedHop are shown in Figure FIGREF1. Formally, we aim at recovering the reasoning chain in the form of $(p_1 \\rightarrow e_{1,2} \\rightarrow p_2 \\rightarrow e_{2,3} \\rightarrow \\cdots \\rightarrow e_{n-1,n} \\rightarrow p_n)$, where each $p_i$ is a passage and each $e_{i,i+1}$ is an entity that connects $p_i$ and $p_{i+1}$, i.e., appearing in both passages. The last passage $p_n$ in the chain contains the correct answer. We say $p_i$ connects $e_{i-1,i}$ and $e_{i,i+1}$ in the sense that it describes a relationship between the two entities.", "Our Task Given a QA pair $(q,a)$ and all its candidate passages $\\mathcal {P}$, we can extract all possible candidate chains that satisfy the conditions mentioned above, denoted as $\\mathcal {C}$. The goal of reasoning chain recovery is to extract the correct chains from all the candidates, given $q,a$ and $\\mathcal {P}$ as inputs.", "Related Work Although there are recent interests on predicting reasoning chains for multi-hop QA BIBREF0, BIBREF14, BIBREF2, they all consider a fully supervised setting; i.e., annotated reasoning chains are available. Our work is the first to recover reasoning chains in a more general unsupervised setting, thus falling into the direction of denoising over distant supervised signals. From this perspective, the most relevant studies in the NLP field includes BIBREF15, BIBREF16 for evidence identification in open-domain QA and BIBREF17, BIBREF18, BIBREF19 for rationale recovery." ], [ "The task of recovering reasoning chains is essentially an unsupervised problem, as we have no access to annotated reasoning chains. Therefore, we resort to the noisy training signal from chains obtained by distant supervision. We first propose a conditional selection model that optimizes the passage selection by considering their orders (Section SECREF4). We then propose a cooperative Reasoner-Ranker game (Section SECREF12) in which the Reasoner recovers the linking entities that point to the next passage. This enhancement encourages the Ranker to select the chains such that their distribution is easier for a linking entity prediction model (Reasoner) to capture. Therefore, it enables our model to denoise the supervision signals while recovering chains with entity information. Figure FIGREF3 gives our overall framework, with a flow describing how the Reasoner passes additional rewards to the Ranker." ], [ "The key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\\mathcal {P} = \\lbrace p_1, p_2 ... p_K\\rbrace $ from a pool of candidates, and outputs a chain of selected passages." ], [ "For each step of the chain, the Ranker estimates a distribution of the selection of each passage. 
To this end we first encode the question and passage with a 2-layer bi-directional GRU network, resulting in an encoded question $\\mathbf {Q} = \\lbrace \\vec{\\mathbf {q}_0}, \\vec{\\mathbf {q}_1}, ..., \\vec{\\mathbf {q}_N}\\rbrace $ and $\\mathbf {H}_i = \\lbrace \\vec{\\mathbf {h}_{i,0}}, \\vec{\\mathbf {h}_{i,1}}, ..., \\vec{\\mathbf {h}_{i,M_i}}\\rbrace $ for each passage $p_i \\in P$ of length $M_i$. Then we use the MatchLSTM model BIBREF20 to get the matching score between $\\mathbf {Q}$ and each $\\mathbf {H}_i$ and derive the distribution of passage selection $P(p_i|q)$ (see Appendix SECREF6 for details). We denote $P(p_i|q)=\\textrm {MatchLSTM}(\\mathbf {H}_i, \\mathbf {Q})$ for simplicity." ], [ "To model passage dependency along the chain of reasoning, we use a hard selection model that builds a chain incrementally. Provided with the $K$ passages, at each step $t$ the Ranker computes $P^t(p_i|\\mathbf {Q}^{t-1}), i = 0, ..., K$, which is the probability of selecting passage $p_i$ conditioned on the query and previous states representation $\\mathbf {Q}^{t-1}$. Then we sample one passage $p^t_{\\tau }$ according to the predicted selection probability.", "", "The first step starts with the original question $\\mathbf {Q}^0$. A feed-forward network is used to project the concatenation of query encoding and selected passage encoding $\\tilde{\\mathbf {m}}^t_{p_{\\tau }}$ back to the query space, and the new query $\\mathbf {Q}^{t+1}$ is used to select the next passage." ], [ "We use policy gradient BIBREF21 to optimize our model. As we have no access to annotated reasoning chains during training, the reward comes from distant supervision. Specifically, we reward the Ranker if a selected passage appears as the corresponding part of a distant supervised chain in $\\mathcal {C}$. The model receives immediate reward at each step of selection.", "In this paper we only consider chains consist of $\\le 3$ passages (2-hop and 3-hop chains). For the 2-hop cases, our model predicts a chain of two passages from the candidate set $\\mathcal {C}$ in the form of $p_h\\rightarrow e \\rightarrow p_t$. Each candidate chain satisfies that $p_t$ contains the answer, while $p_h$ and $p_t$ contain a shared entity $e$. We call $p_h$ the head passage and $p_t$ the tail passage. Let $\\mathcal {P}_{T}/\\mathcal {P}_{H}$ denote the set of all tail/head passages from $\\mathcal {C}$. Our model receives rewards $r_h, r_t$ according to its selections:", "For the 3-hop cases, we need to select an additional intermediate passage $p_m$ between $p_h$ and $p_t$. If we reward any $p_m$ selection that appears in the middle of a chain in candidate chain set $\\mathcal {C}$, the number of feasible options can be very large. Therefore, we make our model first select the head passage $p_h$ and the tail passage $p_t$ independently and then select $p_m$ conditioned on $(p_h,p_t)$. We further restrict that each path in $\\mathcal {C}$ must have the head passage containing an entity from $q$. Then the selected $p_m$ is only rewarded if it appears in a chain in $\\mathcal {C}$ that starts with $p_h$ and ends with $p_t$:" ], [ "To alleviate the noise in the distant supervision signal $\\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. 
Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. Taking 2-hop as an example, we train the Ranker and Reasoner alternatively as a cooperative game:", "Reasoner Step: Given the first passage $p_t$ selected by the trained Ranker, the Reasoner predicts the probability of each entity $e$ appearing in $p_t$. The Reasoner is trained with the cross-entropy loss:", "", "Ranker Step: Given the Reasoner's top-1 predicted linking entity $e$, the reward for Ranker at the $2^{\\textrm {nd}}$ step is defined as:", "", "", "The extension to 3-hop cases is straightforward; the only difference is that the Reasoner reads both the selected $p_h$ and $p_t$ to output two entities. The Ranker receives one extra reward if the Reasoner picks the correct linking entity from $p_h$, so does $p_t$." ], [ "We evaluate our path selection model on HotpotQA bridge type questions and on the MedHop dataset. In HotpotQA, the entities are pre-processed Wiki anchor link objects and in MedHop they are drug/protein database identifiers.", "For HotpotQA, two supporting passages are provided along with each question. We ignore the support annotations during training and use them to create ground truth on development set: following BIBREF8, we determine the order of passages according to whether a passage contains the answer. We discard ambiguous instances.", "For MedHop, there is no evidence annotated. Therefore we created a new evaluation dataset by manually annotating the correct paths for part of the development set: we first extract all candidate paths in form of passage triplets $(p_h, p_m, p_t)$, such that $p_h$ contains the query drug and $p_t$ contains the answer drug, and $p_h/p_m$ and $p_m/p_t$ are connected by shared proteins. We label a chain as positive if all the drug-protein or protein-protein interactions are described in the corresponding passages. Note that the positive paths are not unique for a question.", "During training we select chains based on the full passage set $\\mathcal {P}$; at inference time we extract the chains from the candidate set $\\mathcal {C}$ (see Section SECREF2).", "" ], [ "We compare our model with (1) random baseline, which randomly selects a candidate chain from the distant supervision chain set $\\mathcal {C}$; and (2) distant supervised MatchLSTM, which uses the same base model as ours but scores and selects the passages independently. We use accuracy as our evaluation metric. As HotpotQA does not provide ground-truth linking entities, we only evaluate whether the supporting passages are fully recovered (yet our model still output the full chains). For MedHop we evaluate whether the whole predicted chain is correct. More details can be found in Appendix SECREF7. We use BIBREF24 as word embedding for HotpotQA, and BIBREF25 for MedHop.", "" ], [ "We first evaluate on the 2-hop HotpotQA task. Our best performed model first selects the tail passage $p_t$ and then the head passage $p_h$, because the number of candidates of tail is smaller ($\\sim $2 per question). Table TABREF21 shows the results. 
First, training a ranker with distant supervision performs significantly better than the random baseline, showing that the training process itself has a certain degree of denoising ability to distinguish the more informative signals from the distant supervision labels. By introducing the additional inductive bias of passage ordering, the conditional selection model further improves by a large margin. Finally, our cooperative game gives the best performance, showing that a trained Reasoner is able to ignore entity links that are irrelevant to the reasoning chain.", "Table TABREF22 demonstrates the effect of the selection direction, together with the methods' recall on head passages and tail passages. The latter is evaluated on a subset of bridge-type questions in HotpotQA which has no ambiguous support annotations in passage orders; i.e., among the two human-labeled supporting passages, only one contains the answer and thus must be a tail. The results show that selecting the tail first performs better. The cooperative game mainly improves the head selection." ], [ "Results in Table TABREF21 show that recovering chains from MedHop is a much harder task: first, the large number of distant supervision chains in $\mathcal {C}$ introduces too much noise, so the Distant Supervised Ranker improves by only 3%; second, the dependent model leads to no improvement because $\mathcal {C}$ is strictly ordered given our data construction. Our cooperative game manages to remain effective and gives further improvement." ], [ "In this paper we propose the problem of recovering reasoning chains in multi-hop QA from weak supervision signals. Our model adopts a cooperative game approach in which a ranker and a reasoner cooperate to select the most confident chains. Experiments on the HotpotQA and MedHop benchmarks show the effectiveness of the proposed approach." ], [ "Given the embeddings $\mathbf {Q} = \lbrace \vec{\mathbf {q}_0}, \vec{\mathbf {q}_1}, ..., \vec{\mathbf {q}_N}\rbrace $ of the question $q$, and $\mathbf {H}_i = \lbrace \vec{\mathbf {h}_{i,0}}, \vec{\mathbf {h}_{i,1}}, ..., \vec{\mathbf {h}_{i,M_i}}\rbrace $ of each passage $p_i \in P$, we use the MatchLSTM BIBREF20 to match $\mathbf {Q}$ and $\mathbf {H}_i$ as follows:", "The final vector $\tilde{\mathbf {m}}_i$ represents the matching state between $q$ and $p_i$. All the $\tilde{\mathbf {m}}_i$s are then passed to a linear layer that outputs the ranking score of each passage. We apply softmax over the scores to get the probability of passage selection $P(p_i|q)$. We denote the above computation as $P(p_i|q)=\textrm {MatchLSTM}(\mathbf {H}_i, \mathbf {Q})$ for simplicity." ], [ "Given the question embedding $\mathbf {Q}^r = \lbrace \vec{\mathbf {q}^r_0}, \vec{\mathbf {q}^r_1}, ..., \vec{\mathbf {q}^r_N}\rbrace $ and the input passage embedding $\mathbf {H}^r = \lbrace \vec{\mathbf {h}^r_{0}}, \vec{\mathbf {h}^r_{1}}, ..., \vec{\mathbf {h}^r_{M}}\rbrace $ of $p$, the Reasoner predicts the probability of each entity in the passage being the linking entity of the next passage in the chain. We use a reader model similar to BIBREF3 as our Reasoner network.", "We first describe an attention sub-module.
Given input sequence embedding $\mathbf {A} = \lbrace \vec{\mathbf {a}_0}, \vec{\mathbf {a}_1}, ..., \vec{\mathbf {a}_N}\rbrace $ and $\mathbf {B} = \lbrace \vec{\mathbf {b}_{0}}, \vec{\mathbf {b}_{1}}, ..., \vec{\mathbf {b}_{M}}\rbrace $, we define $\tilde{\mathcal {M}} = \text{Attention}(\mathbf {A}, \mathbf {B})$:", "where FFN denotes a feed-forward layer which projects the concatenated embedding back to the original space.", "The Reasoner network consists of multiple attention layers, together with a bidirectional GRU encoder and skip connections.", "For each token $e_k, k = 0, 1,..., M$ represented by $h^r_{p,k}$ at the corresponding location, we have:", "where $g$ is the classification layer and softmax is applied across all entities to obtain the probability. We denote the computation above as $P^r(e_k| \\mathbf {p}) = \\textrm {MatchLSTM.Reader}(e_k, \\mathbf {p})$ for simplicity." ], [ "In HotpotQA, on average we can find 6 candidate chains (2-hop) in an instance, and the human-labeled true reasoning chain is unique. A predicted chain is correct if it contains all supporting passages and no others (exact match of passages).", "In MedHop, on average we can find 30 candidate chains (3-hop). For each candidate chain, our human annotators labeled whether it is correct or not, and the correct reasoning chain is not unique. A predicted chain is correct if it is one of the chains that the annotators labeled as correct.", "The accuracy is defined as the ratio:", "" ] ] }
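The "Definition of Chain Accuracy" paragraphs above describe the evaluation rule only in prose (the ratio itself is elided in this record; the answer annotations below state it as the number of correct chains predicted over the number of evaluation samples). The minimal Python sketch below spells that rule out under those assumptions; the function and variable names are ours, not from the authors' released code.

```python
# Sketch of the chain-accuracy metric from "Definition of Chain Accuracy".
# Assumption (from the answer annotation in this record): accuracy =
#   (# of correct chains predicted) / (# of evaluation samples).
from typing import List, Sequence, Set, Tuple

def hotpotqa_chain_correct(predicted_passages: Sequence[str],
                           gold_supporting_passages: Set[str]) -> bool:
    # Exact match of passages: the predicted chain must contain all supporting
    # passages and no others; linking entities are not evaluated for HotpotQA.
    return set(predicted_passages) == gold_supporting_passages

def medhop_chain_correct(predicted_chain: Sequence[str],
                         correct_chains: Set[Tuple[str, ...]]) -> bool:
    # The whole ordered chain must be one of the chains that the human
    # annotators labeled as correct (the correct chain is not unique in MedHop).
    return tuple(predicted_chain) in correct_chains

def chain_accuracy(correct_flags: List[bool]) -> float:
    return sum(correct_flags) / len(correct_flags) if correct_flags else 0.0

# Toy usage: two HotpotQA-style predictions, one exact match and one with an
# extra passage, give an accuracy of 0.5.
flags = [
    hotpotqa_chain_correct(["P9", "P3"], {"P3", "P9"}),
    hotpotqa_chain_correct(["P3", "P9", "P1"], {"P3", "P9"}),
]
print(chain_accuracy(flags))  # 0.5
```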
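Stepping back to the Method sections of this record (conditional selection, reward via distant supervision, and the cooperative Reasoner), a compact sketch of the 2-hop reward bookkeeping may help. The exact reward formulas are elided in the text above, so the sketch below is one plausible reading: a selection is rewarded when it matches the head/tail position of some distant-supervision chain, and an extra reward is given when the Reasoner's predicted entity links the two selected passages. All names and inputs are illustrative stand-ins rather than the authors' implementation.

```python
# Illustrative reward bookkeeping for the 2-hop cooperative game
# ("Reward via Distant Supervision" and "Cooperative Reasoner").
# The Ranker and Reasoner models themselves are not implemented here; we only
# show how rewards could be derived from the distant-supervision chain set C.
from typing import List, Tuple

Chain2Hop = Tuple[str, str, str]  # (head_passage, linking_entity, tail_passage)

def two_hop_rewards(candidate_chains: List[Chain2Hop],
                    selected_tail: str,      # Ranker's first pick (tail first)
                    selected_head: str,      # Ranker's second pick
                    predicted_entity: str    # Reasoner's top-1 entity from the tail
                    ) -> Tuple[float, float, float]:
    heads = {h for h, _, _ in candidate_chains}   # P_H: all head passages in C
    tails = {t for _, _, t in candidate_chains}   # P_T: all tail passages in C

    # Immediate rewards: a selection is rewarded if it appears in the
    # corresponding position of some distant-supervision chain.
    r_t = 1.0 if selected_tail in tails else 0.0
    r_h = 1.0 if selected_head in heads else 0.0

    # Cooperative bonus: extra reward when the second selection is connected to
    # the first by the entity the Reasoner predicted (an assumed instantiation
    # of the elided formula).
    bonus = 1.0 if (selected_head, predicted_entity, selected_tail) in candidate_chains else 0.0
    return r_h, r_t, bonus

# Toy usage with made-up passage ids and a made-up linking entity.
chains = [("P3", "e1", "P9")]
print(two_hop_rewards(chains, selected_tail="P9", selected_head="P3",
                      predicted_entity="e1"))  # -> (1.0, 1.0, 1.0)
```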
{ "question": [ "What are two models' architectures in proposed solution?", "How do two models cooperate to select the most confident chains?", "How many hand-labeled reasoning chains have been created?", "What benchmarks are created?" ], "question_id": [ "bd7039f81a5417474efa36f703ebddcf51835254", "022e5c996a72aeab890401a7fdb925ecd0570529", "2a950ede24b26a45613169348d5db9176fda4f82", "34af2c512ec38483754e94e1ea814aa76552d60a" ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Reasoner model, also implemented with the MatchLSTM architecture", "Ranker model" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Method ::: Passage Ranking Model", "The key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\\mathcal {P} = \\lbrace p_1, p_2 ... p_K\\rbrace $ from a pool of candidates, and outputs a chain of selected passages.", "Method ::: Cooperative Reasoner", "To alleviate the noise in the distant supervision signal $\\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. Taking 2-hop as an example, we train the Ranker and Reasoner alternatively as a cooperative game:" ], "highlighted_evidence": [ "Method ::: Passage Ranking Model\nThe key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\\mathcal {P} = \\lbrace p_1, p_2 ... p_K\\rbrace $ from a pool of candidates, and outputs a chain of selected passages.", "Method ::: Cooperative Reasoner\nTo alleviate the noise in the distant supervision signal $\\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages." ] } ], "annotation_id": [ "8eefbea2f3cfcf402f9d072e674b0300e54adc66" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards" ], "yes_no": null, "free_form_answer": "", "evidence": [ "To alleviate the noise in the distant supervision signal $\\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. 
Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. Taking 2-hop as an example, we train the Ranker and Reasoner alternatively as a cooperative game:" ], "highlighted_evidence": [ "Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards." ] } ], "annotation_id": [ "af6e29e48d2faba6721794b69df129ff67314a89" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "14dac62604a816e476874958f9232db308ef029e" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Answer with content missing: (formula) The accuracy is defined as the ratio # of correct chains predicted to # of evaluation samples", "evidence": [ "In HotpotQA, on average we can find 6 candidate chains (2-hop) in a instance, and the human labeled true reasoning chain is unique. A predicted chain is correct if the chain only contains all supporting passages (exact match of passages).", "In MedHop, on average we can find 30 candidate chains (3-hop). For each candidate chain our human annotators labeled whether it is correct or not, and the correct reasoning chain is not unique. A predicted chain is correct if it is one of the chains that human labeled as correct.", "The accuracy is defined as the ratio:" ], "highlighted_evidence": [ "In HotpotQA, on average we can find 6 candidate chains (2-hop) in a instance, and the human labeled true reasoning chain is unique. A predicted chain is correct if the chain only contains all supporting passages (exact match of passages).\n\nIn MedHop, on average we can find 30 candidate chains (3-hop). For each candidate chain our human annotators labeled whether it is correct or not, and the correct reasoning chain is not unique. A predicted chain is correct if it is one of the chains that human labeled as correct.\n\nThe accuracy is defined as the ratio:", "The accuracy is defined as the ratio:" ] } ], "annotation_id": [ "f0ac256d61835f95f747206c359e03b9e4acd2e3" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: An example of reasoning chains in HotpotQA (2- hop) and MedHop (3-hop). HotpotQA provides only supporting passages {P3, P9}, without order and linking information.", "Figure 2: Model overview. The cooperative Ranker and Reasoner are trained alternatively. The Ranker selects a passage p at each step conditioned on the question q and history selection, and receives reward r1 if p is evidence. Conditioned on q, the Reasoner predicts which entity from p links to the next evidence passage. The Ranker receives extra reward r2 if its next selection is connected by the entity predicted by the Reasoner. Both q and answer a are model inputs. While q is fed to the Ranker/Reasoner as input, empirically the best way of using a is for constructing the candidate set thus computing the reward r1. We omit the flow from q/a for simplicity.", "Table 1: Reasoning Chain selection results.", "Table 2: Ablation test on HotpotQA." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "4-Table1-1.png", "4-Table2-1.png" ] }
2004.01694
A Set of Recommendations for Assessing Human-Machine Parity in Language Translation
The quality of machine translation has increased remarkably over the past years, to the degree that it was found to be indistinguishable from professional human translation in a number of empirical investigations. We reassess Hassan et al.'s 2018 investigation into Chinese to English news translation, showing that the finding of human-machine parity was owed to weaknesses in the evaluation design - which is currently considered best practice in the field. We show that the professional human translations contained significantly fewer errors, and that perceived quality in human evaluation depends on the choice of raters, the availability of linguistic context, and the creation of reference translations. Our results call for revisiting current best practices to assess strong machine translation systems in general and human-machine parity in particular, for which we offer a set of recommendations based on our empirical findings.
{ "section_name": [ "Introduction", "Background", "Background ::: Human Evaluation of Machine Translation", "Background ::: Assessing Human–Machine Parity", "Background ::: Assessing Human–Machine Parity ::: Choice of Raters", "Background ::: Assessing Human–Machine Parity ::: Linguistic Context", "Background ::: Assessing Human–Machine Parity ::: Reference Translations", "Background ::: Translations", "Choice of Raters", "Choice of Raters ::: Evaluation Protocol", "Choice of Raters ::: Results", "Linguistic Context", "Linguistic Context ::: Evaluation Protocol", "Linguistic Context ::: Results", "Linguistic Context ::: Discussion", "Reference Translations", "Reference Translations ::: Quality", "Reference Translations ::: Directionality", "Recommendations", "Recommendations ::: (R1) Choose professional translators as raters.", "Recommendations ::: (R2) Evaluate documents, not sentences.", "Recommendations ::: (R3) Evaluate fluency in addition to adequacy.", "Recommendations ::: (R4) Do not heavily edit reference translations for fluency.", "Recommendations ::: (R5) Use original source texts.", "Conclusion" ], "paragraphs": [ [ "Machine translation (MT) has made astounding progress in recent years thanks to improvements in neural modelling BIBREF0, BIBREF1, BIBREF2, and the resulting increase in translation quality is creating new challenges for MT evaluation. Human evaluation remains the gold standard, but there are many design decisions that potentially affect the validity of such a human evaluation.", "This paper is a response to two recent human evaluation studies in which some neural machine translation systems reportedly performed at (or above) the level of human translators for news translation from Chinese to English BIBREF3 and English to Czech BIBREF4, BIBREF5.", "Both evaluations were based on current best practices in the field: they used a source-based direct assessment with non-expert annotators, using data sets and the evaluation protocol of the Conference on Machine Translation (WMT). While the results are intriguing, especially because they are based on best practices in MT evaluation, BIBREF5 warn against taking their results as evidence for human–machine parity, and caution that for well-resourced language pairs, an update of WMT evaluation style will be needed to keep up with the progress in machine translation. We concur that these findings have demonstrated the need to critically re-evaluate the design of human MT evaluation.", "Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use of linguistic context, and the creation of reference translations. We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation. We find that for all three aspects, human translations are judged more favourably, and significantly better than MT, when we make changes that we believe strengthen the evaluation design. Based on our empirical findings, we formulate a set of recommendations for human MT evaluation in general, and assessing human–machine parity in particular. All of our data are made publicly available for external validation and further analysis." ], [ "We first review current methods to assess the quality of machine translation system outputs, and highlight potential issues in using these methods to compare such outputs to translations produced by professional human translators." 
], [ "The evaluation of MT quality has been the subject of controversial discussions in research and the language services industry for decades due to its high economic importance. While automatic evaluation methods are particularly important in system development, there is consensus that a reliable evaluation should—despite high costs—be carried out by humans.", "Various methods have been proposed for the human evaluation of MT quality BIBREF8. What they have in common is that the MT output to be rated is paired with a translation hint: the source text or a reference translation. The MT output is then either adapted or scored with reference to the translation hint by human post-editors or raters, respectively.", "As part of the large-scale evaluation campaign at WMT, two primary evaluation methods have been used in recent years: relative ranking and direct assessment BIBREF9. In the case of relative ranking, raters are presented with outputs from two or more systems, which they are asked to evaluate relative to each other (e.g., to determine system A is better than system B). Ties (e.g., system A is as good or as bad as system B) are typically allowed. Compared to absolute scores on Likert scales, data obtained through relative ranking show better inter- and intra-annotator agreement BIBREF10. However, they do not allow conclusions to be drawn about the order of magnitude of the differences, so that it is not possible to determine how much better system A was than system B.", "This is one of the reasons why direct assessment has prevailed as an evaluation method more recently. In contrast to relative ranking, the raters are presented with one MT output at a time, to which they assign a score between 0 and 100. To increase homogeneity, each rater's ratings are standardised BIBREF11. Reference translations serve as the basis in the context of WMT, and evaluations are carried out by monolingual raters. To avoid reference bias, the evaluation can be based on source texts instead, which presupposes bilingual raters, but leads to more reliable results overall BIBREF12." ], [ "BIBREF3 base their claim of achieving human–machine parity on a source-based direct assessment as described in the previous section, where they found no significant difference in ratings between the output of their MT system and a professional human translation. Similarly, BIBREF5 report that the best-performing English to Czech system submitted to WMT 2018 BIBREF4 significantly outperforms the human reference translation. However, the authors caution against interpreting their results as evidence of human–machine parity, highlighting potential limitations of the evaluation.", "In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations." ], [ "The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. 
BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations." ], [ "MT has been evaluated almost exclusively at the sentence level, owing to the fact that most MT systems do not yet take context across sentence boundaries into account. However, when machine translations are compared to those of professional translators, the omission of linguistic context—e. g., by random ordering of the sentences to be evaluated—does not do justice to humans who, in contrast to most MT systems, can and do take inter-sentential context into account BIBREF15, BIBREF16. We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents." ], [ "The human reference translations with which machine translations are compared within the scope of a human–machine parity assessment play an important role. BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; the other half were translated from English into Chinese. Since translated texts are usually simpler than their original counterparts BIBREF17, they should be easier to translate for MT systems. Moreover, different human translations of the same source text sometimes show considerable differences in quality, and a comparison with an MT system only makes sense if the human reference translations are of high quality. BIBREF3, for example, had the WMT source texts re-translated as they were not convinced of the quality of the human translations in the test set. At WMT 2018, the organisers themselves noted that the manual evaluation included several reports of ill-formed reference translations BIBREF5. We hypothesise that the quality of the human translations has a significant effect on findings of human–machine parity, which would indicate that it is necessary to ensure that human translations used to assess parity claims need to be carefully vetted for their quality.", "We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. Based on our findings, we then distil a set of recommendations for human evaluation of strong MT systems, with a focus on assessing human–machine parity (Section SECREF6)." ], [ "We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article:", "[labelwidth=1cm, leftmargin=1.25cm]", "The professional human translations in the dataset of BIBREF3.[1]", "Professional human translations that we ordered from a different translation vendor, which included a post-hoc native English check. 
We produced these only for the documents that were originally Chinese, as discussed in more detail in Section SECREF35.", "The machine translations produced by BIBREF3's BIBREF3 best system (Combo-6),[1] for which the authors found parity with H$_A$.", "The machine translations produced by Google's production system (Google Translate) in October 2017, as contained in BIBREF3's BIBREF3 dataset.[1]", "Statistical significance is denoted by * ($p\\le .05$), ** ($p\\le .01$), and *** ($p\\le .001$) throughout this article, unless otherwise stated." ], [ "Both professional and amateur evaluators can be involved in human evaluation of MT quality. However, from published work in the field BIBREF19, it is fair to say that there is a tendency to “rely on students and amateur evaluators, sometimes with an undefined (or self-rated) proficiency in the languages involved, an unknown expertise with the text type\" BIBREF8.", "Previous work on evaluation of MT output by professional translators against crowd workers by BIBREF20 showed that for all language pairs (involving 11 languages) evaluated, crowd workers tend to be more accepting of the MT output by giving higher fluency and adequacy scores and performing very little post-editing. The authors argued that non-expert translators lack knowledge of translation and so might not notice subtle differences that make one translation more suitable than another, and therefore, when confronted with a translation that is hard to post-edit, tend to accept the MT rather than try to improve it." ], [ "We test for difference in ratings of MT outputs and human translations between experts and non-experts. We consider professional translators as experts, and both crowd workers and MT researchers as non-experts.", "We conduct a relative ranking experiment using one professional human (H$_A$) and two machine translations (MT$_1$ and MT$_2$), considering the native Chinese part of the WMT 2017 Chinese–English test set (see Section SECREF35 for details). The 299 sentences used in the experiments stem from 41 documents, randomly selected from all the documents in the test set originally written in Chinese, and are shown in their original order. Raters are shown one sentence at a time, and see the original Chinese source alongside the three translations. The previous and next source sentences are also shown, in order to provide the annotator with local inter-sentential context.", "Five raters—two experts and three non-experts—participated in the assessment. The experts were professional Chinese to English translators: one native in Chinese with a fluent level of English, the other native in English with a fluent level of Chinese. The non-experts were NLP researchers native in Chinese, working in an English-speaking country.", "The ratings are elicited with Appraise BIBREF21. We derive an overall score for each translation (H$_A$, MT$_1$, and MT$_2$) based on the rankings. We use the TrueSkill method adapted to MT evaluation BIBREF22 following its usage at WMT15, i. e., we run 1,000 iterations of the rankings recorded with Appraise followed by clustering (significance level $\\alpha =0.05$)." ], [ "Table TABREF17 shows the TrueSkill scores for each translation resulting from the evaluations by expert and non-expert translators. We find that translation expertise affects the judgement of MT$_1$ and H$_A$, where the rating gap is wider for the expert raters. 
This indicates that non-experts disregard translation nuances in the evaluation, which leads to a more tolerant judgement of MT systems and a lower inter-annotator agreement ($\\kappa =0.13$ for non-experts versus $\\kappa =0.254$ for experts).", "It is worth noticing that, regardless of their expertise, the performance of human raters may vary over time. For example, performance may improve or decrease due to learning effects or fatigue, respectively BIBREF23. It is likely that such longitudinal effects are present in our data. They should be accounted for in future work, e. g., by using trial number as an additional predictor BIBREF24." ], [ "Another concern is the unit of evaluation. Historically, machine translation has primarily operated on the level of sentences, and so has machine translation evaluation. However, it has been remarked that human raters do not necessarily understand the intended meaning of a sentence shown out-of-context BIBREF25, which limits their ability to spot some mistranslations. Also, a sentence-level evaluation will be blind to errors related to textual cohesion and coherence.", "While sentence-level evaluation may be good enough when evaluating MT systems of relatively low quality, we hypothesise that with additional context, raters will be able to make more nuanced quality assessments, and will also reward translations that show more textual cohesion and coherence. We believe that this aspect should be considered in evaluation, especially when making claims about human–machine parity, since human translators can and do take inter-sentential context into account BIBREF15, BIBREF16." ], [ "We test if the availability of document-level context affects human–machine parity claims in terms of adequacy and fluency. In a pairwise ranking experiment, we show raters (i) isolated sentences and (ii) entire documents, asking them to choose the better (with ties allowed) from two translation outputs: one produced by a professional translator, the other by a machine translation system. We do not show reference translations as one of the two options is itself a human translation.", "We use source sentences and documents from the WMT 2017 Chinese–English test set (see Section SECREF8): documents are full news articles, and sentences are randomly drawn from these news articles, regardless of their position. We only consider articles from the test set that are native Chinese (see Section SECREF35). In order to compare our results to those of BIBREF3, we use both their professional human (H$_A$) and machine translations (MT$_1$).", "Each rater evaluates both sentences and documents, but never the same text in both conditions so as to avoid repetition priming BIBREF26. The order of experimental items as well as the placement of choices (H$_A$, MT$_1$; left, right) are randomised.", "We use spam items for quality control BIBREF27: In a small fraction of items, we render one of the two options nonsensical by randomly shuffling the order of all translated words, except for 10 % at the beginning and end. If a rater marks a spam item as better than or equal to an actual translation, this is a strong indication that they did not read both options carefully.", "We recruit professional translators (see Section SECREF3) from proz.com, a well-known online market place for professional freelance translation, considering Chinese to English translators and native English revisers for the adequacy and fluency conditions, respectively. 
In each condition, four raters evaluate 50 documents (plus 5 spam items) and 104 sentences (plus 16 spam items). We use two non-overlapping sets of documents and two non-overlapping sets of sentences, and each is evaluated by two raters." ], [ "", "Results are shown in Table TABREF21. We note that sentence ratings from two raters are excluded from our analysis because of unintentional textual overlap with documents, meaning we cannot fully rule out that sentence-level decisions were informed by access to the full documents they originated from. Moreover, we exclude document ratings from one rater in the fluency condition because of poor performance on spam items, and recruit an additional rater to re-rate these documents.", "We analyse our data using two-tailed Sign Tests, the null hypothesis being that raters do not prefer MT$_1$ over H$_A$ or vice versa, implying human–machine parity. Following WMT evaluation campaigns that used pairwise ranking BIBREF28, the number of successes $x$ is the number of ratings in favour of H$_A$, and the number of trials $n$ is the number of all ratings except for ties. Adding half of the ties to $x$ and the total number of ties to $n$ BIBREF29 does not impact the significance levels reported in this section.", "Adequacy raters show no statistically significant preference for MT$_1$ or H$_A$ when evaluating isolated sentences ($x=86, n=189, p=.244$). This is in accordance with BIBREF3, who found the same in a source-based direct assessment experiment with crowd workers. With the availability of document-level context, however, preference for MT$_1$ drops from 49.5 to 37.0 % and is significantly lower than preference for human translation ($x=104, n=178, p<.05$). This evidences that document-level context cues allow raters to get a signal on adequacy.", "Fluency raters prefer H$_A$ over MT$_1$ both on the level of sentences ($x=106, n=172, p<.01$) and documents ($x=99, n=143, p<.001$). This is somewhat surprising given that increased fluency was found to be one of the main strengths of NMT BIBREF30, as we further discuss in Section SECREF24. The availability of document-level context decreases fluency raters' preference for MT$_1$, which falls from 31.7 to 22.0 %, without increasing their preference for H$_A$ (Table TABREF21)." ], [ "Our findings emphasise the importance of linguistic context in human evaluation of MT. In terms of adequacy, raters assessing documents as a whole show a significant preference for human translation, but when assessing single sentences in random order, they show no significant preference for human translation.", "Document-level evaluation exposes errors to raters which are hard or impossible to spot in a sentence-level evaluation, such as coherent translation of named entities. The example in Table TABREF23 shows the first two sentences of a Chinese news article as translated by a professional human translator (H$_A$) and BIBREF3's BIBREF3 NMT system (MT$_1$). When looking at both sentences (document-level evaluation), it can be seen that MT$_1$ uses two different translations to refer to a cultural festival, “2016盂兰文化节\", whereas the human translation uses only one. When assessing the second sentence out of context (sentence-level evaluation), it is hard to penalise MT$_1$ for producing 2016 Python Cultural Festival, particularly for fluency raters without access to the corresponding source text. For further examples, see Section SECREF24 and Table TABREF34." 
], [ "Yet another relevant element in human evaluation is the reference translation used. This is the focus of this section, where we cover two aspects of reference translations that can have an impact on evaluation: quality and directionality." ], [ "Because the translations are created by humans, a number of factors could lead to compromises in quality:", "If the translator is a non-native speaker of the source language, they may make mistakes in interpreting the original message. This is particularly true if the translator does not normally work in the domain of the text, e. g., when a translator who normally works on translating electronic product manuals is asked to translate news.", "If the translator is a non-native speaker of the target language, they might not be able to generate completely fluent text. This similarly applies to domain-specific terminology.", "Unlike computers, human translators have limits in time, attention, and motivation, and will generally do a better job when they have sufficient time to check their work, or are particularly motivated to do a good job, such as when doing a good job is necessary to maintain their reputation as a translator.", "In recent years, a large number of human translation jobs are performed by post-editing MT output, which can result in MT artefacts remaining even after manual post-editing BIBREF31, BIBREF32, BIBREF33.", "In this section, we examine the effect of the quality of underlying translations on the conclusions that can be drawn with regards to human–machine parity. We first do an analysis on (i) how the source of the human translation affects claims of human–machine parity, and (ii) whether significant differences exist between two varieties of human translation. We follow the same protocol as in Section SECREF19, having 4 professional translators per condition, evaluate the translations for adequacy and fluency on both the sentence and document level.", "The results are shown in Table TABREF30. From this, we can see that the human translation H$_B$, which was aggressively edited to ensure target fluency, resulted in lower adequacy (Table TABREF30). With more fluent and less accurate translations, raters do not prefer human over machine translation in terms of adequacy (Table TABREF30), but have a stronger preference for human translation in terms of fluency (compare Tables TABREF30 and TABREF21). In a direct comparison of the two human translations (Table TABREF30), we also find that H$_A$ is considered significantly more adequate than H$_B$, while there is no significant difference in fluency.", "To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e. g., regarding poor register choice, or coreference errors), and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors who is a bi-lingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct based on the context. 
The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32.", "From these results, we can glean a few interesting insights. First, we find significantly larger numbers of errors of the categories of Incorrect Word and Named Entity in MT$_1$, indicating that the MT system is less effective at choosing correct translations for individual words than the human translators. An example of this can be found in Table TABREF33, where we see that the MT system refers to a singular “point of view\" and translates 线路 (channel, route, path) into the semantically similar but inadequate lines. Interestingly, MT$_1$ has significantly more Word Order errors, one example of this being shown in Table TABREF33, with the relative placements of at the end of last year (去年年底) and stop production (停产). This result is particularly notable given previous reports that NMT systems have led to great increases in reordering accuracy compared to previous statistical MT systems BIBREF35, BIBREF36, demonstrating that the problem of generating correctly ordered output is far from solved even in very strong NMT systems. Moreover, H$_B$ had significantly more Missing Word (Semantics) errors than both H$_A$ ($p<.001$) and MT$_1$ ($p<.001$), an indication that the proofreading process resulted in drops of content in favour of fluency. An example of this is shown in Table TABREF33, where H$_B$ dropped the information that the meetings between Suning and Apple were recently (近期) held. Finally, while there was not a significant difference, likely due to the small number of examples overall, it is noticeable that MT$_1$ had a higher percentage of Collocation and Context errors, which indicate that the system has more trouble translating words that are dependent on longer-range context. Similarly, some Named Entity errors are also attributable to translation inconsistencies due to lack of longer-range context. Table TABREF34 shows an example where we see that the MT system was unable to maintain a consistently gendered or correct pronoun for the female Olympic shooter Zhang Binbin (张彬彬).", "Apart from showing qualitative differences between the three translations, the analysis also supports the finding of the pairwise ranking study: H$_A$ is both preferred over MT$_1$ in the pairwise ranking study, and exhibits fewer translation errors in our error classification. H$_B$ has a substantially higher number of missing words than the other two translations, which agrees with the lower perceived adequacy in the pairwise ranking.", "However, the analysis not only supports the findings of the pairwise ranking study, but also adds nuance to it. Even though H$_B$ has the highest number of deletions, and does worse than the other two translations in a pairwise adequacy ranking, it is similar to H$_A$, and better than MT$_1$, in terms of most other error categories." ], [ "Translation quality is also affected by the nature of the source text. In this respect, we note that from the 2,001 sentences in the WMT 2017 Chinese–English test set, half were originally written in Chinese; the remaining half were originally written in English and then manually translated into Chinese. 
This Chinese reference file (half original, half translated) was then manually translated into English by BIBREF3 to make up the reference for assessing human–machine parity. Therefore, 50 % of the reference comprises direct English translations from the original Chinese, while 50 % are English translations from the human-translated file from English into Chinese, i. e., backtranslations of the original English.", "According to BIBREF37, translated texts differ from their originals in that they are simpler, more explicit, and more normalised. For example, the synonyms used in an original text may be replaced by a single translation. These differences are referred to as translationese, and have been shown to affect translation quality in the field of machine translation BIBREF38, BIBREF39, BIBREF32, BIBREF33.", "We test whether translationese has an effect on assessing parity between translations produced by humans and machines, using relative rankings of translations in the WMT 2017 Chinese–English test set by five raters (see Section SECREF3). Our hypothesis is that the difference between human and machine translation quality is smaller when source texts are translated English (translationese) rather than original Chinese, because a translationese source text should be simpler and thus easier to translate for an MT system. We confirm Laviosa's observation that “translationese” Chinese (that started as English) exhibits less lexical variety than “natively” Chinese text and demonstrate that translationese source texts are generally easier for MT systems to score well on.", "Table TABREF36 shows the TrueSkill scores for translations (H$_A$, MT$_1$, and MT$_2$) of the entire test set (Both) versus only the sentences originally written in Chinese or English therein. The human translation H$_A$ outperforms the machine translation MT$_1$ significantly when the original language is Chinese, while the difference between the two is not significant when the original language is English (i. e., translationese input).", "We also compare the two subsets of the test set, original and translationese, using type-token ratio (TTR). Our hypothesis is that the TTR will be smaller for the translationese subset, thus its simpler nature getting reflected in a less varied use of language. While both subsets contain a similar number of sentences (1,001 and 1,000), the Chinese subset contains more tokens (26,468) than its English counterpart (22,279). We thus take a subset of the Chinese (840 sentences) containing a similar amount of words to the English data (22,271 words). We then calculate the TTR for these two subsets using bootstrap resampling. The TTR for Chinese ($M=0.1927$, $SD=0.0026$, 95 % confidence interval $[0.1925,0.1928]$) is 13 % higher than that for English ($M=0.1710$, $SD=0.0025$, 95 % confidence interval $[0.1708,0.1711]$).", "Our results show that using translationese (Chinese translated from English) rather than original source texts results in higher scores for MT systems in human evaluation, and that the lexical variety of translationese is smaller than that of original text." ], [ "Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. 
In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general." ], [ "In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs." ], [ "When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4)." ], [ "Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality." ], [ "In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30)." ], [ "Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT.", "Our work empirically strengthens and extends the recommendations on human MT evaluation in previous work BIBREF6, BIBREF7, some of which have meanwhile been adopted by the large-scale evaluation campaign at WMT 2019 BIBREF40: the new evaluation protocol uses original source texts only (R5) and gives raters access to document-level context (R2). The findings of WMT 2019 provide further evidence in support of our recommendations. In particular, human English to Czech translation was found to be significantly better than MT BIBREF40; the comparison includes the same MT system (CUNI-Transformer-T2T-2018) which outperformed human translation according to the previous protocol BIBREF5. Results also show a larger difference between human translation and MT in document-level evaluation.", "We note that in contrast to WMT, the judgements in our experiments are provided by a small number of human raters: five in the experiments of Sections SECREF3 and SECREF35, four per condition (adequacy and fluency) in Section SECREF4, and one in the fine-grained error analysis presented in Section SECREF24. 
Moreover, the results presented in this article are based on one text domain (news) and one language direction (Chinese to English), and while a large-scale evaluation with another language pair supports our findings (see above), further experiments with more languages, domains, and raters will be required to increase their external validity." ], [ "We compared professional human Chinese to English translations to the output of a strong MT system. In a human evaluation following best practices, BIBREF3 found no significant difference between the two, concluding that their NMT system had reached parity with professional human translation. Our blind qualitative analysis, however, showed that the machine translation output contained significantly more incorrect words, omissions, mistranslated names, and word order errors.", "Our experiments show that recent findings of human–machine parity in language translation are owed to weaknesses in the design of human evaluation campaigns. We empirically tested alternatives to what is currently considered best practice in the field, and found that the choice of raters, the availability of linguistic context, and the creation of reference translations have a strong impact on perceived translation quality. As for the choice of raters, professional translators showed a significant preference for human translation, while non-expert raters did not. In terms of linguistic context, raters found human translation significantly more accurate than machine translation when evaluating full documents, but not when evaluating single sentences out of context. They also found human translation significantly more fluent than machine translation, both when evaluating full documents and single sentences. Moreover, we showed that aggressive editing of human reference translations for target language fluency can decrease adequacy to the point that they become indistinguishable from machine translation, and that raters found human translations significantly better than machine translations of original source texts, but not of source texts that were translations themselves.", "Our results strongly suggest that in order to reveal errors in the output of strong MT systems, the design of MT quality assessments with human raters should be revisited. To that end, we have offered a set of recommendations, supported by empirical data, which we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Our recommendations have the aim of increasing the validity of MT evaluation, but we are aware of the high cost of having MT evaluation done by professional translators, and on the level of full documents. We welcome future research into alternative evaluation protocols that can demonstrate their validity at a lower cost." ] ] }
{ "question": [ "What empricial investigations do they reference?", "What languages do they investigate for machine translation?", "What recommendations do they offer?", "What percentage fewer errors did professional translations make?", "What was the weakness in Hassan et al's evaluation design?" ], "question_id": [ "c1429f7fed5a4dda11ac7d9643f97af87a83508b", "a93d4aa89ac3abbd08d725f3765c4f1bed35c889", "bc473c5bd0e1a8be9b2037aa7006fd68217c3f47", "cc5d8e12f6aecf6a5f305e2f8b3a0c67f49801a9", "9299fe72f19c1974564ea60278e03a423eb335dc" ], "nlp_background": [ "five", "five", "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "somewhat", "somewhat", "somewhat", "somewhat", "somewhat" ], "search_query": [ "professional machine translation", "professional machine translation", "professional machine translation", "professional machine translation", "professional machine translation" ], "question_writer": [ "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use of linguistic context, and the creation of reference translations. We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation. We find that for all three aspects, human translations are judged more favourably, and significantly better than MT, when we make changes that we believe strengthen the evaluation design. Based on our empirical findings, we formulate a set of recommendations for human MT evaluation in general, and assessing human–machine parity in particular. All of our data are made publicly available for external validation and further analysis.", "In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations.", "We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. Based on our findings, we then distil a set of recommendations for human evaluation of strong MT systems, with a focus on assessing human–machine parity (Section SECREF6)." ], "highlighted_evidence": [ "We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation.", "In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations.", "We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. 
" ] } ], "annotation_id": [ "1ddd2172cbc25dc21125633fb2e28aec5c10e7d3" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "English ", "Chinese " ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article:" ], "highlighted_evidence": [ "We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article:" ] } ], "annotation_id": [ "28aa8fcfcab07884996f3a2b9fa3172dd6d2d6ce" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " Choose professional translators as raters", " Evaluate documents, not sentences", "Evaluate fluency in addition to adequacy", "Do not heavily edit reference translations for fluency", "Use original source texts" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general.", "Recommendations ::: (R1) Choose professional translators as raters.", "In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs.", "Recommendations ::: (R2) Evaluate documents, not sentences.", "When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4).", "Recommendations ::: (R3) Evaluate fluency in addition to adequacy.", "Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality.", "Recommendations ::: (R4) Do not heavily edit reference translations for fluency.", "In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. 
As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30).", "Recommendations ::: (R5) Use original source texts.", "Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT." ], "highlighted_evidence": [ " In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general.\n\nRecommendations ::: (R1) Choose professional translators as raters.\nIn our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs.\n\nRecommendations ::: (R2) Evaluate documents, not sentences.\nWhen evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4).\n\nRecommendations ::: (R3) Evaluate fluency in addition to adequacy.\nRaters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality.\n\nRecommendations ::: (R4) Do not heavily edit reference translations for fluency.\nIn professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30).\n\nRecommendations ::: (R5) Use original source texts.\nRaters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT." 
] } ], "annotation_id": [ "14ecb78fcacae0b5f0d6142a9a411d3529f85f49" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "36%", "evidence": [ "FLOAT SELECTED: Table 5: Classification of errors in machine translation MT1 and two professional human translation outputs HA and HB. Errors represent the number of sentences (out of N = 150) that contain at least one error of the respective type. We also report the number of sentences that contain at least one error of any category (Any), and the total number of error categories present in all sentences (Total). Statistical significance is assessed with Fisher’s exact test (two-tailed) for each pair of translation outputs.", "To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e. g., regarding poor register choice, or coreference errors), and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors who is a bi-lingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct based on the context. The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Classification of errors in machine translation MT1 and two professional human translation outputs HA and HB. Errors represent the number of sentences (out of N = 150) that contain at least one error of the respective type. We also report the number of sentences that contain at least one error of any category (Any), and the total number of error categories present in all sentences (Total). Statistical significance is assessed with Fisher’s exact test (two-tailed) for each pair of translation outputs.", "To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3", " The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32." 
] } ], "annotation_id": [ "79ae7089b4dbabf590fb2d5377cf0d39c650ea2c" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "MT developers to which crowd workers were compared are usually not professional translators, evaluation of sentences in isolation prevents raters from detecting translation errors, used not originally written Chinese test set\n", "evidence": [ "The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations.", "MT has been evaluated almost exclusively at the sentence level, owing to the fact that most MT systems do not yet take context across sentence boundaries into account. However, when machine translations are compared to those of professional translators, the omission of linguistic context—e. g., by random ordering of the sentences to be evaluated—does not do justice to humans who, in contrast to most MT systems, can and do take inter-sentential context into account BIBREF15, BIBREF16. We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents.", "The human reference translations with which machine translations are compared within the scope of a human–machine parity assessment play an important role. BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; the other half were translated from English into Chinese. Since translated texts are usually simpler than their original counterparts BIBREF17, they should be easier to translate for MT systems. Moreover, different human translations of the same source text sometimes show considerable differences in quality, and a comparison with an MT system only makes sense if the human reference translations are of high quality. BIBREF3, for example, had the WMT source texts re-translated as they were not convinced of the quality of the human translations in the test set. At WMT 2018, the organisers themselves noted that the manual evaluation included several reports of ill-formed reference translations BIBREF5. We hypothesise that the quality of the human translations has a significant effect on findings of human–machine parity, which would indicate that it is necessary to ensure that human translations used to assess parity claims need to be carefully vetted for their quality." 
], "highlighted_evidence": [ " BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. ", "We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents.", "BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; " ] } ], "annotation_id": [ "77a93df6767ecde02e609c91ecef4f61735297e4" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ] }
{ "caption": [ "Table 1: Ranks and TrueSkill scores (the higher the better) of one human (HA) and two machine translations (MT1, MT2) for evaluations carried out by expert and non-expert translators. An asterisk next to a translation indicates that this translation is significantly better than the one in the next rank at p ≤ .05.", "Table 2: Pairwise ranking results for machine (MT1) against professional human translation (HA) as obtained from blind evaluation by professional translators. Preference for MT1 is lower when document-level context is available.", "Table 4: Pairwise ranking results for one machine (MT1) and two professional human translations (HA, HB) as obtained from blind evaluation by professional translators.", "Table 5: Classification of errors in machine translation MT1 and two professional human translation outputs HA and HB. Errors represent the number of sentences (out of N = 150) that contain at least one error of the respective type. We also report the number of sentences that contain at least one error of any category (Any), and the total number of error categories present in all sentences (Total). Statistical significance is assessed with Fisher’s exact test (two-tailed) for each pair of translation outputs.", "Table 6: (Continued from previous page.)", "Table 7: Ranks of the translations given the original language of the source side of the test set shown with their TrueSkill score (the higher the better). An asterisk next to a translation indicates that this translation is significantly better than the one in the next rank at p ≤ .05." ], "file": [ "6-Table1-1.png", "8-Table2-1.png", "11-Table4-1.png", "12-Table5-1.png", "14-Table6-1.png", "15-Table7-1.png" ] }
2003.00576
StructSum: Incorporating Latent and Explicit Sentence Dependencies for Single Document Summarization
Traditional pre-neural approaches to single-document summarization relied on modeling the intermediate structure of a document before generating the summary. In contrast, current state-of-the-art neural summarization models do not preserve any intermediate structure, resorting to encoding the document as a sequence of tokens. The goal of this work is two-fold: to improve the quality of generated summaries and to learn interpretable document representations for summarization. To this end, we propose incorporating latent and explicit sentence dependencies into single-document summarization models. We use structure-aware encoders to induce latent sentence relations, and inject an explicit coreferring-mention graph across sentences to incorporate explicit structure. On the CNN/DM dataset, our model outperforms standard baselines and provides intermediate latent structures for analysis. We present an extensive analysis of our summaries and show that modeling document structure reduces the copying of long sequences and incorporates richer content from the source document, while maintaining comparable summary lengths and achieving a higher degree of abstraction.
{ "section_name": [ "Introduction", "StructSum Model", "StructSum Model ::: Encoder", "StructSum Model ::: Latent Structure (LS) Attention", "StructSum Model ::: Explicit Structure (ES) Attention", "StructSum Model ::: Explicit Structure (ES) Attention ::: Incorporating explicit structure", "Experiments ::: Dataset:", "Experiments ::: Baselines:", "Experiments ::: Hyperparameters:", "Results", "Analysis", "Analysis ::: Analysis of Copying", "Analysis ::: Content Selection and Abstraction", "Analysis ::: Layout Bias", "Analysis ::: Document Structures", "Related Work", "Conclusion and Future Work" ], "paragraphs": [ [ "Traditional approaches to abstractive summarization have relied on interpretable structured representations such as graph based sentence centrality BIBREF0, AMR parses BIBREF1, discourse based compression and anaphora constraints BIBREF2. On the other hand, state of the art neural approaches to single document summarization encode the document as a sequence of tokens and compose them into a document representation BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Albeit being effective, these systems learn to rely significantly on layout bias associated with the source document BIBREF8 and do not lend themselves easily to interpretation via intermediate structures.", "Recent work provides evidence that structured representation of text leads to better document representations BIBREF9, BIBREF10. However, structured representations are under-explored in the neural summarization literature. Motivated by this, we propose a structure-aware end-to-end model (§SECREF2) for summarization. Our proposed model, StructSum, augments the existing pointer-generator network BIBREF3 with two novel components: (1) a latent-structure attention module that adapts structured representations BIBREF11, BIBREF12 for the summarization task, and (2) an explicit-structure attention module, that incorporates a coreference graph. The components together model sentence level dependencies in a document generating rich structured representations. The motivation of this work is to provide a framework to induce rich interpretable latent structures and inject external document structures that can be introduced into any document encoder model.", "Encoders with induced latent structures have been shown to benefit several tasks including document classification, natural language inference BIBREF12, BIBREF13, and machine translation BIBREF11. Building on this motivation, our latent structure attention module builds upon BIBREF12 to model the dependencies between sentences in a document. It uses a variant of Kirchhoff’s matrix-tree theorem BIBREF14 to model such dependencies as non-projective tree structures(§SECREF3). The explicit attention module is linguistically-motivated and aims to incorporate sentence-level structures from externally annotated document structures. We incorporate a coreference based sentence dependency graph, which is then combined with the output of the latent structure attention module to produce a hybrid structure-aware sentence representation (§SECREF5).", "We evaluate our model on the CNN/DM dataset BIBREF15 and show in §SECREF4 that it outperforms strong baselines by up to 1.1 ROUGE-L. We find that the latent and explicit structures are complementary, both contributing to the final performance improvement. Our modules are also independent of the underlying encoder-decoder architectures, rendering them flexible to be incorporated into any advanced models. 
Our analysis quantitatively compares our generated summaries with the baselines and reference documents (§SECREF5). It reveals that structure-aware summarization reduces the bias of copying large sequences from the source inherently making the summaries more abstractive by generating $\\sim $15% more novel n-grams compared to a competitive baseline. We also show qualitative examples of the learned interpretable sentence dependency structures, motivating further research for structure-aware modeling." ], [ "Consider a source document $\\mathbf {x}$ consisting of $n$ sentences $\\lbrace \\mathbf {s}\\rbrace $ where each sentence $\\mathbf {s}_i$ is composed of a sequence of words. Document summarization aims to map the source document to a target summary of $m$ words $\\lbrace y\\rbrace $. A typical neural abstractive summarization system is an attentional sequence-to-sequence model that encodes the input sequence $\\mathbf {x}$ as a continuous sequence of tokens $\\lbrace w\\rbrace $ using a BiLSTM. The encoder produces a set of hidden representations $\\lbrace \\mathbf {h}\\rbrace $. An LSTM decoder maps the previously generated token $y_{t-1}$ to a hidden state and computes a soft attention probability distribution $p(\\mathbf {a}_t \\mid \\mathbf {x}, \\mathbf {y}_{1:t-1})$ over encoder hidden states. A distribution $p$ over the vocabulary is computed at every timestep $t$ and the network is trained using negative log likelihood loss : $\\text{loss}_t = - \\mathrm {log}\\:p(y_t) $. The pointer-generator network BIBREF3 augments the standard encoder-decoder architecture by linearly interpolating a pointer based copy mechanism. StructSum uses the pointer-generator network as the base model. Our encoder is a structured hierarchical encoder BIBREF16, which computes hidden representations of the sequence both at the token and sentence level. The model then uses the explicit-structure and implicit-structure attention modules to augment the sentence representations with rich sentence dependency information, leveraging both learned latent structure and additional external structure from other NLP modules. The attended vectors are then passed to the decoder, which produces the output sequence for abstractive summarization. In the rest of this section, we describe our model architecture, shown in Figure FIGREF2, in detail." ], [ "Our hierarchical encoder consists of a BiLSTM encoder over words, followed by sentence level BiLSTM encoder. The word encoder takes a sequence of words in a sentence $\\mathbf {s}_i = \\lbrace w\\rbrace $ as input and produces contextual hidden representation for each word $\\mathbf {h}_{w_{ik}}$, where $w_{ik}$ is the $i^{th}$ word of the $k^{th}$ sentence, $k=1:q$ and $q$ is the number of words in the sentence $\\mathbf {s}_i$. The word hidden representations are max-pooled at the sentence level and the result is passed to a BiLSTM sentence-encoder which produces new hidden sentence representations for each sentence $\\mathbf {h}_{\\mathbf {s}_i}$. The sentence hidden representations are then passed as inputs to latent and explicit structure attention modules." ], [ "We model the latent structure of a source document as a non-projective dependency tree and force a pair-wise attention module to automatically induce this tree. We denote the marginal probability of a dependency edge as $a_{ij} = p(z_{ij}=1)$ where $z_{ij}$ is the latent variable representing the edge from sentence $i$ to sentence $j$. 
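As a rough companion to the Encoder subsection above (the latent-structure derivation continues below), the following is a minimal PyTorch sketch of the hierarchical encoder: a word-level BiLSTM, max-pooling per sentence, and a sentence-level BiLSTM. Only the 256-dimensional hidden states per direction follow the hyperparameters reported later in the paper; the embedding size, padding handling, and single-document batching are assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Word-level BiLSTM, max-pooled per sentence, then a sentence-level BiLSTM."""

    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.word_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.sent_lstm = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, doc):
        # doc: (n_sents, max_words) padded token ids for one document
        word_h, _ = self.word_lstm(self.embed(doc))        # (n_sents, max_words, 2*hidden)
        sent_in = word_h.max(dim=1).values                 # max-pool words -> sentence input
        sent_h, _ = self.sent_lstm(sent_in.unsqueeze(0))   # treat the document as one sequence
        return word_h, sent_h.squeeze(0)                   # token states, sentence states

# enc = HierarchicalEncoder(vocab_size=50_000)
# word_states, sent_states = enc(torch.zeros(5, 20, dtype=torch.long))
```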
We parameterize with a neural network the unnormalized pair-wise scores between sentences and use the Kirchoff's matrix tree theorem BIBREF14 to compute the marginal probability of a dependency edge between any two sentences.", "We decompose the representation of sentence $\\mathbf {s}_i$ into a semantic vector $\\mathbf {g}_{\\mathbf {s}_i}$ and structure vector $\\mathbf {d}_{\\mathbf {s}_i}$ as $\\mathbf {h}_{\\mathbf {s}_i} = [\\mathbf {g}_{\\mathbf {s}_i}; \\mathbf {d}_{\\mathbf {s}_i}]$. Using the structure vectors $\\mathbf {d}_{\\mathbf {s}_i}, \\mathbf {d}_{\\mathbf {s}_j}$, we compute a score $f_{ij}$ between sentence pairs $(i,j)$ (where sentence $i$ is the parent node of sentence $j$) and a score for sentence $\\mathbf {s}_i$ being the root node $r_i$:", "", "where $F_p, F_c$ and $F_r$ are linear-projection functions to build representations for the parent, child and root node respectively and $W_a$ is the weight for bilinear transformation. Here, $f_{ij}$ is the edge weight between nodes $(i,j)$ in a weighted adjacency graph $\\mathbf {F}$ and is computed for all pairs of sentences. Using $f_{ij}$ and $r_i$, we compute normalized attention scores $a_{ij}$ and $a_{i}^r $ using a variant of Kirchhoff’s matrix-tree theorem BIBREF12, BIBREF14 where $a_{ij}$ is the marginal probability of a dependency edge between sentences $(i,j)$ and $a_{i}^r $ is the probability of sentence $i$ being the root.", "Using these probabilistic attention weights and the semantic vectors $\\lbrace \\mathbf {g}_{\\mathbf {s}}\\rbrace $, we compute the attended sentence representations as:", "", "where $\\mathbf {p}_{\\mathbf {s}_i}$ is the context vector gathered from possible parents of sentence $i$, $\\mathbf {c}_{\\mathbf {s}_i}$ is the context vector gathered from possible children, and $\\mathbf {g}_{root}$ is a special embedding for the root node. Here, the updated sentence representation $\\textit {l}_{\\mathbf {s}_i}$ incorporates the implicit structural information." ], [ "BIBREF2 showed that modeling coreference knowledge through anaphora constraints led to improved clarity or grammaticality in summaries. Taking inspiration from this, we choose coreference links across sentences as our explicit structure. First, we use an off-the-shelf coreference parser to identify coreferring mentions. We then build a coreference based sentence graph by adding a link between sentences $(\\mathbf {s}_i, \\mathbf {s}_j)$, if they have any coreferring mentions between them. This representation is then converted into a weighted graph by incorporating a weight on the edge between two sentences that is proportional to the number of unique coreferring mentions between them. We normalize these edge weights for every sentence, effectively building a weighted adjacency matrix $\\mathbf {K}$ where $k_{ij}$ is given by:", "where $m_i$ denotes the set of unique mentions in sentence $\\mathbf {s}_i$, ($m_i$ $\\bigcap $ $m_j$) denotes the set of co-referring mentions between the two sentences and $z$ is a latent variable representing a link in the coreference sentence graph. $\\epsilon = 5e-4$ is a smoothing hyperparameter." 
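The construction of the coreference-based adjacency matrix described just above can be sketched as follows. The display equation for k_ij is not reproduced in this dump, so the exact form below (shared-mention counts, smoothed by the epsilon hyperparameter and normalised per sentence) is a plausible reading rather than the paper's own formula, and the cluster-id input format is an assumption about what an off-the-shelf coreference parser returns.

```python
import numpy as np

def coref_adjacency(sentence_clusters, eps=5e-4):
    """Row-normalised coreference adjacency matrix K over sentences.

    sentence_clusters[i] is the set of coreference-cluster ids whose mentions
    occur in sentence i (as produced by an off-the-shelf coreference parser).
    """
    n = len(sentence_clusters)
    counts = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                counts[i, j] = len(sentence_clusters[i] & sentence_clusters[j])
    k = counts + eps                         # smoothing: no row is ever all-zero
    np.fill_diagonal(k, 0.0)
    return k / k.sum(axis=1, keepdims=True)  # normalise edge weights per sentence

# Example: sentences 0 and 2 share one entity; sentences 1 and 2 share two.
# K = coref_adjacency([{0}, {1, 2}, {0, 1, 2}])
```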
], [ "Given contextual sentence representations $\\lbrace \\mathbf {h}_{\\mathbf {s}}\\rbrace $ and our explicit coreference based weighted adjacency matrix $\\mathbf {K}$, we learn an explicit-structure aware representation as follows:", "where $F_u$ and $F_e$ are linear projections and $\\mathbf {e}_{\\mathbf {s}_i}$ is an updated sentence representation which incorporates explicit structural information.", "Finally, to combine the two structural representations, we concatenate the latent and explicit sentence vectors as: $\\mathbf {h}_{\\mathbf {s}_i} = [\\mathbf {l}_{\\mathbf {s}_i};\\mathbf {e}_{\\mathbf {s}_i}]$ to form encoder sentence representations of the source document. To provide every token representation with context of the entire document, we keep the same formulation as pointer-generator networks, where each token $w_{ij}$ is mapped to its hidden representation $\\mathbf {h}_{w_{ij}}$ using a BiLSTM. The token representation is concatenated with their corresponding structure-aware sentence representation: $\\mathbf {h}_{w_{ij}} = [\\mathbf {h}_{w_{ij}};\\mathbf {h}_{\\mathbf {s}_i}]$ where $\\mathbf {s}_i$ is the sentence to which the word $w_{ij}$ belongs. The resulting structure-aware token representations can be used to directly replace previous token representations as input to the decoder." ], [ "We evaluate our approach on the CNN/Daily Mail corpus BIBREF15, BIBREF17 and use the same preprocessing steps as shown in BIBREF3. The CNN/DM summaries have an average of 66 tokens ($\\sigma = 26$) and 4.9 sentences. Differing from BIBREF3, we truncate source documents to 700 tokens instead of 400 in training and validation sets to model longer documents with more sentences." ], [ "We choose the following baselines based on their relatedness to the task and wide applicability:", "BIBREF3 : We re-implement the base pointer-generator model and the additional coverage mechanism. This forms the base model of our implementation and hence our addition of modeling document structure can be directly compared to it.", "BIBREF6 : This is a graph-based attention model that is closest in spirit to the method we present in this work. They use a graph attention module to learn attention between sentences, but cannot be easily used to induce interpretable document structures, since their attention scores are not constrained to learn structure. In addition to learning latent and interpretable structured attention between sentences, StructSum also introduces an explicit structure component to inject external document structure.", "BIBREF7 : We compare with the DiffMask experiment with this work. This work introduces a separate content selector which tags words and phrases to be copied. The DiffMask variant is an end-to-end variant like ours and hence is included in our baselines.", "Our baselines exclude Reinforcement Learning (RL) based systems as they aren't directly comparable, but our approach can be easily introduced in any encoder-decoder based RL system. Since we do not incorporate any pretraining, we do not compare with recent contextual representation based models BIBREF18." ], [ "Our encoder uses 256 hidden states for both directions in the one-layer LSTM, and 512 for the single-layer decoder. We use the adagrad optimizer BIBREF19 with a learning rate of 0.15 and an initial accumulator value of 0.1. We do not use dropout and use gradient-clipping with a maximum norm of 2. 
We selected the best model using early stopping based on the ROUGE score on the validation dataset as our criteria. We also used the coverage penalty during inference as shown in BIBREF7. For decoding, we use beam-search with a beam width of 3. We did not observe significant improvements with higher beam widths." ], [ "Table TABREF8 shows the results of our work on the CNN/DM dataset. We use the standard ROUGE-1,2 and L BIBREF20 F1 metric to evaluate all our summarization output. We first observe that introducing the capability to learn latent structures already improves our performance on ROUGE-L. It suggests that modeling dependencies between sentences helps the model compose better long sequences w.r.t reference compared to baselines. We do not see a significant improvement in ROUGE-1 and ROUGE-2, hinting that we retrieve similar content words as the baseline but compose them into better contiguous sequences.", "We observe similar results when using explicit structures only with the ES attention module. This shows that adding inductive bias in the form of coreference based sentence graphs helps compose long sequences. Our results here are close to the model that uses just LS attention. This demonstrates that LS attention induces good latent dependencies that make up for pure external coreference knowledge.", "Finally, our combined model which uses both Latent and Explicit structure performs the best with a strong improvement of 1.08 points in ROUGE-L over our base pointer-generator model and 0.6 points in ROUGE-1. It shows that the latent and explicit information are complementary and a model can jointly leverage them to produce better summaries.", "Modeling structure and adding inductive biases also helps a model to converge faster where the combined LS+ES Attention model took 126K iterations for training in comparison to 230K iterations required to train the plain pointer-generator network and an additional 3K iterations for the coverage loss BIBREF3." ], [ "We present below analysis on the quality of summarization as compared to our base model, the pointer-generator network with coverage BIBREF3 and the reference." ], [ "Despite being an abstractive model, the pointer-generator model tends to copy very long sequences of words including whole sentences from the source document (also observed by BIBREF7). Table TABREF15 shows a comparison of the Average Length (Copy Len) of contiguous copied sequences greater than length 3. We observe that the pointer-generator baseline on average copies 16.61 continuous tokens from the source which shows the extractive nature of the model. This indicates that pointer networks, aimed at combining advantages from abstractive and extractive methods by allowing to copy content from the input document, tend to skew towards copying, particularly in this dataset. A consequence of this is that the model fails to interrupt copying at desirable sequence length.", "In contrast, modeling document structure through StructSum reduces the length of copied sequences to 9.13 words on average reducing the bias of copying sentences in entirety. This average is closer to the reference (5.07 words) in comparison, without sacrificing task performance. StructSum learns to stop when needed, only copying enough content to generate a coherent summary." ], [ "A direct outcome of copying shorter sequences is being able to cover more content from the source document within given length constraints. We observe that this leads to better summarization performance. 
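As a side note to the copying analysis above (the coverage analysis continues below), the Copy Len statistic can be approximated with the sketch below, which extracts maximal contiguous summary spans longer than three tokens that also occur verbatim in the source. The greedy matching over whitespace-joined tokens is a simplification, not the paper's exact procedure.

```python
def copied_span_lengths(summary, source, min_len=4):
    """Lengths of maximal contiguous summary spans (longer than three tokens)
    that also occur verbatim in the source document; both inputs are token lists."""
    src = " " + " ".join(source) + " "
    lengths, i = [], 0
    while i < len(summary):
        j = i
        while j < len(summary) and " " + " ".join(summary[i:j + 1]) + " " in src:
            j += 1
        if j - i >= min_len:
            lengths.append(j - i)
        i = j if j > i else i + 1
    return lengths

# copy_len = mean over the test set of copied_span_lengths(summary, source)
```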
In our analysis, we compute coverage by computing the number of source sentences from which sequences greater than length 3 are copied in the summary. Table TABREF15 shows a comparison of the coverage of source sentences in the summary content. We see that while the baseline pointer-generator model only copies from 12.1% of the source sentences, we copy content from 24.0% of the source sentences. Additionally, the average length of the summaries produced by StructSum remains mostly unchanged at 66 words on average compared to 61 of the baseline model. This indicates that StructSum produces summaries that draw from a wider selection of sentences from the original article compared to the baseline models.", "BIBREF21 show that copying more diverse content in isolation does not necessarily lead to better summaries for extractive summarization. Our analysis suggests that this observation might not extend to abstractive summarization methods. The proportion of novel n-grams generated has been used in the literature to measure the degree of abstraction of summarization models BIBREF3. Figure FIGREF17 compares the percentage of novel n-grams in StructSum as compared to the baseline model. Our model produces novel trigrams 21.0% of the time and copies whole sentences only 21.7% of the time. In comparison, the pointer-generator network has only 6.1% novel trigrams and copies entire sentences 51.7% of the time. This shows that StructSum on average generates 14.7% more novel n-grams in comparison to the pointer-generator baseline." ], [ "Neural abstractive summarization methods applied to news articles are typically biased towards selecting and generating summaries based on the first few sentences of the articles. This stems from the structure of news articles, which present the salient information of the article in the first few sentences and expand in the subsequent ones. As a result, the LEAD 3 baseline, which selects the top three sentences of an article, is widely used in the literature as a strong baseline to evaluate summarization models applied to the news domain BIBREF22. BIBREF8 observed that the current summarization models learn to exploit the layout biases of current datasets and offer limited diversity in their outputs.", "To analyze whether StructSum also holds the same layout biases, we compute a distribution of source sentence indices that are used for copying content (copied sequences of length 3 or more are considered). Figure FIGREF19 shows the comparison of coverage of sentences. The coverage of sentences in the reference summaries shows a high proportion of the top 5 sentences of any article being copied to the summary. Additionally, the reference summaries have a smoother tail end distribution with relevant sentences in all positions being copied. It shows that a smooth distribution over all sentences is a desirable feature. We notice that the sequence-to-sequence and pointer-generator framework (with and without coverage enabled) have a stronger bias towards the beginning of the article with a high concentration of copied sentences within the top 5 sentences of the article. In contrast, StructSum improves coverage slightly having a lower concentration of top 5 sentences and copies more tail end sentences than the baselines. However, although the modeling of structure does help, our model has a reasonable gap compared to the reference distribution. We see this as an area of improvement and a direction for future work." 
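The abstraction and layout-bias statistics described above can be approximated with the following sketch, which computes the percentage of novel n-grams in a summary and the source-sentence positions that contribute copied spans. The "longer than three tokens" threshold follows the text; the remaining matching details are assumptions.

```python
def ngram_set(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def pct_novel_ngrams(summary, source, n=3):
    """Share of summary n-grams that never appear in the source document."""
    summ = [tuple(summary[i:i + n]) for i in range(len(summary) - n + 1)]
    if not summ:
        return 0.0
    src = ngram_set(source, n)
    return 100.0 * sum(g not in src for g in summ) / len(summ)

def copied_sentence_positions(summary, source_sents, span_len=4):
    """Indices of source sentences sharing a contiguous span of more than
    three tokens with the summary (used for coverage and the position plot)."""
    summ_spans = ngram_set(summary, span_len)
    return [i for i, sent in enumerate(source_sents)
            if ngram_set(sent, span_len) & summ_spans]

# coverage  = len(copied_sentence_positions(summ, sents)) / len(sents)
# novel_tri = pct_novel_ngrams(summ, src, n=3)
```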
], [ "Similar to BIBREF12, we also look at the quality of the intermediate structures learned by the model. We use the Chu-Liu-Edmonds algorithm BIBREF23, BIBREF24 to extract the maximum spanning tree from the attention score matrix as our sentence structure. Table TABREF20 shows the frequency of various tree depths. We find that the average tree depth is 2.9 and the average proportion of leaf nodes is 88%, consistent with results from tree induction in document classification BIBREF25. Further, we compare latent trees extracted from StructSum with undirected graphs based on coreference and NER. These are constructed similarly to our explicit coreference based sentence graphs in §SECREF5 by linking sentences with overlapping coreference mentions or named entities. We measure the similarity between the learned latent trees and the explicit graphs through precision and recall over edges. The results are shown in Table TABREF22. We observe that our latent graphs have low recall with the linguistic graphs showing that our latent graphs do not capture the coreference or named entity overlaps explicitly, suggesting that the latent and explicit structures capture complementary information.", "Figure FIGREF24 shows qualitative examples of our induced structures along with generated summaries from the StructSum model. The first example shows a tree with sentence 3 chosen as root, which was the key sentence mentioned in the reference. We notice that in both examples, the sentences in the lower level of the dependency tree contribute less to the generated summary. Along the same lines, in the examples source sentences used to generate summaries tend to be closer to the root node. In the first summary, all sentences from which content was drawn are either the root node or within depth 1 of the root node. Similarly, in the second example, 4 out of 5 source sentences were at depth=1 in the tree. In the two examples, generated summaries diverged from the reference by omitting certain sentences used in the reference. These sentences appear in the lower section of the tree giving us some insights on which sentences were preferred for the summary generation. Further, in example 1, we notice that the latent structures cluster sentences based on the main topic of the document. Sentences 1,2,3 differ from sentences 5,6,7 on the topic being discussed and our model has clustered the two sets separately." ], [ "Prior to neural models for summarization, document structure played a critical role in generating relevant, diverse and coherent summaries. BIBREF26 formulated document summarization using linguistic features to construct a semantic graph of the document and building a subgraph for the summary. BIBREF27 leverage language-independent syntactic graphs of the source document to do unsupervised document summarization. BIBREF1 parse the source text into a set of AMR graphs, transform the graphs to summary graphs and then generate text from the summary graph. While such systems generate grammatical summaries and preserve linguistic quality BIBREF2, they are often computationally demanding and do not generalize well BIBREF21.", "Data-driven neural models for summarization fall into extractive BIBREF13, BIBREF28 or abstractive BIBREF29, BIBREF3, BIBREF7, BIBREF30. BIBREF3 proposed a pointer-generator framework that learns to either generate novel in-vocabulary words or copy words from the source. 
This model has been the foundation for a lot of follow up work on abstractive summarization BIBREF7, BIBREF31, BIBREF32. Our model extends the pointer-generator model by incorporating latent structure and explicit structure knowledge, making our extension applicable to any of the followup work. BIBREF6 present a graph-based attention system to improve the saliency of summaries. While this model learns attention between sentences, it does not induce interpretable intermediate structures. A lot of recent work looks into incorporating structure into neural models. BIBREF32 infuse source side syntactic structure into the copy mechanism of the pointer-generator model. They identify explicit word-level syntactic features based on dependency parses and parts of speech tags and augment the decoder copy mechanism to attend to them. In contrast, we model sentence level dependency structures in the form of latent or induced structures and explicit coreference based structures. We do not identify any heuristic or salient features other than linking dependent sentences. BIBREF33 propose structural compression and coverage regularizers to provide an objective to neural models to generate concise and informative content. Here, they incorporate structural bias about the target summaries but we choose to model the structure of the source sentence to produce rich document representations. BIBREF34 induce latent document structure for aspect based summarization. BIBREF35 use present long document summarization model applicable for scientific papers, which attends to discourse sections in a document, while BIBREF36 propose an unsupervised model for review summarization which learns a latent discourse structure and uses it to summarize a review. BIBREF37 use discourse structures to improve coherence in blog summarization. These are all complementary directions to our work. To our knowledge, we are the first to simultaneously incorporate latent and explicit document structure in a single framework for document summarization." ], [ "To summarize, our contributions are three-fold. We propose a framework for incorporating latent and explicit document structure in neural abstractive summarization. We introduce a novel explicit-attention module which can incorporate external linguistic structures, and we show one such application where we use coreference to enhance summarization. We show quantitative improvements on the ROUGE metric over strong summarization baselines and demonstrate improvements in abstraction and coverage through extensive qualitative analysis.", "StructSum has demonstrated performance gain and higher quality output summaries; with a potential direction to study the role of latent structures in the interpretability of models in the future. Another possible direction is to investigate whether structured representations allow better generalization for transfer learning and summarization in other domains with limited data." ] ] }
{ "question": [ "By how much they improve over the previous state-of-the-art?", "Is there any evidence that encoders with latent structures work well on other tasks?" ], "question_id": [ "2ed02be0c183fca7031ccb8be3fd7bc109f3694b", "be73a88d5b695200e2ead4c2c24e2a977692970e" ], "nlp_background": [ "infinity", "infinity" ], "topic_background": [ "familiar", "familiar" ], "paper_read": [ "no", "no" ], "search_query": [ "long document summarization", "long document summarization" ], "question_writer": [ "798ee385d7c8105b83b032c7acc2347588e09d61", "798ee385d7c8105b83b032c7acc2347588e09d61" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "1.08 points in ROUGE-L over our base pointer-generator model ", "0.6 points in ROUGE-1" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Finally, our combined model which uses both Latent and Explicit structure performs the best with a strong improvement of 1.08 points in ROUGE-L over our base pointer-generator model and 0.6 points in ROUGE-1. It shows that the latent and explicit information are complementary and a model can jointly leverage them to produce better summaries." ], "highlighted_evidence": [ "Finally, our combined model which uses both Latent and Explicit structure performs the best with a strong improvement of 1.08 points in ROUGE-L over our base pointer-generator model and 0.6 points in ROUGE-1. " ] } ], "annotation_id": [ "6c84252cd935557d343f57beebaf78fe31cbecf2" ], "worker_id": [ "ea4394112c1549185e6b763d6f36733a9f2ed794" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Encoders with induced latent structures have been shown to benefit several tasks including document classification, natural language inference BIBREF12, BIBREF13, and machine translation BIBREF11. Building on this motivation, our latent structure attention module builds upon BIBREF12 to model the dependencies between sentences in a document. It uses a variant of Kirchhoff’s matrix-tree theorem BIBREF14 to model such dependencies as non-projective tree structures(§SECREF3). The explicit attention module is linguistically-motivated and aims to incorporate sentence-level structures from externally annotated document structures. We incorporate a coreference based sentence dependency graph, which is then combined with the output of the latent structure attention module to produce a hybrid structure-aware sentence representation (§SECREF5)." ], "highlighted_evidence": [ "Encoders with induced latent structures have been shown to benefit several tasks including document classification, natural language inference BIBREF12, BIBREF13, and machine translation BIBREF11. " ] } ], "annotation_id": [ "14f124b598a6e285715eb201a8696583973aa52f" ], "worker_id": [ "ea4394112c1549185e6b763d6f36733a9f2ed794" ] } ] }
{ "caption": [ "Figure 1: StructSum Model Architecture.", "Table 1: Results of abstractive summarizers on the CNN/DM dataset. The top part shows abstractive summarization baselines. The second section are re-implementations of See et al. (2017) 2 and results from StructSum.", "Table 2: Results of analysis of copying, coverage and distribution over the source sentences on CNN/DM test set. Copy Len denotes the average length of copied sequences; Coverage – coverage of source sentences.", "Figure 2: Comparison of % Novel N-grams between StructSum, Pointer-Generator+Coverage and the Reference. Here, “sent” indicates full novel sentences.", "Figure 3: Coverage of source sentences in summary. Here the x-axis is the sentence position in the source article and y-axis shows the normalized count of sentences in that position copied to the summary.", "Table 3: Distribution of latent tree depth.", "Table 4: Precision and recall of shared edges between the latent and explicit structures", "Figure 4: Examples of induced structures and generated summaries." ], "file": [ "3-Figure1-1.png", "5-Table1-1.png", "5-Table2-1.png", "6-Figure2-1.png", "6-Figure3-1.png", "6-Table3-1.png", "7-Table4-1.png", "8-Figure4-1.png" ] }
1909.02635
Effective Use of Transformer Networks for Entity Tracking
Tracking entities in procedural language requires understanding the transformations arising from actions on entities as well as those entities' interactions. While self-attention-based pre-trained language encoders like GPT and BERT have been successfully applied across a range of natural language understanding tasks, their ability to handle the nuances of procedural texts is still untested. In this paper, we explore the use of pre-trained transformer networks for entity tracking tasks in procedural text. First, we test standard lightweight approaches for prediction with pre-trained transformers, and find that these approaches underperform even simple baselines. We show that much stronger results can be attained by restructuring the input to guide the transformer model to focus on a particular entity. Second, we assess the degree to which transformer networks capture the process dynamics, investigating such factors as merged entities and oblique entity references. On two different tasks, ingredient detection in recipes and QA over scientific processes, we achieve state-of-the-art results, but our models still largely attend to shallow context clues and do not form complex representations of intermediate entity or process state.
{ "section_name": [ "Introduction", "Background: Process Understanding", "Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models", "Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models ::: Task Specific Input Token", "Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models ::: Entity Based Attention", "Studying Basic Transformer Representations for Entity Tracking ::: Results and Observations", "Entity-Conditioned Models", "Entity-Conditioned Models ::: Sentence Level vs. Document Level", "Entity-Conditioned Models ::: Training Details", "Entity-Conditioned Models ::: Training Details ::: Domain Specific LM fine-tuning", "Entity-Conditioned Models ::: Training Details ::: Supervised Task Fine-Tuning", "Entity-Conditioned Models ::: Experiments: Ingredient Detection", "Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Systems to Compare", "Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Systems to Compare ::: Neural Process Networks", "Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Results", "Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations", "Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations ::: Ingredient Specificity", "Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations ::: Context Importance", "Entity-Conditioned Models ::: State Change Detection (ProPara)", "Entity-Conditioned Models ::: State Change Detection (ProPara) ::: Systems to Compare", "Entity-Conditioned Models ::: State Change Detection (ProPara) ::: Results", "Challenging Task Phenomena", "Challenging Task Phenomena ::: Ingredient Detection", "Challenging Task Phenomena ::: Ingredient Detection ::: Intermediate Compositions", "Challenging Task Phenomena ::: Ingredient Detection ::: Hypernymy and Synonymy", "Challenging Task Phenomena ::: Ingredient Detection ::: Impact of external data", "Challenging Task Phenomena ::: State Change Detection", "Analysis", "Analysis ::: Gradient based Analysis", "Analysis ::: Input Ablations", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Transformer based pre-trained language models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have been shown to perform remarkably well on a range of tasks, including entity-related tasks like coreference resolution BIBREF5 and named entity recognition BIBREF0. This performance has been generally attributed to the robust transfer of lexical semantics to downstream tasks. However, these models are still better at capturing syntax than they are at more entity-focused aspects like coreference BIBREF6, BIBREF7; moreover, existing state-of-the-art architectures for such tasks often perform well looking at only local entity mentions BIBREF8, BIBREF9, BIBREF10 rather than forming truly global entity representations BIBREF11, BIBREF12. Thus, performance on these tasks does not form sufficient evidence that these representations strongly capture entity semantics. Better understanding the models' capabilities requires testing them in domains involving complex entity interactions over longer texts. One such domain is that of procedural language, which is strongly focused on tracking the entities involved and their interactions BIBREF13, BIBREF14, BIBREF15.", "This paper investigates the question of how transformer-based models form entity representations and what these representations capture. 
We expect that after fine-tuning on a target task, a transformer's output representations should somehow capture relevant entity properties, in the sense that these properties can be extracted by shallow classification either from entity tokens or from marker tokens. However, we observe that such “post-conditioning” approaches don't perform significantly better than rule-based baselines on the tasks we study. We address this by proposing entity-centric ways of structuring input to the transformer networks, using the entity to guide the intrinsic self-attention and form entity-centric representations for all the tokens. We find that our proposed methods lead to a significant improvement in performance over baselines.", "Although our entity-specific application of transformers is more effective at the entity tracking tasks we study, we perform additional analysis and find that these tasks still do not encourage transformers to form truly deep entity representations. Our performance gain is largely from a better understanding of verb semantics, in terms of associating process actions with the entity the paragraph is conditioned on. The model also does not specialize in “tracking” composed entities per se, again using surface clues like verbs to identify the components involved in a new composition.", "We evaluate our models on two datasets specifically designed to invoke procedural understanding: (i) Recipes BIBREF16, and (ii) ProPara BIBREF14. For the Recipes dataset, we classify whether an ingredient was affected in a certain step, which requires understanding when ingredients are combined or the focus of the recipe shifts away from them. The ProPara dataset involves answering a more complex set of questions about physical state changes of components in scientific processes. To handle this more structured setting, our transformer produces potentials consumed by a conditional random field which predicts entity states over time. Using a unidirectional GPT-based architecture, we achieve state-of-the-art results on both datasets; nevertheless, analysis shows that our approach still falls short of capturing the full space of entity interactions." ], [ "Procedural text is a domain of text concerned with understanding some kind of process, such as a phenomenon arising in nature or a set of instructions to perform a task. Entity tracking is a core component of understanding such texts.", "BIBREF14 introduced the ProPara dataset to probe understanding of scientific processes. The goal is to track the sequence of physical state changes (creation, destruction, and movement) entities undergo over long sequences of process steps. Past work involves both modeling entities across time BIBREF17 and capturing structural constraints inherent in the processes BIBREF18, BIBREF19. Figure FIGREF2b shows an example of the dataset posed as a structured prediction task, as in BIBREF19. For such a domain, it is crucial to capture implicit event occurrences beyond explicit entity mentions. For example, in “fuel goes into the generator. The generator converts mechanical energy into electrical energy”, the fuel is implicitly destroyed in the process.", "BIBREF15 introduced the task of detecting state changes in recipes in the Recipes dataset and proposed an entity-centric memory network neural architecture for simulating action dynamics. Figure FIGREF2a shows an example from the Recipes dataset with a grid showing ingredient presence. 
We focus specifically on this core problem of ingredient detection; while only one of the sub-tasks associated with their dataset, it reflects some complex semantics involving understanding the current state of the recipe. Tracking of ingredients in the cooking domain is challenging owing to the compositional nature of recipes, whereby ingredients mix together and are aliased as intermediate compositions.", "We pose both of these procedural understanding tasks as classification problems, predicting the state of the entity at each timestep from a set of pre-defined classes. In Figure FIGREF2, these classes correspond either to presence (1) / absence (0), or to the sequence of state changes create (C), move (M), destroy (D), exists (E), and none (O).", "State-of-the-art approaches on these tasks are inherently entity-centric. Separately, it has been shown that entity-centric language modeling in a continuous framework can lead to better performance for LM-related tasks BIBREF20, BIBREF21. Moreover, external data has been shown to be useful for modeling process understanding tasks in prior work BIBREF18, BIBREF15, suggesting that pre-trained models may be effective.", "With such tasks in place, a strong model will ideally learn to form robust entity-centric representations at each time step instead of solely relying on extracting information from the local entity mentions. This expectation is primarily due to the evolving nature of the process domain, where entities undergo complex interactions, form intermediate compositions, and are often accompanied by implicit state changes. We now investigate to what extent this is true in a standard application of transformer models to this problem." ], [ "The most natural way to use the pre-trained transformer architectures for the entity tracking tasks is to simply encode the text sequence and then attempt to “read off” entity states from the contextual transformer representation. We call this approach post-conditioning: the transformer runs with no knowledge of which entity or entities we are going to make predictions on, and we only condition on the target entity after the transformer stage.", "Figure FIGREF4 depicts this model. Formally, for a labelled pair $(\lbrace s_1, s_2, \dots , s_t\rbrace , y_{et})$, we encode the tokenized sequence of steps up to the current timestep (the sentences are separated by using a special [SEP] token), independent of the entity. We denote by $X=[h_{1}, h_{2},\dots , h_{m}]$ the contextualized hidden representation of the $m$ input tokens from the last layer, and by $g_{e}=\sum _{e_i \in \text{ent toks}} emb(e_i)$ the entity representation for post-conditioning. We now use one of the following two ways to make an entity-specific prediction:" ], [ "We append a $\texttt {[CLS]}$ token to the input sequence and use the output representation of the $\texttt {[CLS]}$ token, denoted by $h_{\texttt {[CLS]}}$, concatenated with the learned BPE embeddings of the entity as the representation $c_{e,t}$ for our entity tracking system. We then use a linear layer over it to get class probabilities:", "The aim of the [CLS] token is to encode information related to the general entity-related semantics of the recipe (sentence priors). We then use a single linear layer to learn sentence priors and entity priors independently, without strong interaction. We call this model GPT$_{indep}$." ], [ "Second, we explore a more fine-grained way of using the GPT model outputs. 
Specifically, we use bilinear attention between $g_e$ and the transformer output for the process tokens $X$ to get a contextual representation $c_{e,t}$ for a given entity. Finally, using a feed-forward network followed by a softmax layer gives us the class probabilities:", "", "The bilinear attention over the contextual representations of the process tokens allows the model to fetch token content relevant to that particular entity. We call this model GPT$_{attn}$." ], [ "We evaluate the discussed post-conditioning models on the ingredient detection task of the Recipes dataset. To benchmark the performance, we compare to three rule-based baselines. These include (i) Majority Class, (ii) Exact Match of an ingredient $e$ in recipe step $s_t$, and (iii) First Occurrence, where we predict the ingredient to be present in all steps following the first exact match. These latter two baselines capture natural modes of reasoning about the dataset: an ingredient is used when it is directly mentioned, or it is used in every step after it is mentioned, reflecting the assumption that a recipe is about incrementally adding ingredients to an ever-growing mixture. We also construct an LSTM baseline to evaluate the performance of ELMo embeddings (ELMo$_{token}$ and ELMo$_{sent}$) BIBREF22 compared to GPT.", "Table TABREF10 compares the performance of the discussed models against the baselines, evaluating per-step entity prediction performance. Using the ground truth about the ingredient's state, we also report the uncombined (UR) and combined (CR) recalls, which are per-timestep ingredient recalls distinguished by whether the ingredient is explicitly mentioned (uncombined) or part of a mixture (combined). Note that the Exact Match and First Occ baselines represent high-precision and high-recall regimes for this task, respectively.", "As observed from the results, the post-conditioning frameworks underperform compared to the First Occ baseline. While the CR values appear to be high, which would suggest that the model is capturing the addition of ingredients to the mixture, we note that this value is also lower than the corresponding value for First Occ. This result suggests that the model may be approximating the behavior of this baseline, but doing so poorly. The unconditional self-attention mechanism of the transformers does not seem sufficient to capture the entity details at each time step beyond simple presence or absence. Moreover, we see that GPT$_{indep}$ performs somewhat comparably to GPT$_{attn}$, suggesting that consuming the transformer's output with simple attention is not able to really extract the right entity representation.", "For ProPara, we observe similar performance trends, where the post-conditioning model performed below par compared to the state-of-the-art architectures." ], [ "The post-conditioning framework assumes that the transformer network can form strong representations containing entity information accessible in a shallow way based on the target entity. We now propose a model architecture which more strongly conditions on the entity as a part of the intrinsic self-attention mechanism of the transformers.", "Our approach consists of structuring input to the transformer network to use and guide the self-attention of the transformers, conditioning it on the entity. Our main mode of encoding the input, the entity-first method, is shown in Figure FIGREF4. The input sequence begins with a [START] token, then the entity under consideration, then a [SEP] token. 
After each sentence, a [CLS] token is used to anchor the prediction for that sentence. In this model, the transformer can always observe the entity it should be primarily “attending to” from the standpoint of building representations. We also have an entity-last variant where the entity is primarily observed just before the classification token to condition the [CLS] token's self-attention accordingly. These variants are naturally more computationally-intensive than post-conditioned models, as we need to rerun the transformer for each distinct entity we want to make a prediction for." ], [ "As an additional variation, we can either run the transformer once per document with multiple [CLS] tokens (a document-level model as shown in Figure FIGREF4) or specialize the prediction to a single timestep (a sentence-level model). In a sentence level model, we formulate each pair of entity $e$ and process step $t$ as a separate instance for our classification task. Thus, for a process with $T$ steps and $m$ entities we get $T \times m$ input sequences for fine tuning our classification task." ], [ "In most experiments, we initialize the network with the weights of the standard pre-trained GPT model, then subsequently perform domain-specific LM fine-tuning and/or supervised task-specific fine-tuning." ], [ "For some procedural domains, we have access to additional unlabeled data. To adapt the LM to capture domain intricacies, we fine-tune the transformer network on this unlabeled corpus." ], [ "After the domain-specific LM fine-tuning, we fine-tune our network parameters for the end task of entity tracking. For task fine-tuning, we have a labelled dataset which we denote by $\mathcal {C}$, the set of labelled pairs $(\lbrace s_1, s_2, \dots , s_t\rbrace , y_{et})$ for a given process. The input is converted according to our chosen entity conditioning procedure, then fed through the pre-trained network.", "In addition, we observed that adding the language model loss during task-specific fine-tuning leads to better performance as well, possibly because it adapts the LM to our task-specific input formulation. Thus, our overall fine-tuning objective combines the task-specific classification loss with this auxiliary LM loss." ], [ "We first evaluate the proposed entity-conditioned self-attention model on the Recipes dataset to compare the performance with the post-conditioning variants." ], [ "We use the pre-trained GPT architecture in the proposed entity-conditioned framework with all its variants. We also experiment with BERT, which mainly differs in that it is bidirectional; for BERT we use the pre-trained [CLS] and [SEP] tokens instead of introducing new tokens in the input vocabulary and training them from scratch during fine-tuning. Owing to the lengths of the processes, all our experiments are performed on BERT$_{BASE}$." ], [ "The most significant prior work on this dataset is the work of BIBREF15. However, their data condition differs significantly from ours: they train on a large noisy training set and do not use any of the high-quality labeled data, instead treating it as dev and test data. Consequently, their model achieves low performance, roughly 56 $F_1$, while ours achieves $82.5$ $F_1$ (though these are not the exact same test set). Moreover, theirs underperforms the first occurrence baseline, which calls into question the value of that training data. Therefore, we do not compare to this model directly. We use the small set of human-annotated data for our probing task. Our train/dev/test split consists of $600/100/175$ recipes, respectively." 
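To make the entity-first input structuring above concrete, here is a minimal sketch in plain Python of how a document-level, entity-first sequence with per-sentence [CLS] anchors could be assembled; the special token names and the whitespace tokenizer are illustrative assumptions (a real system would use the pre-trained model's BPE tokenizer), so this is a sketch of the idea rather than the authors' exact implementation.

```python
from typing import List, Tuple

# Hypothetical special tokens; the actual vocabulary entries depend on the pre-trained model.
START, SEP, CLS = "[START]", "[SEP]", "[CLS]"

def build_entity_first_input(entity: str, steps: List[str]) -> Tuple[List[str], List[int]]:
    """Build a document-level, entity-first token sequence.

    The entity is placed up front so a left-to-right transformer can condition every
    subsequent token representation on it; one [CLS] token follows each step and anchors
    the prediction of the entity's state after that step. Returns the token list and the
    indices of the [CLS] positions where predictions are read off.
    """
    tokens = [START] + entity.split() + [SEP]
    cls_positions = []
    for step in steps:
        tokens.extend(step.split())          # stand-in for BPE tokenization
        cls_positions.append(len(tokens))    # index of the [CLS] we are about to append
        tokens.append(CLS)
    return tokens, cls_positions

if __name__ == "__main__":
    steps = ["melt the butter in a pan", "add the flour and stir", "pour in the milk"]
    toks, cls_idx = build_entity_first_input("butter", steps)
    print(toks)
    print(cls_idx)   # one prediction site per recipe step
```

An entity-last variant would simply place the entity tokens just before the final classification token instead of at the front.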
], [ "Table TABREF20 compares the overall performances of our proposed models. Our best ET$_{GPT}$ model achieves an $F_1$ score of $82.50$. Comparing to the baselines (Majority through First) and post-conditioned models, we see that the early entity conditioning is critical to achieve high performance.", "Although the First model still achieves the highest CR, due to operating in a high-recall regime, we see that the ET$_{GPT}$ models all significantly outperform the post-conditioning models on this metric, indicating better modeling of these compositions. Both recall and precision are substantially increaesd compared to these baseline models. Interestingly, the ELMo-based model under-performs the first-occurrence baseline, indicating that the LSTM model is not learning much in terms of recognizing complex entity semantics grounded in long term contexts.", "Comparing the four variants of structuring input in proposed architectures as discussed in Section SECREF4, we observe that the document-level, entity-first model is the best performing variant. Given the left-to-right unidirectional transformer architecture, this model notably forms target-specific representations for all process tokens, compared to using the transformer self-attention only to extract entity specific information at the end of the process." ], [ "We perform ablations to evaluate the model's dependency on the context and on the target ingredient. Table TABREF23 shows the results for these ablations." ], [ "In the “no ingredient” baseline (w/o ing.), the model is not provided with the specific ingredient information. Table TABREF23 shows that while not being a strong baseline, the model achieves decent overall accuracy with the drop in UR being higher compared to CR. This indicates that there are some generic indicators (mixture) that it can pick up to try to guess at overall ingredient presence or absence." ], [ "We compare with a “no context” model (w/o context) which ignore the previous context and only use the current recipe step in determining the ingredient's presence. Table TABREF23 shows that the such model is able to perform surprisingly well, nearly as well as the first occurrence baseline.", "This is because the model can often recognize words like verbs (for example, add) or nouns (for example, mixture) that indicate many ingredients are being used, and can do well without really tracking any specific entity as desired for the task." ], [ "Next, we now focus on a structured task to evaluate the performance of the entity tracking architecture in capturing the structural information in the continuous self-attention framework. For this, we use the ProPara dataset and evaluate our proposed model on the comprehension task.", "Figure FIGREF2b shows an example of a short instance from the ProPara dataset. The task of identifying state change follows a structure satisfying the existence cycle; for example, an entity can not be created after destruction. Our prior work BIBREF19 proposed a structured model for the task that achieved state-of-the-art performance. 
We adapt our proposed entity tracking transformer models to this structured prediction framework, capturing creation, movement, existence (distinct from movement or creation), destruction, and non-existence.", "We use the standard evaluation scheme of the ProPara dataset, which is framed as answering the following categories of questions: (Cat-1) Is e created (destroyed, moved) in the process?, (Cat-2) When (step #) is e created (destroyed, moved)?, (Cat-3) Where is e created (destroyed, moved) from/to?" ], [ "We compare our proposed models to the previous work on the ProPara dataset. This includes the entity-specific MRC models, EntNet BIBREF23, QRN BIBREF24, and KG-MRC BIBREF17. Also, BIBREF14 proposed two task-specific models, ProLocal and ProGlobal, as baselines for the dataset. Finally, we compare against our past neural CRF entity tracking model (NCET) BIBREF19 which uses ELMo embeddings in a neural CRF architecture.", "For the proposed GPT architecture, we use the task-specific [CLS] token to generate tag potentials instead of class probabilities as we did previously. For BERT, we perform a similar modification as described in the previous task to utilize the pre-trained [CLS] token to generate tag potentials. Finally, we perform Viterbi decoding at inference time to infer the most likely valid tag sequence." ], [ "Table TABREF28 compares the performance of the proposed entity tracking models on the sentence level task. Since we are considering the classification aspect of the task, we compare our model performance for Cat-1 and Cat-2. As shown, the structured document-level, entity-first ET$_{GPT}$ and ET$_{BERT}$ models achieve state-of-the-art results. We observe that the major source of performance gain is the improvement in identifying the exact step(s) for the state changes (Cat-2). This shows that the models are able to better track the entities by identifying the exact step of state change (Cat-2) accurately rather than just detecting the presence of such state changes (Cat-1).", "This task is more highly structured and in some ways more non-local than ingredient prediction; the high performance here shows that the ET$_{GPT}$ model is able to capture document-level structural information effectively. Further, the structural constraints from the CRF also aid in making better predictions. For example, in the process “higher pressure causes the sediment to heat up. the heat causes chemical processes. the material becomes a liquid. is known as oil.”, the material is a by-product of the chemical process but there's no direct mention of it. However, the material ceases to exist in the next step, and because the model is able to predict this correctly, maintaining consistency results in the model finally predicting the entire state change correctly as well." ], [ "Based on the results in the previous section, our models clearly achieve strong performance compared to past approaches. We now revisit the challenging cases discussed in Section SECREF2 to see if our entity tracking approaches are modeling sophisticated entity phenomena as advertised. For both datasets and associated tasks, we isolate the specific set of challenging cases grounded in tracking (i) intermediate compositions formed from combinations of entities, leading to no explicit mention, and (ii) implicit events which change entities' states without explicit mention of the effects." 
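Returning to the Viterbi decoding mentioned above for the ProPara structured task, here is a minimal, generic sketch of decoding a valid tag sequence from per-step tag potentials; the tag set and the single transition constraint shown are illustrative assumptions and only approximate the full existence-cycle constraints, so this is a sketch of the mechanism rather than the exact CRF used in the paper.

```python
import numpy as np

TAGS = ["O", "C", "E", "M", "D"]  # none, create, exists, move, destroy (illustrative)

def viterbi(emissions: np.ndarray, transitions: np.ndarray) -> list:
    """Return the highest-scoring tag sequence.

    emissions:   (T, K) per-step tag potentials, e.g. produced from the [CLS] outputs.
    transitions: (K, K) transition scores; disallowed transitions get -inf.
    """
    T, K = emissions.shape
    score = emissions[0].copy()
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        total = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = total.argmax(axis=0)   # best previous tag for each current tag
        score = total.max(axis=0)
    best = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return [TAGS[i] for i in reversed(best)]

# Example of one local constraint: an entity cannot be created in the step right after
# it is destroyed. The full existence cycle needs more transitions ruled out than this.
trans = np.zeros((len(TAGS), len(TAGS)))
trans[TAGS.index("D"), TAGS.index("C")] = -np.inf

emissions = np.random.randn(4, len(TAGS))
print(viterbi(emissions, trans))
```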
], [ "For Recipes, we mainly want to investigate cases of ingredients getting re-engaged in the recipe not in a raw form but in a combined nature with other ingredients and henceforth no explicit mention. For example, eggs in step 4 of Figure FIGREF2a exemplifies this case. The performance in such cases is indicative of how strongly the model can track compositional entities. We also examine the performance for cases where the ingredient is referred by some other name." ], [ "Formally, we pick the set of examples where the ground truth is a transition from $0 \\rightarrow 1$ (not present to present) and the 1 is a “combined” case. Table TABREF31 shows the model's performance on this subset of cases, of which there are 1049 in the test set. The model achieves an accuracy of 51.1% on these bigrams, which is relatively low given the overall model performance. In the error cases, the model defaults to the $1\\rightarrow 1$ pattern indicative of the First Occ baseline." ], [ "We observe the model is able to capture ingredients based on their hypernyms (nuts $\\rightarrow $ pecans, salad $\\rightarrow $ lettuce) and rough synonymy (bourbon $\\rightarrow $ scotch). This performance can be partially attributed to the language model pre-training. We can isolate these cases by filtering for uncombined ingredients when there is no matching ingredient token in the step. Out of 552 such cases in the test set, the model predicts 375 correctly giving a recall of $67.9$. This is lower than overall UR; if pre-training behaves as advertised, we expect little degradation in this case, but instead we see performance significantly below the average on uncombined ingredients." ], [ "One question we can ask of the model's capabilities is to what extent they arise from domain knowledge in the large pre-trained data. We train transformer models from scratch and additionally investigate using the large corpus of unlabeled recipes for our LM pre-training. As can be seen in Table TABREF35, the incorporation of external data leads to major improvements in the overall performance. This gain is largely due to the increase in combined recall. One possible reason could be that external data leads to better understanding of verb semantics and in turn the specific ingredients forming part of the intermediate compositions. Figure FIGREF37 shows that verbs are a critical clue the model relies on to make predictions. Performing LM fine-tuning on top of GPT also gives gains." ], [ "For ProPara, Table TABREF28 shows that the model does not significantly outperform the SOTA models in state change detection (Cat-1). However, for those correctly detected events, the transformer model outperforms the previous models for detecting the exact step of state change (Cat-2), primarily based on verb semantics. We do a finer-grained study in Table TABREF36 by breaking down the performance for the three state changes: creation (C), movement (M), and destruction (D), separately. Across the three state changes, the model suffers a loss of performance in the movement cases. This is owing to the fact that the movement cases require a deeper compositional and implicit event tracking. Also, a majority of errors leading to false negatives are due to the the formation of new sub-entities which are then mentioned with other names. For example, when talking about weak acid in “the water becomes a weak acid. the water dissolves limestone” the weak acid is also considered to move to the limestone." 
], [ "The model's performance on these challenging task cases suggests that even though it outperforms baselines, it may not be capturing deep reasoning about entities. To understand what the model actually does, we perform analysis of the model's behavior with respect to the input to understand what cues it is picking up on." ], [ "One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves. It further shows that the model learns to extract shallow clues of identifying actions exerted upon only the entity being tracked, regardless of other entities, by leveraging verb semantics.", "In an ideal scenario, we would want the model to track constituent entities by translating the “focus” to track their newly formed compositions with other entities, often aliased by other names like mixture, blend, paste etc. However, the low performance on such cases shown in Section SECREF5 gives further evidence that the model is not doing this." ], [ "We can study which inputs are important more directly by explicitly removing specific certain words from the input process paragraph and evaluating the performance of the resulting input under the current model setup. We mainly did experiments to examine the importance of: (i) verbs, and (ii) other ingredients.", "Table TABREF40 presents these ablation studies. We only observe a minor performance drop from $84.59$ to $82.71$ (accuracy) when other ingredients are removed entirely. Removing verbs dropped the performance to $79.08$ and further omitting both leads to $77.79$. This shows the model’s dependence on verb semantics over tracking the other ingredients." ], [ "In this paper, we examined the capabilities of transformer networks for capturing entity state semantics. First, we show that the conventional framework of using the transformer networks is not rich enough to capture entity semantics in these cases. We then propose entity-centric ways to formulate richer transformer encoding of the process paragraph, guiding the self-attention in a target entity oriented way. This approach leads to significant performance improvements, but examining model performance more deeply, we conclude that these models still do not model the intermediate compositional entities and perform well by largely relying on surface entity mentions and verb semantics." ], [ "This work was partially supported by NSF Grant IIS-1814522 and an equipment grant from NVIDIA. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research. Results presented in this paper were obtained using the Chameleon testbed supported by the National Science Foundation. Thanks as well to the anonymous reviewers for their helpful comments." ] ] }
{ "question": [ "Do they report results only on English?", "What evidence do they present that the model attends to shallow context clues?", "In what way is the input restructured?" ], "question_id": [ "0e45aae0e97a6895543e88705e153f084ce9c136", "c515269b37cc186f6f82ab9ada5d9ca176335ded", "43f86cd8aafe930ebb35ca919ada33b74b36c7dd" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "", "", "" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "1516c86c36ecb2bb8a543465d6ac12220ed1a226" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Using model gradients with respect to input features they presented that the most important model inputs are verbs associated with entities which shows that the model attends to shallow context clues", "evidence": [ "One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves. It further shows that the model learns to extract shallow clues of identifying actions exerted upon only the entity being tracked, regardless of other entities, by leveraging verb semantics." ], "highlighted_evidence": [ "One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves. It further shows that the model learns to extract shallow clues of identifying actions exerted upon only the entity being tracked, regardless of other entities, by leveraging verb semantics." ] } ], "annotation_id": [ "e43a469126ec868403db8a7b388c56e5276b943d" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "In four entity-centric ways - entity-first, entity-last, document-level and sentence-level", "evidence": [ "Our approach consists of structuring input to the transformer network to use and guide the self-attention of the transformers, conditioning it on the entity. Our main mode of encoding the input, the entity-first method, is shown in Figure FIGREF4. The input sequence begins with a [START] token, then the entity under consideration, then a [SEP] token. After each sentence, a [CLS] token is used to anchor the prediction for that sentence. In this model, the transformer can always observe the entity it should be primarily “attending to” from the standpoint of building representations. We also have an entity-last variant where the entity is primarily observed just before the classification token to condition the [CLS] token's self-attention accordingly. 
These variants are naturally more computationally-intensive than post-conditioned models, as we need to rerun the transformer for each distinct entity we want to make a prediction for.", "As an additional variation, we can either run the transformer once per document with multiple [CLS] tokens (a document-level model as shown in Figure FIGREF4) or specialize the prediction to a single timestep (a sentence-level model). In a sentence level model, we formulate each pair of entity $e$ and process step $t$ as a separate instance for our classification task. Thus, for a process with $T$ steps and $m$ entities we get $T \\times m$ input sequences for fine tuning our classification task." ], "highlighted_evidence": [ "Our approach consists of structuring input to the transformer network to use and guide the self-attention of the transformers, conditioning it on the entity. Our main mode of encoding the input, the entity-first method, is shown in Figure FIGREF4. ", "We also have an entity-last variant where the entity is primarily observed just before the classification token to condition the [CLS] token's self-attention accordingly. ", "As an additional variation, we can either run the transformer once per document with multiple [CLS] tokens (a document-level model as shown in Figure FIGREF4) or specialize the prediction to a single timestep (a sentence-level model)." ] } ], "annotation_id": [ "2ca5d3901d40c6f75a521812fe5ba4706f954ed8" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Figure 1: Process Examples from (a) RECIPES as a binary classification task of ingredient detection, and (b) PROPARA as a structured prediction task of identifying state change sequences. Both require cross-sentence reasoning, such as knowing what components are in a mixture and understanding verb semantics like combine.", "Figure 2: Post-conditioning entity tracking models. Bottom: the process paragraph is encoded in an entity-independent manner with transformer network and a separate entity representation g[water] for postconditioning. Top: the two variants for the conditioning: (i) GPTattn, and (ii) GPTindep.", "Table 1: Templates for different proposed entity-centric modes of structuring input to the transformer networks.", "Table 2: Performance of the rule-based baselines and the post conditioned models on the ingredient detection task of the RECIPES dataset. These models all underperform First Occ.", "Figure 3: Entity conditioning model for guiding selfattention: the entity-first, sentence-level input variant fed into a left-to-right unidirectional transformer architecture. Task predictions are made at [CLS] tokens about the entity’s state after the prior sentence.", "Table 4: Top: we compare how much the model degrades when it conditions on no ingredient at all (w/o ing.), instead making a generic prediction. Bottom: we compare how much using previous context beyond a single sentence impacts the model.", "Table 3: Performances of different baseline models discussed in Section 3, the ELMo baselines, and the proposed entity-centric approaches with the (D)ocument v (S)entence level variants formulated with both entity (F)irst v. (L)ater. Our ETGPT variants all substantially outperform the baselines.", "Table 5: Performance of the proposed models on the PROPARA dataset. Our models outperform strong approaches from prior work across all metrics.", "Table 7: Performance for using unsupervised data for LM training.", "Table 8: Results for each state change type. Performance on predicting creation and destruction are highest, partially due to the model’s ability to use verb semantics for these tasks.", "Table 6: Model predictions from the document level entity first GPT model in 1049 cases of intermediate compositions. The model achieves only 51% accuracy in these cases.", "Figure 4: Gradient of the classification loss of the gold class with respect to inputs when predicting the status of butter in the last sentence. We follow a similar approach as Jain and Wallace (2019) to compute associations. Exact matches of the entity receive high weight, as does a seemingly unrelated verb dredge, which often indicates that the butter has already been used and is therefore present.", "Table 9: Model’s performance degradation with input ablations. We see that the model’s major source of performance is from verbs than compared to other ingredient’s explicit mentions." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png", "4-Table2-1.png", "5-Figure3-1.png", "6-Table4-1.png", "6-Table3-1.png", "7-Table5-1.png", "8-Table7-1.png", "8-Table8-1.png", "8-Table6-1.png", "9-Figure4-1.png", "9-Table9-1.png" ] }
1904.00648
Recognizing Musical Entities in User-generated Content
Recognizing Musical Entities is important for Music Information Retrieval (MIR) since it can improve the performance of several tasks such as music recommendation, genre classification or artist similarity. However, most entity recognition systems in the music domain have concentrated on formal texts (e.g. artists' biographies, encyclopedic articles, etc.), ignoring rich and noisy user-generated content. In this work, we present a novel method to recognize musical entities in Twitter content generated by users following a classical music radio channel. Our approach takes advantage of both the formal radio schedule and users' tweets to improve entity recognition. We instantiate several machine learning algorithms to perform entity recognition, combining task-specific and corpus-based features. We also show how to improve recognition results by jointly considering formal and user-generated content.
{ "section_name": [ "Introduction", "Related Work", "Methodology", "Dataset", "NER system", "Schedule matching", "Results", "Conclusion" ], "paragraphs": [ [ "The increasing use of social media and microblogging services has broken new ground in the field of Information Extraction (IE) from user-generated content (UGC). Understanding the information contained in users' content has become one of the main goal for many applications, due to the uniqueness and the variety of this data BIBREF0 . However, the highly informal and noisy status of these sources makes it difficult to apply techniques proposed by the NLP community for dealing with formal and structured content BIBREF1 .", "In this work, we analyze a set of tweets related to a specific classical music radio channel, BBC Radio 3, interested in detecting two types of musical named entities, Contributor and Musical Work.", "The method proposed makes use of the information extracted from the radio schedule for creating links between users' tweets and tracks broadcasted. Thanks to this linking, we aim to detect when users refer to entities included into the schedule. Apart from that, we consider a series of linguistic features, partly taken from the NLP literature and partly specifically designed for this task, for building statistical models able to recognize the musical entities. To that aim, we perform several experiments with a supervised learning model, Support Vector Machine (SVM), and a recurrent neural network architecture, a bidirectional LSTM with a CRF layer (biLSTM-CRF).", "The contributions in this work are summarized as follows:", "The paper is structured as follows. In Section 2, we present a review of the previous works related to Named Entity Recognition, focusing on its application on UGC and MIR. Afterwards, in Section 3 it is presented the methodology of this work, describing the dataset and the method proposed. In Section 4, the results obtained are shown. Finally, in Section 5 conclusions are discussed." ], [ "Named Entity Recognition (NER), or alternatively Named Entity Recognition and Classification (NERC), is the task of detecting entities in an input text and to assign them to a specific class. It starts to be defined in the early '80, and over the years several approaches have been proposed BIBREF2 . Early systems were based on handcrafted rule-based algorithms, while recently several contributions by Machine Learning scientists have helped in integrating probabilistic models into NER systems.", "In particular, new developments in neural architectures have become an important resource for this task. Their main advantages are that they do not need language-specific knowledge resources BIBREF3 , and they are robust to the noisy and short nature of social media messages BIBREF4 . Indeed, according to a performance analysis of several Named Entity Recognition and Linking systems presented in BIBREF5 , it has been found that poor capitalization is one of the main issues when dealing with microblog content. Apart from that, typographic errors and the ubiquitous occurrence of out-of-vocabulary (OOV) words also cause drops in NER recall and precision, together with shortenings and slang, particularly pronounced in tweets.", "Music Information Retrieval (MIR) is an interdisciplinary field which borrows tools of several disciplines, such as signal processing, musicology, machine learning, psychology and many others, for extracting knowledge from musical objects (be them audio, texts, etc.) BIBREF6 . 
In the last decade, several MIR tasks have benefited from NLP, such as sound and music recommendation BIBREF7 , automatic summarization of song reviews BIBREF8 , artist similarity BIBREF9 and genre classification BIBREF10 .", "In the field of IE, a first approach for detecting musical named entities from raw text, based on Hidden Markov Models, has been proposed in BIBREF11 . In BIBREF12 , the authors combine state-of-the-art Entity Linking (EL) systems to tackle the problem of detecting musical entities from raw texts. The method proposed relies on the argumentum ad populum intuition: if two or more different EL systems perform the same prediction in linking a named entity mention, the more likely this prediction is to be correct. In detail, the off-the-shelf systems used are: DBpedia Spotlight BIBREF13 , TagMe BIBREF14 , Babelfy BIBREF15 . Moreover, a first Musical Entity Linking system, MEL, has been presented in BIBREF16 , which combines different state-of-the-art NLP libraries and SimpleBrainz, an RDF knowledge base created from MusicBrainz after a simplification process.", "Furthermore, Twitter has also been at the center of many studies done by the MIR community. As an example, for building a music recommender system, BIBREF17 analyzes tweets containing keywords like nowplaying or listeningto. In BIBREF9 , a similar dataset is used for discovering cultural listening patterns. Publicly available Twitter corpora built for MIR investigations have been created, among others the Million Musical Tweets dataset BIBREF18 and the #nowplaying dataset BIBREF19 ." ], [ "We propose a hybrid method which recognizes musical entities in UGC using both contextual and linguistic information. We focus on detecting two types of entities: Contributor, a person who is related to a musical work (composer, performer, conductor, etc.), and Musical Work, a musical composition or recording (symphony, concerto, overture, etc.).", "As a case study, we have chosen to analyze tweets extracted from the channel of a classical music radio, BBC Radio 3. The choice to focus on classical music has been mostly motivated by the particular discrepancy between the informal language used on the social platform and the formal nomenclature of contributors and musical works. Indeed, when referring to a musician or to a classical piece in a tweet, users rarely use the full name of the person or of the work, as shown in Table 2.", "We extract information from the radio schedule for recreating the musical context to analyze user-generated tweets, detecting when they are referring to a specific work or contributor recently played. We associate to every track broadcasted a list of entities, thanks to the tweets automatically posted by the BBC Radio3 Music Bot, which describe the track currently on air. In Table 3, examples of bot-generated tweets are shown.", "Afterwards, we detect the entities in the user-generated content by means of two methods: on one side, we use the entities extracted from the radio schedule for generating candidate entities in the user-generated tweets, thanks to a matching algorithm based on time proximity and string similarity. On the other side, we create a statistical model capable of detecting entities directly from the UGC, aimed at modeling the informal language of the raw texts. In Figure 1, an overview of the system proposed is presented." 
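To make the schedule-matching idea above concrete, here is a minimal sketch of generating candidate entities from tracks broadcast within a time window of the user tweet and scoring them by token overlap weighted by time proximity; the stop-word list, the window size and the weighting are illustrative assumptions (the thresholds and the exact weighting used in the paper are described later and tuned empirically).

```python
from datetime import datetime, timedelta
from typing import Dict, List, Tuple

STOP_WORDS = {"the", "in", "of", "a"}  # illustrative stop-word list

def string_match_score(entity: str, tweet: str) -> float:
    """Fraction of (non-stop-word) entity tokens that also appear in the tweet."""
    ent_toks = [t for t in entity.lower().split() if t not in STOP_WORDS]
    tweet_toks = set(tweet.lower().split())
    return sum(t in tweet_toks for t in ent_toks) / len(ent_toks) if ent_toks else 0.0

def candidate_entities(tweet_text: str, tweet_time: datetime, schedule: List[Dict],
                       t_minutes: int = 30, alpha: float = 0.7) -> List[Tuple[str, float]]:
    """Score schedule entities against a user tweet.

    schedule: list of {"time": datetime, "entities": [str, ...]} per broadcast track.
    The final score mixes string similarity with time proximity; alpha is one plausible
    way to weight the two, not necessarily the paper's exact formulation.
    """
    window = timedelta(minutes=t_minutes)
    scored = []
    for track in schedule:
        gap = abs(tweet_time - track["time"])
        if gap > window:
            continue
        proximity = 1.0 - gap / window
        for ent in track["entities"]:
            scored.append((ent, alpha * string_match_score(ent, tweet_text) + (1 - alpha) * proximity))
    return sorted(scored, key=lambda x: -x[1])

schedule = [{"time": datetime(2018, 5, 1, 20, 0),
             "entities": ["Claude Debussy", "Clair de lune"]}]
print(candidate_entities("loving this debussy on #BBCR3", datetime(2018, 5, 1, 20, 10), schedule))
```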
], [ "In May 2018, we crawled Twitter using the Python library Tweepy, creating two datasets on which Contributor and Musical Work entities have been manually annotated, using IOB tags.", "The first set contains user-generated tweets related to the BBC Radio 3 channel. It represents the source of user-generated content on which we aim to predict the named entities. We create it filtering the messages containing hashtags related to BBC Radio 3, such as #BBCRadio3 or #BBCR3. We obtain a set of 2,225 unique user-generated tweets. The second set consists of the messages automatically generated by the BBC Radio 3 Music Bot. This set contains 5,093 automatically generated tweets, thanks to which we have recreated the schedule.", "In Table 4, the amount of tokens and relative entities annotated are reported for the two datasets. For evaluation purposes, both sets are split in a training part (80%) and two test sets (10% each one) randomly chosen. Within the user-generated corpora, entities annotated are only about 5% of the whole amount of tokens. In the case of the automatically generated tweets, the percentage is significantly greater and entities represent about the 50%." ], [ "According to the literature reviewed, state-of-the-art NER systems proposed by the NLP community are not tailored to detect musical entities in user-generated content. Consequently, our first objective has been to understand how to adapt existing systems for achieving significant results in this task.", "In the following sections, we describe separately the features, the word embeddings and the models considered. All the resources used are publicy available.", "We define a set of features for characterizing the text at the token level. We mix standard linguistic features, such as Part-Of-Speech (POS) and chunk tag, together with several gazetteers specifically built for classical music, and a series of features representing tokens' left and right context. For extracting the POS and the chunk tag we use the Python library twitter_nlp, presented in BIBREF1 .", "In total, we define 26 features for describing each token: 1)POS tag; 2)Chunk tag; 3)Position of the token within the text, normalized between 0 and 1; 4)If the token starts with a capital letter; 5)If the token is a digit. Gazetteers: 6)Contributor first names; 7)Contributor last names; 8)Contributor types (\"soprano\", \"violinist\", etc.); 9)Classical work types (\"symphony\", \"overture\", etc.); 10)Musical instruments; 11)Opus forms (\"op\", \"opus\"); 12)Work number forms (\"no\", \"number\"); 13)Work keys (\"C\", \"D\", \"E\", \"F\" , \"G\" , \"A\", \"B\", \"flat\", \"sharp\"); 14)Work Modes (\"major\", \"minor\", \"m\"). Finally, we complete the tokens' description including as token's features the surface form, the POS and the chunk tag of the previous and the following two tokens (12 features).", "We consider two sets of GloVe word embeddings BIBREF20 for training the neural architecture, one pre-trained with 2B of tweets, publicy downloadable, one trained with a corpora of 300K tweets collected during the 2014-2017 BBC Proms Festivals and disjoint from the data used in our experiments.", "The first model considered for this task has been the John Platt's sequential minimal optimization algorithm for training a support vector classifier BIBREF21 , implemented in WEKA BIBREF22 . 
Indeed, in BIBREF23 results showed that SVM outperforms other machine learning models, such as Decision Trees and Naive Bayes, obtaining the best accuracy when detecting named entities from the user-generated tweets.", "However, recent advances in Deep Learning techniques have shown that the NER task can benefit from the use of neural architectures, such as biLSTM-networks BIBREF3 , BIBREF4 . We use the implementation proposed in BIBREF24 for conducting three different experiments. In the first, we train the model using only the word embeddings as feature. In the second, together with the word embeddings we use the POS and chunk tag. In the third, all the features previously defined are included, in addition to the word embeddings. For every experiment, we use both the pre-trained embeddings and the ones that we created with our Twitter corpora. In section 4, results obtained from the several experiments are reported." ], [ "The bot-generated tweets present a predefined structure and a formal language, which facilitates entity detection. In this dataset, our goal is to assign to each track played on the radio, represented by a tweet, a list of entities extracted from the tweet raw text. For achieving that, we experiment with the algorithms and features presented previously, obtaining a high level of accuracy, as presented in Section 4. The hypothesis considered is that when a radio listener posts a tweet, it is possible that she is referring to a track which has been played a relatively short time before. In these cases, we want to show that knowing the radio schedule can help improve the results when detecting entities.", "Once assigned a list of entities to each track, we perform two types of matching. Firstly, within the tracks we identify the ones which have been played in a fixed range of time (t) before and after the generation of the user's tweet. Using the resulting tracks, we create a list of candidate entities on which we perform string similarity. The score of the matching based on string similarity is computed as the ratio of the number of tokens in common between an entity and the input tweet, and the total number of tokens of the entity: DISPLAYFORM0 ", "In order to exclude trivial matches, tokens within a list of stop words are not considered while performing string matching. The final score is a weighted combination of the string matching score and the time proximity of the track, aimed at enhancing matches from tracks played closer to the time when the user is posting the tweet.", "The performance of the algorithm depends, apart from the time proximity threshold t, also on two other thresholds related to the string matching, one for the Musical Work (w) and one for the Contributor (c) entities. These thresholds are necessary to avoid including candidate entities matched against the schedule with a low score, which are often a source of false positives or negatives. Consequently, as a last step, Contributor and Musical Work candidate entities with a string matching score lower than c and w, respectively, are filtered out. In Figure 2, an example of a Musical Work entity recognized in a user-generated tweet using the schedule information is presented.", "The entities recognized from the schedule matching are joined with the ones obtained directly from the statistical models. In the joined results, the criterion is to give priority to the entities recognized from the machine learning techniques. 
If they do not return any entities, the entities predicted by the schedule matching are considered. Our strategy is justified by the poorer results obtained by the NER based only on the schedule matching, compared to the other models used in the experiments, to be presented in the next section." ], [ "The performances of the NER experiments are reported separately for three different parts of the system proposed.", "Table 6 presents the comparison of the various methods while performing NER on the bot-generated corpora and the user-generated corpora. Results shown that, in the first case, in the training set the F1 score is always greater than 97%, with a maximum of 99.65%. With both test sets performances decrease, varying between 94-97%. In the case of UGC, comparing the F1 score we can observe how performances significantly decrease. It can be considered a natural consequence of the complex nature of the users' informal language in comparison to the structured message created by the bot.", "In Table 7, results of the schedule matching are reported. We can observe how the quality of the linking performed by the algorithm is correlated to the choice of the three thresholds. Indeed, the Precision score increases when the time threshold decreases, admitting fewer candidates as entities during the matching, and when the string similarity thresholds increase, accepting only candidates with a higher degree of similarity. The behaviour of the Recall score is the opposite.", "Finally, we test the impact of using the schedule matching together with a biLSTM-CRF network. In this experiment, we consider the network trained using all the features proposed, and the embeddings not pre-trained. Table 8 reports the results obtained. We can observe how generally the system benefits from the use of the schedule information. Especially in the testing part, where the neural network recognizes entities with less accuracy, the explicit information contained in the schedule can be exploited for identifying the entities to which users are referring while listening to the radio and posting the tweets." ], [ "We have presented in this work a novel method for detecting musical entities from user-generated content, modelling linguistic features with statistical models and extracting contextual information from a radio schedule. We analyzed tweets related to a classical music radio station, integrating its schedule to connect users' messages to tracks broadcasted. We focus on the recognition of two kinds of entities related to the music field, Contributor and Musical Work.", "According to the results obtained, we have seen a pronounced difference between the system performances when dealing with the Contributor instead of the Musical Work entities. Indeed, the former type of entity has been shown to be more easily detected in comparison to the latter, and we identify several reasons behind this fact. Firstly, Contributor entities are less prone to being shortened or modified, while, due to their length, Musical Work entities often represent only a part of the complete title of a musical piece. Furthermore, Musical Work titles are typically composed of more tokens, including common words which can be easily misclassified. The low performances obtained in the case of Musical Work entities can be a consequence of these observations. 
On the other hand, when referring to a Contributor, users often use only the surname, but in most cases this is enough for the system to recognize the entity.", "From the experiments we have seen that generally the biLSTM-CRF architecture outperforms the SVM model. The benefit of using the whole set of features is evident in the training part, but at test time the inclusion of the features does not always lead to better results. In addition, some of the features designed in our experiments are tailored to the case of classical music, hence they might not be representative if applied to other fields. We do not exclude that our method can be adapted for detecting other kinds of entity, but the features might need to be redefined according to the case considered. Similarly, we did not find a particular advantage in using the pre-trained embeddings instead of the ones trained on our corpora. Furthermore, we verified the statistical significance of our experiments using the Wilcoxon Rank-Sum Test, finding no significant difference between the various models considered at test time.", "The information extracted from the schedule also presents several limitations. In fact, the hypothesis that a tweet refers to a broadcasted track does not always hold. Even if it is common for radio listeners to comment on the tracks played, or to give suggestions to the radio host about what they would like to listen to, it is also true that they might refer to a Contributor or Musical Work unrelated to the radio schedule." ] ] }
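As a brief recap of the combination strategy described in the Schedule matching section, the fallback logic can be summarized in a few lines; this is only an illustration of the stated priority rule, not additional machinery from the paper.

```python
from typing import List

def combine_predictions(ml_entities: List[str], schedule_entities: List[str]) -> List[str]:
    """Give priority to entities found by the statistical model; fall back to the
    schedule-matching candidates only when the model predicts no entity at all."""
    return ml_entities if ml_entities else schedule_entities

print(combine_predictions([], ["Clair de lune"]))            # falls back to the schedule match
print(combine_predictions(["Debussy"], ["Clair de lune"]))   # keeps the model output
```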
{ "question": [ "What are their results on the entity recognition task?", "What task-specific features are used?", "What kind of corpus-based features are taken into account?", "Which machine learning algorithms did the explore?", "What language is the Twitter content in?" ], "question_id": [ "aa60b0a6c1601e09209626fd8c8bdc463624b0b3", "3837ae1e91a4feb27f11ac3b14963e9a12f0c05e", "ef4d6c9416e45301ea1a4d550b7c381f377cacd9", "689d1d0c4653a8fa87fd0e01fa7e12f75405cd38", "7920f228de6ef4c685f478bac4c7776443f19f39" ], "nlp_background": [ "", "", "", "", "" ], "topic_background": [ "", "", "", "", "" ], "paper_read": [ "", "", "", "", "" ], "search_query": [ "", "", "", "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "With both test sets performances decrease, varying between 94-97%" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The performances of the NER experiments are reported separately for three different parts of the system proposed.", "Table 6 presents the comparison of the various methods while performing NER on the bot-generated corpora and the user-generated corpora. Results shown that, in the first case, in the training set the F1 score is always greater than 97%, with a maximum of 99.65%. With both test sets performances decrease, varying between 94-97%. In the case of UGC, comparing the F1 score we can observe how performances significantly decrease. It can be considered a natural consequence of the complex nature of the users' informal language in comparison to the structured message created by the bot." ], "highlighted_evidence": [ "The performances of the NER experiments are reported separately for three different parts of the system proposed.", "Results shown that, in the first case, in the training set the F1 score is always greater than 97%, with a maximum of 99.65%. With both test sets performances decrease, varying between 94-97%. In the case of UGC, comparing the F1 score we can observe how performances significantly decrease." ] } ], "annotation_id": [ "15418edd8c72bc8bc3efceb68fa9202d76da15a7" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "6)Contributor first names", "7)Contributor last names", "8)Contributor types (\"soprano\", \"violinist\", etc.)", "9)Classical work types (\"symphony\", \"overture\", etc.)", "10)Musical instruments", "11)Opus forms (\"op\", \"opus\")", "12)Work number forms (\"no\", \"number\")", "13)Work keys (\"C\", \"D\", \"E\", \"F\" , \"G\" , \"A\", \"B\", \"flat\", \"sharp\")", "14)Work Modes (\"major\", \"minor\", \"m\")" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In total, we define 26 features for describing each token: 1)POS tag; 2)Chunk tag; 3)Position of the token within the text, normalized between 0 and 1; 4)If the token starts with a capital letter; 5)If the token is a digit. 
Gazetteers: 6)Contributor first names; 7)Contributor last names; 8)Contributor types (\"soprano\", \"violinist\", etc.); 9)Classical work types (\"symphony\", \"overture\", etc.); 10)Musical instruments; 11)Opus forms (\"op\", \"opus\"); 12)Work number forms (\"no\", \"number\"); 13)Work keys (\"C\", \"D\", \"E\", \"F\" , \"G\" , \"A\", \"B\", \"flat\", \"sharp\"); 14)Work Modes (\"major\", \"minor\", \"m\"). Finally, we complete the tokens' description including as token's features the surface form, the POS and the chunk tag of the previous and the following two tokens (12 features)." ], "highlighted_evidence": [ "Gazetteers: 6)Contributor first names; 7)Contributor last names; 8)Contributor types (\"soprano\", \"violinist\", etc.); 9)Classical work types (\"symphony\", \"overture\", etc.); 10)Musical instruments; 11)Opus forms (\"op\", \"opus\"); 12)Work number forms (\"no\", \"number\"); 13)Work keys (\"C\", \"D\", \"E\", \"F\" , \"G\" , \"A\", \"B\", \"flat\", \"sharp\"); 14)Work Modes (\"major\", \"minor\", \"m\")." ] } ], "annotation_id": [ "b6163a58c88f9e2b89b84689a1fbdda6414d2e3c" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "standard linguistic features, such as Part-Of-Speech (POS) and chunk tag", "series of features representing tokens' left and right context" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We define a set of features for characterizing the text at the token level. We mix standard linguistic features, such as Part-Of-Speech (POS) and chunk tag, together with several gazetteers specifically built for classical music, and a series of features representing tokens' left and right context. For extracting the POS and the chunk tag we use the Python library twitter_nlp, presented in BIBREF1 ." ], "highlighted_evidence": [ "We mix standard linguistic features, such as Part-Of-Speech (POS) and chunk tag, together with several gazetteers specifically built for classical music, and a series of features representing tokens' left and right context." ] } ], "annotation_id": [ "5114686c571dbe9da3b0a0a7692a4eec5c53d856" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "biLSTM-networks" ], "yes_no": null, "free_form_answer": "", "evidence": [ "However, recent advances in Deep Learning techniques have shown that the NER task can benefit from the use of neural architectures, such as biLSTM-networks BIBREF3 , BIBREF4 . We use the implementation proposed in BIBREF24 for conducting three different experiments. In the first, we train the model using only the word embeddings as feature. In the second, together with the word embeddings we use the POS and chunk tag. In the third, all the features previously defined are included, in addition to the word embeddings. For every experiment, we use both the pre-trained embeddings and the ones that we created with our Twitter corpora. In section 4, results obtained from the several experiments are reported." ], "highlighted_evidence": [ "However, recent advances in Deep Learning techniques have shown that the NER task can benefit from the use of neural architectures, such as biLSTM-networks BIBREF3 , BIBREF4 . We use the implementation proposed in BIBREF24 for conducting three different experiments." 
] } ], "annotation_id": [ "3122f0c4f10f3f50f4c501ac9affc51aeca276a1" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "English", "evidence": [ "In this work, we analyze a set of tweets related to a specific classical music radio channel, BBC Radio 3, interested in detecting two types of musical named entities, Contributor and Musical Work." ], "highlighted_evidence": [ "In this work, we analyze a set of tweets related to a specific classical music radio channel, BBC Radio 3, interested in detecting two types of musical named entities, Contributor and Musical Work." ] } ], "annotation_id": [ "69a345333a5e18bacc4a7af86bdf08ba2943a19f" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
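The answers above list the hand-crafted token features used in this work (POS and chunk tags, normalized position, casing and digit checks, music-specific gazetteers, and a two-token context window). The sketch below shows one plausible way to assemble such a feature dictionary for an SVM or CRF tagger; the function name, gazetteer contents and toy sentence are illustrative assumptions, not the paper's actual resources.

def token_features(tokens, pos_tags, chunk_tags, i, gazetteers):
    """Build a feature dict for token i, roughly following the 26-feature scheme."""
    tok = tokens[i]
    feats = {
        "pos": pos_tags[i],
        "chunk": chunk_tags[i],
        "position": i / max(len(tokens) - 1, 1),   # normalized between 0 and 1
        "is_capitalized": tok[:1].isupper(),
        "is_digit": tok.isdigit(),
    }
    # Gazetteer lookups (contributor names, work types, instruments, keys, ...)
    for name, entries in gazetteers.items():
        feats[f"in_{name}"] = tok.lower() in entries
    # Surface form, POS and chunk tag of the two previous and two following tokens
    for offset in (-2, -1, 1, 2):
        j = i + offset
        if 0 <= j < len(tokens):
            feats[f"word[{offset}]"] = tokens[j].lower()
            feats[f"pos[{offset}]"] = pos_tags[j]
            feats[f"chunk[{offset}]"] = chunk_tags[j]
    return feats

# Toy usage with made-up tags and tiny illustrative gazetteers
gazetteers = {
    "contributor_last_names": {"mozart", "beethoven"},
    "work_types": {"symphony", "overture"},
    "work_keys": {"c", "d", "e", "f", "g", "a", "b", "flat", "sharp"},
}
tokens = ["Mozart", "Symphony", "no", "40", "in", "G", "minor"]
pos = ["NNP", "NNP", "NN", "CD", "IN", "NNP", "JJ"]
chunks = ["B-NP", "I-NP", "I-NP", "I-NP", "B-PP", "B-NP", "I-NP"]
print(token_features(tokens, pos, chunks, 0, gazetteers))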
{ "caption": [ "Table 2. Example of entities annotated and corresponding formal forms, from the user-generated tweet (1) in Table 1.", "Table 3. Examples of bot-generated tweets.", "Table 4. Tokens’ distributions within the two datasets: user-generated tweets (top) and bot-generated tweets (bottom)", "Fig. 2. Example of the workflow for recognizing entities in UGC using the information from the radio schedule", "Table 6. F1 score for Contributor(C) and Musical Work(MW) entities recognized from bot-generated tweets (top) and user-generated tweets (bottom)", "Table 7. Precision (P), Recall (R) and F1 score for Contributor (C) and Musical Work (MW) of the schedule matching algorithm. w indicates the Musical Work string similarity threshold, c indicates the Contributor string similarity threshold and t indicates the time proximity threshold in seconds", "Table 8. Precision (P), Recall (R) and F1 score for Contributor (C) and Musical Work (MW) entities recognized from user-generated tweets using the biLSTM-CRF network together with the schedule matching. The thresholds used for the matching are t=1200, w=0.5, c=0.5." ], "file": [ "4-Table2-1.png", "4-Table3-1.png", "5-Table4-1.png", "8-Figure2-1.png", "9-Table6-1.png", "9-Table7-1.png", "10-Table8-1.png" ] }
1709.00387
MIT-QCRI Arabic Dialect Identification System for the 2017 Multi-Genre Broadcast Challenge
In order to successfully annotate the Arabic speech content found in open-domain media broadcasts, it is essential to be able to process a diverse set of Arabic dialects. For the 2017 Multi-Genre Broadcast challenge (MGB-3) there were two possible tasks: Arabic speech recognition, and Arabic Dialect Identification (ADI). In this paper, we describe our efforts to create an ADI system for the MGB-3 challenge, with the goal of distinguishing amongst four major Arabic dialects, as well as Modern Standard Arabic. Our research focused on dialect variability and domain mismatches between the training and test domains. In order to achieve a robust ADI system, we explored both Siamese neural network models to learn similarities and dissimilarities among Arabic dialects, as well as i-vector post-processing to adapt to domain mismatches. Both acoustic and linguistic features were used for the final MGB-3 submissions, with the best primary system achieving 75% accuracy on the official 10-hour test set.
{ "section_name": [ "Introduction", "MGB-3 Arabic Dialect Identification", "Dialect Identification Task & System", "Baseline ADI System", "Siamese Neural Network-based ADI", "i-vector Post-Processing", "Phoneme Features", "Character Features", "Score Calibration", "ADI Experiments", "Using Training Data for Training", "Using Training and Development Data for Training", "Performance Evaluation of Submission", "Conclusion" ], "paragraphs": [ [ "One of the challenges of processing real-world spoken content, such as media broadcasts, is the potential presence of different dialects of a language in the material. Dialect identification can be a useful capability to identify which dialect is being spoken during a recording. Dialect identification can be regarded as a special case of language recognition, requiring an ability to discriminate between different members within the same language family, as opposed to across language families (i.e., for language recognition). The dominant approach, based on i-vector extraction, has proven to be very effective for both language and speaker recognition BIBREF0 . Recently, phonetically aware deep neural models have also been found to be effective in combination with i-vectors BIBREF1 , BIBREF2 , BIBREF3 . Phonetically aware models could be beneficial for dialect identification, since they provide a mechanism to focus attention on small phonetic differences between dialects with predominantly common phonetic inventories.", "Since 2015, the Arabic Multi-Genre Broadcast (MGB) Challenge tasks have provided a valuable resource for researchers interested in processing multi-dialectal Arabic speech. For the ASRU 2017 MGB-3 Challenge, there were two possible tasks. The first task was aimed at developing an automatic speech recognition system for Arabic dialectal speech based on a multi-genre broadcast audio dataset. The second task was aimed at developing an Arabic Dialect Identification (ADI) capability for five major Arabic dialects. This paper reports our experimentation efforts for the ADI task.", "While the MGB-3 Arabic ASR task included seven different genres from the broadcast domain, the ADI task focused solely on broadcast news. Participants were provided high-quality Aljazeera news broadcasts as well as transcriptions generated by a multi-dialect ASR system created from the MGB-2 dataset BIBREF4 . The biggest difference from previous MGB challenges is that only a relatively small development set of in-domain data is provided for adaptation to the test set (i.e., the training data is mismatched with the test data). For the ADI baseline, participants were also provided with i-vector features from the audio dataset, and lexical features from the transcripts. Evaluation software was shared with all participants using baseline features available via Github.", "The evaluation scenario for the MGB-3 ADI task can be viewed as channel and domain mismatch because the recording environment of the training data is different from the development and test data. In general, channel or domain mismatch between training and test data can be a significant factor affecting system performance. Differences in channel, genre, language, topic etc. 
produce shifts in low-dimensional projections of the corresponding speech and ultimately cause performance degradations on evaluation data.", "In order to address performance degradation of speaker and language recognition systems due to domain mismatches, researchers have proposed various approaches to compensate for, and to adapt to the mismatch BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . For the MGB-3 ADI task, we utilized the development data to adapt to the test data recording domain, and investigated approaches to improve ADI performance both on the domain mismatched scenario, and the matching scenario, by using a recursive whitening transformation, a weighted dialect i-vector model, and a Siamese Neural Network.", "In contrast to the language recognition scenario, where there are different linguistic units across languages, language dialects typically share a common phonetic inventory and written language. Thus, we can potentially use ASR outputs such as phones, characters, and lexicons as features. N-gram histograms of phonemes, characters and lexicons can be used as feature vectors directly, and indeed, a lexicon-based n-gram feature vector was provided for the MGB-3 ADI baseline. The linguistic feature space is, naturally, completely different to the audio feature space, so a fusion of the results from both feature representations has been previously shown to be beneficial BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . Moreover, the linguistic feature has an advantage in channel domain mismatch situations because the transcription itself does not reflect the recording environment, and only contains linguistic information.", "In this paper, we describe our work for the MGB-3 ADI Challenge. The final MIT-QCRI submitted system is a combination of audio and linguistic feature-based systems, and includes multiple approaches to address the challenging mismatched conditions. From the official results, this system achieved the best performance among all participants. The following sections describe our research in greater detail." ], [ "For the MGB-3 ADI task, the challenge organizers provided 13,825 utterances (53.6 hours) for the training (TRN) set, 1,524 utterances (10 hours) for a development (DEV) set, and 1,492 utterances (10.1 hours) for a test (TST) set. Each dataset consisted of five Arabic dialects: Egyptian (EGY), Levantine (LEV), Gulf (GLF), North African (NOR), and Modern Standard Arabic (MSA). Detailed statistics of the ADI dataset can be found in BIBREF23 . Table TABREF3 shows some facts about the evaluation conditions and data properties. Note that the development set is relatively small compared to the training set. However, it is matched with the test set channel domain. Thus, the development set provides valuable information to adapt or compensate the channel (recording) domain mismatch between the train and test sets." ], [ "The MGB-3 ADI task asks participants to classify speech as one of five dialects, by specifying one dialect for each audio file for their submission. Performance is evaluated via three indices: overall accuracy, average precision, and average recall for the five dialects." ], [ "The challenge organizers provided features and code for a baseline ADI system. 
The features consisted of 400-dimensional i-vector features for each audio file (based on bottleneck feature inputs for their frame-level acoustic representation), as well as lexical features using bigrams generated from transcriptions BIBREF23 . For baseline dialect identification, a multi-class Support Vector Machine (SVM) was used. The baseline i-vector performance was 57.3%, 60.8%, and 58.0% for accuracy, precision and recall respectively. Lexical features achieved 48.4%, 51.0%, and 49.3%, respectively. While the audio-based features achieved better performance than the lexical features, both systems only obtained accuracies in the range of roughly 50-60%, indicating that this ADI task is difficult, considering that there are only five classes to choose from." ], [ "To further distinguish speech from different Arabic dialects, while making speech from the same dialect more similar, we adopted a Siamese neural network architecture BIBREF24 based on an i-vector feature space. The Siamese neural network has two parallel convolutional networks, INLINEFORM0 , that share the same set of weights, INLINEFORM1 , as shown in Figure FIGREF5 (a). Let INLINEFORM2 and INLINEFORM3 be a pair of i-vectors for which we wish to compute a distance. Let INLINEFORM4 be the label for the pair, where INLINEFORM5 = 1 if the i-vectors INLINEFORM6 and INLINEFORM7 belong to the same dialect, and INLINEFORM8 otherwise. To optimize the network, we use a Euclidean distance loss function between the label and the cosine distance, INLINEFORM9 , where INLINEFORM10 ", "For training, i-vector pairs and their corresponding labels can be formed from combinations of i-vectors in the training dataset. The trained convolutional network INLINEFORM0 transforms an i-vector INLINEFORM1 to a low-dimensional subspace that is more robust for distinguishing dialects. A detailed illustration of the convolutional network INLINEFORM2 is shown in Figure FIGREF5 (b). The final transformed i-vector, INLINEFORM3 , is a 200-dimensional vector. No nonlinear activation function was used on the fully connected layer. A cosine distance is used for scoring." ], [ "In this section we describe the domain adaptation techniques we investigated using the development set to help adapt our models to the test set.", "Although the baseline system used an SVM classifier, Cosine Distance Scoring (CDS) is a fast, simple, and effective method to measure the similarity between an enrolled i-vector dialect model and a test utterance i-vector. Under CDS, ZT-norm or S-norm can also be applied for score normalization BIBREF25 . Dialect enrollment can be obtained by means of i-vectors for each dialect, and is called the i-vector dialect model: INLINEFORM0 , where INLINEFORM1 is the number of utterances for each dialect INLINEFORM2 . Since we have two datasets for dialect enrollment, INLINEFORM3 for the training set, and INLINEFORM4 for the development set, we use an interpolation approach with parameter INLINEFORM5 , where INLINEFORM6 ", "We observed that the mismatched training set is useful when combined with the matched development set. Figure FIGREF7 shows the performance evaluation by parameter INLINEFORM0 under the same experimental conditions as System 2 in Section 4.3. This approach can be thought of as exactly the same as score fusion for different systems.
However, score fusion is usually performed at the system score level, while this approach uses a combination of knowledge from in-domain and out-of-domain i-vectors with a gamma weight on a single system.", "For i-vector-based speaker and language recognition approaches, a whitening transformation and length normalization are considered essential BIBREF26 . Since length normalization is inherently a nonlinear, non-whitening operation, a recursive whitening transformation has recently been proposed to reduce residual un-whitened components in the i-vector space, as illustrated in Figure FIGREF10 BIBREF14 . In this approach, the data subset that best matches the test data is used at each iteration to calculate the whitening transformation. In our ADI experiments, we applied 1 to 3 levels of recursive whitening transformation using the training and development data." ], [ "Phoneme feature extraction consists of extracting the phone sequence and phone duration statistics using four different speech recognizers: Czech, Hungarian, and Russian using a narrowband model, and English using a broadband model BIBREF27 . We evaluated the four systems using a Support Vector Machine (SVM). The hyper-parameters for the SVM are the distance from the hyperplane (C = 0.01) and an l2 penalty. We used the training data for training the SVM and the development data for testing. Table TABREF13 shows the results for the four phoneme recognizers. The Hungarian phoneme recognizer obtained the best results, so we used it for the final system combination." ], [ "Word sequences are extracted using a state-of-the-art Arabic speech-to-text transcription system built as part of MGB-2 BIBREF28 . The system is a combination of a Time Delay Neural Network (TDNN), a Long Short-Term Memory Recurrent Neural Network (LSTM) and Bidirectional LSTM acoustic models, followed by 4-gram and Recurrent Neural Network (RNN) language model rescoring. Our system uses a grapheme lexicon during both training and decoding. The acoustic models are trained on 1,200 hours of Arabic broadcast speech. We also perform data augmentation (speed and volume perturbation), which gives us three times the original training data. For more details see the system description paper BIBREF4 . We kept the <UNK> token from the ASR system, which indicates out-of-vocabulary (OOV) words, and replaced it with a special symbol. Spaces were inserted between all characters, including at word boundaries. An SVM classifier was trained similarly to the one used for the phoneme ASR systems, and we achieved 52% accuracy, 51.2% precision and 51.8% recall. The confusion matrix is different between the phoneme classifier and the character classifier systems, which motivates us to use both of them in the final system combination." ], [ "All scores are calibrated to be between 0 and 1. A linear calibration is done with the Bosaris toolkit BIBREF29 . Fusion is also done in a linear manner." ], [ "For experiments and evaluation, we use the i-vectors and transcriptions that are provided by the challenge organizers. Please refer to BIBREF23 for descriptions of i-vector extraction and Arabic speech-to-text configuration." ], [ "The first experiment we conducted used only the training data for developing the ADI system. Thus, the interpolated i-vector dialect model cannot be used for this experimental condition.
Table TABREF14 shows the performance of dimension-reduced i-vectors using the Siamese network (Siam i-vector) and Linear Discriminant Analysis (LDA i-vector), as compared to the baseline i-vector system. LDA reduces the 400-dimensional i-vector to 4 dimensions, while the Siamese network reduces it from 400 to 200. Since the Siamese network used a cosine distance for the loss function, the Siam i-vector showed better performance with the CDS scoring method, while the others achieved better performance with an SVM. The best system using the Siam i-vector showed roughly 10% better overall accuracy than the baseline." ], [ "For our second experiment, both the training and development data were used for training. For phoneme and character features, we show development set experimental results in Table TABREF15 . For i-vector experiments, we show results in Table TABREF16 . In the table we see that the interpolated dialect model gave significant improvements in all three metrics. The recursive whitening transformation gave slight improvements on the original i-vector, but not after LDA and the Siamese network. The best system is the original i-vector with recursive whitening and an interpolated i-vector dialect model, which achieves over 20% accuracy improvement over the baseline.", "While the Siamese i-vector network helped in the training-data-only experiments, it does not show any advantage over the baseline i-vector for this condition. We suspect this result is due to the composition of the data used for training the Siamese network. To train the network, i-vector pairs are chosen from the training dataset. We selected the pairs using both the training and development datasets. However, if we could put more emphasis on the development data, we suspect the Siamese i-vector network would be more robust on the test data. We plan to further examine the effect of different data compositions in the future." ], [ "Tables TABREF21 and TABREF22 show detailed performance evaluations of our three submitted systems. System 1 was trained using only the training data as shown in Table TABREF21 . Systems 2 and 3 were trained using both the training and development sets as shown in Table TABREF22 . We found that the best linear fusion weights based on System 1, chosen to prevent over-fitting, were 0.7, 0.2 and 0.1 for the i-vector, character, and phonetic scores respectively. We applied the same weights to Systems 2 and 3 for fusion.", "From Table TABREF21 , we see that the Siamese network demonstrates its effectiveness on both the development and test sets without using any information about the test domain. The interpolated i-vector dialect model also demonstrates that it reflects test domain information well, as shown by Systems 2 and 3 in Table TABREF22 . Although we expected that the linguistic features would not be affected by the domain mismatch, character and phoneme features show useful contributions for all systems. We believe the reason for the performance degradation of Systems 2 and 3 after fusion on the development data lies in the fusion rule. We applied the fusion rule derived from System 1, which was not optimal for Systems 2 and 3 considering the development set evaluation. By including the development data as part of their training, Systems 2 and 3 are subsequently overfit on the development data, which is why we used the fusion rule of System 1.
Given the excellent fusion performance on the test data for Systems 2 and 3, we believe that the fusion rule from System 1 prevented an over-fitted result." ], [ "In this paper, we describe the MIT-QCRI ADI system using both audio and linguistic features for the MGB-3 challenge. We studied several approaches to address dialect variability and domain mismatches between the training and test sets. Without knowledge of the test domain where the system will be applied, i-vector dimensionality reduction using a Siamese network was found to be useful, while an interpolated i-vector dialect model showed its effectiveness with relatively small amounts of test domain information from the development data. Under both conditions, the fusion of audio and linguistic features yields substantial improvements in dialect identification. As these approaches are not limited to dialect identification, we plan to explore their utility on other speaker and language recognition problems in the future." ] ] }
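The paper text above describes cosine distance scoring against per-dialect mean i-vectors, an interpolated dialect model built from the training and development enrollments, and whitening plus length normalization (applied recursively in the paper). The numpy sketch below is a schematic reconstruction of that pipeline under stated assumptions: the random 400-dimensional vectors, the single whitening pass estimated on the development data, and the exact form of the gamma interpolation are illustrative choices, not the authors' implementation.

import numpy as np

def whiten(X, stats_data):
    # One whitening pass estimated on stats_data and applied to X.
    mu = stats_data.mean(axis=0)
    cov = np.cov(stats_data, rowvar=False) + 1e-6 * np.eye(stats_data.shape[1])
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
    return (X - mu) @ W

def length_norm(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def dialect_means(X, labels, dialects):
    # i-vector dialect model: the mean i-vector over each dialect's utterances.
    return {d: X[labels == d].mean(axis=0) for d in dialects}

def classify(test_iv, trn_means, dev_means, gamma):
    # Cosine distance scoring against the interpolated dialect model.
    scores = {}
    for d in trn_means:
        model = gamma * dev_means[d] + (1.0 - gamma) * trn_means[d]
        scores[d] = test_iv @ model / (np.linalg.norm(test_iv) * np.linalg.norm(model))
    return max(scores, key=scores.get)

# Toy data standing in for 400-dimensional i-vectors of the five dialects.
rng = np.random.default_rng(0)
dialects = ["EGY", "LEV", "GLF", "NOR", "MSA"]
X_trn, y_trn = rng.normal(size=(100, 400)), np.array(dialects * 20)
X_dev, y_dev = rng.normal(size=(25, 400)), np.array(dialects * 5)
X_tst = rng.normal(size=(5, 400))

# Whitening estimated on the in-domain DEV data (recursive whitening would repeat
# this step on the best-matching data subset), followed by length normalization.
X_trn_w = length_norm(whiten(X_trn, X_dev))
X_dev_w = length_norm(whiten(X_dev, X_dev))
X_tst_w = length_norm(whiten(X_tst, X_dev))

trn_means = dialect_means(X_trn_w, y_trn, dialects)
dev_means = dialect_means(X_dev_w, y_dev, dialects)
print([classify(iv, trn_means, dev_means, gamma=0.9) for iv in X_tst_w])

In the paper the interpolation weight is chosen by sweeping values and checking development-set accuracy (the figure caption in the record below reports a best value around 0.9 on DEV), so a small grid search over gamma would replace the fixed value used here.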
{ "question": [ "What is the architecture of the siamese neural network?", "How do they explore domain mismatch?", "How do they explore dialect variability?", "Which are the four Arabic dialects?" ], "question_id": [ "41844d1d1ee6d6d38f31b3a17a2398f87566ed92", "ae17066634bd2731a07cd60e9ca79fc171692585", "4fa2faa08eeabc09d78d89aaf0ea86bb36328172", "e87f47a293e0b49ab8b15fc6633d9ca6dc9de071" ], "nlp_background": [ "five", "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "two parallel convolutional networks, INLINEFORM0 , that share the same set of weights" ], "yes_no": null, "free_form_answer": "", "evidence": [ "To further distinguish speech from different Arabic dialects, while making speech from the same dialect more similar, we adopted a Siamese neural network architecture BIBREF24 based on an i-vector feature space. The Siamese neural network has two parallel convolutional networks, INLINEFORM0 , that share the same set of weights, INLINEFORM1 , as shown in Figure FIGREF5 (a). Let INLINEFORM2 and INLINEFORM3 be a pair of i-vectors for which we wish to compute a distance. Let INLINEFORM4 be the label for the pair, where INLINEFORM5 = 1 if the i-vectors INLINEFORM6 and INLINEFORM7 belong to same dialect, and INLINEFORM8 otherwise. To optimize the network, we use a Euclidean distance loss function between the label and the cosine distance, INLINEFORM9 , where INLINEFORM10" ], "highlighted_evidence": [ "The Siamese neural network has two parallel convolutional networks, INLINEFORM0 , that share the same set of weights, INLINEFORM1 , as shown in Figure FIGREF5 (a)." ] } ], "annotation_id": [ "c846cc418f9e862b1b933621e2bd177812ad2e9b" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "155e92d7fbd73d0d2786596f7c49aeec14b7bc7e" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "3c8d323143f29e12a4305c321691358a5cde0ade" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Egyptian (EGY)", "Levantine (LEV)", "Gulf (GLF)", "North African (NOR)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For the MGB-3 ADI task, the challenge organizers provided 13,825 utterances (53.6 hours) for the training (TRN) set, 1,524 utterances (10 hours) for a development (DEV) set, and 1,492 utterances (10.1 hours) for a test (TST) set. Each dataset consisted of five Arabic dialects: Egyptian (EGY), Levantine (LEV), Gulf (GLF), North African (NOR), and Modern Standard Arabic (MSA). Detailed statistics of the ADI dataset can be found in BIBREF23 . Table TABREF3 shows some facts about the evaluation conditions and data properties. Note that the development set is relatively small compared to the training set. However, it is matched with the test set channel domain. 
Thus, the development set provides valuable information to adapt or compensate the channel (recording) domain mismatch between the train and test sets." ], "highlighted_evidence": [ "Each dataset consisted of five Arabic dialects: Egyptian (EGY), Levantine (LEV), Gulf (GLF), North African (NOR), and Modern Standard Arabic (MSA)." ] } ], "annotation_id": [ "f21509337fad3ee6fc775a4c73362a71201a1760" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
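The first answer above describes the Siamese setup: two parallel networks with shared weights map a pair of i-vectors into a space where their cosine similarity is compared against the same-dialect label. The PyTorch sketch below mirrors that description; the paper uses convolutional encoders and reduces 400-dimensional i-vectors to 200 dimensions with no nonlinearity on the final layer, while the hidden-layer size, the use of plain linear layers, and the mean-squared loss between label and cosine score are simplifying assumptions here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    # Maps a 400-d i-vector to a 200-d embedding; no nonlinearity on the final layer.
    def __init__(self, in_dim=400, hidden_dim=300, out_dim=200):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):
        return self.out(torch.relu(self.hidden(x)))

class SiameseADI(nn.Module):
    # The same encoder (one set of weights) is applied to both inputs.
    def __init__(self):
        super().__init__()
        self.encoder = SharedEncoder()

    def forward(self, iv_a, iv_b):
        za, zb = self.encoder(iv_a), self.encoder(iv_b)
        return F.cosine_similarity(za, zb, dim=-1)

model = SiameseADI()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch of i-vector pairs; label 1 means same dialect, 0 means different.
iv_a, iv_b = torch.randn(8, 400), torch.randn(8, 400)
labels = torch.randint(0, 2, (8,)).float()

optimizer.zero_grad()
similarity = model(iv_a, iv_b)
loss = F.mse_loss(similarity, labels)   # squared distance between label and cosine score
loss.backward()
optimizer.step()
print(float(loss))

At test time, the trained encoder alone would be used to transform each i-vector to the 200-dimensional space, where cosine distance scoring is applied as described in the paper.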
{ "caption": [ "Fig. 2 Overall accuracy on DEV and TST sets by gamma: The DEV set shows the best performance at gamma = 0.91, while the TST set shows the best result at gamma=0.83. For our experiments, we used gamma = 0.91" ], "file": [ "1-Figure2-1.png" ] }
1806.11322
Bias in Semantic and Discourse Interpretation
In this paper, we show how game-theoretic work on conversation combined with a theory of discourse structure provides a framework for studying interpretive bias. Interpretive bias is an essential feature of learning and understanding but also something that can be used to pervert or subvert the truth. The framework we develop here provides tools for understanding and analyzing the range of interpretive biases and the factors that contribute to them.
{ "section_name": [ "Introduction", "Objective of the paper", "Some examples of bias", "Organization of the paper", "The model of interpretive bias", "Epistemic ME games", "Generalizing from the case study", "ME persuasion games", "ME truth games", "Looking ahead", "Conclusions" ], "paragraphs": [ [ "Bias is generally considered to be a negative term: a biased story is seen as one that perverts or subverts the truth by offering a partial or incomplete perspective on the facts. But bias is in fact essential to understanding: one cannot interpret a set of facts—something humans are disposed to try to do even in the presence of data that is nothing but noise [38]—without relying on a bias or hypothesis to guide that interpretation. Suppose someone presents you with the sequence INLINEFORM0 and tells you to guess the next number. To make an educated guess, you must understand this sequence as instantiating a particular pattern; otherwise, every possible continuation of the sequence will be equally probable for you. Formulating a hypothesis about what pattern is at work will allow you to predict how the sequence will play out, putting you in a position to make a reasonable guess as to what comes after 3. Formulating the hypothesis that this sequence is structured by the Fibonacci function (even if you don't know its name), for example, will lead you to guess that the next number is 5; formulating the hypothesis that the sequence is structured by the successor function but that every odd successor is repeated once will lead you to guess that it is 3. Detecting a certain pattern allows you to determine what we will call a history: a set of given entities or eventualities and a set of relations linking those entities together. The sequence of numbers INLINEFORM1 and the set of relation instances that the Fibonacci sequence entails as holding between them is one example of a history. Bias, then, is the set of features, constraints, and assumptions that lead an interpreter to select one history—one way of stitching together a set of observed data—over another.", "Bias is also operative in linguistic interpretation. An interpreter's bias surfaces, for example, when the interpreter connects bits of information content together to resolve ambiguities. Consider: . Julie isn't coming. The meeting has been cancelled.", "While these clauses are not explicitly connected, an interpreter will typically have antecedent biases that lead her to interpret eventualities described by the two clauses as figuring in one of two histories: one in which the eventuality described by the first clause caused the second, or one in which the second caused the first. Any time that structural connections are left implicit by speakers—and this is much if not most of the time in text— interpreters will be left to infer these connections and thereby potentially create their own history or version of events.", "Every model of data, every history over that data, comes with a bias that allows us to use observed facts to make predictions; bias even determines what kind of predictions the model is meant to make. 
Bayesian inference, which underlies many powerful models of inference and machine learning, likewise relies on bias in several ways: the estimate of a state given evidence depends upon a prior probability distribution over states, on assumptions about which parameters are probabilistically independent, and on assumptions about the kind of conditional probability distribution that each parameter abides by (e.g., normal distribution, noisy-or, bimodal). Each of these generates a (potentially different) history." ], [ "In this paper, we propose a program for research on bias. We will show how to model various types of bias as well as the way in which bias leads to the selection of a history for a set of data, where the data might be a set of nonlinguistic entities or a set of linguistically expressed contents. In particular, we'll look at what people call “unbiased” histories. For us these also involve a bias, what we call a “truth-seeking bias”. This is a bias that gets at the truth or acceptably close to it. Our model can show us what such a bias looks like. And we will examine the question of whether it is possible to find such a truth-oriented bias for a set of facts, and if so, under what conditions. Can we detect and avoid biases that don't get at the truth but are devised for some other purpose?", "Our study of interpretive bias relies on three key premises. The first premise is that histories are discursive interpretations of a set of data in the sense that, like discourse interpretations, they link together a set of entities with semantically meaningful relations. As such they are amenable to an analysis using the tools used to model a discourse's content and structure. The second is that a bias consists of a purpose or goal that the histories it generates are built to achieve and that agents build histories for many different purposes—to discover the truth or to understand, but also to conceal the truth, to praise or disparage, to persuade or to dissuade. To properly model histories and the role of biases in creating them, we need a model of the discourse purposes to whose end histories are constructed and of the way that they, together with prior assumptions, shape and determine histories. The third key premise of our approach is that bias is manifested in and conveyed through histories, and so studying histories is crucial for a better understanding of bias." ], [ "Let's consider the following example of biased interpretation of a conversation. Here is an example analyzed in BIBREF0 to which we will return in the course of the paper. a. Reporter: On a different subject, is there a reason that the Senator won't say whether or not someone else bought some suits for him? b. Sheehan: Rachel, the Senator has reported every gift he has ever received. c. Reporter: That wasn't my question, Cullen. d. Sheehan: (i) The Senator has reported every gift he has ever received. (ii) We are not going to respond to unnamed sources on a blog. e. Reporter: So Senator Coleman's friend has not bought these suits for him? Is that correct? f. Sheehan: The Senator has reported every gift he has ever received.", "Sheehan continues to repeat, “The Senator has reported every gift he has ever received” seven more times in two minutes to every follow-up question by the reporter corps. http://www.youtube.com/watch?v=VySnpLoaUrI.
For convenience, we denote this sentence uttered by Sheehan (which is an EDU in the languare of SDRT as we shall see presently) as INLINEFORM0 .", "Now imagine two “juries,” onlookers or judges who interpret what was said and evaluate the exchange, yielding differing interpretations. The interpretations differ principally in how the different contributions of Sheehan and the reporter hang together. In other words, the different interpretations provide different discourse structures that we show schematically in the graphs below. The first is one in which Sheehan's response INLINEFORM0 in SECREF3 b is somewhat puzzling and not taken as an answer to the reporter's question in SECREF3 a. In effect this “jury” could be the reporter herself. This Jury then interprets the move in SECREF3 c as a correction of the prior exchange. The repetition of INLINEFORM1 in SECREF3 d.ii is taken tentatively as a correction of the prior exchange (that is, the moves SECREF3 a, SECREF3 b and SECREF3 c together), which the Jury then takes the reporter to try to establish with SECREF3 e. When Sheehan repeats SECREF3 a again in SECREF3 f, this jury might very well take Sheehan to be evading all questions on the subject.", "A different Jury, however, might have a different take on the conversation as depicted in the discourse structure below. Such a jury might take INLINEFORM0 to be at least an indirect answer to the question posed in SECREF3 a, and as a correction to the Reporter's evidently not taking INLINEFORM1 as an answer. The same interpretation of INLINEFORM2 would hold for this Jury when it is repeated in SECREF3 f. Such a Jury would be a supporter of Sheehan or even Sheehan himself. What accounts for these divergent discourse structures? We will argue that it is the biases of the two Juries that create these different interpretations. And these biases are revealed at least implicitly in how they interpret the story: Jury 1 is at the outset at least guarded, if not skeptical, in its appraisal of Sheehan's interest in answering the reporter's questions. On the other hand, Jury 2 is fully convinced of Sheehan's position and thus interprets his responses much more charitably. BIBREF0 shows formally that there is a co-dependence between biases and interpretations; a certain interpretation created because of a certain bias can in turn strengthen that bias, and we will sketch some of the details of this story below.", "The situation of our two juries applies to a set of nonlinguistic facts. In such a case we take our “jury” to be the author of a history over that set of facts. The jury in this case evaluates and interprets the facts just as our juries did above concerning linguistic messages. To tell a history about a set of facts is to connect them together just as discourse constituents are connected together. And these connections affect and may even determine the way the facts are conceptualized BIBREF1 . Facts typically do not wear their connections to other facts on their sleeves and so how one takes those connections to be is often subject to bias. Even if their characterization and their connections to other facts are “intuitively clear”, our jury may choose to pick only certain connections to convey a particular history or even to make up connections that might be different. One jury might build a history over the set of facts that conveys one set of ideas, while the other might build a quite different history with a different message. 
Such histories reflect the purposes and assumptions that were exploited to create that structure.", "As an example of this, consider the lead paragraphs of articles from the New York Times, Townhall and Newsbusters concerning the March for Science held in April, 2017.", "The March for Science on April 22 may or may not accomplish the goals set out by its organizers. But it has required many people who work in a variety of scientific fields — as well as Americans who are passionate about science — to grapple with the proper role of science in our civic life. The discussion was evident in thousands of responses submitted to NYTimes.com ahead of the march, both from those who will attend and those who are sitting it out.", "", "–New York Times", "Do you have march fatigue yet? The left, apparently, does not, so we're in for some street theater on Earth Day, April 22, with the so-called March for Science. It's hard to think of a better way to undermine the public's faith in science than to stage demonstrations in Washington, D.C., and around the country modeled on the Women's March on Washington that took place in January. The Women's March was an anti-Donald Trump festival. Science, however, to be respected, must be purely the search for truth. The organizers of this “March for Science\" – by acknowledging that their demonstration is modeled on the Women's March – are contributing to the politicization of science, exactly what true upholders of science should be at pains to avoid.", "", "–Townhall", "Thousands of people have expressed interest in attending the “March for Science” this Earth Day, but internally the event was fraught with conflict and many actual scientists rejected the march and refused to participate.", "", "–Newsbusters", "These different articles begin with some of the same basic facts: the date and purpose of the march, and the fact that the march's import for the science community is controversial, for example. But bias led the reporters to stitch together very different histories. The New York Times, for instance, interprets the controversy as generating a serious discussion about “the proper role of science in our civic life,” while Townhall interprets the march as a political stunt that does nothing but undermine science.", "While the choice of wording helps to convey bias, just as crucial is the way that the reporters portray the march as being related to other events. Which events authors choose to include in their history, which they leave out, and the way the events chosen relate to the march are crucial factors in conveying bias. Townhall's bias against the March of Science expressed in the argument that it politicizes science cannot be traced back to negative opinion words; it relies on a comparison between the March for Science and the Women's March, which is portrayed as a political, anti-Trump event. Newsbusters takes a different track: the opening paragraph conveys an overall negative perspective on the March for Science, despite its neutral language, but it achieves this by contrasting general interest in the march with a claimed negative view of the march by many “actual scientists.” On the other hand, the New York Times points to an important and presumably positive outcome of the march, despite its controversiality: a renewed look into the role of science in public life and politics. 
Like Newsbusters, it lacks any explicit evaluative language and relies on the structural relations between events to convey an overall positive perspective; it contrasts the controversy surrounding the march with a claim that the march has triggered an important discussion, which is in turn buttressed by the reporter's mentioning of the responses of the Times' readership.", "A formally precise account of interpretive bias will thus require an analysis of histories and their structure and to this end, we exploit Segmented Discourse Representation Theory or SDRT BIBREF2 , BIBREF3 . As the most precise and well-studied formal model of discourse structure and interpretation to date, SDRT enables us to characterize and to compare histories in terms of their structure and content. But neither SDRT nor any other, extant theoretical or computational approach to discourse interpretation can adequately deal with the inherent subjectivity and interest relativity of interpretation, which our study of bias will illuminate. Message Exchange (ME) Games, a theory of games that builds on SDRT, supplements SDRT with an analysis of the purposes and assumptions that figure in bias. While epistemic game theory in principle can supply an analysis of these assumptions, it lacks linguistic constraints and fails to reflect the basic structure of conversations BIBREF4 . ME games will enable us not only to model the purposes and assumptions behind histories but also to evaluate their complexity and feasibility in terms of the existence of winning strategies.", "Bias has been studied in cognitive psychology and empirical economics BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF5 , BIBREF13 . Since the seminal work of Kahneman and Tversky and the economist Allais, psychologists and empirical economists have provided valuable insights into cognitive biases in simple decision problems and simple mathematical tasks BIBREF14 . Some of this work, for example the bias of framing effects BIBREF7 , is directly relevant to our theory of interpretive bias. A situation is presented using certain lexical choices that lead to different “frames”: INLINEFORM0 of the people will live if you do INLINEFORM1 (frame 1) versus INLINEFORM2 of the people will die if you do INLINEFORM3 (frame 2). In fact, INLINEFORM4 , the total population in question; so the two consequents of the conditionals are equivalent. Each frame elaborates or “colors” INLINEFORM5 in a way that affects an interpreter's evaluation of INLINEFORM6 . These frames are in effect short histories whose discourse structure explains their coloring effect. Psychologists, empirical economists and statisticians have also investigated cases of cognitive bias in which subjects deviate from prescriptively rational or independently given objective outcomes in quantitative decision making and frequency estimation, even though they arguably have the goal of seeking an optimal or “true” solution. In a general analysis of interpretive bias like ours, however, it is an open question whether there is an objective norm or not, whether it is attainable and, if so, under what conditions, and whether an agent builds a history for attaining that norm or for some other purpose." ], [ "Our paper is organized as follows. Section SECREF2 introduces our model of interpretive bias. Section SECREF3 looks forward towards some consequences of our model for learning and interpretation. We then draw some conclusions in Section SECREF4 . 
A detailed and formal analysis of interpretive bias has important social implications. Questions of bias are not only timely but also pressing for democracies that are having a difficult time dealing with campaigns of disinformation and a society whose information sources are increasingly fragmented and whose biases are often concealed. Understanding linguistic and cognitive mechanisms for bias precisely and algorithmically can yield valuable tools for navigating in an informationally bewildering world." ], [ "As mentioned in Section SECREF1 , understanding interpretive bias requires two ingredients. First, we need to know what it is to interpret a text or to build a history over a set of facts. Our answer comes from analyzing discourse structure and interpretation in SDRT BIBREF2 , BIBREF3 . A history for a text connects its elementary information units, units that convey propositions or describe events, using semantic relations that we call discourse relations to construct a coherent and connected whole. Among such relations are logical, causal, evidential, sequential and resemblance relations as well as relations that link one unit with an elaboration of its content. It has been shown in the literature that discourse structure is an important factor in accurately extracting sentiments and opinions from text BIBREF15 , BIBREF16 , BIBREF17 , and our examples show that this is the case for interpretive bias as well." ], [ "The second ingredient needed to understand interpretive bias is the connection between on the one hand the purpose and assumption behind telling a story and on the other the particular way in which that story is told. A history puts the entities to be understood into a structure that serves certain purposes or conversational goals BIBREF18 . Sometimes the history attempts to get at the “truth”, the true causal and taxonomic structure of a set of events. But a history may also serve other purposes—e.g., to persuade, or to dupe an audience. Over the past five years, BIBREF4 , BIBREF19 , BIBREF20 , BIBREF21 have developed an account of conversational purposes or goals and how they guide strategic reasoning in a framework called Message Exchange (ME) Games. ME games provide a general and formally precise framework for not only the analysis of conversational purposes and conversational strategies, but also for the typology of dialogue games from BIBREF22 and finally for the analysis of strategies for achieving what we would intuitively call “unbiased interpretation”, as we shall see in the next section. In fact in ME Games, conversational goals are analyzed as properties, and hence sets, of conversations; these are the conversations that “go well” for the player. ME games bring together the linguistic analysis of SDRT with a game theoretic approach to strategic reasoning; in an ME game, players alternate making sequences of discourse moves such as those described in SDRT, and a player wins if the conversation constructed belongs to her winning condition, which is a subset of the set of all possible conversational plays. ME games are designed to analyze the interaction between conversational structure, purposes and assumptions, in the absence of assumptions about cooperativity or other cognitive hypotheses, which can cause problems of interpretability in other frameworks BIBREF23 . ME games also assume a Jury that sets the winning conditions and thus evaluates whether the conversational moves made by players or conversationalists are successful or not. 
The Jury can be one or both of the players themselves or some exogenous body.", "To define an ME game, we first fix a finite set of players INLINEFORM0 and let INLINEFORM1 range over INLINEFORM2 . For simplicity, we consider here the case where there are only two players, that is INLINEFORM3 , but the notions can be easily lifted to the case where there are more than two players. Here, Player INLINEFORM4 will denote the opponent of Player INLINEFORM5 . We need a vocabulary INLINEFORM6 of moves or actions; these are the discourse moves as defined by the language of SDRT. The intuitive idea behind an ME game is that a conversation proceeds in turns where in each turn one of the players `speaks' or plays a string of elements from INLINEFORM7 . In addition, in the case of conversations, it is essential to keep track of “who says what”. To model this, each player INLINEFORM8 was assigned a copy INLINEFORM9 of the vocabulary INLINEFORM10 which is simply given as INLINEFORM11 . As BIBREF4 argues, a conversation may proceed indefinitely, and so conversations correspond to plays of ME games, typically denoted as INLINEFORM12 , which are the union of finite or infinite sequences in INLINEFORM13 , denoted as INLINEFORM14 and INLINEFORM15 respectively. The set of all possible conversations is thus INLINEFORM16 and is denoted as INLINEFORM17 . [ME game BIBREF4 ] A Message Exchange game (ME game), INLINEFORM18 , is a tuple INLINEFORM19 where INLINEFORM20 is a Jury. Due to the ambiguities in language, discourse moves in SDRT are underspecified formulas that may yield more than one fully specified discourse structure or histories for the conversation; a resulting play in an ME game thus forms one or more histories or complete discourse structures for the entire conversation.", "To make ME games into a truly realistic model of conversation requires taking account of the limited information available to conversational participants. BIBREF0 imported the notion of a type space from epistemic game theory BIBREF24 to take account of this. The type of a player INLINEFORM0 or the Jury is an abstract object that is used to code-up anything and everything about INLINEFORM1 or the Jury, including her behavior, the way she strategizes, her personal biases, etc. BIBREF24 . Let INLINEFORM2 denote the set of strategies for Player INLINEFORM3 in an ME game; let INLINEFORM4 ; and let INLINEFORM5 be the set of strategies of INLINEFORM6 given play INLINEFORM7 . [Harsanyi type space BIBREF24 ] A Harsanyi type space for INLINEFORM8 is a tuple INLINEFORM9 such that INLINEFORM10 and INLINEFORM11 , for each INLINEFORM12 , are non-empty (at-most countable) sets called the Jury-types and INLINEFORM13 -types respectively and INLINEFORM14 and INLINEFORM15 are the beliefs of Player INLINEFORM16 and the Jury respectively at play INLINEFORM17 . BIBREF0 defines the beliefs of the players and Jury using the following functions. [Belief function] For every play INLINEFORM18 the (first order) belief INLINEFORM19 of player INLINEFORM20 at INLINEFORM21 is a pair of measurable functions INLINEFORM22 where INLINEFORM23 is the belief function and INLINEFORM24 is the interpretation function defined as: INLINEFORM25 INLINEFORM26 ", "where INLINEFORM0 is the set of probability distributions over the corresponding set. 
Similarly the (first order) belief INLINEFORM1 of the Jury is a pair of measurable functions INLINEFORM2 where the belief function INLINEFORM3 and the interpretation function INLINEFORM4 are defined as: INLINEFORM5 INLINEFORM6 ", " Composing INLINEFORM0 and INLINEFORM1 together over their respective outputs reveals a correspondence between interpretations of plays and types for a fixed Jury type INLINEFORM2 : every history yields a distribution over types for the players and every tuple of types for the players and the Jury fixes a distribution over histories. We'll call this the types/history correspondence.", "An epistemic ME game is an ME game with a Harsanyi type space and a type/history correspondence as we've defined it. By adding types to an ME game, we provide the beginnings of a game theoretic model of interpretive bias that we believe is completely new. Our definition of bias is now: [Interpretive Bias] An interpretive bias in an epistemic ME game is the probability distribution over types given by the belief function of the conversationalists or players, or the Jury. Note that in an ME game there are typically several interpretive biases at work: each player has her own bias, as does the Jury.", "Outside of language, statisticians study bias; and sample bias is currently an important topic. To do so, they exploit statistical models with a set of parameters and random variables, which play the role of our types in interpretive bias. But for us, the interpretive process is already well underway once the model, with its constraints, features and explanatory hypotheses, is posited; at least a partial history, or set of histories, has already been created.", "The ME model in BIBREF0 not only makes histories dependent on biases but also conditionally updates an agent's bias, the probability distribution, given the interpretation of the conversation or more generally a course of events as it has so far unfolded and crucially as the agent has so far interpreted it. This means that certain biases are reinforced as a history develops, and in turn strengthen the probability of histories generated by such biases in virtue of the types/histories correspondence. We now turn to an analysis of SECREF3 discussed in BIBREF4 , BIBREF0 where arguably this happens." ], [ "The Sheehan case study in BIBREF0 shows the interactions of interpretation and probability distributions over types. We'll refer to content that exploit assumptions about types' epistemic content. SECREF3 also offers a case of a self-confirming bias with Jury INLINEFORM0 . But the analysis proposed by BIBREF0 leaves open an important open question about what types are relevant to constructing a particular history and only examines one out of many other cases of biased interpretation. In epistemic game models, the relevant types are typically given exogenously and Harsanyi's type space construction is silent on this question. The question seems a priori very hard to answer, because anything and everything might be relevant to constructing a history.", "In SECREF3 , the relevant types have to do with the interpreters' or Jurys' attitudes towards the commitments of the spokesman and Coleman. These attitudes might reinforce or be a product of other beliefs like beliefs about the spokesman's political affiliations. 
But we will put forward the following simplifying hypothesis:", "Hypothesis 1: epistemic content is based on assumptions about types defined by different attitudes to commitments by the players and or the Jury to the contents of a discourse move or sequence of discourse moves.", "Hypothesis 2: These assumptions can be represented as probability distributions over types.", "In SECREF3 , we've only looked at epistemic content from the point of view of the interpreter, which involves types for the Jury defined in terms of probability distributions over types for the speaker. But we can look at subjective interpretations from the perspective of the speaker as well. In other words, we look at how the speaker might conceptualize the discourse situation, in particular her audience. We illustrate this with another type of content based on types. Consider the following move by Marion Le Pen, a leader of the French nationalist, right-wing party le Front National in which she recently said: . La France était la fille aînée de l'église. Elle est en passe de devenir la petite nièce de l'Islam. (France was once the eldest daughter of the Catholic church. It is now becoming the little niece of Islam.)", " SECREF8 appeals to what the speaker takes to be her intended audience's beliefs about Islam, Catholicism and France. In virtue of these beliefs, this discourse move takes on a loaded racist meaning, conveying an assault on France and its once proud status by people of North African descent. Without those background beliefs, however, Le Pen's statement might merely be considered a somewhat curious description of a recent shift in religious majorities. This is known as a “dog whistle,” in which a discourse move communicates a content other than its grammatically determined content to a particular audience BIBREF25 . While BIBREF26 proposes that such messages are conventional implicatures, BIBREF25 , BIBREF27 show that dog whistle content doesn't behave like other conventional implicatures; in terms of tests about “at issue content”, dog whistle content patterns with other at issue content, not with the content associated with conventional implicatures in the sense of BIBREF28 . This also holds of content that resolves ambiguities as in SECREF3 .", "The dogwhistle content seems to be driven by the hearer's type in SECREF8 or the speaker's beliefs about the interpreter's or hearer's type. Generalizing from BIBREF29 , the use of the historical expression la fille ainée de l'église contrasted with la petite nièce has come to encode a type, in much the same way that dropping the final g in present participles and gerunds has come to signify a type BIBREF29 , for the speaker INLINEFORM0 about hearer INLINEFORM1 ; e.g., INLINEFORM2 will believe that INLINEFORM3 has the strategy of using just this language to access the loaded interpretation and moreover will identify with its content. Because this meaning comes about in virtue of the hearer's type, the speaker is in a position to plausibly deny that they committed to conveying a racist meaning, which is a feature of such dog whistles. In fact, we might say that all dogwhistle content is so determined.", "We can complicate the analysis by considering the speaker's types, the interlocutor's types and types for the Jury when these three components of an ME game are distinct (i.e. the Jury is distinct from the interlocutors). 
A case like this is the Bronston example discussed in BIBREF0 .", "By looking at dogwhistles, we've now distinguished two kinds of epistemic content that depends on an interpreters' type. The epistemic content may as in SECREF3 fill out the meaning of an underspecified play to produce a determinate history. Dog whistles add content to a specific discourse unit that goes beyond its grammatically determined meaning. More formally, we can define these two kinds of epistemic content using the machinery of ME games. Given that plays in an ME game are sequences of discourse moves, we can appeal to the semantics of these moves and a background consequence relation INLINEFORM0 defined as usual. In addition, a play INLINEFORM1 in an ME game may itself be a fully specified history or a sequence of discourse moves that is compatible with several fully specified histories given a particular interpreter's or Jury's type INLINEFORM2 . Let INLINEFORM3 be the set of histories (FLFs) compatible with a play INLINEFORM4 given an interpreter or Jury type INLINEFORM5 . INLINEFORM6 will be ambiguous and open to epistemic content supplementation just in case: (i) INLINEFORM7 for any type INLINEFORM8 for a linguistically competent jury, and (ii) there are INLINEFORM9 , such that INLINEFORM10 and INLINEFORM11 are semantically distinct (neither one entails the other). Now suppose that a play INLINEFORM12 gives rise through the grammar to a history, INLINEFORM13 . Then INLINEFORM14 is a dog whistle for INLINEFORM15 just in case: (i) INLINEFORM16 , (ii) INLINEFORM17 and (iii) there is a INLINEFORM18 that can positively affect some jury perhaps distinct from INLINEFORM19 and such that INLINEFORM20 . On this definition, a player who utters such a play INLINEFORM21 always has the excuse that what he/she actually meant was INLINEFORM22 when challenged—which seems to be one essential feature of a dog whistle.", "Plays with such semantic features may not be a pervasive feature of conversation; not every element is underspecified or is given a content over and above its linguistically determined one. But in interpreting a set of nonlinguistic facts INLINEFORM0 or data not already connected together in a history, that is in constructing a history over INLINEFORM1 , an interpreter INLINEFORM2 , who in this case is a speaker or writer, must appeal to her beliefs, which includes her beliefs about the Jury to whom her discourse actions are directed. So certainly the type of INLINEFORM3 , which includes beliefs about the Jury for the text, is relevant to what history emerges. The facts in INLINEFORM4 don't wear their relational properties to other facts on their sleeves so to speak, and so INLINEFORM5 has to supply the connections to construct the history. In effect for a set of non linguistically given facts, “ambiguities of attachment,” whose specification determines how the facts in INLINEFORM6 are related to each other, are ubiquitous and must be resolved in constructing a history. 
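The semantic-distinctness clause used in the definitions of ambiguity and dog whistles above can be made concrete with a small sketch. The elided clauses are not reconstructed here; the entailment facts and history names are toy stand-ins for the set of histories compatible with a play under a Jury type and for the background consequence relation.

```python
from itertools import combinations

# Toy consequence relation: (h1, h2) means h1 entails h2.  Hypothetical values.
ENTAILS = {("h_loaded", "h_neutral")}

def semantically_distinct(h1, h2):
    """Neither history entails the other."""
    return (h1, h2) not in ENTAILS and (h2, h1) not in ENTAILS

def open_to_supplementation(compatible_histories):
    """Clause (ii) above: some pair of compatible histories is semantically
    distinct, so epistemic content can decide between them."""
    return any(semantically_distinct(h1, h2)
               for h1, h2 in combinations(sorted(compatible_histories), 2))

print(open_to_supplementation({"h_loaded", "h_neutral"}))    # False: one entails the other
print(open_to_supplementation({"h_loaded", "h_unrelated"}))  # True
```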
The speaker or “history creator” INLINEFORM7 's background beliefs determine the play and the history an interpreter INLINEFORM8 takes away.", "In the case of constructing a history over a set of nonlinguistic facts INLINEFORM0 , the interpreter INLINEFORM1 's task of getting the history INLINEFORM2 has constructed will not reliably succeed unless one of two conditions are met: either INLINEFORM3 and INLINEFORM4 just happen to share the relevant beliefs (have close enough types) so that they construct the same histories from INLINEFORM5 , or INLINEFORM6 uses linguistic devices to signal the history. ME games require winning conversations, and by extension texts, to be (mostly) coherent, which means that the discourse connections between the elements in the history must be largely determined in any successful play, or can be effectively determined by INLINEFORM14 . This means that INLINEFORM15 will usually reveal relevant information about her type through her play, in virtue of the type/history correspondence, enough to reconstruct the history or much of it. In the stories on the March for Science, for example, the reporters evoke very different connections between the march and other facts. The Townhall reporter, for instance, connects the March for Science to the Women's march and “leftwing” political manifestations and manifests a negative attitude toward the March. But he does so so unambiguously that little subjective interpretation on the part of the interpreter or Jury is needed to construct the history or assign a high probability to a type for INLINEFORM16 that drives the story.", "This discussion leads to the following observations. To construct a history over a set of disconnected nonlinguistic facts INLINEFORM0 , in general a Jury needs to exploit linguistic pointers to the connections between elements of INLINEFORM1 , if the speaker is to achieve the goal of imparting a (discourse) coherent story, unless the speaker knows that the Jury or interpreter has detailed knowledge of her type. The speaker may choose to leave certain elements underspecified or ambiguous, or use a specified construction, to invoke epistemic content for a particular type that she is confident the Jury instantiates. How much so depends on her confidence in the type of the Jury. This distribution or confidence level opens a panoply of options about the uses of epistemic content: at one end there are histories constructed from linguistic cues with standard, grammatically encoded meanings; at the other end there are histories generated by a code shared with only a few people whose types are mutually known. As the conversation proceeds as we have seen, probabilities about types are updated and so the model should predict that a speaker may resort to more code-like messages in the face of feedback confirming her hypotheses about the Jury's type (if such feedback can be given) and that the speaker may revert to a more message exploiting grammatical cues in the face of feedback disconfirming her hypotheses about the Jury's type. Thus, the epistemic ME model predicts a possible change in register as the speaker receives more information about the Jury's type, though this change is subject to other conversational goals coded in the speaker's victory condition for the ME game." ], [ "We've now seen how histories in ME games bring an interpretive bias, the bias of the history's creator, to the understanding of a certain set of facts. 
We've also seen how epistemic ME games allow for the introduction of epistemic content in the interpretation of plays. Each such epistemic interpretation is an instance of a bias that goes beyond the grammatically determined meaning of the play and is dependent upon the Jury's or interpreter's type. We now make explicit another crucial component of ME games and their relation to bias: the players' winning conditions or discourse goals. Why is this relevant to a study of bias? The short answer is that players' goals tells us whether two players' biases on a certain subject are compatible or resolvable or not. Imagine that our two Juries in SECREF3 shared the same goal—of getting at the truth behind the Senator's refusal to comment about the suits. They might still have come up with the opposing interpretations that they did in our discussion above. But they could have discussed their differences, and eventually would have come to agreement, as we show below in Proposition SECREF19 .", "However, our two Juries might have different purposes too. One Jury might have the purpose of finding out about the suits, like the reporters; the other might have the purpose just to see Senator Coleman defended, a potentially quite different winning condition and collection of histories. In so doing we would identify Jury 1 with the reporters or at least Rachel, and Jury 2 with Sheehan. Such different discourse purposes have to be taken into account in attempting to make a distinction between good and bad biases. From the perspective of subjective rationality or rationalizability (an important criterion in epistemic game theory BIBREF33 ), good biases for a particular conversation should be those that lead to histories in the winning condition, histories that fulfill the discourse purpose; bad biases lead to histories that do not achieve the winning condition. The goals that a Jury or interpreter INLINEFORM0 adopts and her biases go together; INLINEFORM1 's interpretive bias is good for speaker INLINEFORM2 , if it helps INLINEFORM3 achieve her winning condition. Hence, INLINEFORM4 's beliefs about INLINEFORM5 are crucial to her success and rationalizable behavior. Based on those beliefs INLINEFORM6 's behavior is rationalizable in the sense we have just discussed. If she believes Jury 2 is the one whose winning condition she should satisfy, there is no reason for her to change that behavior. Furthermore, suppose Jury 1 and Jury 2 discuss their evaluations; given that they have different goals, there is no reason for them to come to an agreement with the other's point of view either. Both interpretations are rationalizable as well, if the respective Juries have the goals they do above. A similar story applies to constructing histories over a set of facts, in so far as they had different conceptions of winning conditions set by their respective Juries. In contrast to Aumann's dictum BIBREF32 , in our scenario there is every reason to agree to disagree!", "Understanding such discourse goals is crucial to understanding bias for at least two reasons. The first is that together with the types that are conventionally coded in discourse moves, they fix the space of relevant types. In SECREF3 , Jury 1 is sensitive to a winning condition in which the truth about the suits is revealed, what we call a truth oriented goal. The goal of Jury 2, on the other hand, is to see that Coleman is successfully defended, what we call a persuasion goal. In fact, we show below that a truth oriented goal is a kind of persuasion goal. 
Crucial to the accomplishment of either of these goals is for the Jury INLINEFORM0 to decide whether the speaker INLINEFORM1 is committing to a definite answer that she will defend (or better yet an answer that she believes) on a given move to a question from her interlocutor or is INLINEFORM2 trying to avoid any such commitments. If it's the latter, then INLINEFORM3 would be epistemically rash to be persuaded. But the two possibilities are just the two types for Sheehan that are relevant to the interpretation of the ambiguous moves in SECREF3 . Because persuasive goals are almost ubiquitous at least as parts of speaker goals, not only in conversation but also for texts (think of how the reporters in the examples on the March for Science are seeking to convince us of a particular view of the event), we claim that these two types are relevant to the interpretation of many, if not all, conversations. In general we conjecture that the relevant types for interpretation may all rely on epistemic requirements for meeting various kinds of conversational goals.", "The second reason that discourse goals are key to understanding bias is that by analyzing persuasion goals in more detail we get to the heart of what bias is. Imagine a kind of ME game played between two players, E(loïse) and A(belard), where E proposes and tries to defend a particular interpretation of some set of facts INLINEFORM0 , and A tries to show the interpretation is incorrect, misguided, based on prejudice or whatever will convince the Jury to be dissuaded from adopting E's interpretation of INLINEFORM1 . As in all ME games, E's victory condition in an ME persuasion game is a set of histories determined by the Jury, but but it crucially depends on E's and A's beliefs about the Jury: E has to provide a history INLINEFORM2 over INLINEFORM3 ; A has to attack that history in ways that accord with her beliefs about the Jury; and E has to defend INLINEFORM4 in ways that will, given her beliefs, dispose the Jury favorably to it.", "An ME persuasion game is one where E and A each present elements of INLINEFORM0 and may also make argumentative or attack moves in their conversation. At each turn of the game, A can argue about the history constructed by E over the facts given so far, challenge it with new facts or attack its assumptions, with the result that E may rethink and redo portions her history over INLINEFORM1 (though not abandon the original history entirely) in order to render A's attack moot. E wins if the history she finally settles on for the facts in INLINEFORM2 allows her to rebut every attack by A; A wins otherwise. A reasonable precisification of this victory condition is that the proportions of good unanswered attacks on the latest version of E's history with respect to the total number of attacks at some point continues to diminish and eventually goes to 0. This is a sort of limit condition: if we think of the initial segments INLINEFORM3 E's play as producing an “initial” history INLINEFORM4 over INLINEFORM5 , as INLINEFORM6 , INLINEFORM7 has no unanswered counterattacks by A that affect the Jury. Such winning histories are extremely difficult to construct; as one can see from inspection, no finite segment of an infinite play guarantees such a winning condition. 
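The limit condition on E's winning condition can be approximated on finite prefixes of a play, keeping in mind the caveat just made that no finite segment can guarantee it. The sketch below assumes a simple, invented log format in which each attack on the latest version of E's history is marked as answered or unanswered.

```python
def unanswered_ratio(attack_log):
    """attack_log: booleans, True = attack still unanswered against the
    latest version of E's history."""
    return sum(attack_log) / len(attack_log) if attack_log else 0.0

def looks_e_defensible(prefixes, tolerance=0.05):
    """Finite-prefix heuristic for the limit condition: the proportion of
    unanswered attacks keeps diminishing and ends up close to zero.  This is
    only an approximation of the infinitary winning condition."""
    ratios = [unanswered_ratio(log) for log in prefixes]
    non_increasing = all(a >= b for a, b in zip(ratios, ratios[1:]))
    return non_increasing and (not ratios or ratios[-1] < tolerance)

# E answers more and more of A's attacks as her history is revised.
prefixes = [
    [True],
    [True, False],
    [True, False, False, False],
    [False] * 8,
]
print(looks_e_defensible(prefixes))  # True
```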
We shall call a history segment that is part of a history in INLINEFORM8 's winning condition, as we have just characterized it, E-defensible.", "The notion of an ME persuasion game opens the door to a study of attacks, a study that can draw on work in argumentation and game theory BIBREF34 , BIBREF35 , BIBREF36 . ME games and ME persuasion games in particular go beyond the work just cited, however, because our notion of an effective attack involves the type of the Jury as a crucial parameter; the effectiveness of an attack for a Jury relies on its prejudices, technically its priors about the game's players' types (and hence their beliefs and motives). For instance, uncovering an agent's racist bias when confronted with a dog whistle like that in SECREF8 is an effective attack technique if the respondent's type for the Jury is such that it is sensitive to such accusations, while it will fail if the Jury is insensitive to such accusations. ME games make plain the importance in a persuasion game of accurately gauging the beliefs of the Jury!" ], [ "We now turn to a special kind of ME persuasion game with what we call a disinterested Jury. The intuition behind a disinterested Jury is simple: such a Jury judges the persuasion game based only on the public commitments that follow from the discourse moves that the players make. It is not predisposed to either player in the game. While it is difficult to define such a disinterested Jury in terms of its credences, its probability distribution over types, we can establish some necessary conditions. We first define the notion of the dual of a play of an ME game. Let INLINEFORM0 be an element of the labeled vocabulary with player INLINEFORM1 . Define its dual as: INLINEFORM2 ", "The dual of a play INLINEFORM0 then is simply the lifting of this operator over the entire sequence of INLINEFORM1 . That is, if INLINEFORM2 , where INLINEFORM3 then INLINEFORM4 ", "Then, a disinterested Jury must necessarily satisfy:", "Indifference towards player identity: A Jury INLINEFORM0 is unbiased only if for every INLINEFORM1 , INLINEFORM2 iff INLINEFORM3 .", "Symmetry of prior belief: A Jury is unbiased only if it has symmetrical prior beliefs about the player types.", "Clearly, the Jury INLINEFORM0 does not have symmetrical prior beliefs nor is it indifferent to player identity, while Jury INLINEFORM1 arguably has symmetrical beliefs about the participants in SECREF3 . Note also that while Symmetry of prior belief is satisfied by a uniform distribution over all types, it does not entail such a uniform distribution. Symmetry is closely related to the principle of maximum entropy used in fields as diverse as physics and computational linguistics BIBREF37 , according to which the absence of any information about the players would entail a uniform probability distribution over types.", "A disinterested Jury should evaluate a conversation based solely on the strength of the points put forth by the participants. But, crucially, it should also evaluate the conversation in light of the right points. So, for instance, ad hominem attacks or colorful insults by A should not sway the Jury in favor of A. The Jury should evaluate only on the basis of how the points brought forward affect its credences under conditionalization. A disinterested Jury is impressed only by certain attacks from A, ones based on evidence (E's claims aren't supported by the facts) and on formal properties of coherence, consistency and explanatory or predictive power. 
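The two necessary conditions on a disinterested Jury given above can also be stated operationally. In the sketch below, a play is a list of (player, move) pairs, the Jury is represented by a scoring function, and the priors are dictionaries over types; all of these representations, and the toy Jury, are our own illustrative assumptions.

```python
def dual(play):
    """Swap the player labels E <-> A on every labelled move of the play."""
    swap = {"E": "A", "A": "E"}
    return [(swap[player], move) for player, move in play]

def indifferent_to_identity(jury_score, plays):
    """Necessary condition 1: the Jury scores every play and its dual alike."""
    return all(jury_score(p) == jury_score(dual(p)) for p in plays)

def symmetric_priors(prior_over_e_types, prior_over_a_types):
    """Necessary condition 2: the Jury's prior beliefs about the two players'
    types are symmetrical (a uniform prior satisfies this, but symmetry does
    not require uniformity)."""
    return all(abs(prior_over_e_types[t] - prior_over_a_types[t]) < 1e-9
               for t in prior_over_e_types)

play = [("E", "claim"), ("A", "attack"), ("E", "rebut")]
count_e_moves = lambda p: sum(1 for who, _ in p if who == "E")  # a biased toy Jury
print(indifferent_to_identity(count_e_moves, [play]))           # False
print(symmetric_priors({"evasive": 0.3, "cooperative": 0.7},
                       {"evasive": 0.3, "cooperative": 0.7}))   # True
```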
In such a game it is common knowledge that attacks based on information about E's type that is not relevant either to the evidential support or formal properties of her history are ignored by the Jury and the participants know this. The same goes for E; counterattacks by her on A that are not based on evidence or the formal properties mentioned above.", " BIBREF4 discusses the formal properties of coherence and consistency in detail, and we say more about explanatory and predictive power below. The evidential criterion, however, is also particularly important, and it is one that a disinterested Jury must attend to. Luckily for us, formal epistemologists have formulated constraints like cognitive skill and safety or anti-luck on beliefs that are relevant to characterizing this evidential criterion BIBREF38 , BIBREF39 . Cognitive skill is a factor that affects the success (accuracy) of an agent's beliefs: the success of an agent's beliefs is the result of her cognitive skill, exactly to the extent that the reasoning process that produces them makes evidential factors (how weighty, specific, misleading, etc., the agent's evidence is) comparatively important for explaining that success, and makes non-evidential factors comparatively unimportant. In addition, we will require that the relevant evidential factors are those that have been demonstrated to be effective in the relevant areas of inquiry. So if a Jury measures the success of a persuasion game in virtue of a criterion of cognitive ability on the part of the participants and this is common knowledge among the participants (something we will assume throughout here), then, for instance, A's attacks have to be about the particular evidence adduced to support E's history, the way it was collected or verifiable errors in measurements etc., and preclude general skeptical claims from credible attacks in such a game. These epistemic components thus engender more relevant types for interpretation: are the players using cognitive skill and anti-luck conditions or not? More particularly, most climate skeptics' attacks on climate change science, using general doubts about the evidence without using any credible scientific criteria attacking specific evidential bases, would consequently be ruled as irrelevant in virtue of a property like cognitive skill. But this criterion may also affect the Jury's interpretation of the conversation. A Jury whose beliefs are constrained by cognitive ability will adjust its beliefs about player types and about interpretation only in the light of relevant evidential factors.", "Safety is a feature of beliefs that says that conditionalizing on circumstances that could have been otherwise without one's evidence changing should not affect the strength of one's beliefs. Safety rules out out belief profiles in which luck or mere hunches play a role.", "The notion of a disinterested jury is formally a complicated one. Consider an interpretation of a conversation between two players E and A. Bias can be understood as a sort of modal operator over an agent's first order and higher order beliefs. So a disinterested Jury in an ME game means that neither its beliefs about A nor about E involve an interested bias; nor do its beliefs about A's beliefs about E's beliefs or E's beliefs about the A's beliefs about E's beliefs, and so on up the epistemic hierarchy. Thus, a disinterested Jury in this setting involves an infinitary conjunction of modal statements, which is intuitively (and mathematically) a complex condition on beliefs. 
And since this disinterestedness must be common knowledge amongst the players, E and A have equally complex beliefs.", "We are interested in ME persuasion games in which the truth may emerge. Is an ME persuasion game with a disinterested Jury sufficient to ensure such an outcome? No. there may be a fatal flaw in E's history that INLINEFORM0 does not uncover and that the Jury does not see. We have to suppose certain abilities on the part of INLINEFORM1 and/or the Jury—namely, that if E has covered up some evidence or falsely constructed evidence or has introduced an inconsistency in her history, that eventually A will uncover it. Further, if there is an unexplained leap, an incoherence in the history, then INLINEFORM2 will eventually find it. Endowing INLINEFORM3 with such capacities would suffice to ensure a history that is in E's winning condition to be the best possible approximation to the truth, a sort of Peircean ideal. Even if we assume only that INLINEFORM4 is a competent and skilled practitioner of her art, we have something like a good approximation of the truth for any history in E's winning condition. We call a persuasion game with such a disinterested Jury and such a winning condition for INLINEFORM5 an ME truth game.", "In an ME truth game, a player or a Jury may not be completely disinterested because of skewed priors. But she may still be interested in finding out the truth and thus adjusting her priors in the face of evidence. We put some constraints on the revision of beliefs of a truth interested player. Suppose such a player INLINEFORM0 has a prior INLINEFORM1 on INLINEFORM2 such that INLINEFORM5 , but in a play INLINEFORM6 of an ME truth game it is revealed that INLINEFORM7 has no confirming evidence for INLINEFORM8 that the opponent INLINEFORM9 cannot attack without convincing rebuttal. Then a truth interested player INLINEFORM10 should update her beliefs INLINEFORM11 after INLINEFORM12 so that INLINEFORM13 . On the other hand, if INLINEFORM14 cannot rebut the confirming evidence that INLINEFORM15 has for INLINEFORM16 , then INLINEFORM17 . Where INLINEFORM18 is infinite, we put a condition on the prefixes INLINEFORM19 of INLINEFORM20 : INLINEFORM21 . Given our concepts of truth interested players and an ME truth game, we can show the following. If the two players of a 2 history ME truth game INLINEFORM22 , have access to all the facts in INLINEFORM23 , and are truth interested but have incompatible histories for INLINEFORM24 based on distinct priors, they will eventually agree to a common history for INLINEFORM25 . To prove this, we note that our players will note the disagreement and try to overcome it since they have a common interest, in the truth about INLINEFORM26 . Then it suffices to look at two cases: in case one, one player INLINEFORM27 converges to the INLINEFORM28 's beliefs in the ME game because INLINEFORM29 successfully attacks the grounds on which INLINEFORM30 's incompatible interpretation is based; in case two, neither INLINEFORM31 nor INLINEFORM32 is revealed to have good evidential grounds for their conflicting beliefs and so they converge to common revised beliefs that assign an equal probability to the prior beliefs that were in conflict. Note that the difference with BIBREF32 is that we need to assume that players interested in the truth conditionalize upon outcomes of discussion in an ME game in the same way. 
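One simple numerical instance of this shared revision policy is sketched below. The inequalities the paper imposes are the ones stated abstractly above; the specific update used here (adopting the survivor's credence when only one side's evidence withstands attack, and averaging when neither does) is our illustrative choice rather than the paper's definition.

```python
def revise(cred_a, cred_b, a_evidence_survives, b_evidence_survives):
    """Revised credences in phi for two truth-interested players after a
    round of attacks on each other's confirming evidence."""
    if a_evidence_survives and not b_evidence_survives:
        return cred_a, cred_a            # case one: B converges to A's view
    if b_evidence_survives and not a_evidence_survives:
        return cred_b, cred_b            # case one, mirrored
    middle = (cred_a + cred_b) / 2       # case two: neither side has defensible grounds
    return middle, middle

print(revise(0.9, 0.2, True, False))     # (0.9, 0.9)
print(revise(0.9, 0.2, False, False))    # (0.55, 0.55)
```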
Players who do not do this need not ever agree.", "There are interesting variants of an ME truth game where one has to do with approximations. ME truth games are infinitary games, in which getting a winning history is something E may or may not achieve in the limit. But typically we want the right, or “good enough” interpretation sooner rather than later. We can also appeal to discounted ME games developed in BIBREF21 , in which the scores are assigned to individual discourse moves in context which diminish as the game progresses, to investigate cases where getting things right, or right enough, early on in an ME truth game is crucial.", "In another variant of an ME truth game, which we call a 2-history ME truth game, we pit two biases one for E and one for A, and the two competing histories they engender, about a set of facts against each other. Note that such a game is not necessarily win-lose as is the original ME truth game, because neither history the conversationalists develop and defend may satisfy the disinterested Jury. That is, both E and A may lose in such a game. Is it also possible that they both win? Can both E and A revise their histories so that their opponents have in the end no telling attacks against their histories? We think not at least in the case where the histories make or entail contradictory claims: in such a case they should both lose because they cannot defeat the opposing possibility.", "Suppose INLINEFORM0 wants to win an ME truth game and to construct a truthful history. Let's assume that the set of facts INLINEFORM1 over which the history is constructed is finite. What should she do? Is it possible for her to win? How hard is it for her to win? Does INLINEFORM2 have a winning strategy? As an ME truth game is win-lose, if the winning condition is Borel definable, it will be determined BIBREF4 ; either INLINEFORM3 has a winning strategy or INLINEFORM4 does. Whether INLINEFORM5 has a winning strategy or not is important: if she does, there is a method for finding an optimal history in the winning set; if she doesn't, an optimal history from the point of view of a truth-seeking goal in the ME truth game is not always attainable.", "To construct a history from ambiguous signals for a history over INLINEFORM0 , the interpreter must rely on her beliefs about the situation and her interlocutors to estimate the right history. So the question of getting at truthful interpretations of histories depends at least in part on the right answer to the question, what are the right beliefs about the situation and the participants that should be invoked in interpretation? Given that beliefs are probabilistic, the space of possible beliefs is vast. The right set of beliefs will typically form a very small set with respect to the set of all possible beliefs about a typical conversational setting. Assuming that one will be in such a position “by default” without any further argumentation is highly implausible, as a simple measure theoretic argument ensures that the set of possible interpretations are almost always biased away from a winning history in an ME truth game.", "What is needed for E-defensibility and a winning strategy in an ME truth game? BIBREF4 argued that consistency and coherence (roughly, the elements of the history have to be semantically connected in relevant ways BIBREF3 ) are necessary conditions on all winning conditions and would thus apply to such histories. 
A necessary additional property is completeness, an accounting of all or sufficiently many of the facts the history is claimed to cover. We've also mentioned the care that has to be paid to the evidence and how it supports the history. Finally, it became apparent when we considered a variant of an ME truth game in which two competing histories were pitted against each other that a winning condition for each player is that they must be able to defeat the opposing view or at least cast doubt on it.", "More particularly, truth-seeking biases should provide predictive and explanatory power, which are difficult to define. But we offer the following encoding of predictiveness and explanatory power as constraints on continuations of a given history in an ME truth game. [Predictiveness] A history INLINEFORM0 developed in an ME game for a set of facts INLINEFORM1 is predictive just in case, when INLINEFORM2 is presented with a set of facts INLINEFORM3 relevantly similar to INLINEFORM4 , INLINEFORM5 implies an E-defensible extension INLINEFORM6 of INLINEFORM7 to all the facts in INLINEFORM8 . A similar definition can be given for the explanatory power of a history.", "Does INLINEFORM0 have a strategy for constructing a truthful history that can guarantee all of these things? Well, if the facts INLINEFORM1 it is supposed to relate are sufficiently simple or sufficiently unambiguous, in the sense that they determine just one history and E is effectively able to build and defend such a history, then yes she does. So very simple cases, like establishing whether your daughter has a snack for after school in the morning or not, are easy to determine, and the history is equally simple, once you have the right evidence: yes she has a snack, or no she doesn't. A text which is unambiguous similarly determines only one history, and linguistic competence should suffice to determine what that history is. On the other hand, it is also possible that INLINEFORM2 may determine the right history INLINEFORM3 from a play INLINEFORM4 when INLINEFORM5 depends on the type of the relevant players of INLINEFORM6 . For INLINEFORM7 can have a true “type” for the players relevant to INLINEFORM8 . In general, whether or not a player has a winning strategy will depend on the structure of the optimal history targeted, as well as on the resources and constraints on the players in an ME truth game.", "In the more general case, however, whether INLINEFORM0 has a winning strategy in an ME truth game becomes non-trivial. At least in a relative sort of way, E can construct a model satisfying her putative history at each stage to show consistency (relative to ZF or some other background theory); coherence can be verified by inspection over the finite discourse graph of the relevant history at each stage and ensuing attacks. Finally, completeness and evidential support can be guaranteed at each stage in the history's construction, if E has the right sort of beliefs. If all this can be guaranteed at each stage, von Neumann's minimax theorem or its extension in BIBREF40 guarantees that E has a winning strategy for E-defensibility.", "In future work, we plan to analyze in detail some complicated examples, like the ongoing debate about climate change, where there is large-scale scientific agreement but where disagreement exists because of distinct winning conditions." ], [ "An ME truth game suggests a certain notion of truth: the truth is a winning history in an ME persuasion game with a disinterested Jury. 
This is a Peircean “best attainable” approximation of the truth, an “internal” notion of truth based on consistency, coherence with the available evidence, and explanatory and predictive power. But we could also investigate a more external view of truth. Such a view would suppose that the Jury has in its possession the “true history” over a set of facts INLINEFORM0 , to which the history eventually constructed by E should converge within a certain margin of error in the limit.", "We think ME games are a promising tool for investigating bias, and in this section we mention some possible applications and open questions that ME games might help us answer. ME truth games allow us to analyze extant strategies for eliminating bias. For instance, given two histories for a given set of facts, it is a common opinion that one finds a less biased history by splitting the difference between them. This is a strategy perhaps distantly inspired by the idea that the truth lies in the golden mean between extremes. But is this really true? ME games should allow us to encode this strategy and find out.", "Another connection that our approach can exploit is the one between games and reinforcement learning BIBREF44 , BIBREF45 , BIBREF46 . While reinforcement learning is traditionally understood as a problem involving a single agent and is not powerful enough to understand the dynamics of competing biases of agents with different winning conditions, there is a direct connection made in BIBREF45 between evolutionary games with replicator dynamics and the stochastic learning theory of BIBREF47 , with links to multiagent reinforcement learning. BIBREF44 , BIBREF46 provide a foundation for multiagent reinforcement learning in stochastic games. The connection between ME games and stochastic and evolutionary games has not been explored, but some victory conditions in ME games can be an objective that a replicator dynamics converges to, and epistemic ME games already encompass a stochastic component. Thus, our research will be able to draw on relevant results in these areas.", "A typical assumption we make as scientists is that rationality would lead us to always prefer to have a more complete and more accurate history for our world. But bias isn't so simple, as an analysis of ME games can show. ME games are played for many purposes, and non-truth-seeking biases that lead to histories that are not a best approximation to the truth may be the rational or optimal choice, if the winning condition in the game is other than that defined in an ME truth game. This has real political and social relevance; for example, a plausible hypothesis is that those who argue that climate change is a hoax are building an alternative history, not to get at the truth but for other political purposes. Even a truth-interested player can, at least initially, fail to generate histories that are in the winning condition of an ME truth game. Suppose E, motivated by truth interest, has constructed for facts INLINEFORM0 a history INLINEFORM1 that meets constraints including coherence, consistency, and completeness, and it provides explanatory and predictive power for at least a large subset INLINEFORM2 of INLINEFORM3 . E's conceptualization of INLINEFORM4 can still go wrong, and E may fail to have a winning strategy in interesting ways. 
First, INLINEFORM5 can mischaracterize INLINEFORM6 with high confidence in virtue of evidence only from INLINEFORM7 BIBREF48 ; Especially if INLINEFORM8 is large and hence INLINEFORM9 is just simply very “long”, it is intuitively more difficult even for truth seeking players to come to accept that an alternative history is the correct one. Second, INLINEFORM10 may lack or be incompatible with concepts that would be needed to be aware of facts in INLINEFORM11 . BIBREF55 , BIBREF23 investigate a special case of this, a case of unawareness. To succeed E would have to learn the requisite concepts first.", "All of this has important implications for learning. We can represent learning as the following ME games. It is common to represent making a prediction Y from data X as a zero sum game between our player E and Nature: E wins if for data X provided by Nature, E makes a prediction that the Jury judges to be correct. More generally, an iterated learning process is a repeated zero sum game, in which E makes predictions in virtue of some history, which one might also call a model or a set of hypotheses; if she makes a correct prediction at round n, she reinforces her beliefs in her current history; if she makes a wrong prediction, she adjusts it. The winning condition may be defined in terms of some function of the scores at each learning round or in terms of some global convergence property. Learning conceived in this way is a variant of a simple ME truth game in which costs are assigned to individual discourse moves as in discounted ME games.", "In an ME truth game, where E develops a history INLINEFORM0 over a set of facts INLINEFORM1 while A argues for an alternative history INLINEFORM2 over INLINEFORM3 , A can successfully defend history INLINEFORM4 as long as either the true history INLINEFORM5 is (a) not learnable or (b) not uniquely learnable. In case (a), E cannot convince the Jury that INLINEFORM6 is the right history; in case (b) A can justify INLINEFORM7 as an alternative interpretation. Consider the bias of a hardened climate change skeptic: the ME model predicts that simply presenting new facts to the agent will not induce him to change his history, even if to a disinterested Jury his history is clearly not in his winning condition. He may either simply refuse to be convinced because he is not truth interested, or because he thinks his alternative history INLINEFORM8 can explain all of the data in INLINEFORM9 just as well as E's climate science history INLINEFORM10 . Thus, ME games open up an unexplored research area of unlearnable histories for certain agents." ], [ "In this paper, we have put forward the foundations of a formal model of interpretive bias. Our approach differs from philosophical and AI work on dialogue that links dialogue understanding to the recovery of speaker intentions and beliefs BIBREF56 , BIBREF57 . Studies of multimodal interactions in Human Robot Interaction (HRI) have also followed the Gricean tradition BIBREF58 , BIBREF59 , BIBREF60 . BIBREF61 , BIBREF4 , BIBREF62 ), offer many reasons why a Gricean program for dialogue understanding is difficult for dialogues in which there is not a shared task and a strong notion of co-operativity. Our model is not in the business of intention and belief recovery, but rather works from what contents agents explicitly commit to with their actions, linguistic and otherwise, to determine a rational reconstruction of an underlying interpretive bias and what goals a bias would satisfy. 
In this we also go beyond what current theories of discourse structure like SDRT can accomplish. Our theoretical work also requires an empirical component on exactly how bias is manifested to be complete. This has links to the recent interest in fake news. Modeling interpretive bias can help in detecting fake news by providing relevant types to check in interpretation and by providing an epistemic foundation for fake news detection by exploiting ME truth games where one can draw from various sources to check the credibility of a story. In a future paper, we intend to investigate these connections thoroughly.", "References", "Asher, N., Lascarides, A.: Strategic conversation. Semantics and Pragmatics 6(2), http:// dx.doi.org/10.3765/sp.6.2. (2013)", "Asher, N., Paul, S.: Evaluating conversational success: Weighted message exchange games. In: Hunter, J., Simons, M., Stone, M. (eds.) 20th workshop on the semantics and pragmatics of dialogue (SEMDIAL). New Jersey, USA (July 2016)", "Asher, N.: Reference to Abstract Objects in Discourse. Kluwer Academic Publishers (1993)", "Asher, N., Lascarides, A.: Logics of Conversation. Cambridge University Press (2003)", "Asher, N., Paul, S.: Conversations and incomplete knowledge. In: Proceedings of Semdial Conference. pp. 173–176. Amsterdam (December 2013)", "Asher, N., Paul, S.: Conversation and games. In: Ghosh, S., Prasad, S. (eds.) Logic and Its Applications: 7th Indian Conference, ICLA 2017, Kanpur, India, January 5-7, 2017, Proceedings. vol. 10119, pp. 1–18. Springer, Kanpur, India (January 2017)", "Asher, N., Paul, S.: Strategic conversation under imperfect information: epistemic Message Exchange games (2017), accepted for publication in Journal of Logic, Language and Information", "Asher, N., Paul, S., Venant, A.: Message exchange games in strategic conversations. Journal of Philosophical Logic 46.4, 355–404 (2017), http://dx.doi.org/10.1007/s10992-016-9402-1", "Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Machine learning 47(2-3), 235–256 (2002)", "Aumann, R.J.: Agreeing to disagree. The Annals of Statistics 4(6), 1236–1239 (1976)", "Banks, J.S., Sundaram, R.K.: Switching costs and the gittins index. Econometrica: Journal of the Econometric Society pp. 687–694 (1994)", "Baron, J.: Thinking and deciding. Cambridge University Press (2000)", "Battigalli, P.: Rationalizability in infinite, dynamic games with incomplete information. Research in Economics 57(1), 1–38 (2003)", "Berger, A.L., Pietra, V.J.D., Pietra, S.A.D.: A maximum entropy approach to natural language processing. Computational linguistics 22(1), 39–71 (1996)", "Besnard, P., Hunter, A.: Elements of argumentation, vol. 47. MIT press Cambridge (2008)", "Blackwell, D.: An analog of the minimax theorem for vector payoffs. Pacific Journal of Mathematics 6(1), 1–8 (1956)", "Börgers, T., Sarin, R.: Learning through reinforcement and replicator dynamics. Journal of Economic Theory 77(1), 1–14 (1997)", "Burnetas, A.N., Katehakis, M.N.: Optimal adaptive policies for markov decision processes. Mathematics of Operations Research 22(1), 222–255 (1997)", "Burnett, H.: Sociolinguistic interaction and identity construction: The view from game-theoretic pragmatics. Journal of Sociolinguistics 21(2), 238–271 (2017)", "Bush, R.R., Mosteller, F.: Stochastic models for learning. John Wiley & Sons, Inc. (1955)", "Cadilhac, A., Asher, N., Benamara, F., Lascarides, A.: Commitments to preferences in dialogue. 
In: Proceedings of the 12th Annual SIGDIAL Meeting on Discourse and Dialogue. pp. 204–215 (2011)", "Cadilhac, A., Asher, N., Benamara, F., Lascarides, A.: Grounding strategic conversation: Using negotiation dialogues to predict trades in a win-lose game. In: Proceedings of EMNLP. pp. 357–368. Seattle (2013)", "Cadilhac, A., Asher, N., Benamara, F., Popescu, V., Seck, M.: Preference extraction form negotiation dialogues. In: Biennial European Conference on Artificial Intelligence (ECAI) (2012)", "Chambers, N., Allen, J., Galescu, L., Jung, H.: A dialogue-based approach to multi-robot team control. In: The 3rd International Multi-Robot Systems Workshop. Washington, DC (2005)", "Dung, P.M.: On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial intelligence 77(2), 321–357 (1995)", "Erev, I., Wallsten, T.S., Budescu, D.V.: Simultaneous over-and underconfidence: The role of error in judgment processes. Psychological review 101(3), 519 (1994)", "Foster, M.E., Petrick, R.P.A.: Planning for social interaction with sensor uncertainty. In: The ICAPS 2014 Scheduling and Planning Applications Workshop (SPARK). pp. 19–20. Portsmouth, New Hampshire, USA (Jun 2014)", "Garivier, A., Cappé, O.: The kl-ucb algorithm for bounded stochastic bandits and beyond. In: COLT. pp. 359–376 (2011)", "Glazer, J., Rubinstein, A.: On optimal rules of persuasion. Econometrica 72(6), 119–123 (2004)", "Grice, H.P.: Utterer's meaning and intentions. Philosophical Review 68(2), 147–177 (1969)", "Grice, H.P.: Logic and conversation. In: Cole, P., Morgan, J.L. (eds.) Syntax and Semantics Volume 3: Speech Acts, pp. 41–58. Academic Press (1975)", "Grosz, B., Sidner, C.: Attention, intentions and the structure of discourse. Computational Linguistics 12, 175–204 (1986)", "Harsanyi, J.C.: Games with incomplete information played by “bayesian” players, parts i-iii. Management science 14, 159–182 (1967)", "Henderson, R., McCready, E.: Dogwhistles and the at-issue/non-at-issue distinction. Published on Semantics Archive (2017)", "Hilbert, M.: Toward a synthesis of cognitive biases: how noisy information processing can bias human decision making. Psychological bulletin 138(2), 211 (2012)", "Hintzman, D.L.: Minerva 2: A simulation model of human memory. Behavior Research Methods, Instruments, & Computers 16(2), 96–101 (1984)", "Hintzman, D.L.: Judgments of frequency and recognition memory in a multiple-trace memory model. Psychological review 95(4), 528 (1988)", "Hu, J., Wellman, M.P.: Multiagent reinforcement learning: theoretical framework and an algorithm. In: ICML. vol. 98, pp. 242–250 (1998)", "Hunter, J., Asher, N., Lascarides, A.: Situated conversation (2017), submitted to Semantics and Pragmatics", "Khoo, J.: Code words in political discourse. Philosophical Topics 45(2), 33–64 (2017)", "Konek, J.: Probabilistic knowledge and cognitive ability. Philosophical Review 125(4), 509–587 (2016)", "Lai, T.L., Robbins, H.: Asymptotically efficient adaptive allocation rules. Advances in applied mathematics 6(1), 4–22 (1985)", "Lakkaraju, H., Kamar, E., Caruana, R., Horvitz, E.: Discovering blind spots of predictive models: Representations and policies for guided exploration. arXiv preprint arXiv:1610.09064 (2016)", "Lee, M., Solomon, N.: Unreliable Sources: A Guide to Detecting Bias in News Media. Lyle Smart, New York (1990)", "Lepore, E., Stone, M.: Imagination and Convention: Distinguishing Grammar and Inference in Language. 
Oxford University Press (2015)", "Littman, M.L.: Markov games as a framework for multi-agent reinforcement learning. In: Proceedings of the eleventh international conference on machine learning. vol. 157, pp. 157–163 (1994)", "Morey, M., Muller, P., Asher, N.: A dependency perspective on rst discourse parsing and evaluation (2017), submitted to Computational Linguistics", "Moss, S.: Epistemology formalized. Philosophical Review 122(1), 1–43 (2013)", "Perret, J., Afantenos, S., Asher, N., Morey, M.: Integer linear programming for discourse parsing. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pp. 99–109. Association for Computational Linguistics, San Diego, California (June 2016), http://www.aclweb.org/anthology/N16-1013", "Perzanowski, D., Schultz, A., Adams, W., Marsh, E., Bugajska, M.: Building a multimodal human-robot interface. Intelligent Systems 16(1), 16–21 (2001)", "Potts, C.: The logic of conventional implicatures. Oxford University Press Oxford (2005)", "Recanati, F.: Literal Meaning. Cambridge University Press (2004)", "Sperber, D., Wilson, D.: Relevance. Blackwells (1986)", "Stanley, J.: How propaganda works. Princeton University Press (2015)", "Tversky, A., Kahneman, D.: Availability: A heuristic for judging frequency and probability. Cognitive psychology 5(2), 207–232 (1973)", "Tversky, A., Kahneman, D.: Judgment under uncertainty: Heuristics and biases. In: Utility, probability, and human decision making, pp. 141–162. Springer (1975)", "Tversky, A., Kahneman, D.: Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological review 90(4), 293 (1983)", "Tversky, A., Kahneman, D.: The framing of decisions and the psychology of choice. In: Environmental Impact Assessment, Technology Assessment, and Risk Analysis, pp. 107–129. Springer (1985)", "Venant, A.: Structures, Semantics and Games in Strategic Conversations. Ph.D. thesis, Université Paul Sabatier, Toulouse (2016)", "Venant, A., Asher, N., Muller, P., Denis, P., Afantenos, S.: Expressivity and comparison of models of discourse structure. In: Proceedings of the SIGDIAL 2013 Conference. pp. 2–11. Association for Computational Linguistics, Metz, France (August 2013), http://www.aclweb.org/anthology/W13-4002", "Venant, A., Degremont, C., Asher, N.: Semantic similarity. In: LENLS 10. Tokyo, Japan (2013)", "Walton, D.N.: Logical dialogue-games. University Press of America (1984)", "Whittle, P.: Multi-armed bandits and the gittins index. Journal of the Royal Statistical Society. Series B (Methodological) pp. 143–149 (1980)", "Wilkinson, N., Klaes, M.: An introduction to behavioral economics. Palgrave Macmillan (2012)" ] ] }
{ "question": [ "What factors contribute to interpretive biases according to this research?", "Which interpretative biases are analyzed in this paper?" ], "question_id": [ "7426a6e800d6c11795941616fc4a243e75716a10", "da4535b75e360604e3ce4bb3631b0ba96f4dadd3" ], "nlp_background": [ "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no" ], "search_query": [ "bias", "bias" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Which events authors choose to include in their history, which they leave out, and the way the events chosen relate to the march" ], "yes_no": null, "free_form_answer": "", "evidence": [ "While the choice of wording helps to convey bias, just as crucial is the way that the reporters portray the march as being related to other events. Which events authors choose to include in their history, which they leave out, and the way the events chosen relate to the march are crucial factors in conveying bias. Townhall's bias against the March of Science expressed in the argument that it politicizes science cannot be traced back to negative opinion words; it relies on a comparison between the March for Science and the Women's March, which is portrayed as a political, anti-Trump event. Newsbusters takes a different track: the opening paragraph conveys an overall negative perspective on the March for Science, despite its neutral language, but it achieves this by contrasting general interest in the march with a claimed negative view of the march by many “actual scientists.” On the other hand, the New York Times points to an important and presumably positive outcome of the march, despite its controversiality: a renewed look into the role of science in public life and politics. Like Newsbusters, it lacks any explicit evaluative language and relies on the structural relations between events to convey an overall positive perspective; it contrasts the controversy surrounding the march with a claim that the march has triggered an important discussion, which is in turn buttressed by the reporter's mentioning of the responses of the Times' readership." ], "highlighted_evidence": [ "Which events authors choose to include in their history, which they leave out, and the way the events chosen relate to the march are crucial factors in conveying bias." ] } ], "annotation_id": [ "15919887a45f0ab0271cbbbd259dca3d3689cfb7" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "in an ME game there are typically several interpretive biases at work: each player has her own bias, as does the Jury" ], "yes_no": null, "free_form_answer": "", "evidence": [ "An epistemic ME game is an ME game with a Harsanyi type space and a type/history correspondence as we've defined it. By adding types to an ME game, we provide the beginnings of a game theoretic model of interpretive bias that we believe is completely new. Our definition of bias is now: [Interpretive Bias] An interpretive bias in an epistemic ME game is the probability distribution over types given by the belief function of the conversationalists or players, or the Jury. Note that in an ME game there are typically several interpretive biases at work: each player has her own bias, as does the Jury." 
], "highlighted_evidence": [ "Our definition of bias is now: [Interpretive Bias] An interpretive bias in an epistemic ME game is the probability distribution over types given by the belief function of the conversationalists or players, or the Jury.", "Note that in an ME game there are typically several interpretive biases at work: each player has her own bias, as does the Jury." ] } ], "annotation_id": [ "5de7561e1d77d850074ea4d32d30a73d95c4b79e" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [], "file": [] }
2004.00139
A Swiss German Dictionary: Variation in Speech and Writing
We introduce a dictionary containing forms of common words in various Swiss German dialects normalized into High German. As Swiss German is, for now, a predominantly spoken language, there is a significant variation in the written forms, even between speakers of the same dialect. To alleviate the uncertainty associated with this diversity, we complement the pairs of Swiss German - High German words with the Swiss German phonetic transcriptions (SAMPA). This dictionary becomes thus the first resource to combine large-scale spontaneous translation with phonetic transcriptions. Moreover, we control for the regional distribution and insure the equal representation of the major Swiss dialects. The coupling of the phonetic and written Swiss German forms is powerful. We show that they are sufficient to train a Transformer-based phoneme to grapheme model that generates credible novel Swiss German writings. In addition, we show that the inverse mapping - from graphemes to phonemes - can be modeled with a transformer trained with the novel dictionary. This generation of pronunciations for previously unknown words is key in training extensible automated speech recognition (ASR) systems, which are key beneficiaries of this dictionary.
{ "section_name": [ "Introduction", "Related Work", "Dictionary Content and access", "Construction of the dictionary", "Construction of the dictionary ::: Discretising continuous variation", "Construction of the dictionary ::: Manual annotation ::: SAMPAs", "Construction of the dictionary ::: Manual annotation ::: GSWs", "Construction of the dictionary ::: Automatic annotation", "Construction of the dictionary ::: Automatic annotation ::: Transformer-based Phoneme to Grapheme (p2g)", "Construction of the dictionary ::: Automatic annotation ::: Test set and evaluation", "Construction of the dictionary ::: Automatic annotation ::: Grapheme to Phoneme (g2p) and its benefits for ASR", "Discussion", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "Swiss German refers to any of the German varieties that are spoken in about two thirds of Switzerland BIBREF0. Besides at least one of those dialectal varieties, Swiss German people also master standard (or 'High') German which is taught in school as the official language of communication.", "Swiss German is varies strongly. Many differences exist in the dialectal continuum of the German speaking part of Switzerland. Besides pronunciation, it also varies a lot in writing. Standard German used to be the exclusive language for writing in Switzerland. Writing in Swiss German has only come up rather recently (notably in text messaging). Because of this, there are no orthographic conventions for Swiss German varieties. Even people speaking the same dialect can, and often do, write phonetically identical words differently.", "In this paper, we present a dictionary of written standard German words paired with their pronunciation in Swiss German words. Additionally Swiss German spontaneous writings, i.e. writings as they may be used in text messages by native speakers, are paired with Swiss German pronunciations.", "The primary motivation for building this dictionary is rendering Swiss German accessible for technologies such as Automatic Speech Recognition (ASR).", "This is the first publicly described Swiss German dictionary shared for research purposes. Furthermore, this is the first dictionary that combines pronunciations of Swiss German with spontaneous writings." ], [ "This dictionary complements previously developed resources for Swiss German, which share some common information. Spontaneous noisy writing has already been recorded in text corpora BIBREF1, BIBREF2, BIBREF3, some of which are also normalized. These resources contain relatively large lexicons of words used in context, but they do not contain any information about pronunciation. The features of speech are represented in other resources, such as BIBREF4, BIBREF5, BIBREF6, which, on the other hand, contain relatively small lexicons (small set of words known to vary across dialects). The ArchiMob corpus does contain a large lexicon of speech and writing (Dieth transcription), but the spoken part is available in audio sources only, without phonetic transcription.", "This dictionary is the first resource to combine all the relevant information together. A relatively large lexicon has been constructed in which phonetic transcriptions (in the SAMPA alphabet) are mapped to various spontaneous writings controlling for the regional distribution. 
Some of the representations in this dictionary are produced manually, while others are added using automatic processing.", "Automatic word-level conversion between various writings in Swiss German has been addressed in several projects, mostly for the purpose of writing normalization BIBREF7, BIBREF2, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF0, BIBREF12. The task of normalization consists of mapping multiple variants of a single lexical item into a single writing, usually identical to standard German (an example would be the Swiss German words aarbet and arbäit, which both map to standard German arbeit ('work')). Early data sets were processed manually (SMS). This was followed by character-level statistical machine translation models BIBREF13, BIBREF14 and, more recently, by neural sequence-to-sequence technology. The solution by lusettietal18 employs soft-attention encoder-decoder recurrent networks enhanced with synchronous multilevel decoding. ruzsicsetal19 develop these models further to integrate linguistic (PoS) features.", "A slightly different task of translating between standard German and Swiss dialects was first addressed with finite state technology BIBREF15. More recently, honnet-etal17 test convolutional neural networks on several data sets.", "We continue the work on using neural networks for modeling word-level conversion. Unlike previous work, which dealt with written forms only, we train models for mapping phonetic representations to various possible writings. The proposed solution relies on the latest framework for sequence-to-sequence tasks: transformer networks BIBREF16." ], [ "We pair 11'248 standard German written words with their phonetic representations in six different Swiss dialects: Zürich, St. Gallen, Basel, Bern, Visp, and Stans (Figure FIGREF1). The phonetic words were written in a modified version of the Speech Assessment Methods Phonetic Alphabet (SAMPA). The Swiss German phonetic words are also paired with Swiss German writings in the Latin alphabet. (From here onwards, a phonetic representation of a Swiss German word will be called a SAMPA and a written Swiss German word will be called a GSW.)", "This dictionary comes in two versions, as we used two differently sized sets of SAMPA characters. Our extended set of 137 phones allows for a detailed and adequate representation of the diverse pronunciation in Switzerland. The smaller set of 59 phones is easier to compute. The phone reduction was mainly done by splitting up combined SAMPA characters such as diphthongs. For example, UI s t r { tt @ and U I s t r { t t @ are both representations of the Stans pronunciation of the standard German word austreten ('step out'). The latter representation belongs to the dictionary based on the smaller phoneset. Table TABREF2 shows an example of five dictionary entries based on the bigger phoneset.", "For a subset of 9000 of the 11'248 standard German words, we have manually annotated GSWs for Visp (9000) and for Zurich (2 x 9000, done by two different annotators). For a sub-subset of 600 of those standard German words, we have manually annotated GSWs for the four other dialects of St. Gallen, Basel, Bern, and Stans. The remaining writing variants are generated using automatic methods described below.", "The dictionary is freely available for research purposes under the Creative Commons share-alike non-commercial licence via this website: http://tiny.uzh.ch/11X."
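To illustrate how the released data could be consumed, here is a minimal sketch of an entry as it might be represented in code. The field names are ours, the GSW spelling is an invented placeholder, and the two Stans transcriptions are the ones quoted above (extended and reduced phoneset).

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class DictionaryEntry:
    high_german: str
    sampa: Dict[str, str]        # dialect -> SAMPA transcription (one phoneset)
    gsw: Dict[str, List[str]]    # dialect -> spontaneous Swiss German writings

entry_big = DictionaryEntry(
    high_german="austreten",
    sampa={"Stans": "UI s t r { tt @"},        # extended, 137-phone set
    gsw={"Stans": ["uistrette"]},              # hypothetical spelling variant
)
entry_small = DictionaryEntry(
    high_german="austreten",
    sampa={"Stans": "U I s t r { t t @"},      # reduced, 59-phone set
    gsw={"Stans": ["uistrette"]},
)
print(entry_big.high_german, entry_big.sampa["Stans"])
```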
], [ "In the following we present the steps of construction of our dictionary, also detailing how we chose the six dialects to represent Swiss German and how, starting with a list of standard German words, we retrieved the mapping SAMPAs and GSWs." ], [ "To be able to represent Swiss German by only a few dialects which differ considerably it is necessary to discretize linguistic varieties. Because, as mentioned earlier, regional language variation in Switzerland is continuous. For this identification of different varieties we used a dialectometric analysis BIBREF17. This analysis is based on lexical, phonological, morphological data of the German speaking areas of Switzerland BIBREF4. As we worked with word-lists and not sentences, we discounted syntactical influences on area boundaries that are also described in that analysis. We represent six differentiated linguistic varieties. We considered working with ten linguistic varieties because this number of areas was the 'best-cut'-analysis in the dialectometric analysis BIBREF17. Yet, due to time restraints and considerable overlap between some of the linguistic varieties, we reduced this number to six. We also made some adjustements to the chosen varieties in order to correspond better to the perception of speakers and in favor of more densely populated areas.", "One way to represent the six individualized linguistic varieties would have been to annotate the dialectal centers, i.e. those places that have the average values of dialectal properties within the area where the variety is spoken. However, we chose to represent the linguistic varieties by the most convenient urban places. Those were the dialects of the Cities Zurich, St. Gallen, Basel, Bern, and Visp, and Stans." ], [ "For each standard German word in our dictionary we manually annotated its phonetic representation in the six chosen dialects. The information about the pronunciation of Swiss German words is partially available also from other sources but not fully accessible BIBREF4 BIBREF7.", "To help us with pronunciation our annotators first used their knowledge as native speakers (for Zurich and Visp). Secondly, they consulted dialect specific grammars BIBREF18 BIBREF19 BIBREF20 BIBREF21 BIBREF22 as well as dialect specific lexica BIBREF23 BIBREF24 BIBREF25. They also considered existing Swiss German dictionaries BIBREF7 BIBREF4, listened to recordings BIBREF0 and conferred with friends and acquaintances originating from the respective locations." ], [ "9000 GSWs for Visp German and 2 x 9000 GSWs for Zurich German were annotated by native speakers of the respective dialect. Our annotators created the GSWs while looking at standard German words and without looking at the corresponding SAMPAs for Visp and Zurich. Through this independence from SAMPAs we are able to avoid biases concerning the phonetics as well as the meaning of the word in generating GSWs.", "At a later stage of our work, we added each 600 GSWs for the four dialects of St. Gallen, Basel, Bern, and Stans in order to improve our phoneme-to-grapheme(p2g) model (see next section). For the manual annotation of these dialects we had no native speakers. Therefore, when writing the GSWs, our annotators relied on the corresponding SAMPAs of these dialects, which they had made an effort to create before." ], [ "In order to account for the mentioned variety of everyday Swiss German writing, we aimed for more than one GSW per SAMPA. 
The heterogeneous writing style makes the SAMPA$\\,\\rightarrow \\,$GSW mapping a one-to-many relation instead of the regular one-to-one relation that speakers of standard languages are accustomed to. To save time in generating the many GSWs, we opted for an automatic process.", "We first tried to automate the generation of GSWs with a rule-based program. Using SAMPAs together with phoneme-to-grapheme mappings, we tried to obtain all possible GSWs. Yet, this yielded mostly impossible writings and also missed some of the writings we had already produced manually. We then set up a phoneme-to-grapheme (p2g) model to generate the most likely spellings." ], [ "The process of generating written forms from a given SAMPA can be viewed as a sequence-to-sequence problem, where the input is a sequence of phonemes and the output is a sequence of graphemes.", "We decided to use a Transformer-based model for the phoneme-to-grapheme (p2g) task. The reason for this is twofold. First, the Transformer has shown great success in seq2seq tasks and has outperformed LSTM and CNN-based models. Second, it is computationally more efficient than LSTM and CNN networks.", "The Transformer consists of an encoder and a decoder part. The encoder generates a contextual representation for each input SAMPA that is then fed into the decoder together with the previously decoded grapheme. Both have N identical layers. In the encoder, each layer has a multi-head self-attention layer and a position-wise fully-connected feed-forward layer. In the decoder, in addition to these two layers, there is a third multi-head attention layer that attends over the output of the encoder BIBREF16.", "We are using a PyTorch implementation of the Transformer. As a result of the small size of the dataset, we are using a smaller model with only 2 layers and 2 heads. The dimension of the key (d_k) and value (d_v) is 32, the dimension of the model (d_model) and the word vectors (d_word_vec) is 50, and the hidden inner dimension (d_inner_hid) is 400. The model is trained for 55 epochs with a batch size of 64 and a dropout of 0.2. For decoding the output of the model, we are using beam search with beam size 10. We experimented with different beam sizes, but saw that the beam size does not have a significant influence on the result. (A minimal configuration sketch along these lines is given below.)", "The training set is made of 24'000 phoneme-to-grapheme pairs, which are the result of transcribing 8'000 High German words into two Zurich forms and one Visp form. Those transcriptions were made independently by three native speakers. Due to the scarcity of data, we decided not to distinguish between dialects. Hence, a single model receives a sequence of SAMPA symbols and learns to generate a matching sequence of characters." ], [ "Our team of Swiss German annotators evaluated a test set of 1000 words. We aimed to exclude only very far-off forms (tagged '0'), such that they would very probably be seen as false by Swiss German speakers. The accepted writings (tagged '1') might include some that seem off to the Swiss German reader.", "In order to consistently rate the output, the criteria shown in Table TABREF4 were followed. A GSW was tagged '0' if there was at least one letter added, missing, or changed without comprehensible phonetic reason. GSWs were also tagged '0' if there were at least two mistakes that our annotators saw as minor.
'Minor mistakes' are substitutions of related sounds or spellings, added or omitted geminates, and changes in vowel length.", "For each of the 1000 words in the test set, five GSW predictions in all six dialects were given to our annotators. For Visp and Zurich, they tagged 1000x5 GSW predictions each with 1 or 0. For St. Gallen, Basel, Bern, and Stans, they evaluated 200x5 each.", "In Table TABREF13 we show the results from this evaluation. We count the number of correct GSWs (labeled as '1') among the top 5 candidates generated by the p2g model, where the first candidate is the most relevant, then the second one, and so on.", "The evaluation was done at a stage where our model was trained only on GSWs for Zurich and Visp (see sec. SECREF8). The number of correct predictions is lower for the dialects of St. Gallen, Basel, Bern, and Stans, mainly because there were some special SAMPA characters we used for those dialects and the model did not have the corresponding Latin character strings. After the evaluation, we added 600 GSWs each for the four dialects of St. Gallen, Basel, Bern, and Stans to improve the model." ], [ "Automatic speech recognition (ASR) systems are the main use case for our dictionary. ASR systems convert spoken language into text. Today, they are widely used in different domains, from customer and help centers to voice-controlled assistants and devices. The main resources needed for an ASR system are audio, transcriptions, and a phonetic dictionary. The quality of the ASR system is highly dependent on the quality of the dictionary. With our resource we provide such a phonetic dictionary.", "To increase the benefits of our data for ASR systems, we also trained a grapheme-to-phoneme (g2p) model: Out-of-vocabulary words can be a problem for ASR systems. For those out-of-vocabulary words we need a model that can generate pronunciations from a written form, in real time. This is why we train a grapheme-to-phoneme (g2p) model that generates a sequence of phonemes for a given word. We train the g2p model using our dictionary and compare its performance with a widely used joint-sequence g2p model, Sequitur BIBREF26. For the g2p model we are using the same architecture as for the p2g model. The only difference is the input and output vocabulary. Sequitur and our model use the dictionary with the same train (19'898 samples), test (2'412 samples), and validation (2'212 samples) split. Additionally, we also test their performance only on the items from the Zurich and Visp dialects, because most of the samples are from these two dialects. In Table TABREF15 we show the results of the comparison of the two models. We compute the edit distance between the predicted and the true pronunciation and report the number of exact matches. In the first column we have the result using the whole test set with all the dialects, and in the 2nd and 3rd columns we show the number of exact matches only on the samples from the test set that are from the Zurich and Visp dialects. Here we can clearly see that our model performs better than the Sequitur model. The reason why we have fewer matches for the Visp dialect compared to Zurich is that most of our data is from the Zurich dialect." ], [ "One of our objectives was to map phonetic words to their writings. There are some mismatches between SAMPAs and GSWs in our dictionary, especially when the GSWs were done manually and independently from the SAMPAs.
Those mismatches occur where there is no straightforward correspondence between a standard German and a Swiss German word.", "Two kinds of such missing correspondence can be distinguished. First, there are ambiguous standard German words. That is necessarily so, as our dictionary is based on a list of standard German words without sentential or any other context. An example of a (morphologically) ambiguous word is standard German liebe. As we did not differentiate upper- and lower-case, it can mean either (a) 'I love' or (b) 'the love'. As evident from table 1, liebe (a) and liebi (b) were mixed in our dictionary. The same is the case for standard German frage, which means either (a) 'I ask' or (b) 'the question'. Swiss German fröge, froge, fregu (a) and fraag, froog (b) were mixed. (For both examples, see table 1.)", "The second case of missing straightforward correspondence is the distance between standard German and Swiss German. For one, lexical preferences in Swiss German differ from those in standard German. To express that food is 'tasty' in standard German, the word lecker is used. This is also possible in Swiss German, yet the word fein is much more common. Another example is that the standard German word rasch ('swiftly') is uncommon in Swiss German – synonyms of the word are preferred. Both of these show in the variety of options our annotators chose for those words (see table 1). Also, the same standard German word may have several dialectal versions in Swiss German. For example, there is a short and a long version for the standard German word grossvater, namely grospi and grossvatter.", "A second aim was to represent the way Swiss German speaking people write spontaneously. However, as our annotators wrote the spontaneous GSWs mostly while looking at standard German words, our GSWs might be biased towards standard German orthography. Yet, there is potentially also a standard German influence in the way Swiss German is actually written.", "We partly revised our dictionary in order to adapt to everyday writing: We introduced explicit boundary marking into our SAMPAs by inserting an _ in the SAMPA where there would usually be a space in writing. People would conventionally add a space, for example, in the Swiss German forms corresponding to standard German preterite forms such as 'ging'. The corresponding Swiss German past participles – here isch gange – would (most often) be written separately. So entries like b i n k a N @ in table 1 were changed to b i n _ k a N @." ], [ "In this work we introduced the first Swiss German dictionary. Through its dual nature - both spontaneous written forms in multiple dialects and accompanying phonetic representations - we believe it will become a valuable resource for multiple tasks, including automated speech recognition (ASR). This resource was created using a combination of manual and automated work, in a collaboration between linguists and data scientists that leverages the best of two worlds - domain knowledge and a data-driven focus on likely character combinations.", "Through the combination of complementary skills we overcame the difficulty posed by the important variations in written Swiss German and generated a resource that adds value to downstream tasks. We show that the mapping from SAMPA to written Swiss German is useful in speech recognition and can replace the previous state of the art.
Moreover, the mapping from written form to SAMPA is promising and has applications in areas like text-to-speech.", "We make the dictionary freely available for researchers to expand and use." ], [ "We would like to thank our collaborators Alina Mächler and Raphael Tandler for their valuable contribution." ] ] }
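A note on the p2g configuration described in this record: the following is a minimal, illustrative sketch of a Transformer sized roughly as reported (2 layers, 2 heads, model dimension 50, inner dimension 400, dropout 0.2), written with PyTorch's built-in nn.Transformer rather than the authors' implementation. The vocabulary sizes, padding index, and class name are hypothetical; nn.Transformer ties the per-head dimension to d_model/nhead, so the paper's d_k = d_v = 32 is not reproduced exactly; and the paper decodes with beam search (beam size 10), which is omitted here, as are positional encodings.

```python
import torch
import torch.nn as nn

class P2GTransformer(nn.Module):
    """Sketch of a phoneme-to-grapheme seq2seq Transformer.

    Sized after the hyperparameters reported in the paper; vocabulary sizes
    are placeholders, not taken from the released dictionary.
    """

    def __init__(self, n_phonemes=140, n_graphemes=60, pad_idx=0):
        super().__init__()
        d_model = 50
        self.src_emb = nn.Embedding(n_phonemes, d_model, padding_idx=pad_idx)
        self.tgt_emb = nn.Embedding(n_graphemes, d_model, padding_idx=pad_idx)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=2,
            num_encoder_layers=2, num_decoder_layers=2,
            dim_feedforward=400, dropout=0.2, batch_first=True)
        self.out = nn.Linear(d_model, n_graphemes)

    def forward(self, src, tgt):
        # src: (batch, src_len) SAMPA ids; tgt: (batch, tgt_len) grapheme ids.
        # Positional encodings are omitted for brevity in this sketch.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        hidden = self.transformer(self.src_emb(src), self.tgt_emb(tgt),
                                  tgt_mask=tgt_mask)
        return self.out(hidden)  # (batch, tgt_len, n_graphemes) logits

# Toy usage with random ids, just to show the tensor shapes.
model = P2GTransformer()
src = torch.randint(1, 140, (4, 12))   # 4 SAMPA sequences of length 12
tgt = torch.randint(1, 60, (4, 15))    # 4 grapheme sequences of length 15
print(model(src, tgt).shape)           # torch.Size([4, 15, 60])
```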
{ "question": [ "How many words are coded in the dictionary?", "Is the model evaluated on the graphemes-to-phonemes task?" ], "question_id": [ "4d30c2223939b31216f2e90ef33fe0db97e962ac", "7b47aa6ba247874eaa8ab74d7cb6205251c01eb5" ], "nlp_background": [ "two", "two" ], "topic_background": [ "research", "research" ], "paper_read": [ "no", "no" ], "search_query": [ "dialects", "dialects" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "11'248" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We pair 11'248 standard German written words with their phonetical representations in six different Swiss dialects: Zürich, St. Gallen, Basel, Bern, Visp, and Stans (Figure FIGREF1). The phonetic words were written in a modified version of the Speech Assessment Methods Phonetic Alphabet (SAMPA). The Swiss German phonetic words are also paired with Swiss German writings in the latin alphabet. (From here onwards, a phonetic representation of a Swiss German word will be called a SAMPA and a written Swiss German word will be called a GSW.)" ], "highlighted_evidence": [ "We pair 11'248 standard German written words with their phonetical representations in six different Swiss dialects: Zürich, St. Gallen, Basel, Bern, Visp, and Stans (Figure FIGREF1)." ] } ], "annotation_id": [ "1b3a3ffae9e4cf371d821d5b9c407e82c5b75842" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "To increase the benefits of our data for ASR systems, we also trained a grapheme-to-phoneme (g2p) model: Out-of-vocabulary words can be a problem for ASR system. For those out-of-vocabulary words we need a model that can generate pronunciations from a written form, in real time. This is why we train a grapheme-to-phoneme (g2p) model that generates a sequence of phonemes for a given word. We train the g2p model using our dictionary and compare its performance with a widely used joint-sequence g2p model, Sequitur BIBREF26. For the g2p model we are using the same architecture as for the p2g model. The only difference is input and output vocabulary. The Sequitur and our model are using the dictionary with the same train (19'898 samples), test (2'412 samples) and validation (2'212 samples) split. Additionally, we also test their performance only on the items from the Zurich and Visp dialect, because most of the samples are from this two dialects. In Table TABREF15 we show the result of the comparison of the two models. We compute the edit distance between the predicted and the true pronunciation and report the number of exact matches. In the first columns we have the result using the whole test set with all the dialects, and in the 2nd and 3rd columns we show the number of exact matches only on the samples from the test set that are from the Zurich and Visp dialect. For here we can clearly see that our model performs better than the Sequitur model. The reason why we have less matches in the Visp dialect compared to Zurich is because most of the our data is from the Zurich dialect." ], "highlighted_evidence": [ "Additionally, we also test their performance only on the items from the Zurich and Visp dialect, because most of the samples are from this two dialects. In Table TABREF15 we show the result of the comparison of the two models." 
] } ], "annotation_id": [ "15a03ea8b99644bcd3e08f73bd8510552caba79a" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] } ] }
{ "caption": [ "Figure 1: Six variants of Swiss German chosen for our dictionary. Map by Yves Scherrer and Larissa Schmidt.", "Table 1: Dictionary entry of five standard German words mapped with their spoken (=S) Swiss German representation (in SAMPA) toghether with a Swiss German spontaneous writing (=W) in the six dialects of Zurich, St. Gallen, Basel, Bern, Visp, and Stans", "Table 2: Examples of evaluated GSWs. The ’correct version’ is only one of many possible versions of GSWs, tagged ’1’ in our evaluation. The ’wrong version’ was tagged ’0’ in our evaluation. The column ’error’ shows the criteria we used for evaluating the GSWs as ’0’.", "Table 3: Percentages of correct GSWs among the top 5 candidates. For Zurich and Visp the total number of evaluated words was 5000, 1000 from each candidate. For St. Gallen, Basel, Bern, and Stans the total number of evaluated words was 1000, 200 from each candidate.", "Table 4: Number of exact matches, Sequitur vs Transformer" ], "file": [ "1-Figure1-1.png", "3-Table1-1.png", "3-Table2-1.png", "5-Table3-1.png", "5-Table4-1.png" ] }
1811.08048
QuaRel: A Dataset and Models for Answering Questions about Qualitative Relationships
Many natural language questions require recognizing and reasoning with qualitative relationships (e.g., in science, economics, and medicine), but are challenging to answer with corpus-based methods. Qualitative modeling provides tools that support such reasoning, but the semantic parsing task of mapping questions into those models has formidable challenges. We present QuaRel, a dataset of diverse story questions involving qualitative relationships that characterize these challenges, and techniques that begin to address them. The dataset has 2771 questions relating 19 different types of quantities. For example, "Jenny observes that the robot vacuum cleaner moves slower on the living room carpet than on the bedroom carpet. Which carpet has more friction?" We contribute (1) a simple and flexible conceptual framework for representing these kinds of questions; (2) the QuaRel dataset, including logical forms, exemplifying the parsing challenges; and (3) two novel models for this task, built as extensions of type-constrained semantic parsing. The first of these models (called QuaSP+) significantly outperforms off-the-shelf tools on QuaRel. The second (QuaSP+Zero) demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships without requiring additional training data, something not possible with previous models. This work thus makes inroads into answering complex, qualitative questions that require reasoning, and scaling to new relationships at low cost. The dataset and models are available at http://data.allenai.org/quarel.
{ "section_name": [ "Introduction", "Related Work", "Knowledge Representation ", "Qualitative Background Knowledge", "Representing Questions", "Logical Forms for Questions", "Inference", "The QuaRel Dataset ", "Baseline Systems ", "Baseline Experiments", "New Models ", "QuaSP+: A Model Incorporating World Tracking", "QuaSP+Zero: A Model for the Zero-Shot Task", "Summary and Conclusion" ], "paragraphs": [ [ "Many natural language tasks require recognizing and reasoning with qualitative relationships. For example, we may read about temperatures rising (climate science), a drug dose being increased (medicine), or the supply of goods being reduced (economics), and want to reason about the effects. Qualitative story problems, of the kind found in elementary exams (e.g., Figure FIGREF1 ), form a natural example of many of these linguistic and reasoning challenges, and is the target of this work.", "Understanding and answering such questions is particularly challenging. Corpus-based methods perform poorly in this setting, as the questions ask about novel scenarios rather than facts that can be looked up. Similarly, word association methods struggle, as a single word change (e.g., “more” to “less”) can flip the answer. Rather, the task appears to require knowledge of the underlying qualitative relations (e.g., “more friction implies less speed”).", "Qualitative modeling BIBREF0 , BIBREF1 , BIBREF2 provides a means for encoding and reasoning about such relationships. Relationships are expressed in a natural, qualitative way (e.g., if X increases, then so will Y), rather than requiring numeric equations, and inference allows complex questions to be answered. However, the semantic parsing task of mapping real world questions into these models is formidable and presents unique challenges. These challenges must be solved if natural questions involving qualitative relationships are to be reliably answered.", "We make three contributions: (1) a simple and flexible conceptual framework for formally representing these kinds of questions, in particular ones that express qualitative comparisons between two scenarios; (2) a challenging new dataset (QuaRel), including logical forms, exemplifying the parsing challenges; and (3) two novel models that extend type-constrained semantic parsing to address these challenges.", "Our first model, QuaSP+, addresses the problem of tracking different “worlds” in questions, resulting in significantly higher scores than with off-the-shelf tools (Section SECREF36 ). The second model, QuaSP+Zero, demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships on unseen properties, without requiring additional training data, something not possible with previous models (Section SECREF44 ). Together these contributions make inroads into answering complex, qualitative questions by linking language and reasoning, and offer a new dataset and models to spur further progress by the community." ], [ "There has been rapid progress in question-answering (QA), spanning a wide variety of tasks and phenomena, including factoid QA BIBREF3 , entailment BIBREF4 , sentiment BIBREF5 , and ellipsis and coreference BIBREF6 . Our contribution here is the first dataset specifically targeted at qualitative relationships, an important category of language that has been less explored. 
While questions requiring reasoning about qualitative relations sometimes appear in other datasets, e.g., BIBREF7 , our dataset specifically focuses on them so their challenges can be studied.", "For answering such questions, we treat the problem as mapping language to a structured formalism (semantic parsing) where simple qualitative reasoning can occur. Semantic parsing has a long history BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , using datasets about geography BIBREF8 , travel booking BIBREF12 , factoid QA over knowledge bases BIBREF10 , Wikipedia tables BIBREF13 , and many more. Our contributions to this line of research are: a dataset that features phenomena under-represented in prior datasets, namely (1) highly diverse language describing open-domain qualitative problems, and (2) the need to reason over entities that have no explicit formal representation; and methods for adapting existing semantic parsers to address these phenomena.", "For the target formalism itself, we draw on the extensive body of work on qualitative reasoning BIBREF0 , BIBREF1 , BIBREF2 to create a logical form language that can express the required qualitative knowledge, yet is sufficiently constrained that parsing into it is feasible, described in more detail in Section SECREF3 .", "There has been some work connecting language with qualitative reasoning, although mainly focused on extracting qualitative models themselves from text rather than question interpretation, e.g., BIBREF14 , BIBREF15 . Recent work by BIBREF16 crouse2018learning also includes interpreting questions that require identifying qualitative processes in text, in constrast to our setting of interpreting NL story questions that involve qualitative comparisons.", "Answering story problems has received attention in the domain of arithmetic, where simple algebra story questions (e.g., “Sue had 5 cookies, then gave 2 to Joe...”) are mapped to a system of equations, e.g., BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . This task is loosely analogous to ours (we instead map to qualitative relations) except that in arithmetic the entities to relate are often identifiable (namely, the numbers). Our qualitative story questions lack this structure, adding an extra challenge.", "The QuaRel dataset shares some structure with the Winograd Schema Challenge BIBREF21 , being 2-way multiple choice questions invoking both commonsense and coreference. However, they test different aspects of commonsense: Winograd uses coreference resolution to test commonsense understanding of scenarios, while QuaRel tests reasoning about qualitative relationships requiring tracking of coreferent “worlds.”", "Finally, crowdsourcing datasets has become a driving force in AI, producing significant progress, e.g., BIBREF3 , BIBREF22 , BIBREF23 . However, for semantic parsing tasks, one obstacle has been the difficulty in crowdsourcing target logical forms for questions. Here, we show how those logical forms can be obtained indirectly from workers without training the workers in the formalism, loosely similar to BIBREF24 ." ], [ "We first describe our framework for representing questions and the knowledge to answer them. Our dataset, described later, includes logical forms expressed in this language." ], [ "We use a simple representation of qualitative relationships, leveraging prior work in qualitative reasoning BIBREF0 . Let INLINEFORM0 be the set of properties relevant to the question set's domain (e.g., smoothness, friction, speed). 
Let INLINEFORM1 be a set of qualitative values for property INLINEFORM2 (e.g., fast, slow). For the background knowledge about the domain itself (a qualitative model), following BIBREF0 Forbus1984QualitativePT, we use the following predicates: [vskip=1mm,leftmargin=5mm] q+(property1, property2)", "q-(property1, property2) q+ denotes that property1 and property2 are qualitatively proportional, e.g., if property1 goes up, property2 will too, while q- denotes inverse proportionality, e.g., [vskip=1mm,leftmargin=5mm] # If friction goes up, speed goes down.", "q-(friction, speed). We also introduce the predicate: [vskip=1mm,leftmargin=5mm] higher-than( INLINEFORM0 , INLINEFORM1 , property INLINEFORM2 ) where INLINEFORM3 , allowing an ordering of property values to be specified, e.g., higher-than(fast, slow, speed). For our purposes here, we simplify to use just two property values, low and high, for all properties. (The parser learns mappings from words to these values, described later).", "Given these primitives, compact theories can be authored for a particular domain by choosing relevant properties INLINEFORM0 , and specifying qualitative relationships (q+,q-) and ordinal values (higher-than) for them. For example, a simple theory about friction is sketched graphically in Figure FIGREF3 . Our observation is that these theories are relatively small, simple, and easy to author. Rather, the primary challenge is in mapping the complex and varied language of questions into a form that interfaces with this representation.", "This language can be extended to include additional primitives from qualitative modeling, e.g., i+(x,y) (“the rate of change of x is qualitatively proportional to y”). That is, the techniques we present are not specific to our particular qualitative modeling subset. The only requirement is that, given a set of absolute values or qualitative relationships from a question, the theory can compute an answer." ], [ "A key feature of our representation is the conceptualization of questions as describing events happening in two worlds, world1 and world2, that are being compared. That comparison may be between two different entities, or the same entity at different time points. E.g., in Figure FIGREF1 the two worlds being compared are the car on wood, and the car on carpet. The tags world1 and world2 denote these different situations, and semantic parsing (Section SECREF5 ) requires learning to correctly associate these tags with parts of the question describing those situations. This abstracts away irrelevant details of the worlds, while still keeping track of which world is which.", "We define the following two predicates to express qualitative information in questions: [vskip=1mm,leftmargin=5mm] qrel(property, direction, world)", "qval(property, value, world) where property ( INLINEFORM0 ) INLINEFORM1 P, value INLINEFORM2 INLINEFORM3 , direction INLINEFORM4 {higher, lower}, and world INLINEFORM5 {world1, world2}. qrel() denotes the relative assertion that property is higher/lower in world compared with the other world, which is left implicit, e.g., from Figure FIGREF1 : [vskip=1mm,leftmargin=5mm] # The car rolls further on wood.", "qrel(distance, higher, world1) where world1 is a tag for the “car on wood” situation (hence world2 becomes a tag for the opposite “car on carpet” situation). 
qval() denotes that property has an absolute value in world, e.g., [vskip=1mm,leftmargin=5mm] # The car's speed is slow on carpet.", "qval(speed, low, world2)" ], [ "Despite the wide variation in language, the space of logical forms (LFs) for the questions that we consider is relatively compact. In each question, the question body establishes a scenario and each answer option then probes an implication. We thus express a question's LF as a tuple: [vskip=1mm,leftmargin=5mm] (setup, answer-A, answer-B) where setup is the predicate(s) describing the scenario, and answer-* are the predicate(s) being queried for. If answer-A follows from setup, as inferred by the reasoner, then the answer is (A); similarly for (B). For readability we will write this as [vskip=1mm,leftmargin=5mm] setup INLINEFORM0 answer-A ; answer-B We consider two styles of LF, covering a large range of questions. The first is: [vskip=1mm,leftmargin=5mm] (1) qrel( INLINEFORM1 ) INLINEFORM2 ", " qrel( INLINEFORM0 ) ; qrel( INLINEFORM1 ) which deals with relative values of properties between worlds, and applies when the question setup includes a comparative. An example of this is in Figure FIGREF1 . The second is: [vskip=1mm,leftmargin=5mm] (2) qval( INLINEFORM2 ), qval( INLINEFORM3 ) INLINEFORM4 ", " qrel( INLINEFORM0 ) ; qrel( INLINEFORM1 ) which deals with absolute values of properties, and applies when the setup uses absolute terms instead of comparatives. An example is the first question in Figure FIGREF4 , shown simplified below, whose LF looks as follows (colors showing approximate correspondences): [vskip=1mm,leftmargin=5mm] # Does a bar stool orangeslide faster along the redbar surface with tealdecorative raised bumps or the magentasmooth wooden bluefloor? (A) redbar (B) bluefloor", "", "qval(tealsmoothness, low, redworld1),", "qval(magentasmoothness, high, blueworld2) INLINEFORM0 ", " qrel(orangespeed, higher, redworld1) ;", " qrel(orangespeed, higher, blueworld2)" ], [ "A small set of rules for qualitative reasoning connects these predicates together. For example, (in logic) if the value of P is higher in world1 than the value of P in world2 and q+(P,Q) then the value of Q will be higher in world1 than the value of Q in world2. Given a question's logical form, a qualitative model, and these rules, a Prolog-style inference engine determines which answer option follows from the premise." ], [ "QuaRel is a crowdsourced dataset of 2771 multiple-choice story questions, including their logical forms. The size of the dataset is similar to several other datasets with annotated logical forms used for semantic parsing BIBREF8 , BIBREF25 , BIBREF24 . As the space of LFs is constrained, the dataset is sufficient for a rich exploration of this space.", "We crowdsourced multiple-choice questions in two parts, encouraging workers to be imaginative and varied in their use of language. First, workers were given a seed qualitative relation q+/-( INLINEFORM0 ) in the domain, expressed in English (e.g., “If a surface has more friction, then an object will travel slower”), and asked to enter two objects, people, or situations to compare. They then created a question, guided by a large number of examples, and were encouraged to be imaginative and use their own words. The results are a remarkable variety of situations and phrasings (Figure FIGREF4 ).", "Second, the LFs were elicited using a novel technique of reverse-engineering them from a set of follow-up questions, without exposing workers to the underlying formalism. 
This is possible because of the constrained space of LFs. Referring to LF templates (1) and (2) earlier (Section SECREF13 ), these questions are as follows:", "From this information, we can deduce the target LF ( INLINEFORM0 is the complement of INLINEFORM1 , INLINEFORM2 , we arbitrarily set INLINEFORM3 =world1, hence all other variables can be inferred). Three independent workers answer these follow-up questions to ensure reliable results.", "We also had a human answer the questions in the dev partition (in principle, they should all be answerable). The human scored 96.4%, the few failures caused by occasional annotation errors or ambiguities in the question set itself, suggesting high fidelity of the content.", "About half of the dataset are questions about friction, relating five different properties (friction, heat, distance, speed, smoothness). These questions form a meaningful, connected subset of the dataset which we denote QuaRel INLINEFORM0 . The remaining questions involve a wide variety of 14 additional properties and their relations, such as “exercise intensity vs. sweat” or “distance vs. brightness”.", "Figure FIGREF4 shows typical examples of questions in QuaRel, and Table TABREF26 provides summary statistics. In particular, the vocabulary is highly varied (5226 unique words), given the dataset size. Figure FIGREF27 shows some examples of the varied phrases used to describe smoothness." ], [ "We use four systems to evaluate the difficulty of this dataset. (We subsequently present two new models, extending the baseline neural semantic parser, in Sections SECREF36 and SECREF44 ). The first two are an information retrieval system and a word-association method, following the designs of BIBREF26 Clark2016CombiningRS. These are naive baselines that do not parse the question, but nevertheless may find some signal in a large corpus of text that helps guess the correct answer. The third is a CCG-style rule-based semantic parser written specifically for friction questions (the QuaRel INLINEFORM0 subset), but prior to data being collected. The last is a state-of-the-art neural semantic parser. We briefly describe each in turn." ], [ "We ran the above systems on the QuaRel dataset. QuaSP was trained on the training set, using the model with highest parse accuracy on the dev set (similarly BiLSTM used highest answer accuracy on the dev set) . The results are shown in Table TABREF34 . The 95% confidence interval is +/- 4% on the full test set. The human score is the sanity check on the dev set (Section SECREF4 ).", "As Table TABREF34 shows, the QuaSP model performs better than other baseline approaches which are only slightly above random. QuaSP scores 56.1% (61.7% on the friction subset), indicating the challenges of this dataset.", "For the rule-based system, we observe that it is unable to parse the majority (66%) of questions (hence scoring 0.5 for those questions, reflecting a random guess), due to the varied and unexpected vocabulary present in the dataset. For example, Figure FIGREF27 shows some of the ways that the notion of “smoother/rougher” is expressed in questions, many of which are not covered by the hand-written CCG grammar. This reflects the typical brittleness of hand-built systems.", "For QuaSP, we also analyzed the parse accuracies, shown in Table TABREF35 , the score reflecting the percentage of times it produced exactly the right logical form. 
The random baseline for parse accuracy is near zero given the large space of logical forms, while the model parse accuracies are relatively high, much better than a random baseline.", "Further analysis of the predicted LFs indicates that the neural model does well at predicting the properties ( INLINEFORM0 25% of errors on dev set), but struggles to predict the worlds in the LFs reliably ( INLINEFORM1 70% of errors on dev set). This helps explain why non-trivial parse accuracy does not necessarily translate into correspondingly higher answer accuracy: If only the world assignment is wrong, the answer will flip and give a score of zero, rather than the average 0.5." ], [ "We now present two new models, both extensions of the neural baseline QuaSP. The first, QuaSP+, addresses the leading cause of failure just described, namely the problem of identifying the two worlds being compared, and significantly outperforms all the baseline systems. The second, QuaSP+Zero, addresses the scaling problem, namely the costly requirement of needing many training examples each time a new qualitative property is introduced. It does this by instead using only a small amount of lexical information about the new property, thus achieving “zero shot” performance, i.e., handling properties unseen in the training examples BIBREF34 , a capability not present in the baseline systems. We present the models and results for each." ], [ "We define the world tracking problem as identifying and tracking references to different “worlds” being compared in text, i.e., correctly mapping phrases to world identifiers, a critical aspect of the semantic parsing task. There are three reasons why this is challenging. First, unlike properties, the worlds being compared in questions are distinct in almost every question, and thus there is no obvious, learnable mapping from phrases to worlds. For example, while a property (like speed) has learnable ways to refer to it (“faster”, “moves rapidly”, “speeds”, “barely moves”), worlds are different in each question (e.g., “on a road”, “countertop”, “while cutting grass”) and thus learning to identify them is hard. Second, different phrases may be used to refer to the same world in the same question (see Figure FIGREF43 ), further complicating the task. Finally, even if the model could learn to identify worlds in other ways, e.g., by syntactic position in the question, there is the problem of selecting world1 or world2 consistently throughout the parse, so that the equivalent phrasings are assigned the same world.", "This problem of mapping phrases to world identifiers is similar to the task of entity linking BIBREF35 . In prior semantic parsing work, entity linking is relatively straightforward: simple string-matching heuristics are often sufficient BIBREF36 , BIBREF37 , or an external entity linking system can be used BIBREF38 , BIBREF39 . In QuaRel, however, because the phrases denoting world1 and world2 are different in almost every question, and the word “world” is never used, such methods cannot be applied.", "To address this, we have developed QuaSP+, a new model that extends QuaSP by adding an extra initial step to identify and delexicalize world references in the question. In this delexicalization process, potentially new linguistic descriptions of worlds are replaced by canonical tokens, creating the opportunity for the model to generalize across questions. 
For example, the world mentions in the question: “A ball rolls further on wood than carpet because the (A) carpet is smoother (B) wood is smoother” are delexicalized to: “A ball rolls further on World1 than World2 because the (A) World2 is smoother (B) World1 is smoother”. This approach is analogous to BIBREF40 Herzig2018DecouplingSA, who delexicalized words to POS tags to avoid memorization. Similar delexicalized features have also been employed in Open Information Extraction BIBREF41 , so the Open IE system could learn a general model of how relations are expressed. In our case, however, delexicalizing to World1 and World2 is itself a significant challenge, because identifying phrases referring to worlds is substantially more complex than (say) identifying parts of speech.", "To perform this delexicalization step, we use the world annotations included as part of the training dataset (Section SECREF4 ) to train a separate tagger to identify “world mentions” (text spans) in the question using BIO tags (BiLSTM encoder followed by a CRF). The spans are then sorted into World1 and World2 using the following algorithm (a rough procedural sketch is given at the end of this record):", "If one span is a substring of another, they are grouped together. Remaining spans are singleton groups.", "The two groups containing the longest spans are labeled as the two worlds being compared.", "Any additional spans are assigned to one of these two groups based on closest edit distance (or ignored if zero overlap).", "The group appearing first in the question is labeled World1, the other World2.", "The result is a question in which world mentions are canonicalized. The semantic parser QuaSP is then trained using these questions. We call the combined system (delexicalization plus semantic parser) QuaSP+.", "The results for QuaSP+ are included in Table TABREF34 . Most importantly, QuaSP+ significantly outperforms the baselines by over 12% absolute. Similarly, the parse accuracies are significantly improved from 32.2% to 43.8% (Table TABREF35 ). This suggests that this delexicalization technique is an effective way of making progress on this dataset, and more generally on problems where multiple situations are being compared, a common characteristic of qualitative problems." ], [ "While our delexicalization procedure demonstrates a way of addressing the world tracking problem, the approach still relies on annotated data; if we were to add new qualitative relations, new training data would be needed, which is a significant scalability obstacle. To address this, we define the zero-shot problem as being able to answer questions involving a new predicate p given training data only about other predicates P different from p. For example, if we add a new property (e.g., heat) to the qualitative model (e.g., adding q+(friction, heat); “more friction implies more heat”), we want to answer questions involving heat without creating new annotated training questions, and instead only use minimal extra information about the new property. A parser that achieved good zero-shot performance, i.e., worked well for new properties unseen at training time, would be a substantial advance, allowing a new qualitative model to link to questions with minimal effort.", "QuaRel provides an environment in which methods for this zero-shot theory extension can be devised and evaluated.
To do this, we consider the following experimental setting: All questions mentioning a particular property are removed, the parser is trained on the remainder, and then tested on those withheld questions, i.e., questions mentioning a property unseen in the training data.", "We present and evaluate a model that we have developed for this, called QuaSP+Zero, that modifies the QuaSP+ parser as follows: During decoding, at points where the parser is selecting which property to include in the LF (e.g., Figure FIGREF31 ), it does not just consider the question tokens, but also the relationship between those tokens and the properties INLINEFORM0 used in the qualitative model. For example, a question token such as “longer” can act as a cue for (the property) length, even if unseen in the training data, because “longer” and a lexical form of length (e.g.,“length”) are similar. This approach follows the entity-linking approach used by BIBREF11 Krishnamurthy2017NeuralSP, where the similarity between question tokens and (words associated with) entities - called the entity linking score - help decide which entities to include in the LF during parsing. Here, we modify their entity linking score INLINEFORM1 , linking question tokens INLINEFORM2 and property “entities” INLINEFORM3 , to be: INLINEFORM4 ", "where INLINEFORM0 is a diagonal matrix connecting the embedding of the question token INLINEFORM1 and words INLINEFORM2 associated with the property INLINEFORM3 . For INLINEFORM4 , we provide a small list of words for each property (such as “speed”, “velocity”, and “fast” for the speed property), a small-cost requirement.", "The results with QuaSP+Zero are in Table TABREF45 , shown in detail on the QuaRel INLINEFORM0 subset and (due to space constraints) summarized for the full QuaRel. We can measure overall performance of QuaSP+Zero by averaging each of the zero-shot test sets (weighted by the number of questions in each set), resulting in an overall parse accuracy of 38.9% and answer accuracy 61.0% on QuaRel INLINEFORM1 , and 25.7% (parse) and 59.5% (answer) on QuaRel, both significantly better than random. These initial results are encouraging, suggesting that it may be possible to parse into modified qualitative models that include new relations, with minimal annotation effort, significantly opening up qualitative reasoning methods for QA." ], [ "Our goal is to answer questions that involve qualitative relationships, an important genre of task that involves both language and knowledge, but also one that presents significant challenges for semantic parsing. To this end we have developed a simple and flexible formalism for representing these questions; constructed QuaRel, the first dataset of qualitative story questions that exemplifies these challenges; and presented two new models that adapt existing parsing techniques to this task. The first model, QuaSP+, illustrates how delexicalization can help with world tracking (identifying different “worlds” in questions), resulting in state-of-the-art performance on QuaRel. The second model, QuaSP+Zero, illustrates how zero-shot learning can be achieved (i.e., adding new qualitative relationships without requiring new training examples) by using an entity-linking approach applied to properties - a capability not present in previous models.", "There are several directions in which this work can be expanded. First, quantitative property values (e.g., “10 mph”) are currently not handled well, as their mapping to “low” or “high” is context-dependent. 
Second, some questions do not fit our two question templates (Section SECREF13 ), e.g., where two property values are a single answer option (e.g., “....(A) one floor is smooth and the other floor is rough”). Finally, some questions include an additional level of indirection, requiring an inference step to map to qualitative relations. For example, “Which surface would be best for a race? (A) gravel (B) blacktop” requires the additional commonsense inference that “best for a race” implies “higher speed”.", "Given the ubiquity of qualitative comparisons in natural text, recognizing and reasoning with qualitative relationships is likely to remain an important task for AI. This work makes inroads into this task, and contributes a dataset and models to encourage progress by others. The dataset and models are publicly available at http://data.allenai.org/quarel." ] ] }
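As a companion to the representation described in this record: the propagation rule stated in the Inference section ("if P is higher in one world and q+(P, Q), then Q is higher in that world too"; with q- the direction flips) is simple enough to sketch directly. The snippet below is an illustration, not the authors' Prolog-style reasoner, and it encodes only the two relations explicitly named in the text and in the Figure 2 caption, q-(smoothness, friction) and q-(friction, speed).

```python
def flip(direction):
    return "lower" if direction == "higher" else "higher"

def propagate(prop, direction, q_plus, q_minus):
    """All (property -> direction) facts derivable from qrel(prop, direction, w).

    The world tag is carried through unchanged by propagation, so it is left
    implicit here; qualitative proportionality is treated as symmetric.
    """
    derived = {prop: direction}
    changed = True
    while changed:
        changed = False
        for relations, same_direction in ((q_plus, True), (q_minus, False)):
            for p, q in relations:
                for a, b in ((p, q), (q, p)):
                    if a in derived and b not in derived:
                        derived[b] = derived[a] if same_direction else flip(derived[a])
                        changed = True
    return derived

q_plus = []
q_minus = [("smoothness", "friction"),  # more smoothness -> less friction
           ("friction", "speed")]       # more friction   -> less speed

# qrel(smoothness, higher, world1) entails qrel(speed, higher, world1):
print(propagate("smoothness", "higher", q_plus, q_minus)["speed"])  # 'higher'
```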
{ "question": [ "How does the QuaSP+Zero model work?", "Which off-the-shelf tools do they use on QuaRel?", "How do they obtain the logical forms of their questions in their dataset?", "Do all questions in the dataset allow the answers to pick from 2 options?" ], "question_id": [ "ce14b87dacfd5206d2a5af7c0ed1cfeb7b181922", "709a4993927187514701fe3cc491ac3030da1215", "a3c6acf900126bc9bd9c50ce99041ea00761da6a", "31b631a8634f6180b20a72477040046d1e085494" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "does not just consider the question tokens, but also the relationship between those tokens and the properties" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We present and evaluate a model that we have developed for this, called QuaSP+Zero, that modifies the QuaSP+ parser as follows: During decoding, at points where the parser is selecting which property to include in the LF (e.g., Figure FIGREF31 ), it does not just consider the question tokens, but also the relationship between those tokens and the properties INLINEFORM0 used in the qualitative model. For example, a question token such as “longer” can act as a cue for (the property) length, even if unseen in the training data, because “longer” and a lexical form of length (e.g.,“length”) are similar. This approach follows the entity-linking approach used by BIBREF11 Krishnamurthy2017NeuralSP, where the similarity between question tokens and (words associated with) entities - called the entity linking score - help decide which entities to include in the LF during parsing. Here, we modify their entity linking score INLINEFORM1 , linking question tokens INLINEFORM2 and property “entities” INLINEFORM3 , to be: INLINEFORM4" ], "highlighted_evidence": [ "We present and evaluate a model that we have developed for this, called QuaSP+Zero, that modifies the QuaSP+ parser as follows: During decoding, at points where the parser is selecting which property to include in the LF (e.g., Figure FIGREF31 ), it does not just consider the question tokens, but also the relationship between those tokens and the properties INLINEFORM0 used in the qualitative model.", "This approach follows the entity-linking approach used by BIBREF11 Krishnamurthy2017NeuralSP, where the similarity between question tokens and (words associated with) entities - called the entity linking score - help decide which entities to include in the LF during parsing." ] } ], "annotation_id": [ "67ef1d513e97769fbe925a2e1af2274c7b6f46a8" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "information retrieval system", "word-association method", " CCG-style rule-based semantic parser written specifically for friction questions", "state-of-the-art neural semantic parser" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use four systems to evaluate the difficulty of this dataset. (We subsequently present two new models, extending the baseline neural semantic parser, in Sections SECREF36 and SECREF44 ). 
The first two are an information retrieval system and a word-association method, following the designs of BIBREF26 Clark2016CombiningRS. These are naive baselines that do not parse the question, but nevertheless may find some signal in a large corpus of text that helps guess the correct answer. The third is a CCG-style rule-based semantic parser written specifically for friction questions (the QuaRel INLINEFORM0 subset), but prior to data being collected. The last is a state-of-the-art neural semantic parser. We briefly describe each in turn." ], "highlighted_evidence": [ "We use four systems to evaluate the difficulty of this dataset.", " The first two are an information retrieval system and a word-association method, following the designs of BIBREF26 Clark2016CombiningRS. These are naive baselines that do not parse the question, but nevertheless may find some signal in a large corpus of text that helps guess the correct answer. The third is a CCG-style rule-based semantic parser written specifically for friction questions (the QuaRel INLINEFORM0 subset), but prior to data being collected. The last is a state-of-the-art neural semantic parser. We briefly describe each in turn." ] } ], "annotation_id": [ "213ccb84972b6ec22e4b1092862cda7317a5f699" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " workers were given a seed qualitative relation", "asked to enter two objects, people, or situations to compare", "created a question, guided by a large number of examples", "LFs were elicited using a novel technique of reverse-engineering them from a set of follow-up questions" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We crowdsourced multiple-choice questions in two parts, encouraging workers to be imaginative and varied in their use of language. First, workers were given a seed qualitative relation q+/-( INLINEFORM0 ) in the domain, expressed in English (e.g., “If a surface has more friction, then an object will travel slower”), and asked to enter two objects, people, or situations to compare. They then created a question, guided by a large number of examples, and were encouraged to be imaginative and use their own words. The results are a remarkable variety of situations and phrasings (Figure FIGREF4 ).", "Second, the LFs were elicited using a novel technique of reverse-engineering them from a set of follow-up questions, without exposing workers to the underlying formalism. This is possible because of the constrained space of LFs. Referring to LF templates (1) and (2) earlier (Section SECREF13 ), these questions are as follows:", "From this information, we can deduce the target LF ( INLINEFORM0 is the complement of INLINEFORM1 , INLINEFORM2 , we arbitrarily set INLINEFORM3 =world1, hence all other variables can be inferred). Three independent workers answer these follow-up questions to ensure reliable results." ], "highlighted_evidence": [ "First, workers were given a seed qualitative relation q+/-( INLINEFORM0 ) in the domain, expressed in English (e.g., “If a surface has more friction, then an object will travel slower”), and asked to enter two objects, people, or situations to compare. They then created a question, guided by a large number of examples, and were encouraged to be imaginative and use their own words.", "Second, the LFs were elicited using a novel technique of reverse-engineering them from a set of follow-up questions, without exposing workers to the underlying formalism. 
This is possible because of the constrained space of LFs. Referring to LF templates (1) and (2) earlier (Section SECREF13 ), these questions are as follows:\n\nFrom this information, we can deduce the target LF ( INLINEFORM0 is the complement of INLINEFORM1 , INLINEFORM2 , we arbitrarily set INLINEFORM3 =world1, hence all other variables can be inferred)." ] } ], "annotation_id": [ "8ad294ab24414fc701dc137ce75a52b83dc12780" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "We crowdsourced multiple-choice questions in two parts, encouraging workers to be imaginative and varied in their use of language. First, workers were given a seed qualitative relation q+/-( INLINEFORM0 ) in the domain, expressed in English (e.g., “If a surface has more friction, then an object will travel slower”), and asked to enter two objects, people, or situations to compare. They then created a question, guided by a large number of examples, and were encouraged to be imaginative and use their own words. The results are a remarkable variety of situations and phrasings (Figure FIGREF4 )." ], "highlighted_evidence": [ "We crowdsourced multiple-choice questions in two parts, encouraging workers to be imaginative and varied in their use of language. First, workers were given a seed qualitative relation q+/-( INLINEFORM0 ) in the domain, expressed in English (e.g., “If a surface has more friction, then an object will travel slower”), and asked to enter two objects, people, or situations to compare." ] } ], "annotation_id": [ "15ebd96b0f67e95b2e83b3fe999f03a146961685" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 2: A simple qualitative theory about friction, shown graphically (left) and formally (right). For example, q-(smoothness,friction) indicates that if smoothness increases, friction decreases.", "Figure 5: The QUASP parser decodes to a sequence of LFbuilding decisions, incrementally constructing the LF by selecting production rules from the LF grammar. As illustrated, first it decides if the LF should be of type 1 or 2 (here, type 1 is chosen), then it selects the the property for the question body (here, distance), then it selects the direction of change (here, higher), and so on.", "Table 1: Summary statistics for the QUAREL dataset.", "Figure 4: Examples of the varied way that smoother/rougher surfaces are described in QUAREL questions.", "Table 2: Scores (answer accuracies) of the different models on the full QUAREL dataset and QUARELF subset about friction. The baseline models only marginally outperform a random baseline. In QUASP+, however, identifying and delexicalizing the worlds significantly improves the performance (see Section 7.1).", "Table 3: Parse accuracies for the semantic parsers.", "Figure 6: Examples of different linguistic expressions of the same world in a question.", "Table 4: Baseline scores (bold) using QUASP+ZERO for the zero-shot task of answering questions involving properties unseen in the training data, using the QUARELF subset of QUAREL. For the entire QUAREL dataset, the weighted average scores for questions with unseen properties are 25.7% (parse) and 59.5% (answer)." ], "file": [ "3-Figure2-1.png", "5-Figure5-1.png", "5-Table1-1.png", "5-Figure4-1.png", "6-Table2-1.png", "6-Table3-1.png", "7-Figure6-1.png", "8-Table4-1.png" ] }
1904.10500
Natural Language Interactions in Autonomous Vehicles: Intent Detection and Slot Filling from Passenger Utterances
Understanding passenger intents and extracting relevant slots are important building blocks towards developing contextual dialogue systems for natural interactions in autonomous vehicles (AV). In this work, we explored AMIE (Automated-vehicle Multi-modal In-cabin Experience), the in-cabin agent responsible for handling certain passenger-vehicle interactions. When the passengers give instructions to AMIE, the agent should parse such commands properly and trigger the appropriate functionality of the AV system. In our current explorations, we focused on AMIE scenarios describing usages around setting or changing the destination and route, updating driving behavior or speed, finishing the trip and other use-cases to support various natural commands. We collected a multi-modal in-cabin dataset with multi-turn dialogues between the passengers and AMIE using a Wizard-of-Oz scheme via a realistic scavenger hunt game activity. After exploring various recent Recurrent Neural Networks (RNN) based techniques, we introduced our own hierarchical joint models to recognize passenger intents along with relevant slots associated with the action to be performed in AV scenarios. Our experimental results outperformed certain competitive baselines and achieved overall F1 scores of 0.91 for utterance-level intent detection and 0.96 for slot filling tasks. In addition, we conducted initial speech-to-text explorations by comparing intent/slot models trained and tested on human transcriptions versus noisy Automatic Speech Recognition (ASR) outputs. Finally, we compared the results with single passenger rides versus the rides with multiple passengers.
{ "section_name": [ "Introduction", "Background", "Data Collection and Annotation", "Detecting Utterance-level Intent Types", "Utterance-Level Intent Detection Experiments", "Slot Filling and Intent Keyword Extraction Experiments", "Speech-to-Text Experiments for AMIE: Training and Testing Models on ASR Outputs", "Discussion and Conclusion" ], "paragraphs": [ [ "One of the exciting yet challenging areas of research in Intelligent Transportation Systems is developing context-awareness technologies that can enable autonomous vehicles to interact with their passengers, understand passenger context and situations, and take appropriate actions accordingly. To this end, building multi-modal dialogue understanding capabilities situated in the in-cabin context is crucial to enhance passenger comfort and gain user confidence in AV interaction systems. Among many components of such systems, intent recognition and slot filling modules are one of the core building blocks towards carrying out successful dialogue with passengers. As an initial attempt to tackle some of those challenges, this study introduce in-cabin intent detection and slot filling models to identify passengers' intent and extract semantic frames from the natural language utterances in AV. The proposed models are developed by leveraging User Experience (UX) grounded realistic (ecologically valid) in-cabin dataset. This dataset is generated with naturalistic passenger behaviors, multiple passenger interactions, and with presence of a Wizard-of-Oz (WoZ) agent in moving vehicles with noisy road conditions." ], [ "Long Short-Term Memory (LSTM) networks BIBREF0 are widely-used for temporal sequence learning or time-series modeling in Natural Language Processing (NLP). These neural networks are commonly employed for sequence-to-sequence (seq2seq) and sequence-to-one (seq2one) modeling problems, including slot filling tasks BIBREF1 and utterance-level intent classification BIBREF2 , BIBREF3 which are well-studied for various application domains. Bidirectional LSTMs (Bi-LSTMs) BIBREF4 are extensions of traditional LSTMs which are proposed to improve model performance on sequence classification problems even further. Jointly modeling slot extraction and intent recognition BIBREF2 , BIBREF5 is also explored in several architectures for task-specific applications in NLP. Using Attention mechanism BIBREF6 , BIBREF7 on top of RNNs is yet another recent break-through to elevate the model performance by attending inherently crucial sub-modules of given input. There exist various architectures to build hierarchical learning models BIBREF8 , BIBREF9 , BIBREF10 for document-to-sentence level, and sentence-to-word level classification tasks, which are highly domain-dependent and task-specific.", "Automatic Speech Recognition (ASR) technology has recently achieved human-level accuracy in many fields BIBREF11 , BIBREF12 . For spoken language understanding (SLU), it is shown that training SLU models on true text input (i.e., human transcriptions) versus noisy speech input (i.e., ASR outputs) can achieve varying results BIBREF13 . Even greater performance degradations are expected in more challenging and realistic setups with noisy environments, such as moving vehicles in actual traffic conditions. As an example, a recent work BIBREF14 attempts to classify sentences as navigation-related or not using the DARPA supported CU-Move in-vehicle speech corpus BIBREF15 , a relatively old and large corpus focusing on route navigation. 
For this binary intent classification task, the authors observed that detection performances are largely affected by high ASR error rates due to background noise and multi-speakers in CU-Move dataset (not publicly available). For in-cabin dialogue between car assistants and driver/passengers, recent studies explored creating a public dataset using a WoZ approach BIBREF16 , and improving ASR for passenger speech recognition BIBREF17 .", "A preliminary report on research designed to collect data for human-agent interactions in a moving vehicle is presented in a previous study BIBREF18 , with qualitative analysis on initial observations and user interviews. Our current study is focused on the quantitative analysis of natural language interactions found in this in-vehicle dataset BIBREF19 , where we address intent detection and slot extraction tasks for passengers interacting with the AMIE in-cabin agent.", "In this study, we propose a UX grounded realistic intent recognition and slot filling models with naturalistic passenger-vehicle interactions in moving vehicles. Based on observed interactions, we defined in-vehicle intent types and refined their relevant slots through a data driven process. After exploring existing approaches for jointly training intents and slots, we applied certain variations of these models that perform best on our dataset to support various natural commands for interacting with the car-agent. The main differences in our proposed models can be summarized as follows: (1) Using the extracted intent keywords in addition to the slots to jointly model them with utterance-level intents (where most of the previous work BIBREF8 , BIBREF9 only join slots and utterance-level intents, ignoring the intent keywords); (2) The 2-level hierarchy we defined by word-level detection/extraction for slots and intent keywords first, and then filtering-out predicted non-slot and non-intent keywords instead of feeding them into the upper levels of the network (i.e., instead of using stacked RNNs with multiple recurrent hidden layers for the full utterance BIBREF9 , BIBREF10 , which are computationally costly for long utterances with many non-slot & non-intent-related words), and finally using only the predicted valid-slots and intent-related keywords as an input to the second level of the hierarchy; (3) Extending joint models BIBREF2 , BIBREF5 to include both beginning-of-utterance and end-of-utterance tokens to leverage Bi-LSTMs (after observing that we achieved better results by doing so). We compared our intent detection and slot filling results with the results obtained from Dialogflow, a commercially available intent-based dialogue system by Google, and showed that our proposed models perform better for both tasks on the same dataset. We also conducted initial speech-to-text explorations by comparing models trained and tested (10-fold CV) on human transcriptions versus noisy ASR outputs (via Cloud Speech-to-Text). Finally, we compared the results with single passenger rides versus the rides with multiple passengers." ], [ "Our AV in-cabin dataset includes around 30 hours of multi-modal data collected from 30 passengers (15 female, 15 male) in a total of 20 rides/sessions. In 10 sessions, single passenger was present (i.e., singletons), whereas the remaining 10 sessions include two passengers (i.e., dyads) interacting with the vehicle. The data is collected \"in the wild\" on the streets of Richmond, British Columbia, Canada. Each ride lasted about 1 hour or more. 
The vehicle is modified to hide the operator and the human acting as in-cabin agent from the passengers, using a variation of WoZ approach BIBREF20 . Participants sit in the back of the car and are separated by a semi-sound proof and translucent screen from the human driver and the WoZ AMIE agent at the front. In each session, the participants were playing a scavenger hunt game by receiving instructions over the phone from the Game Master. Passengers treat the car as AV and communicate with the WoZ AMIE agent via speech commands. Game objectives require passengers to interact naturally with the agent to go to certain destinations, update routes, stop the vehicle, give specific directions regarding where to pull over or park (sometimes with gesture), find landmarks, change speed, get in and out of the vehicle, etc. Further details of the data collection design and scavenger hunt protocol can be found in the preliminary study BIBREF18 . See Fig. FIGREF6 for the vehicle instrumentation to enhance multi-modal data collection setup. Our study is the initial work on this multi-modal dataset to develop intent detection and slot filling models, where we leveraged from the back-driver video/audio stream recorded by an RGB camera (facing towards the passengers) for manual transcription and annotation of in-cabin utterances. In addition, we used the audio data recorded by Lapel 1 Audio and Lapel 2 Audio (Fig. FIGREF6 ) as our input resources for the ASR.", "For in-cabin intent understanding, we described 4 groups of usages to support various natural commands for interacting with the vehicle: (1) Set/Change Destination/Route (including turn-by-turn instructions), (2) Set/Change Driving Behavior/Speed, (3) Finishing the Trip Use-cases, and (4) Others (open/close door/window/trunk, turn music/radio on/off, change AC/temperature, show map, etc.). According to those scenarios, 10 types of passenger intents are identified and annotated as follows: SetDestination, SetRoute, GoFaster, GoSlower, Stop, Park, PullOver, DropOff, OpenDoor, and Other. For slot filling task, relevant slots are identified and annotated as: Location, Position/Direction, Object, Time Guidance, Person, Gesture/Gaze (e.g., `this', `that', `over there', etc.), and None/O. In addition to utterance-level intents and slots, word-level intent related keywords are annotated as Intent. We obtained 1331 utterances having commands to AMIE agent from our in-cabin dataset. We expanded this dataset via the creation of similar tasks on Amazon Mechanical Turk BIBREF21 and reached 3418 utterances with intents in total. Intent and slot annotations are obtained on the transcribed utterances by majority voting of 3 annotators. Those annotation results for utterance-level intent types, slots and intent keywords can be found in Table TABREF7 and Table TABREF8 as a summary of dataset statistics." ], [ "As a baseline system, we implemented term-frequency and rule-based mapping mechanisms from word-level intent keywords extraction to utterance-level intent recognition. To further improve the utterance-level performance, we explored various RNN architectures and developed a hierarchical (2-level) models to recognize passenger intents along with relevant entities/slots in utterances. 
Our hierarchical model has the following 2-levels:", "Level-1: Word-level extraction (to automatically detect/predict and eliminate non-slot & non-intent keywords first, as they would not carry much information for understanding the utterance-level intent-type).", "Level-2: Utterance-level recognition (to detect final intent-types from given utterances, by using valid slots and intent keywords as inputs only, which are detected at Level-1).", "In this study, we employed an RNN architecture with LSTM cells that are designed to exploit long range dependencies in sequential data. LSTM has memory cell state to store relevant information and various gates, which can mitigate the vanishing gradient problem BIBREF0 . Given the input INLINEFORM0 at time INLINEFORM1 , and hidden state from the previous time step INLINEFORM2 , the hidden and output layers for the current time step are computed. The LSTM architecture is specified by the following equations: DISPLAYFORM0 ", " where INLINEFORM0 and INLINEFORM1 denote the weight matrices and bias terms, respectively. The sigmoid ( INLINEFORM2 ) and INLINEFORM3 are activation functions applied element-wise, and INLINEFORM4 denotes the element-wise vector product. LSTM has a memory vector INLINEFORM5 to read/write or reset using a gating mechanism and activation functions. Here, input gate INLINEFORM6 scales down the input, the forget gate INLINEFORM7 scales down the memory vector INLINEFORM8 , and the output gate INLINEFORM9 scales down the output to achieve final INLINEFORM10 , which is used to predict INLINEFORM11 (through a INLINEFORM12 activation). Similar to LSTMs, GRUs BIBREF22 are proposed as a simpler and faster alternative, having only reset and update gates. For Bi-LSTM BIBREF4 , BIBREF2 , two LSTM architectures are traversed in forward and backward directions, where their hidden layers are concatenated to compute the output.", "For slot filling and intent keywords extraction, we experimented with various configurations of seq2seq LSTMs BIBREF3 and GRUs BIBREF22 , as well as Bi-LSTMs BIBREF4 . A sample network architecture can be seen in Fig. FIGREF15 where we jointly trained slots and intent keywords. The passenger utterance is fed into LSTM/GRU network via an embedding layer as a sequence of words, which are transformed into word vectors. We also experimented with GloVe BIBREF23 , word2vec BIBREF24 , BIBREF25 , and fastText BIBREF26 as pre-trained word embeddings. To prevent overfitting, we used a dropout layer with 0.5 rate for regularization. Best performing results are obtained with Bi-LSTMs and GloVe embeddings (6B tokens, 400K vocabulary size, vector dimension 100).", "For utterance-level intent detection, we mainly experimented with 5 groups of models: (1) Hybrid: RNN + Rule-based, (2) Separate: Seq2one Bi-LSTM with Attention, (3) Joint: Seq2seq Bi-LSTM for slots/intent keywords & utterance-level intents, (4) Hierarchical & Separate, (5) Hierarchical & Joint. For (1), we detect/extract intent keywords and slots (via RNN) and map them into utterance-level intent-types (rule-based). For (2), we feed the whole utterance as input sequence and intent-type as single target into Bi-LSTM network with Attention mechanism. For (3), we jointly train word-level intent keywords/slots and utterance-level intents (by adding <BOU>/<EOU> terms to the beginning/end of utterances with intent-types as their labels). 
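To make the gate equations above concrete, the following is a minimal NumPy sketch of a single LSTM step with input, forget and output gates and a memory cell, as described in this section. The dimensions, random parameters and variable names are illustrative assumptions, not the configuration actually used for AMIE.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b stack the parameters for the input (i),
    forget (f) and output (o) gates and the cell candidate (g)."""
    d = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b          # shape (4*d,)
    i = sigmoid(z[0:d])                   # input gate: scales new content
    f = sigmoid(z[d:2*d])                 # forget gate: scales old memory
    o = sigmoid(z[2*d:3*d])               # output gate: scales exposed state
    g = np.tanh(z[3*d:4*d])               # candidate memory content
    c_t = f * c_prev + i * g              # updated memory cell
    h_t = o * np.tanh(c_t)                # hidden state used for prediction
    return h_t, c_t

# toy dimensions: 5-dim word vectors, 4-dim hidden state
rng = np.random.default_rng(0)
x_dim, h_dim = 5, 4
W = rng.normal(scale=0.1, size=(4 * h_dim, x_dim))
U = rng.normal(scale=0.1, size=(4 * h_dim, h_dim))
b = np.zeros(4 * h_dim)
h, c = np.zeros(h_dim), np.zeros(h_dim)
for x_t in rng.normal(size=(3, x_dim)):   # a 3-step toy sequence
    h, c = lstm_step(x_t, h, c, W, U, b)
print(h)
```

A Bi-LSTM simply runs this recurrence once left-to-right and once right-to-left over the utterance and concatenates the two hidden states at each time step.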
For (4) and (5), we detect/extract intent keywords/slots first, and then only feed the predicted keywords/slots as a sequence into (2) and (3), respectively." ], [ "The details of 5 groups of models and their variations that we experimented with for utterance-level intent recognition are summarized in this section.", "Instead of purely relying on machine learning (ML) or deep learning (DL) system, hybrid models leverage both ML/DL and rule-based systems. In this model, we defined our hybrid approach as using RNNs first for detecting/extracting intent keywords and slots; then applying rule-based mapping mechanisms to identify utterance-level intents (using the predicted intent keywords and slots). A sample network architecture can be seen in Fig. FIGREF18 where we leveraged seq2seq Bi-LSTM networks for word-level extraction before the rule-based mapping to utterance-level intent classes. The model variations are defined based on varying mapping mechanisms and networks as follows:", "Hybrid-0: RNN (Seq2seq LSTM for intent keywords extraction) + Rule-based (mapping extracted intent keywords to utterance-level intents)", "Hybrid-1: RNN (Seq2seq Bi-LSTM for intent keywords extraction) + Rule-based (mapping extracted intent keywords to utterance-level intents)", "Hybrid-2: RNN (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Rule-based (mapping extracted intent keywords & ‘Position/Direction’ slots to utterance-level intents)", "Hybrid-3: RNN (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Rule-based (mapping extracted intent keywords & all slots to utterance-level intents)", "This approach is based on separately training sequence-to-one RNNs for utterance-level intents only. These are called separate models as we do not leverage any information from the slot or intent keyword tags (i.e., utterance-level intents are not jointly trained with slots/intent keywords). Note that in seq2one models, we feed the utterance as an input sequence and the LSTM layer will only return the hidden state output at the last time step. This single output (or concatenated output of last hidden states from the forward and backward LSTMs in Bi-LSTM case) will be used to classify the intent type of the given utterance. The idea behind is that the last hidden state of the sequence will contain a latent semantic representation of the whole input utterance, which can be utilized for utterance-level intent prediction. See Fig. FIGREF24 (a) for sample network architecture of the seq2one Bi-LSTM network. Note that in the Bi-LSTM implementation for seq2one learning (i.e., when not returning sequences), the outputs of backward/reverse LSTM is actually ordered in reverse time steps ( INLINEFORM0 ... INLINEFORM1 ). Thus, as illustrated in Fig. FIGREF24 (a), we actually concatenate the hidden state outputs of forward LSTM at last time step and backward LSTM at first time step (i.e., first word in a given utterance), and then feed this merged result to the dense layer. Fig. FIGREF24 (b) depicts the seq2one Bi-LSTM network with Attention mechanism applied on top of Bi-LSTM layers. For the Attention case, the hidden state outputs of all time steps are fed into the Attention mechanism that will allow to point at specific words in a sequence when computing a single output BIBREF6 . Another variation of Attention mechanism we examined is the AttentionWithContext, which incorporates a context/query vector jointly learned during the training process to assist the attention BIBREF7 . 
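The seq2one detail discussed above, concatenating the forward output at the last time step with the backward output aligned with the first word, can be sketched as follows. A plain tanh recurrence stands in for the LSTM cell here, and all dimensions, parameters and the softmax intent classifier on top are illustrative assumptions rather than the authors' actual network.

```python
import numpy as np

def run_rnn(inputs, W, U, b):
    """Toy recurrent pass (a plain tanh cell standing in for an LSTM)."""
    h = np.zeros(U.shape[0])
    states = []
    for x_t in inputs:
        h = np.tanh(W @ x_t + U @ h + b)
        states.append(h)
    return np.stack(states)                      # shape (T, h_dim)

def seq2one_bilstm_vector(embedded, fwd, bwd):
    h_fwd = run_rnn(embedded, *fwd)              # left-to-right pass
    h_bwd = run_rnn(embedded[::-1], *bwd)        # right-to-left pass
    # forward state at the LAST word + backward state aligned with the FIRST
    # word (the backward pass has read the whole utterance at that point)
    return np.concatenate([h_fwd[-1], h_bwd[-1]])

rng = np.random.default_rng(0)
x_dim, h_dim, n_intents, T = 8, 6, 10, 5
make = lambda: (rng.normal(scale=0.3, size=(h_dim, x_dim)),
                rng.normal(scale=0.3, size=(h_dim, h_dim)),
                np.zeros(h_dim))
utterance = rng.normal(size=(T, x_dim))          # embedded words of one utterance
u = seq2one_bilstm_vector(utterance, make(), make())
W_out = rng.normal(scale=0.3, size=(n_intents, 2 * h_dim))
logits = W_out @ u
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.argmax())                            # predicted intent index
```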
All seq2one model variations we experimented with can be summarized as follows:", "Separate-0: Seq2one LSTM for utterance-level intents", "Separate-1: Seq2one Bi-LSTM for utterance-level intents", "Separate-2: Seq2one Bi-LSTM with Attention BIBREF6 for utterance-level intents", "Separate-3: Seq2one Bi-LSTM with AttentionWithContext BIBREF7 for utterance-level intents", "Using sequence-to-sequence networks, the approach here is jointly training annotated utterance-level intents and slots/intent keywords by adding <BOU>/ <EOU> tokens to the beginning/end of each utterance, with utterance-level intent-type as labels of such tokens. Our approach is an extension of BIBREF2 , in which only an <EOS> term is added with intent-type tags associated to this sentence final token, both for LSTM and Bi-LSTM cases. However, we experimented with adding both <BOU> and <EOU> terms as Bi-LSTMs will be used for seq2seq learning, and we observed that slightly better results can be achieved by doing so. The idea behind is that, since this is a seq2seq learning problem, at the last time step (i.e., prediction at <EOU>) the reverse pass in Bi-LSTM would be incomplete (refer to Fig. FIGREF24 (a) to observe the last Bi-LSTM cell). Therefore, adding <BOU> token and leveraging the backward LSTM output at first time step (i.e., prediction at <BOU>) would potentially help for joint seq2seq learning. An overall network architecture can be found in Fig. FIGREF30 for our joint models. We will report the experimental results on two variations (with and without intent keywords) as follows:", "Joint-1: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots)", "Joint-2: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots & intent keywords)", "Proposed hierarchical models are detecting/extracting intent keywords & slots using sequence-to-sequence networks first (i.e., level-1), and then feeding only the words that are predicted as intent keywords & valid slots (i.e., not the ones that are predicted as ‘None/O’) as an input sequence to various separate sequence-to-one models (described above) to recognize final utterance-level intents (i.e., level-2). A sample network architecture is given in Fig. FIGREF34 (a). The idea behind filtering out non-slot and non-intent keywords here resembles providing a summary of input sequence to the upper levels of the network hierarchy, where we actually learn this summarized sequence of keywords using another RNN layer. This would potentially result in focusing the utterance-level classification problem on the most salient words of the input sequences (i.e., intent keywords & slots) and also effectively reducing the length of input sequences (i.e., improving the long-term dependency issues observed in longer sequences). Note that according to our dataset statistics given in Table TABREF8 , 45% of the words found in transcribed utterances with passenger intents are annotated as non-slot and non-intent keywords (e.g., 'please', 'okay', 'can', 'could', incomplete/interrupted words, filler sounds like 'uh'/'um', certain stop words, punctuation, and many others that are not related to intent/slots). Therefore, the proposed approach would result in reducing the sequence length nearly by half at the input layer of level-2 for utterance-level recognition. 
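The level-1 filtering step described above reduces each utterance to its predicted slots and intent keywords before level-2 classification. A minimal sketch, assuming level-1 predictions are available as per-token tags and using 'O' for non-slot, non-intent-keyword tokens (the example utterance and tags are invented for illustration):

```python
def summarize_for_level2(tokens, predicted_tags):
    """Keep only tokens tagged as slots or intent keywords (level-1 output),
    then frame the shortened sequence for the level-2 intent model."""
    kept = [tok for tok, tag in zip(tokens, predicted_tags) if tag != "O"]
    return ["<BOU>"] + kept + ["<EOU>"]

tokens = "could you please stop the car at the next corner".split()
tags = ["O", "O", "O", "Intent", "O", "Object",
        "O", "O", "Position/Direction", "Position/Direction"]
print(summarize_for_level2(tokens, tags))
# ['<BOU>', 'stop', 'car', 'next', 'corner', '<EOU>']
```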
For hierarchical & separate models, we experimented with 4 variations based on which separate model used at the second level of the hierarchy, and these are summarized as follows:", "Hierarchical & Separate-0: Level-1 (Seq2seq LSTM for intent keywords & slots extraction) + Level-2 (Separate-0: Seq2one LSTM for utterance-level intent detection)", "Hierarchical & Separate-1: Level-1 (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Level-2 (Separate-1: Seq2one Bi-LSTM for utterance-level intent detection)", "Hierarchical & Separate-2: Level-1 (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Level-2 (Separate-2: Seq2one Bi-LSTM + Attention for utterance-level intent detection)", "Hierarchical & Separate-3: Level-1 (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Level-2 (Separate-3: Seq2one Bi-LSTM + AttentionWithContext for utterance-level intent detection)", "Proposed hierarchical models detect/extract intent keywords & slots using sequence-to-sequence networks first, and then only the words that are predicted as intent keywords & valid slots (i.e., not the ones that are predicted as ‘None/O’) are fed as input to the joint sequence-to-sequence models (described above). See Fig. FIGREF34 (b) for sample network architecture. After the filtering or summarization of sequence at level-1, <BOU> and <EOU> tokens are appended to the shorter input sequence before level-2 for joint learning. Note that in this case, using Joint-1 model (jointly training annotated slots & utterance-level intents) for the second level of the hierarchy would not make much sense (without intent keywords). Hence, Joint-2 model is used for the second level as described below:", "Hierarchical & Joint-2: Level-1 (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Level-2 (Joint-2 Seq2seq models with slots & intent keywords & utterance-level intents)", "Table TABREF42 summarizes the results of various approaches we investigated for utterance-level intent understanding. We achieved 0.91 overall F1-score with our best-performing model, namely Hierarchical & Joint-2. All model results are obtained via 10-fold cross-validation (10-fold CV) on the same dataset. For our AMIE scenarios, Table TABREF43 shows the intent-wise detection results with the initial (Hybrid-0) and currently best performing (H-Joint-2) intent recognizers. With our best model (H-Joint-2), relatively problematic SetDestination and SetRoute intents’ detection performances in baseline model (Hybrid-0) jumped from 0.78 to 0.89 and 0.75 to 0.88, respectively.", "We compared our intent detection results with the Dialogflow's Detect Intent API. The same AMIE dataset is used to train and test (10-fold CV) Dialogflow's intent detection and slot filling modules, using the recommended hybrid mode (rule-based and ML). As shown in Table TABREF43 , an overall F1-score of 0.89 is achieved with Dialogflow for the same task. As you can see, our Hierarchical & Joint models obtained higher results than the Dialogflow for 8 out of 10 intent types." ], [ "Slot filling and intent keyword extraction results are given in Table TABREF44 and Table TABREF46 , respectively. For slot extraction, we reached 0.96 overall F1-score using seq2seq Bi-LSTM model, which is slightly better than using LSTM model. Although the overall performance is slightly improved with Bi-LSTM model, relatively problematic Object, Time Guidance, Gesture/Gaze slots’ F1-score performances increased from 0.80 to 0.89, 0.80 to 0.85, and 0.87 to 0.92, respectively. 
Note that with Dialogflow, we reached 0.92 overall F1-score for the entity/slot filling task on the same dataset. As you can see, our models reached significantly higher F1-scores than the Dialogflow for 6 out of 7 slot types (except Time Guidance)." ], [ "For transcriptions, utterance-level audio clips were extracted from the passenger-facing video stream, which was the single source used for human transcriptions of all utterances from passengers, AMIE agent and the game master. Since our transcriptions-based intent/slot models assumed perfect (at least close to human-level) ASR in the previous sections, we experimented with more realistic scenario of using ASR outputs for intent/slot modeling. We employed Cloud Speech-to-Text API to obtain ASR outputs on audio clips with passenger utterances, which were segmented using transcription time-stamps. We observed an overall word error rate (WER) of 13.6% in ASR outputs for all 20 sessions of AMIE.", "Considering that a generic ASR is used with no domain-specific acoustic models for this moving vehicle environment with in-cabin noise, the initial results were quite promising for us to move on with the model training on ASR outputs. For initial explorations, we created a new dataset having utterances with commands using ASR outputs of the in-cabin data (20 sessions with 1331 spoken utterances). Human transcriptions version of this set is also created. Although the dataset size is limited, both slot/intent keyword extraction models and utterance-level intent recognition models are not severely affected when trained and tested (10-fold CV) on ASR outputs instead of manual transcriptions. See Table TABREF48 for the overall F1-scores of the compared models.", "After the ASR pipeline described above is completed for all 20 sessions of AMIE in-cabin dataset (ALL with 1331 utterances), we repeated all our experiments with the subsets for 10 sessions having single passenger (Singletons with 600 utterances) and remaining 10 sessions having two passengers (Dyads with 731 utterances). We observed overall WER of 13.5% and 13.7% for Singletons and Dyads, respectively. The overlapping speech cases with slightly more conversations going on (longer transcriptions) in Dyad sessions compared to the Singleton sessions may affect the ASR performance, which may also affect the intent/slots models performances. As shown in Table TABREF48 , although we have more samples with Dyads, the performance drops between the models trained on transcriptions vs. ASR outputs are slightly higher for the Dyads compared to the Singletons, as expected." ], [ "We introduced AMIE, the intelligent in-cabin car agent responsible for handling certain AV-passenger interactions. We develop hierarchical and joint models to extract various passenger intents along with relevant slots for actions to be performed in AV, achieving F1-scores of 0.91 for intent recognition and 0.96 for slot extraction. We show that even using the generic ASR with noisy outputs, our models are still capable of achieving comparable results with models trained on human transcriptions. We believe that the ASR can be improved by collecting more in-domain data to obtain domain-specific acoustic models. These initial models will allow us to collect more speech data via bootstrapping with the intent-based dialogue application we have built, and the hierarchy we defined will allow us to eliminate costly annotation efforts in the future, especially for the word-level slots and intent keywords. 
Once enough domain-specific multi-modal data is collected, our future work is to explore training end-to-end dialogue agents for our in-cabin use-cases. We are planning to exploit other modalities for improved understanding of the in-cabin dialogue as well." ] ] }
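As a reference for the word error rates reported in the speech-to-text experiments above, WER is commonly computed as the word-level edit distance between the reference transcription and the ASR hypothesis, normalized by the reference length. The sketch below is one standard dynamic-programming formulation, not the exact tooling used in this study.

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / len(reference),
    computed with a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("set the destination to the airport",
                      "set destination to the airport please"))  # 0.333...
```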
{ "question": [ "What is shared in the joint model?", "Are the intent labels imbalanced in the dataset?" ], "question_id": [ "ab78f066144936444ecd164dc695bec1cb356762", "e659ceb184777015c12db2da5ae396635192f0b0" ], "nlp_background": [ "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no" ], "search_query": [ "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "jointly trained with slots" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Joint-1: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots)", "Joint-2: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots & intent keywords)" ], "highlighted_evidence": [ "Using sequence-to-sequence networks, the approach here is jointly training annotated utterance-level intents and slots/intent keywords by adding / tokens to the beginning/end of each utterance, with utterance-level intent-type as labels of such tokens. Our approach is an extension of BIBREF2 , in which only an term is added with intent-type tags associated to this sentence final token, both for LSTM and Bi-LSTM cases. However, we experimented with adding both and terms as Bi-LSTMs will be used for seq2seq learning, and we observed that slightly better results can be achieved by doing so. The idea behind is that, since this is a seq2seq learning problem, at the last time step (i.e., prediction at ) the reverse pass in Bi-LSTM would be incomplete (refer to Fig. FIGREF24 (a) to observe the last Bi-LSTM cell). Therefore, adding token and leveraging the backward LSTM output at first time step (i.e., prediction at ) would potentially help for joint seq2seq learning. An overall network architecture can be found in Fig. FIGREF30 for our joint models. We will report the experimental results on two variations (with and without intent keywords) as follows:\n\nJoint-1: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots)\n\nJoint-2: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots & intent keywords)" ] } ], "annotation_id": [ "6c4706ded7df3f5a9984405ebd88584bec241416" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "For in-cabin intent understanding, we described 4 groups of usages to support various natural commands for interacting with the vehicle: (1) Set/Change Destination/Route (including turn-by-turn instructions), (2) Set/Change Driving Behavior/Speed, (3) Finishing the Trip Use-cases, and (4) Others (open/close door/window/trunk, turn music/radio on/off, change AC/temperature, show map, etc.). According to those scenarios, 10 types of passenger intents are identified and annotated as follows: SetDestination, SetRoute, GoFaster, GoSlower, Stop, Park, PullOver, DropOff, OpenDoor, and Other. For slot filling task, relevant slots are identified and annotated as: Location, Position/Direction, Object, Time Guidance, Person, Gesture/Gaze (e.g., `this', `that', `over there', etc.), and None/O. In addition to utterance-level intents and slots, word-level intent related keywords are annotated as Intent. We obtained 1331 utterances having commands to AMIE agent from our in-cabin dataset. 
We expanded this dataset via the creation of similar tasks on Amazon Mechanical Turk BIBREF21 and reached 3418 utterances with intents in total. Intent and slot annotations are obtained on the transcribed utterances by majority voting of 3 annotators. Those annotation results for utterance-level intent types, slots and intent keywords can be found in Table TABREF7 and Table TABREF8 as a summary of dataset statistics.", "FLOAT SELECTED: Table 2: AMIE Dataset Statistics: Slots and Intent Keywords" ], "highlighted_evidence": [ " Those annotation results for utterance-level intent types, slots and intent keywords can be found in Table TABREF7 and Table TABREF8 as a summary of dataset statistics.", "FLOAT SELECTED: Table 2: AMIE Dataset Statistics: Slots and Intent Keywords" ] } ], "annotation_id": [ "165607da36fd746c87bdb9874f764fbc6e8c6c1e" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ] }
{ "caption": [ "Fig. 1: AMIE In-cabin Data Collection Setup", "Table 1: AMIE Dataset Statistics: Utterance-level Intent Types", "Table 2: AMIE Dataset Statistics: Slots and Intent Keywords", "Fig. 2: Seq2seq Bi-LSTM Network for Slot Filling and Intent Keyword Extraction", "Fig. 3: Hybrid Models Network Architecture", "Fig. 4: Separate Models Network Architecture", "Fig. 5: Joint Models Network Architecture", "Fig. 6: Hierarchical Models Network Architecture", "Table 3: Utterance-level Intent Detection Performance Results (10-fold CV)", "Table 4: Intent-wise Performance Results of Utterance-level Intent Detection", "Table 5: Slot Filling Results (10-fold CV)", "Table 6: Intent Keyword Extraction Results (10-fold CV)", "Table 7: F1-scores of Models Trained/Tested on Transcriptions vs. ASR Outputs" ], "file": [ "4-Figure1-1.png", "5-Table1-1.png", "5-Table2-1.png", "7-Figure2-1.png", "8-Figure3-1.png", "9-Figure4-1.png", "10-Figure5-1.png", "11-Figure6-1.png", "12-Table3-1.png", "13-Table4-1.png", "13-Table5-1.png", "14-Table6-1.png", "14-Table7-1.png" ] }
1704.00177
Sentiment Analysis of Citations Using Word2vec
Citation sentiment analysis is an important task in scientific paper analysis. Existing machine learning techniques for citation sentiment analysis focus on labor-intensive feature engineering, which requires a large annotated corpus. As an automatic feature extraction tool, word2vec has been successfully applied to sentiment analysis of short texts. In this work, I conducted empirical research on the question: how well does word2vec work for the sentiment analysis of citations? The proposed method constructed sentence vectors (sent2vec) by averaging the word embeddings, which were learned from the ACL Anthology collections (ACL-Embeddings). I also investigated polarity-specific word embeddings (PS-Embeddings) for classifying positive and negative citations. The sentence vectors formed a feature space onto which the examined citation sentence was mapped. Those features were fed into classifiers (support vector machines) for supervised classification. Using a 10-fold cross-validation scheme, evaluation was conducted on a set of annotated citations. The results showed that word embeddings are effective for classifying positive and negative citations. However, hand-crafted features performed better for the overall classification.
{ "section_name": [ "Introduction", "Related Work", "Pre-processing", "Overall Sent2vec Training", "Polarity-Specific Word Representation Training", "Training Dataset", "Test Dataset", "Evaluation Strategy", "Results", "Discussion and Conclusion" ], "paragraphs": [ [ "The evolution of scientific ideas happens when old ideas are replaced by new ones. Researchers usually conduct scientific experiments based on the previous publications. They either take use of others work as a solution to solve their specific problem, or they improve the results documented in the previous publications by introducing new solutions. I refer to the former as positive citation and the later negative citation. Citation sentence examples with different sentiment polarity are shown in Table TABREF2 .", "Sentiment analysis of citations plays an important role in plotting scientific idea flow. I can see from Table TABREF2 , one of the ideas introduced in paper A0 is Hidden Markov Model (HMM) based part-of-speech (POS) tagging, which has been referenced positively in paper A1. In paper A2, however, a better approach was brought up making the idea (HMM based POS) in paper A0 negative. This citation sentiment analysis could lead to future-works in such a way that new approaches (mentioned in paper A2) are recommended to other papers which cited A0 positively . Analyzing citation sentences during literature review is time consuming. Recently, researchers developed algorithms to automatically analyze citation sentiment. For example, BIBREF0 extracted several features for citation purpose and polarity classification, such as reference count, contrary expression and dependency relations. Jochim et al. tried to improve the result by using unigram and bigram features BIBREF1 . BIBREF2 used word level features, contextual polarity features, and sentence structure based features to detect sentiment citations. Although they generated good results using the combination of features, it required a lot of engineering work and big amount of annotated data to obtain the features. Further more, capturing accurate features relies on other NLP techniques, such as part-of-speech tagging (POS) and sentence parsing. Therefore, it is necessary to explore other techniques that are free from hand-crafted features. With the development of neural networks and deep learning, it is possible to learn the representations of concepts from unlabeled text corpus automatically. These representations can be treated as concept features for classification. An important advance in this area is the development of the word2vec technique BIBREF3 , which has proved to be an effective approach in Twitter sentiment classification BIBREF4 .", "In this work, the word2vec technique on sentiment analysis of citations was explored. Word embeddings trained from different corpora were compared." ], [ "Mikolov et al. introduced word2vec technique BIBREF3 that can obtain word vectors by training text corpus. The idea of word2vec (word embeddings) originated from the concept of distributed representation of words BIBREF5 . The common method to derive the vectors is using neural probabilistic language model BIBREF6 . Word embeddings proved to be effective representations in the tasks of sentiment analysis BIBREF4 , BIBREF7 , BIBREF8 and text classification BIBREF9 . Sadeghian and Sharafat BIBREF10 extended word embeddings to sentence embeddings by averaging the word vectors in a sentiment review statement. 
Their results showed that word embeddings outperformed the bag-of-words model in sentiment classification. In this work, I are aiming at evaluating word embeddings for sentiment analysis of citations. The research questions are:" ], [ "The SentenceModel provided by LingPipe was used to segment raw text into its constituent sentences . The data I used to train the vectors has noise. For example, there are incomplete sentences mistakenly detected (e.g. Publication Year.). To address this issue, I eliminated sentences with less than three words.", "", "" ], [ "In the work, I constructed sentence embeddings based on word embeddings. I simply averaged the vectors of the words in one sentence to obtain sentence embeddings (sent2vec). The main process in this step is to learn the word embedding matrix INLINEFORM0 :", " INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 (1)", "where INLINEFORM0 ( INLINEFORM1 ) is the word embedding for word INLINEFORM2 , which could be learned by the classical word2vec algorithm BIBREF3 . The parameters that I used to train the word embeddings are the same as in the work of Sadeghian and Sharafat" ], [ "To improve sentiment citation classification results, I trained polarity specific word embeddings (PS-Embeddings), which were inspired by the Sentiment-Specific Word Embedding BIBREF4 . After obtaining the PS-Embeddings, I used the same scheme to average the vectors in one sentence according to the sent2vec model." ], [ "The ACL-Embeddings (300 and 100 dimensions) from ACL collection were trained . ACL Anthology Reference Corpus contains the canonical 10,921 computational linguistics papers, from which I have generated 622,144 sentences after filtering out sentences with lower quality.", "For training polarity specific word embeddings (PS-Embeddings, 100 dimensions), I selected 17,538 sentences (8,769 positive and 8,769 negative) from ACL collection, by comparing sentences with the polar phrases .", "", "", "The pre-trained Brown-Embeddings (100 dimensions) learned from Brown corpus was also used as a comparison." ], [ "To evaluate the sent2vec performance on citation sentiment detection, I conducted experiments on three datasets. The first one (dataset-basic) was originally taken from ACL Anthology BIBREF11 . Athar and Awais BIBREF2 manually annotated 8,736 citations from 310 publications in the ACL Anthology. I used all of the labeled sentences (830 positive, 280 negative and 7,626 objective) for testing. ", "The second dataset (dataset-implicit) was used for evaluating implicit citation classification, containing 200,222 excluded (x), 282 positive (p), 419 negative (n) and 2,880 objective (o) annotated sentences. Every sentence which does not contain any direct or indirect mention of the citation is labeled as being excluded (x) . The third dataset (dataset-pn) is a subset of dataset-basic, containing 828 positive and 280 negative citations. Dataset-pn was used for the purposes of (1) evaluating binary classification (positive versus negative) performance using sent2vec; (2) Comparing the sentiment classification ability of PS-Embeddings with other embeddings." ], [ "One-Vs-The-Rest strategy was adopted for the task of multi-class classification and I reported F-score, micro-F, macro-F and weighted-F scores using 10-fold cross-validation. The F1 score is a weighted average of the precision and recall. In the multi-class case, this is the weighted average of the F1 score of each class. 
There are several types of averaging performed on the data: Micro-F calculates metrics globally by counting the total true positives, false negatives and false positives. Macro-F calculates metrics for each label and takes their unweighted mean, so it does not take label imbalance into account. Weighted-F calculates metrics for each label and takes their average weighted by support (the number of true instances for each label), which alters macro-F to account for label imbalance." ], [ "The performances of citation sentiment classification on dataset-basic and dataset-implicit are shown in Table TABREF25 and Table TABREF26, respectively. The results of classifying positive and negative citations are shown in Table TABREF27. To compare with the outcomes in the work of BIBREF2, I selected two records from their results: the best one (based on n-gram + dependencies + negation features) and the baseline (based on 1-3 grams). From Table TABREF25 I can see that the features extracted by BIBREF2 performed far better than word embeddings in terms of macro-F (their best macro-F is 0.90, while the best in this work is 0.33). However, the higher micro-F score (the highest micro-F in this work is 0.88, theirs is 0.78) and the weighted-F scores indicate that this method may achieve better performance if the evaluations are conducted on a balanced dataset. Among the embeddings, ACL-Embeddings performed better than the Brown corpus in terms of macro-F and weighted-F measurements. Comparing the dimensionality of word embeddings, ACL300 gave a higher micro-F score than ACL100, but there is no difference between the 300- and 100-dimensional ACL-Embeddings when looking at the macro-F and weighted-F scores.", "Table TABREF26 shows the sent2vec performance on classifying implicit citations with four categories: objective, negative, positive and excluded. The method in this experiment performed poorly on detecting positive citations, but it was comparable with both the baseline and the sentence structure method BIBREF12 for the category of objective citations. With respect to classifying negative citations, this method was not as good as sentence structure features, but it outperformed the baseline. The results of classifying category X from the rest showed that the performances of this method and the sentence structure method are fairly equal.", "Table TABREF27 shows the results of classifying positive and negative citations using different word embeddings. The macro-F score of 0.85 and the weighted-F score of 0.86 indicate that word2vec is effective for classifying positive and negative citations. However, unlike the outcomes of BIBREF4, who concluded that sentiment-specific word embeddings performed best, integrating polarity information did not improve the result in this experiment." ], [ "In this paper, I reported the citation sentiment classification results based on word embeddings. The binary classification results in Table TABREF27 showed that word2vec is a promising tool for distinguishing positive and negative citations. From Table TABREF27 I can see that there are no big differences among the scores generated by ACL100 and Brown100, even though they have different vocabulary sizes (ACL100 has 14,325 words, Brown100 has 56,057 words). The polarity-specific word embeddings did not show their strength in the task of binary classification.
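The micro-, macro- and weighted-F scores reported in the tables above can be reproduced with scikit-learn's f1_score; the toy labels below are only meant to show how the three averaging schemes diverge on an imbalanced label set and are not taken from the paper's data.

```python
from sklearn.metrics import f1_score

# toy 3-class example with a heavy "objective" (o) majority
y_true = ["o"] * 8 + ["p", "p", "n", "n"]
y_pred = ["o"] * 8 + ["p", "o", "n", "o"]

for avg in ("micro", "macro", "weighted"):
    print(avg, round(f1_score(y_true, y_pred, average=avg), 3))
```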
For the task of classifying implicit citations (Table TABREF26), sent2vec (macro-F 0.44) was in general comparable with the baseline (macro-F 0.47); it was effective for detecting objective sentences (F-score 0.84) and for separating X sentences from the rest (F-score 0.997), but it did not work well at distinguishing positive citations from the rest. For the overall classification (Table TABREF25), however, this method was not as good as hand-crafted features such as n-grams and sentence structure features. I may conclude from this experiment that the word2vec technique has the potential to capture sentiment information in citations, but that hand-crafted features still perform better." ] ] }
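For concreteness, the sent2vec construction evaluated above, averaging the embeddings of a sentence's words and feeding the result to a support vector machine, can be sketched as follows. The tiny random embedding table, example sentences and labels are placeholders, not the ACL-Embeddings or the annotated citation data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dim = 100
# placeholder embedding table; in the paper this is learned with word2vec
vocab = {w: rng.normal(size=dim) for w in
         "this approach outperforms the previous fails to improve baseline".split()}

def sent2vec(sentence):
    """Average the embeddings of in-vocabulary words (zeros if none)."""
    vecs = [vocab[w] for w in sentence.lower().split() if w in vocab]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

sentences = ["This approach outperforms the baseline",
             "This approach fails to improve the baseline"]
labels = [1, 0]                       # 1 = positive citation, 0 = negative
X = np.stack([sent2vec(s) for s in sentences])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X))
```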
{ "question": [ "What kernels are used in the support vector machines?", "What dataset is used?", "What metrics are considered?" ], "question_id": [ "b512ab8de26874ee240cffdb3c65d9ac8d6023d9", "4e4d377b140c149338446ba69737ea191c4328d9", "828ce5faed7783297cf9ce202364f999b8d4a1f6" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "word2vec", "word2vec", "word2vec" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "ae1f3070bed4b88ed11eec52f0f9d57e8ea15006" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "ACL Anthology Reference Corpus" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The ACL-Embeddings (300 and 100 dimensions) from ACL collection were trained . ACL Anthology Reference Corpus contains the canonical 10,921 computational linguistics papers, from which I have generated 622,144 sentences after filtering out sentences with lower quality." ], "highlighted_evidence": [ "ACL Anthology Reference Corpus contains the canonical 10,921 computational linguistics papers, from which I have generated 622,144 sentences after filtering out sentences with lower quality." ] } ], "annotation_id": [ "167bae0b367ae241863e08191b0d4a439ac33deb" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "F-score", "micro-F", "macro-F", "weighted-F " ], "yes_no": null, "free_form_answer": "", "evidence": [ "One-Vs-The-Rest strategy was adopted for the task of multi-class classification and I reported F-score, micro-F, macro-F and weighted-F scores using 10-fold cross-validation. The F1 score is a weighted average of the precision and recall. In the multi-class case, this is the weighted average of the F1 score of each class. There are several types of averaging performed on the data: Micro-F calculates metrics globally by counting the total true positives, false negatives and false positives. Macro-F calculates metrics for each label, and find their unweighted mean. Macro-F does not take label imbalance into account. Weighted-F calculates metrics for each label, and find their average, weighted by support (the number of true instances for each label). Weighted-F alters macro-F to account for label imbalance." ], "highlighted_evidence": [ "One-Vs-The-Rest strategy was adopted for the task of multi-class classification and I reported F-score, micro-F, macro-F and weighted-F scores using 10-fold cross-validation. " ] } ], "annotation_id": [ "9535c2a350b42e5e96f7114daf1959c7410c2f78" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ] }
{ "caption": [ "Table 1. Examples of positive and negative citations.", "Table 2. Performance of citation sentiment classification.", "Table 3. Performance of implicit citation sentiment classification.", "Table 4. Performance of classifying positive and negative citations." ], "file": [ "2-Table1-1.png", "6-Table2-1.png", "7-Table3-1.png", "7-Table4-1.png" ] }
1711.11221
Modeling Coherence for Neural Machine Translation with Dynamic and Topic Caches
Sentences in a well-formed text are connected to each other via various links to form the cohesive structure of the text. Current neural machine translation (NMT) systems translate a text in a conventional sentence-by-sentence fashion, ignoring such cross-sentence links and dependencies. This may lead to the generation of an incoherent target text for a coherent source text. To address this issue, we propose a cache-based approach to modeling coherence for neural machine translation by capturing contextual information either from recently translated sentences or from the entire document. In particular, we explore two types of caches: a dynamic cache, which stores words from the best translation hypotheses of preceding sentences, and a topic cache, which maintains a set of target-side topical words that are semantically related to the document to be translated. On this basis, we build a new layer to score target words in these two caches with a cache-based neural model. The estimated probabilities from the cache-based neural model are combined with the NMT probabilities into the final word prediction probabilities via a gating mechanism. Finally, the proposed cache-based neural model is trained jointly with the NMT system in an end-to-end manner. Experiments and analysis presented in this paper demonstrate that the proposed cache-based model achieves substantial improvements over several state-of-the-art SMT and NMT baselines.
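As a rough illustration of the gating mechanism mentioned in the abstract, the snippet below combines a cache-based word distribution with the NMT output distribution through a scalar gate computed from the decoder state. The gate parameterization and the toy vocabulary are assumptions for illustration only, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def combine(p_nmt, p_cache, decoder_state, w_gate, b_gate):
    """Final word probabilities as a gated mixture of NMT and cache scores."""
    alpha = sigmoid(w_gate @ decoder_state + b_gate)   # scalar gate in (0, 1)
    return alpha * p_cache + (1.0 - alpha) * p_nmt

rng = np.random.default_rng(1)
vocab_size, state_dim = 6, 4
p_nmt = rng.dirichlet(np.ones(vocab_size))    # NMT softmax over the vocabulary
p_cache = np.zeros(vocab_size)
p_cache[[1, 3]] = 0.5                         # only cached words receive mass
s_t = rng.normal(size=state_dim)              # current decoder hidden state
w_gate = rng.normal(size=state_dim)
p_final = combine(p_nmt, p_cache, s_t, w_gate, 0.0)
print(p_final.round(3), p_final.sum())        # still a valid distribution
```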
{ "section_name": [ "Related Work", "Attention-based NMT", "Encoder", "Decoder", "Attention Model", "The Cache-based Neural Model", "Dynamic Cache and Topic Cache", "The Model", "Decoding Process", "Experimentation", "Experimental Setting", "Experimental Results", "Effect of the Gating Mechanism", "Effect of the Topic Cache", "Analysis on the Cache-based Neural Model", "Analysis on Translation Coherence", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "In the literature, several cache-based translation models have been proposed for conventional statistical machine translation, besides traditional n-gram language models and neural language models. In this section, we will first introduce related work in cache-based language models and then in translation models.", "For traditional n-gram language models, Kuhn1990A propose a cache-based language model, which mixes a large global language model with a small local model estimated from recent items in the history of the input stream for speech recongnition. della1992adaptive introduce a MaxEnt-based cache model by integrating a cache into a smoothed trigram language model, reporting reduction in both perplexity and word error rates. chueh2010topic present a new topic cache model for speech recongnition based on latent Dirichlet language model by incorporating a large-span topic cache into the generation of topic mixtures.", "For neural language models, huang2014cache propose a cache-based RNN inference scheme, which avoids repeated computation of identical LM calls and caches previously computed scores and useful intermediate results and thus reduce the computational expense of RNNLM. Grave2016Improving extend the neural network language model with a neural cache model, which stores recent hidden activations to be used as contextual representations. Our caches significantly differ from these two caches in that we store linguistic items in the cache rather than scores or activations.", "For neural machine translation, wangexploiting propose a cross-sentence context-aware approach and employ a hierarchy of Recurrent Neural Networks (RNNs) to summarize the cross-sentence context from source-side previous sentences. jean2017does propose a novel larger-context neural machine translation model based on the recent works on larger-context language modelling BIBREF11 and employ the method to model the surrounding text in addition to the source sentence.", "For cache-based translation models, nepveu2004adaptive propose a dynamic adaptive translation model using cache-based implementation for interactive machine translation, and develop a monolingual dynamic adaptive model and a bilingual dynamic adaptive model. tiedemann2010context propose a cache-based translation model, filling the cache with bilingual phrase pairs from the best translation hypotheses of previous sentences in a document. gong2011cache further propose a cache-based approach to document-level translation, which includes three caches, a dynamic cache, a static cache and a topic cache, to capture various document-level information. bertoldi2013cache describe a cache mechanism to implement online learning in phrase-based SMT and use a repetition rate measure to predict the utility of cached items expected to be useful for the current translation.", "Our caches are similar to those used by gong2011cache who incorporate these caches into statistical machine translation. We adapt them to neural machine translation with a neural cache model. 
It is worthwhile to emphasize that such adaptation is nontrivial as shown below because the two translation philosophies and frameworks are significantly different." ], [ "In this section, we briefly describe the NMT model taken as a baseline. Without loss of generality, we adopt the NMT architecture proposed by bahdanau2015neural, with an encoder-decoder neural network." ], [ "The encoder uses bidirectional recurrent neural networks (Bi-RNN) to encode a source sentence with a forward and a backward RNN. The forward RNN takes as input a source sentence $x = (x_1, x_2, ..., x_T)$ from left to right and outputs a hidden state sequence $(\\overrightarrow{h_1},\\overrightarrow{h_2}, ..., \\overrightarrow{h_T})$ while the backward RNN reads the sentence in an inverse direction and outputs a backward hidden state sequence $(\\overleftarrow{h_1},\\overleftarrow{h_2}, ..., \\overleftarrow{h_T})$ . The context-dependent word representations of the source sentence $h_j$ (also known as word annotation vectors) are the concatenation of hidden states $\\overrightarrow{h_j}$ and $\\overleftarrow{h_j}$ in the two directions." ], [ "The decoder is an RNN that predicts target words $y_t$ via a multi-layer perceptron (MLP) neural network. The prediction is based on the decoder RNN hidden state $s_t$ , the previous predicted word $y_{t-1}$ and a source-side context vector $c_t$ . The hidden state $s_t$ of the decoder at time $t$ and the conditional probability of the next word $y_t$ are computed as follows: ", "$$s_t = f(s_{t-1}, y_{t-1}, c_t)$$ (Eq. 3) ", "$$p(y_t|y_{<t};x) = g(y_{t-1}, s_t, c_t)$$ (Eq. 4) " ], [ "In the attention model, the context vector $c_t$ is calculated as a weighted sum over source annotation vectors $(h_1, h_2, ..., h_T)$ : ", "$$c_t = \\sum _{j=1}^{T_x} \\alpha _{tj}h_j$$ (Eq. 6) ", "$$\\alpha _{tj} = \\frac{exp(e_{tj})}{\\sum _{k=1}^{T} exp(e_{tk})}$$ (Eq. 7) ", "where $\\alpha _{tj}$ is the attention weight of each hidden state $h_j$ computed by the attention model, and $a$ is a feed forward neural network with a single hidden layer.", "The dl4mt tutorial presents an improved implementation of the attention-based NMT system, which feeds the previous word $y_{t-1}$ to the attention model. We use the dl4mt tutorial implementation as our baseline, which we will refer to as RNNSearch*.", "The proposed cache-based neural approach is implemented on the top of RNNSearch* system, where the encoder-decoder NMT framework is trained to optimize the sum of the conditional log probabilities of correct translations of all source sentences on a parallel corpus as normal." ], [ "In this section, we elaborate on the proposed cache-based neural model and how we integrate it into neural machine translation, Figure 1 shows the entire architecture of our NMT with the cache-based neural model." ], [ "The aim of cache is to incorporate document-level constraints and therefore to improve the consistency and coherence of document translations. In this section, we introduce our proposed dynamic cache and topic cache in detail.", "In order to build the dynamic cache, we dynamically extract words from recently translated sentences and the partial translation of current sentence being translated as words of dynamic cache. 
We apply the following rules to build the dynamic cache.", "The max size of the dynamic cache is set to $|c_d|$ .", "According to the first-in-first-out rule, when the dynamic cache is full and a new word is inserted into the cache, the oldest word in the cache will be removed.", "Duplicate entries into the dynamic cache are not allowed when a word has been already in the cache.", "It is worth noting that we also maintain a stop word list, and we added English punctuations and “UNK” into our stop word list. Words in the stop word list would not be inserted into the dynamic cache. So the common words like “a” and “the” cannot appear in the cache. All words in the dynamic cache can be found in the target-side vocabulary of RNNSearch*.", "In order to build the topic cache, we first use an off-the-shelf LDA topic tool to learn topic distributions of source- and target-side documents separately. Then we estimate a topic projection distribution over all target-side topics $p(z_t|z_s)$ for each source topic $z_s$ by collecting events and accumulating counts of $(z_s, z_t)$ from aligned document pairs. Notice that $z_s/z_t$ is the topic with the highest topic probability $p(z_.|d)$ on the source/target side. Then we can use the topic cache as follows:", "During the training process of NMT, the learned target-side topic model is used to infer the topic distribution for each target document. For a target document d in the training data, we select the topic $z$ with the highest probability $p(z|d)$ as the topic for the document. The $|c_t|$ most probable topical words in topic $z$ are extracted to fill the topic cache for the document $d$ .", "In the NMT testing process, we first infer the topic distribution for a source document in question with the learned source-side topic model. From the topic distribution, we choose the topic with the highest probability as the topic for the source document. Then we use the learned topic projection function to map the source topic onto a target topic with the highest projection probability, as illustrated in Figure 2. After that, we use the $|c_t|$ most probable topical words in the projected target topic to fill the topic cache.", "The words of topic cache and dynamic cache together form the final cache model. In practice, the cache stores word embeddings, as shown in Figure 3. As we do not want to introduce extra embedding parameters, we let the cache share the same target word embedding matrix with the NMT model. In this case, if a word is not in the target-side vocabulary of NMT, we discard the word from the cache." ], [ "The cache-based neural model is to evaluate the probabilities of words occurring in the cache and to provide the evaluation results for the decoder via a gating mechanism.", "When the decoder generates the next target word $y_t$ , we hope that the cache can provide helpful information to judge whether $y_t$ is appropriate from the perspective of the document-level cache if $y_t$ occurs in the cache.To achieve this goal, we should appropriately evaluate the word entries in the cache.", "In this paper, we build a new neural network layer as the scorer for the cache. At each decoding step $t$ , we use the scorer to score $y_t$ if $y_t$ is in the cache. The inputs to the scorer are the current hidden state $s_t$ of the decoder, previous word $y_{t-1}$ , context vector $c_t$ , and the word $y_t$ from the cache. The score of $y_t$ is calculated as follows: ", "$$score(y_t|y_{<t},x) = g_{cache}(s_t,c_t,y_{t-1},y_t)$$ (Eq. 
22) ", "where $g_{cache}$ is a non-linear function.", "This score is further used to estimate the cache probability of $y_t$ as follows: ", "$$p_{cache}(y_t|y_{<t},x) = softmax(score(y_t|y_{<t},x))$$ (Eq. 23) ", "Since we have two prediction probabilities for the next target word $y_t$ , one from the cache-based neural model $p_{cache}$ , the other originally estimated by the NMT decoder $p_{nmt}$ , how do we integrate these two probabilities? Here, we introduce a gating mechanism to combine them, and word prediction probabilities on the vocabulary of NMT are updated by combining the two probabilities through linear interpolation between the NMT probability and cache-based neural model probability. The final word prediction probability for $y_t$ is calculated as follows: ", "$$p(y_t|y_{<t},x) = (1 - \\alpha _t)p_{cache}(y_t|y_{<t},x) + \\alpha _tp_{nmt}(y_t|y_{<t},x)$$ (Eq. 26) ", "Notice that if $y_t$ is not in the cache, we set $p_{cache}(y_t|y_{<t},x) = 0$ , where $\\alpha _t$ is the gate and computed as follows: ", "$$\\alpha _t = g_{gate}(f_{gate}(s_t,c_t,y_{t-1}))$$ (Eq. 27) ", "where $f_{gate}$ is a non-linear function and $g_{gate}$ is sigmoid function.", "We use the contextual elements of $s_t, c_t, y_{t-1}$ to score the current target word occurring in the cache (Eq. (6)) and to estimate the gate (Eq. (9)). If the target word is consistent with the context and in the cache at the same time, the probability of the target word will be high.", "Finally, we train the proposed cache model jointly with the NMT model towards minimizing the negative log-likelihood on the training corpus. The cost function is computed as follows: ", "$$L(\\theta ) = -\\sum _{i=1}^N \\sum _{t=1}^Tlogp(y_t|y_{<t},x)$$ (Eq. 28) ", "where $\\theta $ are all parameters in the cache-based NMT model." ], [ "Our cache-based NMT system works as follows:", "When the decoder shifts to a new test document, clear the topic and dynamic cache.", "Obtain target topical words for the new test document as described in Section 4.1 and fill them in the topic cache.", "Clear the dynamic cache when translating the first sentence of the test document.", "For each sentence in the new test document, translate it with the proposed cache-based NMT and continuously expands the dynamic cache with newly generated target words and target words obtained from the best translation hypothesis of previous sentences.", "In this way, the topic cache can provide useful global information at the beginning of the translation process while the dynamic cache is growing with the progress of translation." ], [ "We evaluated the effectiveness of the proposed cache-based neural model for neural machine translation on NIST Chinese-English translation tasks." ], [ "We selected corpora LDC2003E14, LDC2004T07, LDC2005T06, LDC2005T10 and a portion of data from the corpus LDC2004T08 (Hong Kong Hansards/Laws/News) as our bilingual training data, where document boundaries are explicitly kept. In total, our training data contain 103,236 documents and 2.80M sentences. On average, each document consists of 28.4 sentences. We chose NIST05 dataset (1082 sentence pairs) as our development set, and NIST02, NIST04, NIST06 (878, 1788, 1664 sentence pairs. respectively) as our test sets. 
We compared our proposed model against the following two systems:", "Moses BIBREF12 : an off-the-shelf phrase-based translation system with its default setting.", "RNNSearch*: our in-house attention-based NMT system which adopts the feedback attention as described in Section 3 .", "For Moses, we used the full training data to train the model. We ran GIZA++ BIBREF13 on the training data in both directions, and merged alignments in two directions with “grow-diag-final” refinement rule BIBREF14 to obtain final word alignments. We trained a 5-gram language model on the Xinhua portion of GIGA-WORD corpus using SRILM Toolkit with a modified Kneser-Ney smoothing.", "For RNNSearch, we used the parallel corpus to train the attention-based NMT model. The encoder of RNNSearch consists of a forward and backward recurrent neural network. The word embedding dimension is 620 and the size of a hidden layer is 1000. The maximum length of sentences that we used to train RNNSearch in our experiments was set to 50 on both Chinese and English side. We used the most frequent 30K words for both Chinese and English. We replaced rare words with a special token “UNK”. Dropout was applied only on the output layer and the dropout rate was set to 0.5. All the other settings were the same as those in BIBREF1 . Once the NMT model was trained, we adopted a beam search to find possible translations with high probabilities. We set the beam width to 10.", "For the proposed cache-based NMT model, we implemented it on the top of RNNSearch*. We set the size of the dynamic and topic cache $|c_d|$ and $|c_t|$ to 100, 200, respectively. For the dynamic cache, we only kept those most recently-visited items. For the LDA tool, we set the number of topics considered in the model to 100 and set the number of topic words that are used to fill the topic cache to 200. The parameter $\\alpha $ and $\\beta $ of LDA were set to 0.5 and 0.1, respectively. We used a feedforward neural network with two hidden layers to define $g_{cache}$ (Equation (6)) and $f_{gate}$ (Equation (9)). For $f_{gate}$ , the number of units in the two hidden layers were set to 500 and 200 respectively. For $g_{cache}$ , the number of units in the two hidden layers were set to 1000 and 500 respectively. We used a pre-training strategy that has been widely used in the literature to train our proposed model: training the regular attention-based NMT model using our implementation of RNNSearch*, and then using its parameters to initialize the parameters of the proposed model, except for those related to the operations of the proposed cache model.", "We used the stochastic gradient descent algorithm with mini-batch and Adadelta to train the NMT models. The mini-batch was set to 80 sentences and decay rates $\\rho $ and $\\epsilon $ of Adadelta were set to 0.95 and $10^{-6}$ ." ], [ "Table 1 shows the results of different models measured in terms of BLEU score. From the table, we can find that our implementation RNNSearch* using the feedback attention and dropout outperforms Moses by 3.23 BLEU points. The proposed model $RNNSearch*_{+Cd}$ achieves an average gain of 1.01 BLEU points over RNNSearch* on all test sets. Further, the model $RNNSearch*_{+Cd, Ct}$ achieves an average gain of 1.60 BLEU points over RNNSearch*, and it outperforms Moses by 4.83 BLEU points. These results strongly suggest that the dynamic and topic cache are very helpful and able to improve translation quality in document translation." 
], [ "In order to validate the effectiveness of the gating mechanism used in the cache-based neural model, we set a fixed gate value for $RNNSearch*_{+Cd,Ct}$ , in other words, we use a mixture of probabilities with fixed proportions to replace the gating mechanism that automatically learns weights for probability mixture.", "Table 2 displays the result. When we set the gate $\\alpha $ to a fixed value 0.3, the performance has an obvious decline comparing with that of $RNNSearch*_{+Cd,Ct}$ in terms of BLEU score. The performance is even worse than RNNSearch* by 10.11 BLEU points. Therefore without a good mechanism, the cache-based neural model cannot be appropriately integrated into NMT. This shows that the gating mechanism plays a important role in $RNNSearch*_{+Cd,Ct}$ ." ], [ "When the NMT decoder translates the first sentence of a document, the dynamic cache is empty. In this case, we hope that the topic cache will provide document-level information for translating the first sentence. We therefore further investigate how the topic cache influence the translation of the first sentence in a document. We count and calculate the average number of words that appear in both the translations of the beginning sentences of documents and the topic cache.", "The statistical results are shown in Table 3. Without using the cache model, RNNSearch* generates translations that contain words from the topic cache as these topic words are tightly related to documents being translated. With the topic cache, our neural cache model enables the translations of the first sentences to be more relevant to the global topics of documents being translated as these translations contain more words from the topic cache that describes these documents. As the dynamic cache is empty when the decoder translates the beginning sentences, the topic cache is complementary to such a cold cache at the start. Comparing the numbers of translations generated by our model and human translations (Reference in Table 3), we can find that with the help of the topic cache, translations of the first sentences of documents are becoming closer to human translations." ], [ "As shown above, the topic cache is able to influence both the translations of beginning sentences and those of subsequent sentences while the dynamic cache built from translations of preceding sentences has an impact on the translations of subsequent sentences. We further study what roles the dynamic and topic cache play in the translation process. For this aim, we calculate the average number of words in translations generated by $RNNSearch*_{+Cd,Ct}$ that are also in the caches. During the counting process, stop words and “UNK” are removed from sentence and document translations. Table 4 shows the results. If only the topic cache is used ([ $document \\in [Ct]$ , $sentence \\in (Ct)$ ] in Table 4), the cache still can provide useful information to help NMT translate sentences and documents. 28.3 words per document and 2.39 words per sentence are from the topic cache. When both the dynamic and topic cache are used ([ $document \\in [Ct,Cd]$ , $sentence \\in (Ct,Cd)$ ] in Table 4), the numbers of words that both occur in sentence/document translations and the two caches sharply increase from 2.61/30.27 to 6.73/81.16. The reason for this is that words appear in preceding sentences will have a large probability of appearing in subsequent sentences. 
This shows that the dynamic cache plays an important role in keeping document translations consistent by reusing words from preceding sentences.", "We also provide two translation examples in Table 5. We can see that RNNSearch* generates different translations “operations” and “actions” for the same Chinese word “行动(xingdong)”, while our proposed model produces the same translation “actions”." ], [ "We want to further study how the proposed cache-based neural model influences coherence in document translation. For this, we follow Lapata2005Automatic to measure coherence as sentence similarity. First, each sentence is represented by the mean of the distributed vectors of its words. Second, the similarity between two sentences is determined by the cosine of their means. ", "$$sim(S_1,S_2) = cos(\\mu (\\vec{S_1}),\\mu (\\vec{S_2}))$$ (Eq. 46) ", "where $\\mu (\\vec{S_i})=\\frac{1}{|S_i|}\\sum _{\\vec{w} \\in S_i}\\vec{w}$ , and $\\vec{w}$ is the vector for word $w$ .", "We use Word2Vec to obtain the distributed vectors of words, and the English Gigaword Fourth Edition as training data to train Word2Vec. We consider that word2vec embeddings trained on a large monolingual corpus encode the semantic information of words well. We set the dimensionality of word embeddings to 200. Table 6 shows the average cosine similarity of adjacent sentences on all test sets. From the table, we can see that the $RNNSearch*_{+Cd,Ct}$ model produces better coherence in document translation than RNNSearch* in terms of cosine similarity." ], [ "In this paper, we have presented a novel cache-based neural model for NMT to capture global topic information and inter-sentence cohesion dependencies. We use a gating mechanism to integrate both the topic and dynamic cache into the proposed neural cache model. Experimental results show that the cache-based neural model achieves consistent and significant improvements in translation quality over several state-of-the-art NMT and SMT baselines. Further analysis reveals that the topic cache and dynamic cache are complementary to each other and that both are able to guide the NMT decoder to use topical words and to reuse words from recently translated sentences as next word predictions." ], [ "The present research was supported by the National Natural Science Foundation of China (Grant No. 61622209). We would like to thank the three anonymous reviewers for their insightful comments." ] ] }
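To make the cache-construction rules concrete, here is a minimal Python sketch of the dynamic cache described in the "Dynamic Cache and Topic Cache" section: first-in-first-out eviction, no duplicate entries, stop-word and "UNK" filtering, and a check against the NMT target vocabulary. This is an illustrative reconstruction rather than the authors' code; the class name, the default size of 100 and the constructor arguments are assumptions.

```python
from collections import OrderedDict

class DynamicCache:
    """FIFO word cache following the rules in 'Dynamic Cache and Topic Cache'."""

    def __init__(self, max_size=100, stop_words=None, target_vocab=None):
        self.max_size = max_size
        self.stop_words = set(stop_words or [])
        self.target_vocab = target_vocab          # set of in-vocabulary target words
        self._words = OrderedDict()               # insertion order doubles as FIFO order

    def add(self, word):
        if word in self.stop_words or word == "UNK":
            return                                # stop words and "UNK" are never cached
        if self.target_vocab is not None and word not in self.target_vocab:
            return                                # cache shares the NMT target vocabulary
        if word in self._words:
            return                                # no duplicate entries
        if len(self._words) >= self.max_size:
            self._words.popitem(last=False)       # evict the oldest word (first in, first out)
        self._words[word] = True

    def extend(self, words):
        for w in words:
            self.add(w)

    def __contains__(self, word):
        return word in self._words

    def clear(self):
        self._words.clear()
```

In line with the decoding process described earlier, `extend` would be called with the words of the best translation hypothesis after each sentence is translated, and `clear` would be called when the decoder moves to a new test document.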
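The gated combination of cache and NMT probabilities from "The Model" section can likewise be sketched in a few lines. This is a hedged illustration under one reading of the equations: scores produced by g_cache are normalised with a softmax over the words currently in the cache, words outside the cache receive zero cache probability, and the final distribution is the interpolation (1 - alpha) * p_cache + alpha * p_nmt. Function and variable names are hypothetical.

```python
import numpy as np

def softmax(x):
    x = x - np.max(x)
    e = np.exp(x)
    return e / e.sum()

def combine_probabilities(p_nmt, cache_scores, alpha, vocab_index):
    """Blend NMT and cache probabilities via the gate alpha.

    p_nmt        : (V,) softmax distribution from the NMT decoder
    cache_scores : {word: score} from the cache scorer g_cache(s_t, c_t, y_{t-1}, y_t)
    alpha        : scalar gate in (0, 1), e.g. sigmoid(f_gate(s_t, c_t, y_{t-1}))
    vocab_index  : {word: row index of that word in p_nmt}
    """
    p_cache = np.zeros(len(p_nmt), dtype=float)
    if cache_scores:
        words = list(cache_scores)
        probs = softmax(np.array([cache_scores[w] for w in words], dtype=float))
        for w, p in zip(words, probs):
            p_cache[vocab_index[w]] = p           # words not in the cache keep probability 0
    return (1.0 - alpha) * p_cache + alpha * p_nmt
```

Replacing the learned gate with a constant (e.g. alpha = 0.3) reproduces the ablation in "Effect of the Gating Mechanism", where the fixed mixture performs markedly worse than the adaptive gate.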
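Finally, the coherence measure used in "Analysis on Translation Coherence", the cosine similarity between the mean word vectors of adjacent sentences (Eq. 46), reduces to a short numpy sketch; the embedding lookup is assumed to be a dict from word to vector taken from a pre-trained word2vec model.

```python
import numpy as np

def sentence_vector(tokens, embeddings):
    """mu(S): mean of the word vectors of a sentence (out-of-vocabulary words are skipped)."""
    vecs = [embeddings[w] for w in tokens if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else None

def coherence(sent1, sent2, embeddings):
    """sim(S1, S2) = cos(mu(S1), mu(S2)) as in Eq. (46)."""
    v1 = sentence_vector(sent1, embeddings)
    v2 = sentence_vector(sent2, embeddings)
    if v1 is None or v2 is None:
        return 0.0
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
```

Averaging this score over all adjacent sentence pairs of a document translation gives the kind of numbers reported in Table 6.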
{ "question": [ "Did the authors evaluate their system output for coherence?", "What evaluations did the authors use on their system?" ], "question_id": [ "9d016eb3913b41f7a18c6fa865897c12b5fe0212", "c1c611409b5659a1fd4a870b6cc41f042e2e9889" ], "nlp_background": [ "infinity", "infinity" ], "topic_background": [ "familiar", "familiar" ], "paper_read": [ "no", "no" ], "search_query": [ "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "We want to further study how the proposed cache-based neural model influence coherence in document translation. For this, we follow Lapata2005Automatic to measure coherence as sentence similarity. First, each sentence is represented by the mean of the distributed vectors of its words. Second, the similarity between two sentences is determined by the cosine of their means." ], "highlighted_evidence": [ "we follow Lapata2005Automatic to measure coherence as sentence similarity" ] } ], "annotation_id": [ "24b8501e77da8e331182557dea36f83fd31de3e7" ], "worker_id": [ "594e0b1297abe0ad3e2555ad27eedfb59c442bb9" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "BLEU scores, exact matches of words in both translations and topic cache, and cosine similarities of adjacent sentences for coherence.", "evidence": [ "FLOAT SELECTED: Table 1: Experiment results on the NIST Chinese-English translation tasks. [+Cd] is the proposed model with the dynamic cache. [+Cd,Ct] is the proposed model with both the dynamic and topic cache. The BLEU scores are case-insensitive. Avg means the average BLEU score on all test sets.", "FLOAT SELECTED: Table 3: The average number of words in translations of beginning sentences of documents that are also in the topic cache. Reference represents the average number of words in four human translations that are also in the topic cache.", "FLOAT SELECTED: Table 6: The average cosine similarity of adjacent sentences (coherence) on all test sets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Experiment results on the NIST Chinese-English translation tasks. [+Cd] is the proposed model with the dynamic cache. [+Cd,Ct] is the proposed model with both the dynamic and topic cache. The BLEU scores are case-insensitive. Avg means the average BLEU score on all test sets.", "FLOAT SELECTED: Table 3: The average number of words in translations of beginning sentences of documents that are also in the topic cache. Reference represents the average number of words in four human translations that are also in the topic cache.", "FLOAT SELECTED: Table 6: The average cosine similarity of adjacent sentences (coherence) on all test sets." ] } ], "annotation_id": [ "168fae5dca1b8acf95dd0235b9633bcf0905c4c1" ], "worker_id": [ "594e0b1297abe0ad3e2555ad27eedfb59c442bb9" ] } ] }
{ "caption": [ "Figure 1: Architecture of NMT with the neural cache model. Pcache is the probability for a next target word estimated by the cache-based neural model.", "Figure 2: Schematic diagram of the topic projection during the testing process.", "Figure 3: Architecture of the cache model.", "Table 1: Experiment results on the NIST Chinese-English translation tasks. [+Cd] is the proposed model with the dynamic cache. [+Cd,Ct] is the proposed model with both the dynamic and topic cache. The BLEU scores are case-insensitive. Avg means the average BLEU score on all test sets.", "Table 2: Effect of the gating mechanism. [+α=0.3] is the [+Cd,Ct] with a fixed gate value 0.3.", "Table 3: The average number of words in translations of beginning sentences of documents that are also in the topic cache. Reference represents the average number of words in four human translations that are also in the topic cache.", "Table 4: The average number of words in translations generated byRNNSearch∗+Cd,Ct that are also in the dynamic and topic cache. [document/sentence ∈ [Ct]] denote the average number of words that are in both document/sentence translations and the topic cache. [document/sentence ∈ [Cd,Ct]] denote the average number of words occurring in both document/sentence translations and the two caches.", "Table 5: Translation examples on the test set. SRC for source sentences, REF for human translations. These two sentences (1) and (2) are in the same document.", "Table 6: The average cosine similarity of adjacent sentences (coherence) on all test sets." ], "file": [ "4-Figure1-1.png", "5-Figure2-1.png", "5-Figure3-1.png", "8-Table1-1.png", "8-Table2-1.png", "9-Table3-1.png", "9-Table4-1.png", "9-Table5-1.png", "10-Table6-1.png" ] }
1912.07025
Indiscapes: Instance Segmentation Networks for Layout Parsing of Historical Indic Manuscripts
Historical palm-leaf manuscripts and early paper documents from the Indian subcontinent form an important part of the world's literary and cultural heritage. Despite their importance, large-scale annotated Indic manuscript image datasets do not exist. To address this deficiency, we introduce Indiscapes, the first dataset with multi-regional layout annotations for historical Indic manuscripts. To address the challenges of large script diversity and dense, irregular layout elements (e.g. text lines, pictures, multiple documents per image), we adapt a Fully Convolutional Deep Neural Network architecture for fully automatic, instance-level spatial layout parsing of manuscript images. We demonstrate the effectiveness of the proposed architecture on images from the Indiscapes dataset. For annotation flexibility, and keeping in mind the non-technical background of domain experts, we also contribute a custom, web-based GUI annotation tool and a dashboard-style analytics portal. Overall, our contributions set the stage for downstream applications such as OCR and word-spotting in historical Indic manuscripts at scale.
{ "section_name": [ "Introduction", "Related Work", "Indiscapes: The Indic manuscript dataset", "Indiscapes: The Indic manuscript dataset ::: Annotation Challenges", "Indiscapes: The Indic manuscript dataset ::: Annotation Tool", "Indic Manuscript Layout Parsing", "Indic Manuscript Layout Parsing ::: Network Architecture", "Indic Manuscript Layout Parsing ::: Implementation Details", "Indic Manuscript Layout Parsing ::: Implementation Details ::: Training", "Indic Manuscript Layout Parsing ::: Implementation Details ::: Inference", "Indic Manuscript Layout Parsing ::: Evaluation", "Results", "Conclusion", "Acknowledgment" ], "paragraphs": [ [ "The collection and analysis of historical document images is a key component in the preservation of culture and heritage. Given its importance, a number of active research efforts exist across the world BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. In this paper, we focus on palm-leaf and early paper documents from the Indian sub-continent. In contrast with modern or recent era documents, such manuscripts are considerably more fragile, prone to degradation from elements of nature and tend to have a short shelf life BIBREF6, BIBREF7, BIBREF8. More worryingly, the domain experts who can decipher such content are small in number and dwindling. Therefore, it is essential to access the content within these documents before it is lost forever.", "Surprisingly, no large-scale annotated Indic manuscript image datasets exist for the benefit of researchers in the community. In this paper, we take a significant step to address this gap by creating such a dataset. Given the large diversity in language, script and non-textual regional elements in these manuscripts, spatial layout parsing is crucial in enabling downstream applications such as OCR, word-spotting, style-and-content based retrieval and clustering. For this reason, we first tackle the problem of creating a diverse, annotated spatial layout dataset. This has the immediate advantage of bypassing the hurdle of language and script familiarity for annotators since layout annotation does not require any special expertise unlike text annotation.", "In general, manuscripts from Indian subcontinent pose many unique challenges (Figure FIGREF1). To begin with, the documents exhibit a large multiplicity of languages. This is further magnified by variations in intra-language script systems. Along with text, manuscripts may contain pictures, tables, non-pictorial decorative elements in non-standard layouts. A unique aspect of Indic and South-East Asian manuscripts is the frequent presence of holes punched in the document for the purpose of binding BIBREF8, BIBREF9, BIBREF6. These holes cause unnatural gaps within text lines. The physical dimensions of the manuscripts are typically smaller compared to other historical documents, resulting in a dense content layout. Sometimes, multiple manuscript pages are present in a single image. Moreover, imaging-related factors such as varying scan quality play a role as well. Given all of these challenges, it is important to develop robust and scalable approaches for the problem of layout parsing. 
In addition, given the typical non-technical nature of domain experts who study manuscripts, it is also important to develop easy-to-use graphical interfaces for annotation, post-annotation visualization and analytics.", "We make the following contributions:", "We introduce Indiscapes, the first ever historical Indic manuscript dataset with detailed spatial layout annotations (Section SECREF3).", "We adapt a deep neural network architecture for instance-level spatial layout parsing of historical manuscript images (Section SECREF16).", "We also introduce a lightweight web-based GUI for annotation and dashboard-style analytics keeping in mind the non-technical domain experts and the unique layout-level challenges of Indic manuscripts (Section SECREF11)." ], [ "A number of research groups have invested significant efforts in the creation and maintenance of annotated, publicly available historical manuscript image datasets BIBREF10, BIBREF11, BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF12. Other collections contain character-level and word-level spatial annotations for South-East Asian palm-leaf manuscripts BIBREF9, BIBREF4, BIBREF13. In these latter set of works, annotations for lines are obtained by considering the polygonal region formed by union of character bounding boxes as a line. While studies on Indic palm-leaf and paper-based manuscripts exist, these are typically conducted on small and often, private collections of documents BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20. No publicly available large-scale, annotated dataset of historical Indic manuscripts exists to the best of our knowledge. In contrast with existing collections, our proposed dataset contains a much larger diversity in terms of document type (palm-leaf and early paper), scripts and annotated layout elements (see Tables TABREF5,TABREF8). An additional level of complexity arises from the presence of multiple manuscript pages within a single image (see Fig. FIGREF1).", "A number of contributions can also be found for the task of historical document layout parsing BIBREF21, BIBREF22, BIBREF23, BIBREF24. Wei et al. BIBREF22 explore the effect of using a hybrid feature selection method while using autoencoders for semantic segmentation in five historical English and Medieval European manuscript datasets. Chen et al. BIBREF24 explore the use of Fully Convolutional Networks (FCN) for the same datasets. Barakat et al. BIBREF25 propose a FCN for segmenting closely spaced, arbitrarily oriented text lines from an Arabic manuscript dataset. The mentioned approaches, coupled with efforts to conduct competitions on various aspects of historical document layout analysis have aided progress in this area BIBREF26, BIBREF27, BIBREF28. A variety of layout parsing approaches, including those employing the modern paradigm of deep learning, have been proposed for Indic BIBREF17, BIBREF19, BIBREF29, BIBREF20 and South-East Asian BIBREF23, BIBREF30, BIBREF13, BIBREF31, BIBREF32 palm-leaf and paper manuscript images. However, existing approaches typically employ brittle hand-crafted features or demonstrate performance on datasets which are limited in terms of layout diversity. Similar to many recent works, we employ Fully Convolutional Networks in our approach. However, a crucial distinction lies in our formulation of layout parsing as an instance segmentation problem, rather than just a semantic segmentation problem. This avoids the problem of closely spaced layout regions (e.g. 
lines) being perceived as contiguous blobs.", "The ready availability of annotation and analysis tools has facilitated progress in creation and analysis of historical document manuscripts BIBREF33, BIBREF34, BIBREF35. The tool we propose in the paper contains many of the features found in existing annotation systems. However, some of these systems are primarily oriented towards single-user, offline annotation and do not enable a unified management of annotation process and monitoring of annotator performance. In contrast, our web-based system addresses these aspects and provides additional capabilities. Many of the additional features in our system are tailored for annotation and examining annotation analytics for documents with dense and irregular layout elements, especially those found in Indic manuscripts. In this respect, our annotation system is closer to the recent trend of collaborative, cloud/web-based annotation systems and services BIBREF36, BIBREF37, BIBREF38." ], [ "The Indic manuscript document images in our dataset are obtained from two sources. The first source is the publicly available Indic manuscript collection from University of Pennsylvania's Rare Book and Manuscript Library BIBREF39, also referred to as Penn-in-Hand (PIH). From the $2{,}880$ Indic manuscript book-sets, we carefully curated 193 manuscript images for annotation. Our curated selection aims to maximize the diversity of the dataset in terms of various attributes such as the extent of document degradation, script language, presence of non-textual elements (e.g. pictures, tables) and number of lines. Some images contain multiple manuscript pages stacked vertically or horizontally (see bottom-left image in Figure FIGREF1). The second source for manuscript images in our dataset is Bhoomi, an assorted collection of 315 images sourced from multiple Oriental Research Institutes and libraries across India. As with the first collection, we chose a subset intended to maximize the overall diversity of the dataset. However, this latter set of images are characterized by a relatively inferior document quality, presence of multiple languages and from a layout point of view, predominantly contain long, closely and irregularly spaced text lines, binding holes and degradations (Figure FIGREF1). Though some document images contain multiple manuscripts, we do not attempt to split the image into multiple pages. While this poses a challenge for annotation and automatic image parsing, retaining such images in the dataset eliminates manual/semi-automatic intervention. As our results show, our approach can successfully handle such multi-page documents, thereby making it truly an end-to-end system.", "Overall, our dataset contains 508 annotated Indic manuscripts. Some salient aspects of the dataset can be viewed in Table TABREF5 and a pictorial illustration of layout regions can be viewed in Figure FIGREF13. Note that multiple regions can overlap, unlike existing historical document datasets which typically contain disjoint region annotations.", "For the rest of the section, we discuss the challenges associated with annotating Indic manuscripts (Section SECREF9) and our web-based annotation tool (Section SECREF11)." ], [ "A variety of unique challenges exist in the context of annotating Indic manuscript layouts. The challenges arise from three major sources.", "Content: The documents are written in a large variety of Indic languages. Some languages even exhibit intra-language script variations. 
A large pool of annotators familiar with the languages and scripts present in the corpus is required to ensure proper annotation of lines and character components.", "Layout: Unlike some of the existing datasets, Indic manuscripts contain non-textual elements such as color pictures, tables and document decorations. These elements are frequently interspersed with text in non-standard layouts. In many cases, the manuscripts contain one or more physical holes, designed for a thread-like material to pass through and bind the leaves together as a book. Such holes vary in terms of spatial location, count and hole diameter. When the holes are present in the middle of the document, they cause a break in the contiguity of lines. In some documents, the line contiguity is broken by a `virtual' hole-like gap, possibly intended for creation of the punched hole at a future time. In many cases, the separation between lines is extremely small. The handwritten nature of these documents and the surface material result in extremely uneven lines, necessitating meticulous and slow annotation. If multiple manuscript pages are present, the stacking order could be horizontal or vertical. Overall, the sheer variety in layout elements poses a significant challenge, not only for annotation, but also for automated layout parsing.", "Degradations: Historical Indic manuscripts tend to be inherently fragile and prone to damage due to various sources – wood-and-leaf-boring insects, humidity seepage, improper storage and handling etc. While some degradations cause the edges of the document to become frayed, others manifest as irregularly shaped perforations in the document interior. It may be important to identify such degradations before attempting lexically-focused tasks such as OCR or word-spotting." ], [ "Keeping the aforementioned challenges in mind, we introduce a new browser-based annotation tool (see Figure FIGREF10). The tool is designed to operate both stand-alone and as a web-service. The web-service mode enables features such as distributed parallel sessions by registered annotators, dashboard-based live session monitoring and a wide variety of annotation-related analytics. On the front-end, a freehand region option is provided alongside the usual rectangle and polygon to enable maximum annotation flexibility. The web-service version also features a `Correction-mode' which enables annotators to correct existing annotations from previous annotators. Additionally, the tool has been designed to enable lexical (text) annotations in future." ], [ "To succeed at layout parsing of manuscripts, we require a system which can accurately localize various types of regions (e.g. text lines, isolated character components, physical degradation, pictures, holes). More importantly, we require a system which can isolate individual instances of each region (e.g. multiple text lines) in the manuscript image. Also, in our case, the annotation regions for manuscripts are not disjoint and can overlap (e.g. The annotation region for a text line can overlap with the annotation region of a hole (see Figure FIGREF13)). Therefore, we require a system which can accommodate such overlaps. To meet all of these requirements, we model our problem as one of semantic instance-level segmentation and employ the Mask R-CNN BIBREF40 architecture which has proven to be very effective at the task of object-instance segmentation in photos. Next, we briefly describe the Mask R-CNN architecture and our modifications of the same. 
Subsequently, we provide details related to implementation (Section SECREF17), model training (Section SECREF18) and inference (Section SECREF19)." ], [ "The Mask-RCNN architecture contains three stages as described below (see Figure FIGREF12).", "Backbone: The first stage, referred to as the backbone, is used to extract features from the input image. It consists of a convolutional network combined with a feature-pyramid network BIBREF41, thereby enabling multi-scale features to be extracted. We use the first four blocks of ResNet-50 BIBREF42 as the convolutional network.", "Region Proposal Network (RPN): This is a convolutional network which scans the pyramid feature map generated by the backbone network and generates rectangular regions commonly called `object proposals' which are likely to contain objects of interest. For each level of the feature pyramid and for each spatial location at a given level, a set of level-specific bounding boxes called anchors are generated. The anchors typically span a range of aspect ratios (e.g. $1:2, 1:1, 2:1$) for flexibility in detection. For each anchor, the RPN network predicts (i) the probability of an object being present (`objectness score') (ii) offset coordinates of a bounding box relative to location of the anchor. The generated bounding boxes are first filtered according to the `objectness score'. From boxes which survive the filtering, those that overlap with the underlying object above a certain threshold are chosen. After applying non-maximal suppression to remove overlapping boxes with relatively smaller objectness scores, the final set of boxes which remain are termed `object proposals' or Regions-of-Interest (RoI).", "Multi-Task Branch Networks: The RoIs obtained from RPN are warped into fixed dimensions and overlaid on feature maps extracted from the backbone to obtain RoI-specific features. These features are fed to three parallel task sub-networks. The first sub-network maps these features to region labels (e.g. Hole,Character-Line-Segment) while the second sub-network maps the RoI features to bounding boxes. The third sub-network is fully convolutional and maps the features to the pixel mask of the underlying region. Note that the ability of the architecture to predict masks independently for each RoI plays a crucial role in obtaining instance segmentations. Another advantage is that it naturally addresses situations where annotations or predictions overlap." ], [ "The dataset splits used for training, validation and test phases can be seen in Table TABREF6. All manuscript images are adaptively resized to ensure the width does not exceed 1024 pixels. The images are padded with zeros such that the input to the deep network has spatial dimensions of $1024 \\times 1024$. The ground truth region masks are initially subjected to a similar resizing procedure. Subsequently, they are downsized to $28 \\times 28$ in order to match output dimensions of the mask sub-network." ], [ "The network is initialized with weights obtained from a Mask R-CNN trained on the MS-COCO BIBREF43 dataset with a ResNet-50 backbone. We found that this results in faster convergence and stabler training compared to using weights from a Mask-RCNN trained on ImageNet BIBREF44 or training from scratch. Within the RPN network, we use custom-designed anchors of 5 different scales and with 3 different aspect ratios. Specifically, we use the following aspect ratios – 1:1,1:3,1:10 – keeping in mind the typical spatial extents of the various region classes. 
We also limit the number of RoIs (`object proposals') to 512. We use categorical cross entropy loss $\\mathcal {L}_{RPN}$ for RPN classification network. Within the task branches, we use categorical cross entropy loss $\\mathcal {L}_{r}$ for region classification branch, smooth L1 loss BIBREF45 ($\\mathcal {L}_{bb}$) for final bounding box prediction and per-pixel binary cross entropy loss $\\mathcal {L}_{mask}$ for mask prediction. The total loss is a convex combination of these losses, i.e. $\\mathcal {L} = \\lambda _{RPN} \\mathcal {L}_{RPN} + \\lambda _{r} \\mathcal {L}_{r} + \\lambda _{bb} \\mathcal {L}_{bb} + \\lambda _{mask} \\mathcal {L}_{mask}$. The weighting factors ($\\lambda $s) are set to 1. However, to ensure priority for our task of interest namely mask prediction, we set $\\lambda _{mask}=2$. For optimization, we use Stochastic Gradient Descent (SGD) optimizer with a gradient norm clipping value of $0.5$. The batch size, momentum and weight decay are set to 1, $0.9$ and $10^{-3}$ respectively. Given the relatively smaller size of our manuscript dataset compared to the photo dataset (MS-COCO) used to originally train the base Mask R-CNN, we adopt a multi-stage training strategy. For the first stage (30 epochs), we train only the task branch sub-networks using a learning rate of $10^{-3}$ while freezing weights in the rest of the overall network. This ensures that the task branches are fine-tuned for the types of regions contained in manuscript images. For the second stage (20 epochs), we additionally train stage-4 and up of the backbone ResNet-50. This enables extraction of appropriate semantic features from manuscript images. The omission of the initial 3 stages in the backbone for training is due to the fact that they provide generic, re-usable low-level features. To ensure priority coverage of hard-to-localize regions, we use focal loss BIBREF46 for mask generation. For the final stage (15 epochs), we train the entire network using a learning rate of $10^{-4}$." ], [ "During inference, the images are rescaled and processed using the procedure described at the beginning of the subsection. The number of RoIs retained after non-maximal suppression (NMS) from the RPN is set to 1000. From these, we choose the top 100 region detections with objectness score exceeding $0.5$ and feed the corresponding RoIs to the mask branch sub-network for mask generation. It is important to note that this strategy is different from the parallel generation of outputs and use of the task sub-networks during training. The generated masks are then binarized using an empirically chosen threshold of $0.4$ and rescaled to their original size using bilinear interpolation. On these generated masks, NMS with a threshold value of $0.5$ is applied to obtain the final set of predicted masks." ], [ "For quantitative evaluation, we compute Average Precision (AP) for a particular IoU threshold, a measure widely reported in instance segmentation literature BIBREF47, BIBREF43. We specifically report $AP_{50}$ and $AP_{75}$, corresponding to AP at IoU thresholds 50 and 75 respectively BIBREF40. In addition, we report an overall score by averaging AP at different IoU thresholds ranging from $0.5$ to $0.95$ in steps of $0.05$.", "The AP measure characterizes performance at document level. To characterize performance for each region type, we report two additional measures BIBREF24 – average class-wise IoU (cwIoU) and average class-wise per-pixel accuracy (cwAcc). Consider a fixed test document $k$. 
Suppose there are $r_i$ regions of class $i$ and let ${IoU}_r$ denote the IoU score for one such region $r$, i.e. $1 \\leqslant r \\leqslant r_i$. The per-class IoU score for class $i$ and document $k$ is computed as ${cwIoU}^d_i = \\frac{\\sum _r {IoU}_r}{r_i}$. Suppose there are $N_i$ documents containing at least a single region of class $i$ in ground-truth. The overall per-class IoU score for class $i$ is computed as ${cwIoU}_i = \\frac{\\sum _d {cwIoU}^d_i}{N_i}$. In a similar manner, we define class-wise pixel accuracy ${pwAcc}^d_i$ at document level and average it across all the documents containing class $i$, i.e. ${cwAcc}_i = \\frac{\\sum _d {pwAcc}^d_i}{N_i}$. Note that our approach for computing class-wise scores prevents documents with a relatively larger number of class instances from dominating the score and in this sense, differs from existing approaches BIBREF24" ], [ "We report quantitative results using the measures described in Section SECREF20. Table TABREF14 reports Average Precision and Table TABREF15 reports class-wise average IOUs and per-pixel accuracies. Qualitative results can be viewed in Figure FIGREF13. Despite the challenges posed by manuscripts, our model performs reasonably well across a variety of classes. As the qualitative results indicate, the model predicts accurate masks for almost all the regions. The results also indicate that our model handles overlap between Holes and Character line segments well. From ablative experiments, we found that our choice of focal loss was crucial in obtaining accurate mask boundaries. Unlike traditional semantic segmentation which would have produced a single blob-like region for line segments, our instance-based approach isolates each text line separately. Additionally, the clear demarcation between Page-Boundary and background indicates that our system identifies semantically relevant regions for downstream analysis. As the result at the bottom of Figure FIGREF13 shows, our system can even handle images with multiple pages, thus removing the need for any pre-processing related to isolation of individual pages.", "From quantitative results, we observe that Holes, Character line segments, Page boundary and Pictures are parsed the best while Physical degradations are difficult to parse due to the relatively small footprint and inconsistent patterns in degradations. The results show that performance for Penn in Hand (PIH) documents is better compared to Bhoomi manuscripts. We conjecture that the presence of closely spaced and unevenly written lines in latter is the cause. In our approach, two (or more) objects may share the same bounding box in terms of overlap and it is not possible to determine which box to choose during mask prediction. Consequently, an underlying line's boundary may either end up not being detected or the predicted mask might be poorly localized. However, this is not a systemic problem since our model achieves good performance even for very dense Bhoomi document line layouts." ], [ "Via this paper, we propose Indiscapes, the first dataset with layout annotations for historical Indic manuscripts. We believe that the availability of layout annotations will play a crucial role in reducing the overall complexity for OCR and other tasks such as word-spotting, style-and-content based retrieval. In the long-term, we intend to expand the dataset, not only numerically but also in terms of layout, script and language diversity. 
As a significant contribution, we have also adapted a deep-network-based instance segmentation framework, custom-modified for fully automatic layout parsing. Given the general nature of our framework, advances in instance segmentation approaches can be leveraged, thereby improving performance over time. Our proposed web-based annotator system, although designed for Indic manuscripts, is flexible and could be reused for similar manuscripts from the Asian subcontinent. We intend to expand the capabilities of our annotator system in many useful ways. For instance, the layout estimated by our deep network could be provided to annotators for correction, thus reducing annotation effort. Finally, we plan to make our dataset, instance segmentation system and annotator system publicly available. This would enable large-scale data collection and automated analysis efforts for Indic as well as other historical Asian manuscripts. The repositories related to the systems presented in this paper and the Indiscapes dataset can be accessed at https://ihdia.iiit.ac.in." ], [ "We would like to thank Dr. Sai Susarla for enabling access to the Bhoomi document collection. We also thank Poreddy Mourya Kumar Reddy and Gollapudi Sai Vamsi Krishna for their contributions related to the dashboard, and the various annotators for their labelling efforts." ] ] }
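The class-wise scores defined in the Evaluation section average the per-region IoU within each document first and then across the documents that contain at least one region of that class, which prevents region-rich documents from dominating the score. A minimal Python sketch of that two-level aggregation follows (the data-structure layout is illustrative):

```python
from collections import defaultdict

def classwise_iou(per_document_region_ious):
    """Average class-wise IoU (cwIoU) as defined in the Evaluation section.

    per_document_region_ious: list over test documents; each entry is a dict
        {class_name: [IoU of region 1, IoU of region 2, ...]} with one IoU value
        per ground-truth region of that class in that document.
    Returns {class_name: cwIoU_i}.
    """
    per_doc_scores = defaultdict(list)                        # class -> cwIoU^d_i values
    for doc in per_document_region_ious:
        for cls, ious in doc.items():
            if ious:                                          # document contains class i
                per_doc_scores[cls].append(sum(ious) / len(ious))
    # Average over the N_i documents containing at least one region of class i.
    return {cls: sum(s) / len(s) for cls, s in per_doc_scores.items()}
```

Class-wise per-pixel accuracy (cwAcc) follows the same two-level averaging, with the per-document pixel accuracy taking the place of the per-region IoU average.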
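The training recipe in "Implementation Details ::: Training" amounts to a weighted sum of the four Mask R-CNN losses plus a three-stage unfreezing schedule. The snippet below restates it as plain Python for reference; the stage-2 learning rate is not given explicitly in the text, so carrying over 1e-3 is an assumption, and the names are illustrative.

```python
# Loss weighting: all lambdas are 1 except lambda_mask = 2, prioritising mask prediction.
LOSS_WEIGHTS = {"rpn": 1.0, "region_cls": 1.0, "bbox": 1.0, "mask": 2.0}

def total_loss(losses, weights=LOSS_WEIGHTS):
    """L = sum_k lambda_k * L_k over the RPN, region-classification,
    bounding-box regression and mask losses."""
    return sum(weights[k] * losses[k] for k in weights)

# Three-stage schedule: (epochs, learning rate, trainable part of the network).
TRAINING_STAGES = [
    (30, 1e-3, "task-branch heads only"),
    (20, 1e-3, "heads + ResNet-50 stage 4 and up"),   # stage-2 LR assumed, not stated
    (15, 1e-4, "entire network"),
]
```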
{ "question": [ "What accuracy does CNN model achieve?", "How many documents are in the Indiscapes dataset?", "What language(s) are the manuscripts written in?" ], "question_id": [ "79bb1a1b71a1149e33e8b51ffdb83124c18f3e9c", "26faad6f42b6d628f341c8d4ce5a08a591eea8c2", "20be7a776dfda0d3c9dc10270699061cb9bc8297" ], "nlp_background": [ "two", "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "historical", "historical", "historical" ], "question_writer": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Combined per-pixel accuracy for character line segments is 74.79", "evidence": [ "FLOAT SELECTED: TABLE IV: Class-wise average IoUs and per-pixel accuracies on the test set. Refer to Table I for full names of abbreviated region types listed at top of the table.", "FLOAT SELECTED: TABLE I: Counts for various annotated region types in INDISCAPES dataset. The abbreviations used for region types are given below each region type." ], "highlighted_evidence": [ "FLOAT SELECTED: TABLE IV: Class-wise average IoUs and per-pixel accuracies on the test set. Refer to Table I for full names of abbreviated region types listed at top of the table.", "FLOAT SELECTED: TABLE I: Counts for various annotated region types in INDISCAPES dataset. The abbreviations used for region types are given below each region type." ] } ], "annotation_id": [ "16ff9c9f07a060d809fdb92a6e6044c47a21faf3" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "508", "evidence": [ "FLOAT SELECTED: TABLE III: Scripts in the INDISCAPES dataset." ], "highlighted_evidence": [ "FLOAT SELECTED: TABLE III: Scripts in the INDISCAPES dataset." ] } ], "annotation_id": [ "edd6026b3573e63afd587768f066b5bdc87c9446" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "c55585ec881d12ccf06f64dedfe417e3dd1722bb" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ] }
{ "caption": [ "Fig. 1: The five images on the left, enclosed by pink dotted line, are from the BHOOMI palm leaf manuscript collection while the remaining images (enclosed by blue dotted line) are from the ’Penn-in-Hand’ collection (refer to Section III). Note the inter-collection differences, closely spaced and unevenly written text lines, presence of various non-textual layout regions (pictures, holes, library stamps), physical degradation and presence of multiple manuscripts per image. All of these factors pose great challenges for annotation and machine-based parsing.", "TABLE I: Counts for various annotated region types in INDISCAPES dataset. The abbreviations used for region types are given below each region type.", "TABLE II: Dataset splits used for learning and inference.", "TABLE III: Scripts in the INDISCAPES dataset.", "Fig. 2: Screenshots of our web-based annotator (left) and analytics dashboard (right).", "Fig. 3: The architecture adopted for Indic Manuscript Layout Parsing. Refer to Section IV for details.", "TABLE IV: Class-wise average IoUs and per-pixel accuracies on the test set. Refer to Table I for full names of abbreviated region types listed at top of the table.", "TABLE V: AP at IoU thresholds 50, 75 and overall AP averaged over IoU range for test set.", "Fig. 4: Ground truth annotations (left) and predicted instance segmentations (right) for test set images. Note that we use colored shading only to visualize individual region instances and not to color-code region types. The region label abbreviations are shown alongside the regions. CLS : Character Line Segment, PB : Page Boundary, H : Hole, BL : Boundary Line, CC : Character Component, PD : Physical Degradation." ], "file": [ "2-Figure1-1.png", "3-TableI-1.png", "3-TableII-1.png", "3-TableIII-1.png", "4-Figure2-1.png", "5-Figure3-1.png", "5-TableIV-1.png", "5-TableV-1.png", "6-Figure4-1.png" ] }
1709.01256
Semantic Document Distance Measures and Unsupervised Document Revision Detection
In this paper, we model the document revision detection problem as a minimum cost branching problem that relies on computing document distances. Furthermore, we propose two new document distance measures, word vector-based Dynamic Time Warping (wDTW) and word vector-based Tree Edit Distance (wTED). Our revision detection system is designed for large-scale corpora and implemented in Apache Spark. Using the Wikipedia revision dumps https://snap.stanford.edu/data/wiki-meta.html and simulated data sets, we demonstrate that our system detects revisions more precisely than state-of-the-art methods.
{ "section_name": [ "Introduction", "Revision Network", "Distance/similarity Measures", "Background", "Semantic Distance between Paragraphs", "Word Vector-based Dynamic Time Warping", "Word Vector-based Tree Edit Distance", "Process Flow", "Estimating the Cut-off Threshold", "Numerical Experiments", "Distance/Similarity Measures", "Data Sets", "Results", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "It is a common habit for people to keep several versions of documents, which creates duplicate data. A scholarly article is normally revised several times before being published. An academic paper may be listed on personal websites, digital conference libraries, Google Scholar, etc. In major corporations, a document typically goes through several revisions involving multiple editors and authors. Users would benefit from visualizing the entire history of a document. It is worthwhile to develop a system that is able to intelligently identify, manage and represent revisions. Given a collection of text documents, our study identifies revision relationships in a completely unsupervised way. For each document in a corpus we only use its content and the last modified timestamp. We assume that a document can be revised by many users, but that the documents are not merged together. We consider collaborative editing as revising documents one by one.", "The two research problems that are most relevant to document revision detection are plagiarism detection and revision provenance. In a plagiarism detection system, every incoming document is compared with all registered non-plagiarized documents BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . The system returns true if an original copy is found in the database; otherwise, the system returns false and adds the document to the database. Thus, it is a 1-to-n problem. Revision provenance is a 1-to-1 problem as it keeps track of detailed updates of one document BIBREF4 , BIBREF5 . Real-world applications include GitHub, version control in Microsoft Word and Wikipedia version trees BIBREF6 . In contrast, our system solves an n-to-n problem on a large scale. Our potential target data sources, such as the entire web or internal corpora in corporations, contain numerous original documents and their revisions. The aim is to find all revision document pairs within a reasonable time.", "Document revision detection, plagiarism detection and revision provenance all rely on comparing the content of two documents and assessing a distance/similarity score. The classic document similarity measure, especially for plagiarism detection, is fingerprinting BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Fixed-length fingerprints are created using hash functions to represent document features and are then used to measure document similarities. However, the main purpose of fingerprinting is to reduce computation instead of improving accuracy, and it cannot capture word semantics. Another widely used approach is computing the sentence-to-sentence Levenshtein distance and assigning an overall score for every document pair BIBREF13 . Nevertheless, due to the large number of existing documents, as well as the large number of sentences in each document, the Levenshtein distance is not computation-friendly. Although alternatives such as the vector space model (VSM) can largely reduce the computation time, their effectiveness is low. 
More importantly, none of the above approaches can capture the semantic meanings of words, which heavily limits the performance of these approaches. For instance, from a semantic perspective, “I went to the bank\" is expected to be similar to “I withdrew some money\" rather than “I went hiking.\" Our document distance measures are inspired by the weaknesses of current document distance/similarity measures and recently proposed models for word representations such as word2vec BIBREF14 and Paragraph Vector (PV) BIBREF15 . Replacing words with distributed vector embeddings makes it feasible to measure semantic distances using advanced algorithms, e.g., Dynamic Time Warping (DTW) BIBREF16 , BIBREF17 , BIBREF18 and Tree Edit Distance (TED) BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . Although calculating text distance using DTW BIBREF27 , TED BIBREF28 or Word Mover's Distance (WMD) BIBREF29 has been attempted in the past, these measures are not ideal for large-scale document distance calculation. The first two algorithms were designed for sentence distances instead of document distances. The third measure computes the distance of two documents by solving a transshipment problem between words in the two documents and uses word2vec embeddings to calculate semantic distances of words. The biggest limitation of WMD is its long computation time. We show in Section SECREF54 that our wDTW and wTED measures yield more precise distance scores with much shorter running time than WMD.", "We recast the problem of detecting document revisions as a network optimization problem (see Section SECREF2 ) and consequently as a set of document distance problems (see Section SECREF4 ). We use trained word vectors to represent words, concatenate the word vectors to represent documents, and combine word2vec with DTW or TED. Meanwhile, in order to guarantee reasonable computation time in large data sets, we calculate document distances at the paragraph level with Apache Spark. A distance score is computed by feeding paragraph representations to DTW or TED. Our code and data are publicly available. ", "The primary contributions of this work are as follows.", "The rest of this paper is organized into five parts. In Section 2, we clarify related terms and explain the methodology for document revision detection. In Section 3, we provide a brief background on existing document similarity measures and present our wDTW and wTED algorithms as well as the overall process flow. In Section 4, we demonstrate our revision detection results on Wikipedia revision dumps and six simulated data sets. Finally, in Section 5, we summarize some concluding remarks and discuss avenues for future work and improvements." ], [ "The two requirements for a document INLINEFORM0 being a revision of another document INLINEFORM1 are that INLINEFORM2 has been created later than INLINEFORM3 and that the content of INLINEFORM4 is similar to (has been modified from) that of INLINEFORM5 . More specifically, given a corpus INLINEFORM6 , for any two documents INLINEFORM7 , we want to find out the yes/no revision relationship of INLINEFORM8 and INLINEFORM9 , and then output all such revision pairs.", "We assume that each document has a creation date (the last modified timestamp) which is readily available from the metadata of the document. In this section we also assume that we have a INLINEFORM0 method and a cut-off threshold INLINEFORM1 . 
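Concretely, given the assumed dist method and cut-off threshold, this pairwise check can be sketched as follows; the field names and the dist signature are illustrative placeholders rather than the system's actual interfaces:

```python
from typing import Callable, Dict, List, Tuple

def candidate_revision_pairs(docs: List[Dict], dist: Callable[[str, str], float],
                             tau: float) -> List[Tuple[str, str]]:
    """Return (earlier, later) document-id pairs where the later document may be a
    revision of the earlier one: it was created afterwards and its content lies
    within the distance cut-off."""
    pairs = []
    for earlier in docs:
        for later in docs:
            if earlier is later:
                continue
            if earlier["created"] < later["created"] and dist(earlier["text"], later["text"]) < tau:
                pairs.append((earlier["id"], later["id"]))
    return pairs
```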
We represent a corpus as network INLINEFORM2 , for example Figure FIGREF5 , in which a vertex corresponds to a document. There is an arc INLINEFORM3 if and only if INLINEFORM4 and the creation date of INLINEFORM5 is before the creation date of INLINEFORM6 . In other words, INLINEFORM7 is a revision candidate for INLINEFORM8 . By construction, INLINEFORM9 is acyclic. For instance, INLINEFORM10 is a revision candidate for INLINEFORM11 and INLINEFORM12 . Note that we allow one document to be the original document of several revised documents. As we only need to focus on revision candidates, we reduce INLINEFORM13 to INLINEFORM14 , shown in Figure FIGREF5 , by removing isolated vertices. We define the weight of an arc as the distance score between the two vertices. Recall the assumption that a document can be a revision of at most one document. In other words, documents cannot be merged. Due to this assumption, all revision pairs form a branching in INLINEFORM15 . (A branching is a subgraph where each vertex has an in-degree of at most 1.) The document revision problem is to find a minimum cost branching in INLINEFORM16 (see Fig FIGREF5 ).", "The minimum branching problem was earlier solved by BIBREF30 edmonds1967optimum and BIBREF31 velardi2013ontolearn. The details of his algorithm are as follows.", "In our case, INLINEFORM0 is acyclic and, therefore, the second step never occurs. For this reason, Algorithm SECREF2 solves the document revision problem.", "Find minimum branching INLINEFORM0 for network INLINEFORM1 ", "[1]", "Input: INLINEFORM0 INLINEFORM1 ", "every vertex INLINEFORM0 Set INLINEFORM1 to correspond to all arcs with head INLINEFORM2 Select INLINEFORM3 such that INLINEFORM4 is minimum INLINEFORM5 ", "Output: INLINEFORM0 ", "The essential part of determining the minimum branching INLINEFORM0 is extracting arcs with the lowest distance scores. This is equivalent to finding the most similar document from the revision candidates for every original document." ], [ "In this section, we first introduce the classic VSM model, the word2vec model, DTW and TED. We next demonstrate how to combine the above components to construct our semantic document distance measures: wDTW and wTED. We also discuss the implementation of our revision detection system." ], [ "VSM represents a set of documents as vectors of identifiers. The identifier of a word used in this work is the tf-idf weight. We represent documents as tf-idf vectors, and thus the similarity of two documents can be described by the cosine distance between their vectors. VSM has low algorithm complexity but cannot represent the semantics of words since it is based on the bag-of-words assumption.", "Word2vec produces semantic embeddings for words using a two-layer neural network. Specifically, word2vec relies on a skip-gram model that uses the current word to predict context words in a surrounding window to maximize the average log probability. 
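A minimal skip-gram training call with gensim (version 4.x assumed; the toy corpus and hyperparameters are placeholders, not the settings used in this work) illustrates the idea:

```python
from gensim.models import Word2Vec

# Toy corpus; in this work word2vec is trained on the full document corpus.
sentences = [["document", "revision", "detection"], ["semantic", "distance", "measure"]]
model = Word2Vec(sentences, vector_size=100, window=5, sg=1, min_count=1)  # sg=1 selects skip-gram
vector = model.wv["revision"]                     # a 100-dimensional embedding for one word
print(model.wv.most_similar("revision", topn=2))  # nearest neighbours in embedding space
```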
Words with similar meanings tend to have similar embeddings.", "DTW was developed originally for speech recognition in time series analysis and has been widely used to measure the distance between two sequences of vectors.", "Given two sequences of feature vectors: INLINEFORM0 and INLINEFORM1 , DTW finds the optimal alignment for INLINEFORM2 and INLINEFORM3 by first constructing an INLINEFORM4 matrix in which the INLINEFORM5 element is the alignment cost of INLINEFORM6 and INLINEFORM7 , and then retrieving the path from one corner to the diagonal one through the matrix that has the minimal cumulative distance. This algorithm is described by the following formula. DISPLAYFORM0 ", "TED was initially defined to calculate the minimal cost of node edit operations for transforming one labeled tree into another. The node edit operations are defined as follows.", "Deletion Delete a node and connect its children to its parent maintaining the order.", "Insertion Insert a node between an existing node and a subsequence of consecutive children of this node.", "Substitution Rename the label of a node.", "Let INLINEFORM0 and INLINEFORM1 be two labeled trees, and INLINEFORM2 be the INLINEFORM3 node in INLINEFORM4 . INLINEFORM5 corresponds to a mapping from INLINEFORM6 to INLINEFORM7 . TED finds mapping INLINEFORM8 with the minimal edit cost based on INLINEFORM9 ", "where INLINEFORM0 means transferring INLINEFORM1 to INLINEFORM2 based on INLINEFORM3 , and INLINEFORM4 represents an empty node." ], [ "According to the description of DTW in Section UID14 , the distance between two documents can be calculated using DTW by replacing each element in the feature vectors INLINEFORM0 and INLINEFORM1 with a word vector. However, computing the DTW distance between two documents at the word level is basically as expensive as calculating the Levenshtein distance. Thus in this section we propose an improved algorithm that is more appropriate for document distance calculation.", "In order to receive semantic representations for documents and maintain a reasonable algorithm complexity, we use word2vec to train word vectors and represent each paragraph as a sequence of vectors. Note that in both wDTW and wTED we take document titles and section titles as paragraphs. Although a more recently proposed model PV can directly train vector representations for short texts such as movie reviews BIBREF15 , our experiments in Section SECREF54 show that PV is not appropriate for standard paragraphs in general documents. Therefore, we use word2vec in our work. Algorithm SECREF20 describes how we compute the distance between two paragraphs based on DTW and word vectors. The distance between one paragraph in a document and one paragraph in another document can be pre-calculated in parallel using Spark to provide faster computation for wDTW and wTED.", "DistPara", "[h] Replace the words in paragraphs INLINEFORM0 and INLINEFORM1 with word2vec embeddings: INLINEFORM2 and INLINEFORM3 Input: INLINEFORM4 and INLINEFORM5 Initialize the first row and the first column of INLINEFORM6 matrix INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 in range INLINEFORM11 INLINEFORM12 in range INLINEFORM13 INLINEFORM14 calculate INLINEFORM15 Return: INLINEFORM16 " ], [ "As a document can be considered as a sequence of paragraphs, wDTW returns the distance between two documents by applying another DTW on top of paragraphs. The cost function is exactly the DistPara distance of two paragraphs given in Algorithm SECREF20 . 
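The DistPara idea, DTW in which each sequence element is a word embedding and the alignment cost is a vector distance, reduces to plain dynamic programming; applying the same recurrence again with paragraphs as elements gives a wDTW-style document score. The sketch below is an O(nm) NumPy version with an assumed Euclidean cost, not the parallel Spark implementation used in this work:

```python
import numpy as np

def dtw_distance(xs: np.ndarray, ys: np.ndarray) -> float:
    """DTW over two sequences of vectors (one vector per row)."""
    n, m = len(xs), len(ys)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(xs[i - 1] - ys[j - 1])   # alignment cost of the two vectors
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

para_a = np.random.randn(12, 100)   # a paragraph as 12 word vectors of dimension 100
para_b = np.random.randn(15, 100)
print(dtw_distance(para_a, para_b))
```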
Algorithm SECREF21 and Figure FIGREF22 describe our wDTW measure. wDTW observes semantic information from word vectors, which is fundamentally different from the word distance calculated from hierarchies among words in the algorithm proposed by BIBREF27 liu2007sentence. The shortcomings of their work are that it is difficult to learn semantic taxonomy of all words and that their DTW algorithm can only be applied to sentences not documents.", "wDTW", "[h] Represent documents INLINEFORM0 and INLINEFORM1 with vectors of paragraphs: INLINEFORM2 and INLINEFORM3 Input: INLINEFORM4 and INLINEFORM5 Initialize the first row and the first column of INLINEFORM6 matrix INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 in range INLINEFORM11 INLINEFORM12 in range INLINEFORM13 INLINEFORM14 DistPara INLINEFORM15 calculate INLINEFORM16 Return: INLINEFORM17 " ], [ "TED is reasonable for measuring document distances as documents can be easily transformed to tree structures visualized in Figure FIGREF24 . The document tree concept was originally proposed by BIBREF0 si1997check. A document can be viewed at multiple abstraction levels that include the document title, its sections, subsections, etc. Thus for each document we can build a tree-like structure with title INLINEFORM0 sections INLINEFORM1 subsections INLINEFORM2 ... INLINEFORM3 paragraphs being paths from the root to leaves. Child nodes are ordered from left to right as they appear in the document.", "We represent labels in a document tree as the vector sequences of titles, sections, subsections and paragraphs with word2vec embeddings. wTED converts documents to tree structures and then uses DistPara distances. More formally, the distance between two nodes is computed as follows.", "The cost of substitution is the DistPara value of the two nodes.", "The cost of insertion is the DistPara value of an empty sequence and the label of the inserted node. This essentially means that the cost is the sum of the L2-norms of the word vectors in that node.", "The cost of deletion is the same as the cost of insertion.", "Compared to the algorithm proposed by BIBREF28 sidorov2015computing, wTED provides different edit cost functions and uses document tree structures instead of syntactic n-grams, and thus wTED yields more meaningful distance scores for long documents. Algorithm SECREF23 and Figure FIGREF28 describe how we calculate the edit cost between two document trees.", "wTED", "[1] Convert documents INLINEFORM0 and INLINEFORM1 to trees INLINEFORM2 and INLINEFORM3 Input: INLINEFORM4 and INLINEFORM5 ", "Initialize tree edit distance INLINEFORM0 node label INLINEFORM1 node label INLINEFORM2 Update TED mapping cost INLINEFORM3 using INLINEFORM4 DistPara INLINEFORM5 INLINEFORM6 DistPara INLINEFORM7 INLINEFORM8 DistPara INLINEFORM9 ", "Return: INLINEFORM0 " ], [ "Our system is a boosting learner that is composed of four modules: weak filter, strong filter, revision network and optimal subnetwork. First of all, we sort all documents by timestamps and pair up documents so that we only compare each document with documents that have been created earlier. In the first module, we calculate the VSM similarity scores for all pairs and eliminate those with scores that are lower than an empirical threshold ( INLINEFORM0 ). This is what we call the weak filter. After that, we apply the strong filter wDTW or wTED on the available pairs and filter out document pairs having distances higher than a threshold INLINEFORM1 . 
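Putting the two filters and the branching step together, the process flow can be sketched as follows; the weak filter scores pairs with VSM, the strong filter with wDTW or wTED, and the final step keeps the cheapest surviving arc into each later document. The function arguments stand in for components described in the text and are not the system's actual interfaces:

```python
def detect_revisions(docs, vsm_sim, strong_dist, weak_tau, strong_tau):
    """Weak VSM filter -> strong wDTW/wTED filter -> minimum branching on the acyclic network."""
    candidates = [(a, b) for a in docs for b in docs
                  if a["created"] < b["created"]]                  # compare only with earlier documents
    survived = [(a, b) for a, b in candidates if vsm_sim(a, b) >= weak_tau]
    scored = [(a, b, strong_dist(a, b)) for a, b in survived]
    scored = [(a, b, d) for a, b, d in scored if d <= strong_tau]
    best = {}                                                      # cheapest incoming arc per document
    for a, b, d in scored:
        if b["id"] not in best or d < best[b["id"]][1]:
            best[b["id"]] = (a["id"], d)
    return [(orig, rev) for rev, (orig, _) in best.items()]        # (original, revision) pairs
```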
For VSM in Section SECREF32 , we directly filter out document pairs having similarity scores lower than a threshold INLINEFORM2 . The cut-off threshold estimation is explained in Section SECREF30 . The remaining document pairs from the strong filter are then sent to the revision network module. In the end, we output the optimal revision pairs following the minimum branching strategy." ], [ "Hyperparameter INLINEFORM0 is calibrated by calculating the absolute extreme based on an initial set of documents, i.e., all processed documents since the moment the system was put in use. Based on this set, we calculate all distance/similarity scores and create a histogram, see Figure FIGREF31 . The figure shows the correlation between the number of document pairs and the similarity scores in the training process of one simulated corpus using VSM. The optimal INLINEFORM1 in this example is around 0.6, where the number of document pairs noticeably drops.", "As the system continues running, new documents become available and INLINEFORM0 can be periodically updated by using the same method." ], [ "This section reports the results of the experiments conducted on two data sets for evaluating the performances of wDTW and wTED against other baseline methods." ], [ "We denote the following distance/similarity measures.", "wDTW: Our semantic distance measure explained in Section SECREF21 .", "wTED: Our semantic distance measure explained in Section SECREF23 .", "WMD: The Word Mover's Distance introduced in Section SECREF1 . WMD adapts the earth mover's distance to the space of documents.", "VSM: The similarity measure introduced in Section UID12 .", "PV-DTW: PV-DTW is the same as Algorithm SECREF21 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 where INLINEFORM1 is the PV embedding of paragraph INLINEFORM2 .", "PV-TED: PV-TED is the same as Algorithm SECREF23 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 .", "Our experiments were conducted on an Apache Spark cluster with 32 cores and 320 GB total memory. We implemented wDTW, wTED, WMD, VSM, PV-DTW and PV-TED in Java Spark. The paragraph vectors for PV-DTW and PV-TED were trained with gensim. " ], [ "In this section, we introduce the two data sets we used for our revision detection experiments: Wikipedia revision dumps and a document revision data set generated by a computer simulation. The two data sets differ in that the Wikipedia revision dumps only contain linear revision chains, while the simulated data sets also contain tree-structured revision chains, which can be very common in real-world data.", "The Wikipedia revision dumps that were previously introduced by Leskovec et al. leskovec2010governance contain eight GB (compressed size) revision edits with meta data.", "We pre-processed the Wikipedia revision dumps using the JWPL Revision Machine BIBREF32 and produced a data set that contains 62,234 documents with 46,354 revisions. As we noticed that short documents just contributed to noise (graffiti) in the data, we eliminated documents that have fewer than three paragraphs and fewer than 300 words. We removed empty lines in the documents and trained word2vec embeddings on the entire corpus. 
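The pre-processing just described, dropping documents that are short on both counts and stripping empty lines, can be sketched as below; the blank-line paragraph heuristic and field names are assumptions made for illustration:

```python
def clean_corpus(documents):
    """Keep a document unless it has fewer than three paragraphs and fewer than 300 words,
    then remove empty lines from the surviving documents."""
    cleaned = []
    for doc in documents:
        paragraphs = [p for p in doc["text"].split("\n\n") if p.strip()]
        n_words = len(doc["text"].split())
        if not (len(paragraphs) < 3 and n_words < 300):
            text = "\n".join(line for line in doc["text"].splitlines() if line.strip())
            cleaned.append({**doc, "text": text})
    return cleaned
```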
We used the documents occurring in the first INLINEFORM0 of the revision period for INLINEFORM1 calibration, and the remaining documents for testing.", "The generation process of the simulated data sets is designed to mimic the real world. Users open some existing documents in a file system, make some changes (e.g. addition, deletion or replacement), and save them as separate documents. These documents become revisions of the original documents. We started from an initial corpus that did not have revisions, and kept adding new documents and revising existing documents. Similar to a file system, at any moment new documents could be added and/or some of the current documents could be revised. The revision operations we used were deletion, addition and replacement of words, sentences, paragraphs, section names and document titles. The addition of words, ..., section names, and new documents were pulled from the Wikipedia abstracts. This corpus generation process had five time periods INLINEFORM0 . Figure FIGREF42 illustrates this simulation. We set a Poisson distribution with rate INLINEFORM1 (the number of documents in the initial corpus) to control the number of new documents added in each time period, and a Poisson distribution with rate INLINEFORM2 to control the number of documents revised in each time period.", "We generated six data sets using different random seeds, and each data set contained six corpora (Corpus 0 - 5). Table TABREF48 summarizes the first data set. In each data set, we name the initial corpus Corpus 0, and define INLINEFORM0 as the timestamp when we started this simulation process. We set INLINEFORM1 , INLINEFORM2 . Corpus INLINEFORM3 corresponds to documents generated before timestamp INLINEFORM4 . We extracted document revisions from Corpus INLINEFORM5 and compared the revisions generated in (Corpus INLINEFORM6 - Corpus INLINEFORM7 ) with the ground truths in Table TABREF48 . Hence, we ran four experiments on this data set in total. In every experiment, INLINEFORM8 is calibrated based on Corpus INLINEFORM9 . For instance, the training set of the first experiment was Corpus 1. We trained INLINEFORM10 from Corpus 1. We extracted all revisions in Corpus 2, and compared revisions generated in the test set (Corpus 2 - Corpus 1) with the ground truth: 258 revised documents. The word2vec model shared in the four experiments was trained on Corpus 5." ], [ "We use precision, recall and F-measure to evaluate the detected revisions. A true positive case is a correctly identified revision. A false positive case is an incorrectly identified revision. A false negative case is a missed revision record.", "We illustrate the performance of wDTW, wTED, WMD, VSM, PV-DTW and PV-TED on the Wikipedia revision dumps in Figure FIGREF43 . wDTW and wTED have the highest F-measure scores compared to the other four measures, and wDTW also has the highest precision and recall scores. Figure FIGREF49 shows the average evaluation results on the simulated data sets. From left to right, the corpus size increases and the revision chains become longer, thus it becomes more challenging to detect document revisions. Overall, wDTW consistently performs the best. WMD is slightly better than wTED. In particular, when the corpus size increases, the performances of WMD, VSM, PV-DTW and PV-TED drop faster than those of wDTW and wTED. 
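As a reminder of where the randomness in these curves originates, one time period of the corpus simulation can be sketched as follows; the Poisson rates and edit operations are the ones named earlier, while apply_edit is a toy stand-in rather than the generator used to build the data sets:

```python
import random
import numpy as np

def apply_edit(text: str, op: str) -> str:
    """Toy stand-in for the edit operations (the real generator edits words, sentences,
    paragraphs, section names and titles)."""
    words = text.split()
    if op == "delete" and len(words) > 1:
        words.pop(random.randrange(len(words)))
    elif op == "add":
        words.insert(random.randrange(len(words) + 1), "placeholder")
    elif op == "replace" and words:
        words[random.randrange(len(words))] = "placeholder"
    return " ".join(words)

def simulate_period(corpus, wiki_abstracts, lam_new, lam_rev):
    """One period: Poisson-many new documents drawn from Wikipedia abstracts and
    Poisson-many existing documents revised by a randomly chosen operation."""
    rng = np.random.default_rng()
    for _ in range(rng.poisson(lam_new)):
        corpus.append({"text": random.choice(wiki_abstracts), "parent": None})
    for doc in random.sample(corpus, k=min(rng.poisson(lam_rev), len(corpus))):
        op = random.choice(["delete", "add", "replace"])
        corpus.append({"text": apply_edit(doc["text"], op), "parent": doc})
    return corpus
```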
Because the revision operations were randomly selected in each corpus, it is possible that there are non-monotone points in the series.", "wDTW and wTED perform better than WMD especially when the corpus is large, because they use dynamic programming to find the global optimal alignment for documents. In contrast, WMD relies on a greedy algorithm that sums up the minimal cost for every word. wDTW and wTED perform better than PV-DTW and PV-TED, which indicates that our DistPara distance in Algorithm SECREF20 is more accurate than the Euclidian distance between paragraph vectors trained by PV.", "We show in Table TABREF53 the average running time of the six distance/similarity measures. In all the experiments, VSM is the fastest, wTED is faster than wDTW, and WMD is the slowest. Running WMD is extremely expensive because WMD needs to solve an INLINEFORM0 sequential transshipment problem for every two documents where INLINEFORM1 is the average number of words in a document. In contrast, by splitting this heavy computation into several smaller problems (finding the distance between any two paragraphs), which can be run in parallel, wDTW and wTED scale much better.", "Combining Figure FIGREF43 , Figure FIGREF49 and Table TABREF53 we conclude that wDTW yields the most accurate results using marginally more time than VSM, PV-TED and PV-DTW, but much less running time than WMD. wTED returns satisfactory results using shorter time than wDTW." ], [ "This paper has explored how DTW and TED can be extended with word2vec to construct semantic document distance measures: wDTW and wTED. By representing paragraphs with concatenations of word vectors, wDTW and wTED are able to capture the semantics of the words and thus give more accurate distance scores. In order to detect revisions, we have used minimum branching on an appropriately developed network with document distance scores serving as arc weights. We have also assessed the efficiency of the method of retrieving an optimal revision subnetwork by finding the minimum branching.", "Furthermore, we have compared wDTW and wTED with several distance measures for revision detection tasks. Our results demonstrate the effectiveness and robustness of wDTW and wTED in the Wikipedia revision dumps and our simulated data sets. In order to reduce the computation time, we have computed document distances at the paragraph level and implemented a boosting learning system using Apache Spark. Although we have demonstrated the superiority of our semantic measures only in the revision detection experiments, wDTW and wTED can also be used as semantic distance measures in many clustering, classification tasks.", "Our revision detection system can be enhanced with richer features such as author information and writing styles, and exact changes in revision pairs. Another interesting aspect we would like to explore in the future is reducing the complexities of calculating the distance between two paragraphs." ], [ "This work was supported in part by Intel Corporation, Semiconductor Research Corporation (SRC)." ] ] }
{ "question": [ "What metrics are used to evaluation revision detection?", "How large is the Wikipedia revision dump dataset?", "What are simulated datasets collected?", "Which are the state-of-the-art models?" ], "question_id": [ "3bfb8c12f151dada259fbd511358914c4b4e1b0e", "3f85cc5be84479ba668db6d9f614fedbff6d77f1", "126e8112e26ebf8c19ca7ff3dd06691732118e90", "be08ef81c3cfaaaf35c7414397a1871611f1a7fd" ], "nlp_background": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "search_query": [ "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "precision", "recall", "F-measure" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use precision, recall and F-measure to evaluate the detected revisions. A true positive case is a correctly identified revision. A false positive case is an incorrectly identified revision. A false negative case is a missed revision record." ], "highlighted_evidence": [ "We use precision, recall and F-measure to evaluate the detected revisions." ] } ], "annotation_id": [ "8b0add840d20bf740a040223502d86b77dee5181" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "eight GB" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The Wikipedia revision dumps that were previously introduced by Leskovec et al. leskovec2010governance contain eight GB (compressed size) revision edits with meta data." ], "highlighted_evidence": [ "The Wikipedia revision dumps that were previously introduced by Leskovec et al. leskovec2010governance contain eight GB (compressed size) revision edits with meta data." ] } ], "annotation_id": [ "e5bcb929f7ac154baa12daa401937be57459067b" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "There are 6 simulated datasets collected which is initialised with a corpus of size 550 and simulated by generating new documents from Wikipedia extracts and replacing existing documents", "evidence": [ "The generation process of the simulated data sets is designed to mimic the real world. Users open some existing documents in a file system, make some changes (e.g. addition, deletion or replacement), and save them as separate documents. These documents become revisions of the original documents. We started from an initial corpus that did not have revisions, and kept adding new documents and revising existing documents. Similar to a file system, at any moment new documents could be added and/or some of the current documents could be revised. The revision operations we used were deletion, addition and replacement of words, sentences, paragraphs, section names and document titles. The addition of words, ..., section names, and new documents were pulled from the Wikipedia abstracts. This corpus generation process had five time periods INLINEFORM0 . Figure FIGREF42 illustrates this simulation. 
We set a Poisson distribution with rate INLINEFORM1 (the number of documents in the initial corpus) to control the number of new documents added in each time period, and a Poisson distribution with rate INLINEFORM2 to control the number of documents revised in each time period.", "We generated six data sets using different random seeds, and each data set contained six corpora (Corpus 0 - 5). Table TABREF48 summarizes the first data set. In each data set, we name the initial corpus Corpus 0, and define INLINEFORM0 as the timestamp when we started this simulation process. We set INLINEFORM1 , INLINEFORM2 . Corpus INLINEFORM3 corresponds to documents generated before timestamp INLINEFORM4 . We extracted document revisions from Corpus INLINEFORM5 and compared the revisions generated in (Corpus INLINEFORM6 - Corpus INLINEFORM7 ) with the ground truths in Table TABREF48 . Hence, we ran four experiments on this data set in total. In every experiment, INLINEFORM8 is calibrated based on Corpus INLINEFORM9 . For instance, the training set of the first experiment was Corpus 1. We trained INLINEFORM10 from Corpus 1. We extracted all revisions in Corpus 2, and compared revisions generated in the test set (Corpus 2 - Corpus 1) with the ground truth: 258 revised documents. The word2vec model shared in the four experiments was trained on Corpus 5." ], "highlighted_evidence": [ "We started from an initial corpus that did not have revisions, and kept adding new documents and revising existing documents.", "The revision operations we used were deletion, addition and replacement of words, sentences, paragraphs, section names and document titles. The addition of words, ..., section names, and new documents were pulled from the Wikipedia abstracts.", "We generated six data sets using different random seeds, and each data set contained six corpora (Corpus 0 - 5)." ] } ], "annotation_id": [ "1726f69f9e25a1a5f704a4aa45afbfc4fd153ef6" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "WMD", "VSM", "PV-DTW", "PV-TED" ], "yes_no": null, "free_form_answer": "", "evidence": [ "This section reports the results of the experiments conducted on two data sets for evaluating the performances of wDTW and wTED against other baseline methods.", "We denote the following distance/similarity measures.", "WMD: The Word Mover's Distance introduced in Section SECREF1 . WMD adapts the earth mover's distance to the space of documents.", "VSM: The similarity measure introduced in Section UID12 .", "PV-DTW: PV-DTW is the same as Algorithm SECREF21 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 where INLINEFORM1 is the PV embedding of paragraph INLINEFORM2 .", "PV-TED: PV-TED is the same as Algorithm SECREF23 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 ." 
], "highlighted_evidence": [ "This section reports the results of the experiments conducted on two data sets for evaluating the performances of wDTW and wTED against other baseline methods.", "We denote the following distance/similarity measures.", "WMD: The Word Mover's Distance introduced in Section SECREF1 .", "VSM: The similarity measure introduced in Section UID12 .", "PV-DTW: PV-DTW is the same as Algorithm SECREF21 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 where INLINEFORM1 is the PV embedding of paragraph INLINEFORM2 .", "PV-TED: PV-TED is the same as Algorithm SECREF23 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 ." ] } ], "annotation_id": [ "a66b83c113c34aefe009dce1acd436272846ee73" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Figure 1: Revision network visualization", "Figure 2: Setting τ", "Figure 3: Corpora simulation", "Figure 4: Precision, recall and F-measure on the Wikipedia revision dumps", "Table 1: A simulated data set", "Figure 5: Average precision, recall and F-measure on the simulated data sets", "Table 2: Running time of VSM, PV-TED, PV-DTW, wTED, wDTW and WMD", "Figure 1: wDTW visualization", "Figure 2: wTED visualization" ], "file": [ "3-Figure1-1.png", "6-Figure2-1.png", "6-Figure3-1.png", "7-Figure4-1.png", "7-Table1-1.png", "8-Figure5-1.png", "8-Table2-1.png", "11-Figure1-1.png", "11-Figure2-1.png" ] }
1909.03526
Multi-Task Bidirectional Transformer Representations for Irony Detection
Supervised deep learning requires large amounts of training data. In the context of the FIRE2019 Arabic irony detection shared task (IDAT@FIRE2019), we show how we mitigate this need by fine-tuning the pre-trained Bidirectional Encoder Representations from Transformers (BERT) on gold data in a multi-task setting. We further improve our models by further pre-training BERT on `in-domain' data, thus alleviating an issue of dialect mismatch in the Google-released BERT model. Our best model achieves an 82.4 macro F1 score, and has the unique advantage of being feature-engineering free (i.e., based exclusively on deep learning).
{ "section_name": [ "Introduction", "Methods", "Methods ::: GRU", "Methods ::: BERT", "Methods ::: Multi-task Learning", "Data", "Models ::: GRU", "Models ::: Single-Task BERT", "Models ::: Multi-Task BERT", "Models ::: In-Domain Pre-Training", "Models ::: IDAT@FIRE2019 Submission", "Related Work", "Conclusion", "Acknowledgement" ], "paragraphs": [ [ "The proliferation of social media has provided a locus for use, and thereby collection, of figurative and creative language data, including irony BIBREF0. According to the Merriam-Webster online dictionary, irony refers to “the use of word to express something other than and especially the opposite of the literal meaning.\" A complex, controversial, and intriguing linguistic phenomenon, irony has been studied in disciplines such as linguistics, philosophy, and rhetoric. Irony detection also has implications for several NLP tasks such as sentiment analysis, hate speech detection, fake news detection, etc BIBREF0. Hence, automatic irony detection can potentially improve systems designed for each of these tasks. In this paper, we focus on learning irony. More specifically, we report our work submitted to the FIRE 2019 Arabic irony detection task (IDAT@FIRE2019). We focus our energy on an important angle of the problem–the small size of training data.", "Deep learning is the most successful under supervised conditions with large amounts of training data (tens-to-hundreds of thousands of examples). For most real-world tasks, we hard to obtain labeled data. Hence, it is highly desirable to eliminate, or at least reduce, dependence on supervision. In NLP, pre-training language models on unlabeled data has emerged as a successful approach for improving model performance. In particular, the pre-trained multilingual Bidirectional Encoder Representations from Transformers (BERT) BIBREF1 was introduced to learn language regularities from unlabeled data. Multi-task learning (MTL) is another approach that helps achieve inductive transfer between various tasks. More specifically, MTL leverages information from one or more source tasks to improve a target task BIBREF2, BIBREF3. In this work, we introduce Transformer representations (BERT) in an MTL setting to address the data bottleneck in IDAT@FIRE2019. To show the utility of BERT, we compare to a simpler model with gated recurrent units (GRU) in a single task setting. To identify the utility, or lack thereof, of MTL BERT, we compare to a single task BERT model. For MTL BERT, we train on a number of tasks simultaneously. Tasks we train on are sentiment analysis, gender detection, age detection, dialect identification, and emotion detection.", "Another problem we face is that the BERT model released by Google is trained only on Arabic Wikipedia, which is almost exclusively Modern Standard Arabic (MSA). This introduces a language variety mismatch due to the irony data involving a number of dialects that come from the Twitter domain. To mitigate this issue, we further pre-train BERT on an in-house dialectal Twitter dataset, showing the utility of this measure. To summarize, we make the following contributions:", "In the context of the Arabic irony task, we show how a small-sized labeled data setting can be mitigated by training models in a multi-task learning setup.", "We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain' training measure where we further pre-train BERT on a dataset closer to task domain (in that it involves dialectal tweet data)." 
], [ "" ], [ "For our baseline, we use gated recurrent units (GRU) BIBREF4, a simplification of long-short term memory (LSTM) BIBREF5, which in turn is a variation of recurrent neural networks (RNNs). A GRU learns based on the following:", "where the update state $\\textbf {\\textit {z}}^{(t)}$ decides how much the unit updates its content:", "where W and U are weight matrices. The candidate activation makes use of a reset gate $\\textbf {\\textit {r}}^{(t)}$:", "where $\\odot $ is a Hadamard product (element-wise multiplication). When its value is close to zero, the reset gate allows the unit to forget the previously computed state. The reset gate $\\textbf {\\textit {r}}^{(t)}$ is computed as follows:" ], [ "BERT BIBREF1 is based on the Transformer BIBREF6, a network architecture that depends solely on encoder-decoder attention. The Transformer attention employs a function operating on queries, keys, and values. This attention function maps a query and a set of key-value pairs to an output, where the output is a weighted sum of the values. Encoder of the Transformer in BIBREF6 has 6 attention layers, each of which is composed of two sub-layers: (1) multi-head attention where queries, keys, and values are projected h times into linear, learned projections and ultimately concatenated; and (2) fully-connected feed-forward network (FFN) that is applied to each position separately and identically. Decoder of the Transformer also employs 6 identical layers, yet with an extra sub-layer that performs multi-head attention over the encoder stack. The architecture of BERT BIBREF1 is a multi-layer bidirectional Transformer encoder BIBREF6. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next sentence prediction task captures context (i.e., sentence relationships). More information about BERT can be found in BIBREF1." ], [ "In multi-task learning (MTL), a learner uses a number of (usually relevant) tasks to improve performance on a target task BIBREF2, BIBREF3. The MTL setup enables the learner to use cues from various tasks to improve the performance on the target task. MTL also usually helps regularize the model since the learner needs to find representations that are not specific to a single task, but rather more general. Supervised learning with deep neural networks requires large amounts of labeled data, which is not always available. By employing data from additional tasks, MTL thus practically augments training data to alleviate need for large labeled data. Many researchers achieve state-of-the-art results by employing MTL in supervised learning settings BIBREF7, BIBREF8. In specific, BERT was successfully used with MTL. Hence, we employ multi-task BERT (following BIBREF8). For our training, we use the same pre-trained BERT-Base Multilingual Cased model as the initial checkpoint. For this MTL pre-training of BERT, we use the same afore-mentioned single-task BERT parameters. We now describe our data." ], [ "The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e. targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony\"). Duplicates, retweets, and non-intelligible tweets are removed by organizers. 
Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine.", "IDAT@FIRE2019 is set up as a binary classification task where tweets are assigned labels from the set {ironic, non-ironic}. A total of 4,024 tweets were released by organizers as training data. In addition, 1,006 tweets were used by organizers as test data. Test labels were not released, and teams were expected to submit the predictions produced by their systems on the test split. For our models, we split the 4,024 released training tweets into 90% TRAIN ($n$=3,621 tweets; `ironic'=1,882 and `non-ironic'=1,739) and 10% DEV ($n$=403 tweets; `ironic'=209 and `non-ironic'=194). We train our models on TRAIN, and evaluate on DEV.", "Our multi-task BERT models involve six different Arabic classification tasks. We briefly introduce the data for these tasks here:", "Author profiling and deception detection in Arabic (APDA). BIBREF9 . From APDA, we only use the corpus of author profiling (which includes the three profiling tasks of age, gender, and variety). The organizers of APDA provide 225,000 tweets as training data. Each tweet is labelled with three tags (one for each task). To develop our models, we split the training data into 90% training set ($n$=202,500 tweets) and 10% development set ($n$=22,500 tweets). With regard to age, authors consider tweets of three classes: {Under 25, Between 25 and 34, and Above 35}. For the Arabic varieties, they consider the following fifteen classes: {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. Gender is labeled as a binary task with {male,female} tags.", "LAMA+DINA Emotion detection. Alhuzali et al. BIBREF10 introduce LAMA, a dataset for Arabic emotion detection. They use a first-person seed phrase approach and extend work by Abdul-Mageed et al. BIBREF11 for emotion data collection from 6 to 8 emotion categories (i.e. anger, anticipation, disgust, fear, joy, sadness, surprise and trust). We use the combined LAMA+DINA corpus. It is split by the authors as 189,902 tweets training set, 910 as development, and 941 as test. In our experiment, we use only the training set for our MTL experiments.", "Sentiment analysis in Arabic tweets. This dataset is a shared task on Kaggle by Motaz Saad . The corpus contains 58,751 Arabic tweets (46,940 training, and 11,811 test). The tweets are annotated with positive and negative labels based on an emoji lexicon." ], [ "We train a baseline GRU network with our irony TRAIN data. This network has a single unidirectional GRU layer with 500 units and a linear output layer. The input word tokens are embedded by trainable word vectors, which are initialized from a standard normal distribution with $\mu =0$ and $\sigma =1$, i.e., $W \sim N(0,1)$. We use Adam BIBREF12 with a fixed learning rate of $1e-3$ for optimization. For regularization, we use dropout BIBREF13 with a rate of 0.5 on the hidden layer. We set the maximum sequence length in our GRU model to 50 words, and use all 22,000 words of the training set as the vocabulary. We employ batch training with a batch size of 64 for this model. We run the network for 20 epochs and save the model at the end of each epoch, choosing the model that performs highest on DEV as our best model. We report our best result on DEV in Table TABREF22. Our best result is obtained after 12 epochs. As Table TABREF22 shows, the baseline obtains $accuracy=73.70\%$ and $F_1=73.47$."
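A PyTorch sketch of the baseline just described is given below; the framework and the 300-dimensional embedding width are assumptions (the text does not specify them), while the single unidirectional GRU layer with 500 units, dropout of 0.5, Adam with a fixed 1e-3 learning rate, 50-token inputs, the 22,000-word vocabulary, and the batch size of 64 follow the text:

```python
import torch
import torch.nn as nn

class GRUBaseline(nn.Module):
    def __init__(self, vocab_size=22_000, emb_dim=300, hidden=500, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        nn.init.normal_(self.embed.weight, mean=0.0, std=1.0)   # trainable embeddings, W ~ N(0, 1)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)    # single unidirectional layer, 500 units
        self.drop = nn.Dropout(0.5)                             # dropout on the hidden layer
        self.out = nn.Linear(hidden, n_classes)                 # linear output layer

    def forward(self, token_ids):                               # token_ids: (batch, seq_len <= 50)
        _, h = self.gru(self.embed(token_ids))
        return self.out(self.drop(h[-1]))

model = GRUBaseline()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)       # fixed learning rate from the text
logits = model(torch.randint(0, 22_000, (64, 50)))              # one batch of 64 padded 50-token tweets
```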
], [ "We use the BERT-Base Multilingual Cased model released by the authors BIBREF1 . The model is trained on 104 languages (including Arabic) with 12 layers, 768 hidden units each, 12 attention heads. The entire model has 110M parameters. The model has 119,547 shared WordPieces vocabulary, and was pre-trained on the entire Wikipedia for each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 20 epochs. For single-task learning, we fine-tune BERT on the training set (i.e., TRAIN) of the irony task exclusively. We refer to this model as BERT-ST, ST standing for `single task.' As Table TABREF22 shows, BERT-ST unsurprisingly acquires better performance than the baseline GRU model. On accuracy, BERT-ST is 7.94% better than the baseline. BERT-ST obtains 81.62 $F_1$ which is 7.35 better than the baseline." ], [ "We follow the work of Liu et al. BIBREF8 for training an MTL BERT in that we fine-tune the afore-mentioned BERT-Base Multilingual Cased model with different tasks jointly. First, we fine-tune with the three tasks of author profiling and the irony task simultaneously. We refer to this model trained on the 4 tasks simply as BERT-MT4. BERT-MT5 refers to the model fine-tuned on the 3 author profiling tasks, the emotion task, and the irony task. We also refer to the model fine-tuned on all six tasks (adding the sentiment task mentioned earlier) as BERT-MT6. For MTL BERT, we use the same parameters as the single task BERT listed in the previous sub-section (i.e., Single-Task BERT). In Table TABREF22, we present the performance on the DEV set of only the irony detection task. We note that all the results of multitask learning with BERT are better than those with the single task BERT. The model trained on all six tasks obtains the best result, which is 2.23% accuracy and 2.25% $F_1$ higher than the single task BERT model." ], [ "Our irony data involves dialects such as Egyptian, Gulf, and Levantine, as we explained earlier. The BERT-Base Multilingual Cased model we used, however, was trained on Arabic Wikipedia, which is mostly MSA. We believe this dialect mismatch is sub-optimal. As Sun et al. BIBREF14 show, further pre-training with domain specific data can improve performance of a learner. Viewing dialects as constituting different domains, we turn to dialectal data to further pre-train BERT. Namely, we use 1M tweets randomly sampled from an in-house Twitter dataset to resume pre-training BERT before we fine-tune on the irony data. We use BERT-Base Multilingual Cased model as an initial checkpoint and pre-train on this 1M dataset with a learning rate of $2e-5$, for 10 epochs. Then, we fine-tune on MT5 (and then on MT6) with the new further-pre-trained BERT model. We refer to the new models as BERT-1M-MT5 and BERT-1M-MT6, respectively. As Table TABREF22 shows, BERT-1M-MT5 performs best: BERT-1M-MT5 obtains 84.37% accuracy (0.5% less than BERT-MT6) and 83.34 $F_1$ (0.47% less than BERT-MT6)." ], [ "For the shared task submission, we use the predictions of BERT-1M-MT5 as our first submitted system. Then, we concatenate our DEV and TRAIN data to compose a new training set (thus using all the training data released by organizers) to re-train BERT-1M-MT5 and BERT-MT6 with the same parameters. We use the predictions of these two models as our second and third submissions. Our second submission obtains 82.4 $F_1$ on the official test set, and ranks $4th$ on this shared task." ], [ "Multi-Task Learning. 
MTL has been effectively used to model several NLP problems. These include, for example, syntactic parsing BIBREF15, sequence labeling BIBREF16, BIBREF17, and text classification BIBREF18.", "Irony in different languages. Irony detection has been investigated in various languages. For example, Hee et al. BIBREF19 propose two irony detection tasks in English tweets. Task A is a binary classification task (irony vs. non-irony), and Task B is multi-class identification of a specific type of irony from the set {verbal, situational, other-irony, non-ironic}. They use hashtags to automatically collect tweets that they manually annotate using a fine-grained annotation scheme. Participants in this competition construct models based on logistic regression and support vector machine (SVM) BIBREF20, XGBoost BIBREF21, convolutional neural networks (CNNs) BIBREF21, long short-term memory networks (LSTMs) BIBREF22, etc. For the Italian language, Cignarella et al. propose the IronTA shared task BIBREF23, and the best system BIBREF24 is a combination of bi-directional LSTMs, word $n$-grams, and affective lexicons. For Spanish, Ortega-Bueno1 et al. BIBREF25 introduce the IroSvA shared task, a binary classification task for tweets and news comments. The best-performing model on the task, BIBREF26, employs pre-trained Word2Vec, multi-head Transformer encoder and a global average pooling mechanism.", "Irony in Arabic. Although Arabic is a widely spoken collection of languages ($\\sim $ 300 million native speakers) BIBREF27, BIBREF28, there has not been works on irony that we know of on the language. IDAT@FIRE2019 aims at bridging this gap. The closest works in Arabic are those focusing on other text classification tasks such as sentiment analysis BIBREF29, BIBREF30, BIBREF31, BIBREF32, emotion BIBREF10, and dialect identification BIBREF28, BIBREF33, BIBREF34, BIBREF35." ], [ "In this paper, we described our submissions to the Irony Detection in Arabic shared task (IDAT@FIRE2019). We presented how we acquire effective models using pre-trained BERT in a multi-task learning setting. We also showed the utility of viewing different varieties of Arabic as different domains by reporting better performance with models pre-trained with dialectal data rather than exclusively on MSA. Our multi-task model with domain-specific BERT ranks $4th$ in the official IDAT@FIRE2019 evaluation. The model has the advantage of being exclusively based on deep learning. In the future, we will investigate other multi-task learning architectures, and extend our work with semi-supervised methods." ], [ "We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca)." ] ] }
{ "question": [ "Why is being feature-engineering free an advantage?", "Where did this model place in the final evaluation of the shared task?", "What in-domain data is used to continue pre-training?", "What dialect is used in the Google BERT model and what is used in the task data?", "What are the tasks used in the mulit-task learning setup?" ], "question_id": [ "dc57ae854d78aa5d5e8c979826d3e2524d4e9165", "18412237f7faafc6befe975d5bcd348e2b499b55", "02945c85d6cc4cdd1757b2f2bfa5e92ee4ed14a0", "6e51af9088c390829703c6fa966e98c3a53114c1", "07ee4e0277ad1083270131d32a71c3fe062a916d" ], "nlp_background": [ "five", "five", "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ "irony", "irony", "irony", "irony", "irony" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "17dfdb8d991a75967a343b61db898afdf2327080" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "$4th$" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For the shared task submission, we use the predictions of BERT-1M-MT5 as our first submitted system. Then, we concatenate our DEV and TRAIN data to compose a new training set (thus using all the training data released by organizers) to re-train BERT-1M-MT5 and BERT-MT6 with the same parameters. We use the predictions of these two models as our second and third submissions. Our second submission obtains 82.4 $F_1$ on the official test set, and ranks $4th$ on this shared task." ], "highlighted_evidence": [ "Our second submission obtains 82.4 $F_1$ on the official test set, and ranks $4th$ on this shared task." ] } ], "annotation_id": [ "54c3382fecec47ef37f88604af2bf6bf02e2820b" ], "worker_id": [ "e70d8110563d53282f1a26e823d27e6f235772db" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "dialectal tweet data" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain' training measure where we further pre-train BERT on a dataset closer to task domain (in that it involves dialectal tweet data)." ], "highlighted_evidence": [ "We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain' training measure where we further pre-train BERT on a dataset closer to task domain (in that it involves dialectal tweet data)." ] } ], "annotation_id": [ "d0c72070dcae3cbedd92cf8585d532a3c7a6910f" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Modern Standard Arabic (MSA)", "MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Another problem we face is that the BERT model released by Google is trained only on Arabic Wikipedia, which is almost exclusively Modern Standard Arabic (MSA). 
This introduces a language variety mismatch due to the irony data involving a number of dialects that come from the Twitter domain. To mitigate this issue, we further pre-train BERT on an in-house dialectal Twitter dataset, showing the utility of this measure. To summarize, we make the following contributions:", "The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e. targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony\"). Duplicates, retweets, and non-intelligible tweets are removed by organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine." ], "highlighted_evidence": [ "Another problem we face is that the BERT model released by Google is trained only on Arabic Wikipedia, which is almost exclusively Modern Standard Arabic (MSA).", "The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018.", "Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine." ] } ], "annotation_id": [ "6b71a7d32bc26c0095340b9926610c7dbe00decc" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Author profiling and deception detection in Arabic", "LAMA+DINA Emotion detection", "Sentiment analysis in Arabic tweets" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our multi-task BERT models involve six different Arabic classification tasks. We briefly introduce the data for these tasks here:", "Author profiling and deception detection in Arabic (APDA). BIBREF9 . From APDA, we only use the corpus of author profiling (which includes the three profiling tasks of age, gender, and variety). The organizers of APDA provide 225,000 tweets as training data. Each tweet is labelled with three tags (one for each task). To develop our models, we split the training data into 90% training set ($n$=202,500 tweets) and 10% development set ($n$=22,500 tweets). With regard to age, authors consider tweets of three classes: {Under 25, Between 25 and 34, and Above 35}. For the Arabic varieties, they consider the following fifteen classes: {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. Gender is labeled as a binary task with {male,female} tags.", "LAMA+DINA Emotion detection. Alhuzali et al. BIBREF10 introduce LAMA, a dataset for Arabic emotion detection. They use a first-person seed phrase approach and extend work by Abdul-Mageed et al. BIBREF11 for emotion data collection from 6 to 8 emotion categories (i.e. anger, anticipation, disgust, fear, joy, sadness, surprise and trust). We use the combined LAMA+DINA corpus. It is split by the authors as 189,902 tweets training set, 910 as development, and 941 as test. In our experiment, we use only the training set for out MTL experiments.", "Sentiment analysis in Arabic tweets. This dataset is a shared task on Kaggle by Motaz Saad . The corpus contains 58,751 Arabic tweets (46,940 training, and 11,811 test). The tweets are annotated with positive and negative labels based on an emoji lexicon." 
], "highlighted_evidence": [ "Our multi-task BERT models involve six different Arabic classification tasks.", "Author profiling and deception detection in Arabic (APDA).", "LAMA+DINA Emotion detection.", "Sentiment analysis in Arabic tweets." ] } ], "annotation_id": [ "43319bfb4454a9b53022fc8a9e2afd95057d70bb" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Table 1. Model Performance" ], "file": [ "6-Table1-1.png" ] }
1902.11049
Evaluating Rewards for Question Generation Models
Recent approaches to question generation have used modifications to a Seq2Seq architecture inspired by advances in machine translation. Models are trained using teacher forcing to optimise only the one-step-ahead prediction. However, at test time, the model is asked to generate a whole sequence, causing errors to propagate through the generation process (exposure bias). A number of authors have proposed countering this bias by optimising for a reward that is less tightly coupled to the training data, using reinforcement learning. We optimise directly for quality metrics, including a novel approach using a discriminator learned directly from the training data. We confirm that policy gradient methods can be used to decouple training from the ground truth, leading to increases in the metrics used as rewards. We perform a human evaluation, and show that although these metrics have previously been assumed to be good proxies for question quality, they are poorly aligned with human judgement and the model simply learns to exploit the weaknesses of the reward source.
{ "section_name": [ "Introduction", "Background", "Experimental setup", "Model description", "Fine tuning", "Adversarial training", "Evaluation", "Results", "Conclusion", "Discriminator architecture" ], "paragraphs": [ [ "Posing questions about a document in natural language is a crucial aspect of the effort to automatically process natural language data, enabling machines to ask clarification questions BIBREF0 , become more robust to queries BIBREF1 , and to act as automatic tutors BIBREF2 .", "Recent approaches to question generation have used Seq2Seq BIBREF3 models with attention BIBREF4 and a form of copy mechanism BIBREF5 , BIBREF6 . Such models are trained to generate a plausible question, conditioned on an input document and answer span within that document BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 .", "There are currently no dedicated question generation datasets, and authors have used the context-question-answer triples available in SQuAD. Only a single question is available for each context-answer pair, and models are trained using teacher forcing BIBREF11 . This lack of diverse training data combined with the one-step-ahead training procedure exacerbates the problem of exposure bias BIBREF12 . The model does not learn how to distribute probability mass over sequences that are valid but different to the ground truth; during inference, the model must predict the whole sequence, and may not be robust to mistakes during decoding.", "Recent work has investigated training the models directly on a performance based objective, either by optimising for BLEU score BIBREF13 or other quality metrics BIBREF10 . By decoupling the training procedure from the ground truth data, the model is able to explore the space of possible questions and become more robust to mistakes during decoding. While the metrics used often seem to be intuitively good choices, there is an assumption that they are good proxies for question quality which has not yet been confirmed.", "Our contributions are as follows. We perform fine tuning using a range of rewards, including an adversarial objective. We show that although fine tuning leads to increases in reward scores, the resulting models perform worse when evaluated by human workers. We also demonstrate that the generated questions exploit weaknesses in the reward models." ], [ "Many of the advances in natural language generation have been led by machine translation BIBREF3 , BIBREF4 , BIBREF6 .", "Previous work on question generation has made extensive use of these techniques. BIBREF8 use a Seq2Seq based model to generate questions conditioned on context-answer pairs, and build on this work by preprocessing the context to resolve coreferences and adding a pointer network BIBREF9 . Similarly, BIBREF7 use a part-of-speech tagger to augment the embedding vectors. Both authors perform a human evaluation of their models, and show significant improvement over their baseline. BIBREF13 use a similar model, but apply it to the task of generating questions without conditioning on a specific answer span. BIBREF14 use a modified context encoder based on multi-perspective context matching BIBREF15 .", " BIBREF16 propose a framework for fine tuning using policy gradients, using BLEU and other automatic metrics linked to the ground truth data as the rewards. BIBREF10 describe a Seq2Seq model with attention and a pointer network, with an additional encoding layer for the answer. 
They also describe a method for further tuning their model on language model and question answering reward objectives using policy gradients. Unfortunately they do not perform any human evaluation to determine whether this tuning led to improved question quality.", "For the related task of summarisation, BIBREF17 propose a framework for fine tuning a summarisation model using reinforcement learning, with the ROUGE similarity metric used as the reward." ], [ "The task is to generate a natural language question, conditioned on a document and answer. For example, given the input document “this paper investigates rewards for question generation\" and answer “question generation\", the model should produce a question such as “what is investigated in the paper?\"" ], [ "We use the model architecture described by BIBREF10 . Briefly, this is a Seq2Seq model BIBREF3 with attention BIBREF4 and copy mechanism BIBREF5 , BIBREF6 . BIBREF10 also add an additional answer encoder layer, and initialise the decoder with a hidden state constructed from the final state of the encoder. Beam search BIBREF18 is used to sample from the model at inference time. The model was trained using maximum likelihood before fine tuning was applied. Our implementation achieves a competitive BLEU-4 score BIBREF19 of $13.5$ on the test set used by BIBREF8 , before fine tuning." ], [ "Generated questions should be formed of language that is both fluent and relevant to the context and answer. We therefore performed fine tuning on a trained model, using rewards given either by the negative perplexity under a LSTM language model, or the F1 score attained by a question answering (QA) system, or a weighted combination of both. The language model is a standard recurrent neural network formed of a single LSTM layer. For the QA system, we use QANet BIBREF1 as implemented by BIBREF20 ." ], [ "Additionally, we propose a novel approach by learning the reward directly from the training data, using a discriminator detailed in Appendix \"Discriminator architecture\" . We pre-trained the discriminator to predict whether an input question and associated context-answer pair were generated by our model, or originated from the training data. We then used as the reward the probability estimated by the discriminator that a generated question was in fact real. In other words, the generator was rewarded for successfully fooling the discriminator. We also experimented with interleaving updates to the discriminator within the fine tuning phase, allowing the discriminator to become adversarial and adapt alongside the generator.", "These rewards $R(\\hat{Y})$ were used to update the model parameters via the REINFORCE policy gradient algorithm BIBREF21 , according to $\\nabla \\mathcal {L} = \\nabla \\frac{1}{l} \\sum \\limits _t (\\frac{R(\\hat{Y})-\\mu _R}{\\sigma _R}) \\log p(\\hat{y}_t | \\hat{y}_{< t}, \\mathbf {D}, \\mathbf {A})$ . We teacher forced the decoder with the generated sequence to reproduce the activations calculated during beam search, to enable backpropagation. All rewards were normalised with a simple form of PopArt BIBREF22 , with the running mean $\\mu _R$ and standard deviation $\\sigma _R$ updated online during training. We continued to apply a maximum likelihood training objective during this fine tuning." ], [ "We report the negative log-likelihood (NLL) of the test set under the different models, as well as the corpus level BLEU-4 score BIBREF19 of the generated questions compared to the ground truth. 
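As a concrete illustration of the fine-tuning step described above, the following is a minimal PyTorch-style sketch of the normalised policy-gradient update. This is an illustrative reconstruction, not the authors' implementation; the epsilon term, the momentum value and the crude running-scale estimate are assumptions.

```python
import torch

def policy_gradient_loss(log_probs, reward, mu_r, sigma_r):
    """REINFORCE-style loss for one sampled question (a sketch, not the authors' code).

    log_probs : 1-D tensor with log p(y_t | y_<t, D, A) for each decoded token
    reward    : scalar reward R(Y_hat) from the QA model, language model or discriminator
    mu_r, sigma_r : running mean / scale of the reward (simple PopArt-style normalisation)
    """
    advantage = (reward - mu_r) / (sigma_r + 1e-8)   # normalised reward
    # Negative sign because optimisers minimise; the mean over tokens gives the 1/l factor.
    return -advantage * log_probs.mean()

def update_reward_stats(mu_r, sigma_r, reward, momentum=0.99):
    """Online update of the running reward statistics (momentum and the absolute-deviation
    scale estimate are simplifying assumptions, not taken from the paper)."""
    mu_r = momentum * mu_r + (1.0 - momentum) * reward
    sigma_r = momentum * sigma_r + (1.0 - momentum) * abs(reward - mu_r)
    return mu_r, sigma_r

# total_loss = ml_loss + policy_gradient_loss(log_probs, reward, mu_r, sigma_r)
# (the maximum-likelihood term is kept active during fine tuning, as described above)
```

In practice the generated sequence is teacher-forced through the decoder to obtain these log-probabilities, and the policy-gradient term is combined with the maximum-likelihood objective.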
We also report the rewards achieved on the test set, as the QA, LM and discriminator scores.", "For the human evaluation, we follow the standard approach in evaluating machine translation systems BIBREF23 , as used for question generation by BIBREF9 . We asked three workers to rate 300 generated questions between 1 (poor) and 5 (good) on two separate criteria: the fluency of the language used, and the relevance of the question to the context document and answer." ], [ "Table 2 shows the changes in automatic metrics for models fine tuned on various combinations of rewards, compared to the model without tuning. In all cases, the BLEU score reduced, as the training objective was no longer closely coupled to the training data. In general, models achieved better scores on the metrics on which they were fine tuned. Jointly training on a QA and LM reward resulted in better LM scores than training on only a LM reward. We conclude that fine tuning using policy gradients can be used to attain higher rewards, as expected.", "Table 3 shows the human evaluation scores for a subset of the fine tuned models. The model fine tuned on a QA and LM objective is rated as significantly worse by human annotators, despite achieving higher scores in the automatic metrics. In other words, the training objective given by these reward sources does not correspond to true question quality, despite them being intuitively good choices.", "The model fine tuned using an adversarial discriminator has also failed to achieve better human ratings, with the discriminator model unable to learn a useful reward source.", "Table 1 shows an example where fine tuning has not only failed to improve the quality of generated questions, but has caused the model to exploit the reward source. The model fine tuned on a LM reward has degenerated into producing a loop of words that is evidently deemed probable, while the model trained on a QA reward has learned that it can simply point at the location of the answer. This observation is supported by the metrics; the model fine tuned on a QA reward has suffered a catastrophic worsening in LM score of +226.", "Figure 1 shows the automatic scores against human ratings for all rated questions. The correlation coefficient between human relevance and automatic QA scores was 0.439, and between fluency and LM score was only 0.355. While the automatic scores are good indicators of whether a question will achieve the lowest human rating or not, they do not differentiate clearly between the higher ratings: training a model on these objectives will not necessarily learn to generate better questions. A good question will likely attain a high QA and LM score, but the inverse is not true; a sequence may exploit the weaknesses of the metrics and achieve a high score despite being unintelligible to a human. We conclude that fine tuning a question generation model on these rewards does not lead to better quality questions." ], [ "In this paper, we investigated the use of external reward sources for fine tuning question generation models to counteract the lack of task-specific training data. We showed that although fine tuning can be used to attain higher rewards, this does not equate to better quality questions when rated by humans. Using QA and LM rewards as a training objective causes the generator to expose the weaknesses in these models, which in turn suggests a possible use of this approach for generating adversarial training examples for QA models. 
The QA and LM scores are well correlated with human ratings at the lower end of the scale, suggesting they could be used as part of a reranking or filtering system." ], [ "We used an architecture based on a modified QANet as shown in Figure 2 , replacing the output layers of the model to produce a single probability. Since the discriminator is also able to consider a full context-question-answer triple as input (as opposed to a context-question pair for the QA task), we fused this information in the output layers.", "Specifically, we applied max pooling over time to the output of the first two encoders, and we took the mean of the outputs of the third encoder that formed part of the answer span. These three reduced encodings were concatenated, a 64 unit hidden layer with ReLU activation applied, and the output passed through a single unit sigmoid output layer to give the estimated probability that an input context-question-answer triple originated from the ground truth dataset or was generated." ] ] }
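As a rough illustration of the fused output layers just described, the following PyTorch-style sketch shows one way the three reduced encodings could be combined. The tensor shapes and the answer-span mask are assumptions, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class DiscriminatorHead(nn.Module):
    """Sketch of the fused discriminator output layers; dimensions and masking are assumptions."""

    def __init__(self, enc_dim):
        super().__init__()
        self.hidden = nn.Linear(3 * enc_dim, 64)   # 64-unit hidden layer with ReLU
        self.out = nn.Linear(64, 1)                # single-unit sigmoid output

    def forward(self, enc1, enc2, enc3, answer_mask):
        # enc1, enc2, enc3: (batch, time, enc_dim) outputs of the three encoders
        # answer_mask:      (batch, time) with 1.0 inside the answer span, 0.0 elsewhere
        pooled1 = enc1.max(dim=1).values                      # max pooling over time
        pooled2 = enc2.max(dim=1).values
        mask = answer_mask.unsqueeze(-1)
        span_mean = (enc3 * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)  # mean over answer span
        fused = torch.cat([pooled1, pooled2, span_mean], dim=-1)
        h = torch.relu(self.hidden(fused))
        return torch.sigmoid(self.out(h))   # estimated probability the question is real, not generated
```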
{ "question": [ "What human evaluation metrics were used in the paper?" ], "question_id": [ "bfce2afe7a4b71f9127d4f9ef479a0bfb16eaf76" ], "nlp_background": [ "infinity" ], "topic_background": [ "familiar" ], "paper_read": [ "somewhat" ], "search_query": [ "question generation" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "rating questions on a scale of 1-5 based on fluency of language used and relevance of the question to the context", "evidence": [ "For the human evaluation, we follow the standard approach in evaluating machine translation systems BIBREF23 , as used for question generation by BIBREF9 . We asked three workers to rate 300 generated questions between 1 (poor) and 5 (good) on two separate criteria: the fluency of the language used, and the relevance of the question to the context document and answer." ], "highlighted_evidence": [ "For the human evaluation, we follow the standard approach in evaluating machine translation systems BIBREF23 , as used for question generation by BIBREF9 . We asked three workers to rate 300 generated questions between 1 (poor) and 5 (good) on two separate criteria: the fluency of the language used, and the relevance of the question to the context document and answer." ] } ], "annotation_id": [ "17e6b7b37247467814f2f6f83917ca3c8623aedd" ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] } ] }
{ "caption": [ "Table 1: Example generated questions for various fine-tuning objectives. The answer is highlighted in bold. The model trained on a QA reward has learned to simply point at the answer and exploit the QA model, while the model trained on a language model objective has learned to repeat common phrase templates.", "Table 2: Changes in automatic evaluation metrics after models were fine tuned on various objectives. QA refers to the F1 score obtained by a question answering system on the generated questions. LM refers to the perplexity of generated questions under a separate language model. The discriminator reward refers to the percentage of generated sequences that fooled the discriminator. Lower LM and NLL scores are better. BLEU scores decreased in all cases.", "Table 3: Summary of human evaluation of selected models", "Figure 1: Comparison of human and automatic metrics.", "Figure 2: Discriminator architecture diagram." ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "3-Table3-1.png", "4-Figure1-1.png", "6-Figure2-1.png" ] }
1905.06906
Gated Convolutional Neural Networks for Domain Adaptation
Domain Adaptation explores how to maximize performance on a target domain, distinct from the source domain upon which the classifier was trained. This idea has been explored extensively for the task of sentiment analysis. Training on reviews from one domain and evaluating on another is widely studied for modeling domain-independent algorithms, and further helps in understanding the correlation between domains. In this paper, we show that Gated Convolutional Neural Networks (GCN) perform effectively at learning sentiment analysis in a manner where domain-dependent knowledge is filtered out using their gates. We perform our experiments on multiple gate architectures: Gated Tanh ReLU Unit (GTRU), Gated Tanh Unit (GTU) and Gated Linear Unit (GLU). Extensive experimentation on two standard datasets relevant to the task reveals that training with Gated Convolutional Neural Networks gives significantly better performance on target domains than regular convolutional and recurrent architectures. While complex architectures like attention filter domain-specific knowledge as well, their complexity is remarkably high compared to gated architectures. GCNs rely on convolution, hence gaining an upper hand through parallelization.
{ "section_name": [ "Introduction", "Related Work", "Gated Convolutional Neural Networks", "Problem Definition", "Model Architecture", "Gating mechanisms", "Datasets", "Baselines", "Implementation details", "Results", "Discussion", "Conclusion" ], "paragraphs": [ [ "With the advancement in technology and invention of modern web applications like Facebook and Twitter, users started expressing their opinions and ideologies at a scale unseen before. The growth of e-commerce companies like Amazon, Walmart have created a revolutionary impact in the field of consumer business. People buy products online through these companies and write reviews for their products. These consumer reviews act as a bridge between consumers and companies. Through these reviews, companies polish the quality of their services. Sentiment Classification (SC) is one of the major applications of Natural Language Processing (NLP) which aims to find the polarity of text. In the early stages BIBREF0 of text classification, sentiment classification was performed using traditional feature selection techniques like Bag-of-Words (BoW) BIBREF1 or TF-IDF. These features were further used to train machine learning classifiers like Naive Bayes (NB) BIBREF2 and Support Vector Machines (SVM) BIBREF3 . They are shown to act as strong baselines for text classification BIBREF4 . However, these models ignore word level semantic knowledge and sequential nature of text. Neural networks were proposed to learn distributed representations of words BIBREF5 . Skip-gram and CBOW architectures BIBREF6 were introduced to learn high quality word representations which constituted a major breakthrough in NLP. Several neural network architectures like recursive neural networks BIBREF7 and convolutional neural networks BIBREF8 achieved excellent results in text classification. Recurrent neural networks which were proposed for dealing sequential inputs suffer from vanishing BIBREF9 and exploding gradient problems BIBREF10 . To overcome this problem, Long Short Term Memory (LSTM) was introduced BIBREF11 .", "All these architectures have been successful in performing sentiment classification for a specific domain utilizing large amounts of labelled data. However, there exists insufficient labelled data for a target domain of interest. Therefore, Domain Adaptation (DA) exploits knowledge from a relevant domain with abundant labeled data to perform sentiment classification on an unseen target domain. However, expressions of sentiment vary in each domain. For example, in $\\textit {Books}$ domain, words $\\textit {thoughtful}$ and $\\textit {comprehensive}$ are used to express sentiment whereas $\\textit {cheap}$ and $\\textit {costly}$ are used in $\\textit {Electronics}$ domain. Hence, models should generalize well for all domains. Several methods have been introduced for performing Domain Adaptation. Blitzer BIBREF12 proposed Structural Correspondence Learning (SCL) which relies on pivot features between source and target domains. Pan BIBREF13 performed Domain Adaptation using Spectral Feature Alignment (SFA) that aligns features across different domains. Glorot BIBREF14 proposed Stacked Denoising Autoencoder (SDA) that learns generalized feature representations across domains. Zheng BIBREF15 proposed end-to-end adversarial network for Domain Adaptation. Qi BIBREF16 proposed a memory network for Domain Adaptation. 
Zheng BIBREF17 proposed a Hierarchical transfer network relying on attention for Domain Adaptation.", "However, all the above architectures use a different sub-network altogether to incorporate domain agnostic knowledge, which is combined with the main network in the final layers. This makes these architectures computationally intensive. To address this issue, we propose a Gated Convolutional Neural Network (GCN) model that learns domain agnostic knowledge using a gated mechanism BIBREF18 . Convolution layers learn the higher level representations for the source domain and the gated layer selects domain agnostic representations. Unlike other models, GCN doesn't rely on a special sub-network for learning domain agnostic representations. As the gated mechanism is applied on convolution layers, GCN is computationally efficient." ], [ "Traditionally, methods for tackling Domain Adaptation are lexicon based. Blitzer BIBREF19 used a pivot method to select features that occur frequently in both domains. It assumes that the selected pivot features can reliably represent the source domain. The pivots are selected using mutual information between selected features and the source domain labels. The SFA BIBREF13 method argues that pivot features selected from the source domain cannot reliably represent the target domain. Hence, SFA tries to exploit the relationship between domain-specific and domain independent words via simultaneously co-clustering them in a common latent space. SDA BIBREF14 performs Domain Adaptation by learning intermediate representations through auto-encoders. Yu BIBREF20 used two auxiliary tasks to help induce sentence embeddings that work well across different domains. These embeddings are trained using Convolutional Neural Networks (CNN).", "Gated convolutional neural networks have achieved state-of-the-art results in language modelling BIBREF18 . Since then, they have been used in different areas of natural language processing (NLP) like sentence similarity BIBREF21 and aspect based sentiment analysis BIBREF22 ." ], [ "In this section, we introduce a model based on Gated Convolutional Neural Networks for Domain Adaptation. We present the problem definition of Domain Adaptation, followed by the architecture of the proposed model." ], [ "Given a source domain $D_{S}$ represented as $D_{S}$ = { $(x_{s_{1}},y_{s_{1}})$ , $(x_{s_{2}},y_{s_{2}})$ .... $(x_{s_{n}},y_{s_{n}})$ } where $x_{s_{i}} \in \mathbb {R}$ represents the vector of the $i^{th}$ source text and $y_{s_{i}}$ represents the corresponding source domain label. Let $T_{S}$ represent the task in the source domain. Given a target domain $D_{T}$ represented as $D_{T}$ = { $(x_{t_{1}},y_{t_{1}})$ , $(x_{t_{2}},y_{t_{2}})$ .... $(x_{t_{m}},y_{t_{m}})$ }, where $x_{t_{i}}$ represents the vector of the $i^{th}$ target text and $y_{t_{i}}$ represents the corresponding target domain label. Let $T_{T}$ represent the task in the target domain. Domain Adaptation (DA) is defined by the target predictive function $f_{T}(\cdot)$ calculated using the knowledge of $D_{S}$ and $T_{S}$ , where $D_{S} \ne D_{T}$ but $T_{S} = T_{T}$ . It is imperative to note that the domains are different but the task is the same. In this paper, the task is sentiment classification." ], [ "The proposed model architecture is shown in Figure 1 . Recurrent Neural Networks like LSTM and GRU update their weights at every timestep sequentially and hence lack parallelization over inputs in training. In case of attention based models, the attention layer has to wait for outputs from all timesteps. 
Hence, these models fail to take advantage of parallelism either. Since the proposed model is based on convolution layers and a gated mechanism, it can be parallelized efficiently. The convolution layers learn higher level representations for the source domain. The gated mechanism learns the domain agnostic representations. Together they control the information that flows through to the fully connected output layer after max pooling.", "Let $I$ denote the input sentence represented as $I$ = { $w_{1}$ $w_{2}$ $w_{3}$ ... $w_{N}$ } where $w_{i}$ represents the $i^{th}$ word in $I$ and $N$ is the maximum sentence length considered. Let $|V|$ be the vocabulary size for each dataset and $E$ denote the word embedding matrix, where each word $w_{i}$ is mapped to a $d$ dimensional vector. Input sentences whose length is less than $N$ are padded with 0s to reach the maximum sentence length. Words absent in the pretrained word embeddings are initialized to 0s. Therefore each input sentence $I$ is converted to an $N \times d$ dimensional representation $P$ . A convolution operation is applied on $P$ with a kernel of size $h$ . The convolution operation is one-dimensional, applied with a fixed window size across words. We consider kernel sizes of 3, 4 and 5. The weight initialization of these kernels is done using Glorot uniform BIBREF23 . Each kernel is a feature detector which extracts patterns from n-grams. After convolution we obtain a new feature map $C = [ C_{1}, C_{2}, ..., C_{N} ]$ for each kernel. ", "$$C_{i} = f(P_{i:i+h} \ast W_{a} + b_{a})$$ (Eq. 5) ", "where $f$ represents the activation function in the convolution layer. The gated mechanism is applied on each convolution layer. Each gated layer learns to filter domain agnostic representations for every time step $i$ . ", "$$S_{i} = g(P_{i:i+h} \ast W_{s} + b_{s})$$ (Eq. 6) ", "where $g$ is the activation function used in the gated convolution layer. The outputs from the convolution layer and the gated convolution layer are element-wise multiplied to compute a new feature representation $G_{i}$ ", "$$G_{i} = C_{i} \times S_{i}$$ (Eq. 7) ", "A maxpooling operation is applied across each filter in this new feature representation to get the most important features BIBREF8 . As shown in Figure 1 , the outputs from the maxpooling layer across all filters are concatenated. The concatenated layer is fully connected to the output layer. Sigmoid is used as the activation function in the output layer." ], [ "Gating mechanisms have been effective in Recurrent Neural Networks like GRU and LSTM. They control the information flow through their recurrent cells. In case of GCN, these gated units control the domain information that flows to the pooling layers. The model must be robust to changes in domain knowledge and should be able to generalize well across different domains. We use the gated mechanisms Gated Tanh Unit (GTU), Gated Linear Unit (GLU) and Gated Tanh ReLU Unit (GTRU) BIBREF22 in the proposed model. The gated architectures are shown in Figure 2 . The output from the Gated Tanh Unit is calculated as $tanh(P \ast W + c) \times \sigma (P \ast V + c)$ . In case of the Gated Linear Unit, it is calculated as $(P \ast W + c) \times \sigma (P \ast V + c)$ , where $tanh$ and $\sigma $ denote the Tanh and Sigmoid activation functions respectively. In case of the Gated Tanh ReLU Unit, the output is calculated as $tanh(P \ast W + c) \times relu(P \ast V + c)$ . An illustrative sketch of this gated convolution block is given below." ], [ "Multi Domain Dataset BIBREF19 is a short dataset with reviews from distinct domains namely Books(B), DVD(D), Electronics(E) and Kitchen(K). 
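Before turning to the datasets in detail, the gated convolution defined in Eqs. (5)-(7) and the three gating variants above can be illustrated with a minimal PyTorch-style sketch. Padding, tensor shapes and other details are assumptions, and this is not the authors' Keras implementation.

```python
import torch
import torch.nn as nn

class GatedConv1d(nn.Module):
    """One gated convolution branch, following Eqs. (5)-(7); padding choice is an assumption."""

    def __init__(self, emb_dim=300, n_filters=100, kernel_size=3, gate="GLU"):
        super().__init__()
        pad = kernel_size // 2  # zero padding to roughly preserve the sentence length
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size, padding=pad)       # P * W_a + b_a
        self.conv_gate = nn.Conv1d(emb_dim, n_filters, kernel_size, padding=pad)  # P * W_s + b_s
        self.gate = gate

    def forward(self, x):                 # x: (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)             # Conv1d expects (batch, channels, seq_len)
        c = self.conv(x)                  # pre-activation term of Eq. (5); f depends on the gate
        s = self.conv_gate(x)             # pre-activation term of Eq. (6); g depends on the gate
        if self.gate == "GLU":
            g = c * torch.sigmoid(s)                 # linear x sigmoid
        elif self.gate == "GTU":
            g = torch.tanh(c) * torch.sigmoid(s)     # tanh x sigmoid
        else:  # "GTRU"
            g = torch.tanh(c) * torch.relu(s)        # tanh x relu
        return g.max(dim=2).values        # max pooling over time for each filter
```

In the full model, one such branch is instantiated per kernel size (3, 4 and 5) and the pooled outputs are concatenated before the sigmoid output layer.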
Each domain consists of 2000 reviews equally divided among positive and negative sentiment. We consider 1280 reviews for training, 320 reviews for validation and 400 reviews for testing from each domain.", "Amazon Reviews Dataset BIBREF24 is a large dataset with millions of reviews from different product categories. For our experiments, we consider a subset of 20000 reviews from the domains Cell Phones and Accessories(C), Clothing and Shoes(S), Home and Kitchen(H) and Tools and Home Improvement(T). Out of 20000 reviews, 10000 are positive and 10000 are negative. We use 12800 reviews for training, 3200 reviews for validation and 4000 reviews for testing from each domain." ], [ "To evaluate the performance of proposed model, we consider various baselines like traditional lexicon approaches, CNN models without gating mechanisms and LSTM models.", "Bag-of-words (BoW) is one of the strongest baselines in text classification BIBREF4 . We consider all the words as features with a minimum frequency of 5. These features are trained using Logistic Regression (LR).", "TF-IDF is a feature selection technique built upon Bag-of-Words. We consider all the words with a minimum frequency of 5. The features selected are trained using Logistic Regression (LR).", "Paragraph2vec or doc2vec BIBREF25 is a strong and popularly used baseline for text classification. Paragraph2Vec represents each sentence or paragraph in the form of a distributed representation. We trained our own doc2vec model using DBOW model. The paragraph vectors obtained are trained using Feed Forward Neural Network (FNN).", "To show the effectiveness of gated layer, we consider a CNN model which does not contain gated layers. Hence, we consider Static CNN model, a popular CNN architecture proposed in Kim BIBREF8 as a baseline.", "Wang BIBREF26 proposed a combination of Convolutional and Recurrent Neural Network for sentiment Analysis of short texts. This model takes the advantages of features learned by CNN and long-distance dependencies learned by RNN. It achieved remarkable results on benchmark datasets. We report the results using code published by the authors.", "We offer a comparison with LSTM model with a single hidden layer. This model is trained with equivalent experimental settings as proposed model.", "In this baseline, attention mechanism BIBREF27 is applied on the top of LSTM outputs across different timesteps." ], [ "All the models are experimented with approximately matching number of parameters for a solid comparison using a Tesla K80 GPU.", "Input Each word in the input sentence is converted to a 300 dimensional vector using GloVe pretrained vectors BIBREF28 . A maximum sentence length 100 is considered for all the datasets. Sentences with length less than 100 are padded with 0s.", "Architecture details: The model is implemented using keras. We considered 100 convolution filters for each of the kernels of sizes 3,4 and 5. To get the same sentence length after convolution operation zero padding is done on the input.", "Training Each sentence or paragraph is converted to lower case. Stopword removal is not done. A vocabulary size of 20000 is considered for all the datasets. We apply a dropout layer BIBREF29 with a probability of 0.5, on the embedding layer and probability 0.2, on the dense layer that connects the output layer. Adadelta BIBREF30 is used as the optimizer for training with gradient descent updates. Batch-size of 16 is taken for MDD and 50 for ARD. The model is trained for 50 epochs. 
We employ an early stopping mechanism based on validation loss for a patience of 10 epochs. The models are trained on source domain and tested on unseen target domain in all experiments." ], [ "The performance of all models on MDD is shown in Tables 2 and 3 while for ARD, in Tables 4 and 5 . All values are shown in accuracy percentage. Furthermore time complexity of each model is presented in Table 1 ." ], [ "We find that gated architectures vastly outperform non gated CNN model. The effectiveness of gated architectures rely on the idea of training a gate with sole purpose of identifying a weightage. In the task of sentiment analysis this weightage corresponds to what weights will lead to a decrement in final loss or in other words, most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute to the sentiment the most, these words or n-grams often co-relate with domain independent words. On the other hand the gate gives less weightage to n-grams which are largely either specific to domain or function word chunks which contribute negligible to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation.", "In Figure 3 , we have illustrated the visualization of convolution outputs(kernel size = 3) from the sigmoid gate in GLU across domains. As the kernel size is 3, each row in the output corresponds to a trigram from input sentence. This heat map visualizes values of all 100 filters and their average for every input trigram. These examples demonstrate what the convolution gate learns. Trigrams with domain independent but heavy polarity like “_ _ good” and “_ costly would” have higher weightage. Meanwhile, Trigrams with domain specific terms like “quality functional case” and “sell entire kitchen” get some of the least weights. In Figure 3 (b) example, the trigram “would have to” just consists of function words, hence gets the least weight. While “sell entire kitchen” gets more weight comparatively. This might be because while function words are merely grammatical units which contribute minimal to overall sentiment, domain specific terms like “sell” may contain sentiment level knowledge only relevant within the domain. In such a case it is possible that the filters effectively propagate sentiment level knowledge from domain specific terms as well.", "We see that gated architectures almost always outperform recurrent, attention and linear models BoW, TFIDF, PV. This is largely because while training and testing on same domains, these models especially recurrent and attention based may perform better. However, for Domain Adaptation, as they lack gated structure which is trained in parallel to learn importance, their performance on target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give significant boost in time complexity as compared to other models. This is depicted in Table 1 .", "While the gated architectures outperform other baselines, within them as well we make observations. Gated Linear Unit (GLU) performs the best often over other gated architectures. In case of GTU, outputs from Sigmoid and Tanh are multiplied together, this may result in small gradients, and hence resulting in the vanishing gradient problem. However, this will not be the in the case of GLU, as the activation is linear. In case of GTRU, outputs from Tanh and ReLU are multiplied. 
In ReLU, because of the absence of negative activations, the corresponding Tanh outputs will be completely ignored, resulting in the loss of some domain independent knowledge.", "In this paper, we proposed the Gated Convolutional Neural Network (GCN) model for Domain Adaptation in Sentiment Analysis. We show that gates in GCN filter out domain dependent knowledge, hence performing better on an unseen target domain. Our experiments reveal that gated architectures outperform other popular recurrent and non-gated architectures. Furthermore, because these architectures rely on convolutions, they take advantage of parallelization, vastly reducing time complexity." ] ] }
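For completeness, the components above can be assembled into the full classifier roughly as follows. This is a PyTorch-style sketch reusing the hypothetical GatedConv1d module from the earlier sketch, rather than the authors' Keras implementation; the vocabulary size, embedding dimension and dropout rates follow the implementation details given above, and the remaining choices are assumptions.

```python
import torch
import torch.nn as nn

class GCNClassifier(nn.Module):
    """Sketch of the end-to-end classifier: embeddings, three gated convolution branches
    (kernel sizes 3, 4, 5 with 100 filters each), concatenation, and a sigmoid output."""

    def __init__(self, vocab_size=20000, emb_dim=300, gate="GLU"):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.branches = nn.ModuleList(
            [GatedConv1d(emb_dim, 100, k, gate) for k in (3, 4, 5)]
        )
        self.emb_dropout = nn.Dropout(0.5)   # dropout on the embedding layer
        self.out_dropout = nn.Dropout(0.2)   # dropout before the output layer
        self.out = nn.Linear(3 * 100, 1)

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        x = self.emb_dropout(self.embedding(token_ids))
        pooled = torch.cat([branch(x) for branch in self.branches], dim=-1)
        return torch.sigmoid(self.out(self.out_dropout(pooled)))
```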
{ "question": [ "For the purposes of this paper, how is something determined to be domain specific knowledge?", "Does the fact that GCNs can perform well on this tell us that the task is simpler than previously thought?", "Are there conceptual benefits to using GCNs over more complex architectures like attention?" ], "question_id": [ "dfbab3cd991f86d998223726617d61113caa6193", "df510c85c277afc67799abcb503caa248c448ad2", "d95180d72d329a27ddf2fd5cc6919f469632a895" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "", "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "reviews under distinct product categories are considered specific domain knowledge", "evidence": [ "Amazon Reviews Dataset BIBREF24 is a large dataset with millions of reviews from different product categories. For our experiments, we consider a subset of 20000 reviews from the domains Cell Phones and Accessories(C), Clothing and Shoes(S), Home and Kitchen(H) and Tools and Home Improvement(T). Out of 20000 reviews, 10000 are positive and 10000 are negative. We use 12800 reviews for training, 3200 reviews for validation and 4000 reviews for testing from each domain." ], "highlighted_evidence": [ "Amazon Reviews Dataset BIBREF24 is a large dataset with millions of reviews from different product categories. For our experiments, we consider a subset of 20000 reviews from the domains Cell Phones and Accessories(C), Clothing and Shoes(S), Home and Kitchen(H) and Tools and Home Improvement(T). Out of 20000 reviews, 10000 are positive and 10000 are negative. We use 12800 reviews for training, 3200 reviews for validation and 4000 reviews for testing from each domain." ] } ], "annotation_id": [ "cb93eb69ccaf6c5aeb4a0872eca940f6e7c3de73" ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "We see that gated architectures almost always outperform recurrent, attention and linear models BoW, TFIDF, PV. This is largely because while training and testing on same domains, these models especially recurrent and attention based may perform better. However, for Domain Adaptation, as they lack gated structure which is trained in parallel to learn importance, their performance on target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give significant boost in time complexity as compared to other models. This is depicted in Table 1 .", "We find that gated architectures vastly outperform non gated CNN model. The effectiveness of gated architectures rely on the idea of training a gate with sole purpose of identifying a weightage. In the task of sentiment analysis this weightage corresponds to what weights will lead to a decrement in final loss or in other words, most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute to the sentiment the most, these words or n-grams often co-relate with domain independent words. 
On the other hand the gate gives less weightage to n-grams which are largely either specific to domain or function word chunks which contribute negligible to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation." ], "highlighted_evidence": [ "We see that gated architectures almost always outperform recurrent, attention and linear models BoW, TFIDF, PV. This is largely because while training and testing on same domains, these models especially recurrent and attention based may perform better. However, for Domain Adaptation, as they lack gated structure which is trained in parallel to learn importance, their performance on target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give significant boost in time complexity as compared to other models.", "The effectiveness of gated architectures rely on the idea of training a gate with sole purpose of identifying a weightage. In the task of sentiment analysis this weightage corresponds to what weights will lead to a decrement in final loss or in other words, most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute to the sentiment the most, these words or n-grams often co-relate with domain independent words. On the other hand the gate gives less weightage to n-grams which are largely either specific to domain or function word chunks which contribute negligible to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation." ] } ], "annotation_id": [ "17f76c3bdf4540ead18e680255d62b29b9465324" ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "The proposed model architecture is shown in the Figure 1 . Recurrent Neural Networks like LSTM, GRU update their weights at every timestep sequentially and hence lack parallelization over inputs in training. In case of attention based models, the attention layer has to wait for outputs from all timesteps. Hence, these models fail to take the advantage of parallelism either. Since, proposed model is based on convolution layers and gated mechanism, it can be parallelized efficiently. The convolution layers learn higher level representations for the source domain. The gated mechanism learn the domain agnostic representations. They together control the information that has to flow through further fully connected output layer after max pooling.", "We find that gated architectures vastly outperform non gated CNN model. The effectiveness of gated architectures rely on the idea of training a gate with sole purpose of identifying a weightage. In the task of sentiment analysis this weightage corresponds to what weights will lead to a decrement in final loss or in other words, most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute to the sentiment the most, these words or n-grams often co-relate with domain independent words. On the other hand the gate gives less weightage to n-grams which are largely either specific to domain or function word chunks which contribute negligible to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation.", "We see that gated architectures almost always outperform recurrent, attention and linear models BoW, TFIDF, PV. 
This is largely because while training and testing on same domains, these models especially recurrent and attention based may perform better. However, for Domain Adaptation, as they lack gated structure which is trained in parallel to learn importance, their performance on target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give significant boost in time complexity as compared to other models. This is depicted in Table 1 ." ], "highlighted_evidence": [ "The gated mechanism learn the domain agnostic representations. They together control the information that has to flow through further fully connected output layer after max pooling.", "The effectiveness of gated architectures rely on the idea of training a gate with sole purpose of identifying a weightage. In the task of sentiment analysis this weightage corresponds to what weights will lead to a decrement in final loss or in other words, most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute to the sentiment the most, these words or n-grams often co-relate with domain independent words. On the other hand the gate gives less weightage to n-grams which are largely either specific to domain or function word chunks which contribute negligible to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation.", "We see that gated architectures almost always outperform recurrent, attention and linear models BoW, TFIDF, PV. This is largely because while training and testing on same domains, these models especially recurrent and attention based may perform better. However, for Domain Adaptation, as they lack gated structure which is trained in parallel to learn importance, their performance on target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give significant boost in time complexity as compared to other models. " ] } ], "annotation_id": [ "b8e383cc449251a1ee84b2df1f89fc66aa517156" ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] } ] }
{ "caption": [ "Fig. 1: Architecture of the proposed model", "Fig. 2: Variations in gates of the proposed GCN architecture.", "Table 1: Average training time for all the models on ARD", "Table 2: Accuracy scores on Multi Domain Dataset.", "Table 3: Accuracy scores on Multi Domain Dataset.", "Table 4: Accuracy scores on Amazon Reviews Dataset.", "Table 5: Accuracy scores on Amazon Reviews Dataset.", "Fig. 3: Visualizing outputs from gated convolutions (filter size = 3) of GLU for example sentences, darker indicates higher weightage" ], "file": [ "3-Figure1-1.png", "5-Figure2-1.png", "7-Table1-1.png", "8-Table2-1.png", "8-Table3-1.png", "8-Table4-1.png", "9-Table5-1.png", "10-Figure3-1.png" ] }
1809.09795
Deep contextualized word representations for detecting sarcasm and irony
Predicting context-dependent and non-literal utterances like sarcastic and ironic expressions still remains a challenging task in NLP, as it goes beyond linguistic patterns, encompassing common sense and shared knowledge as crucial components. To capture complex morpho-syntactic features that can usually serve as indicators for irony or sarcasm across dynamic contexts, we propose a model that uses character-level vector representations of words, based on ELMo. We test our model on 7 different datasets derived from 3 different data sources, providing state-of-the-art performance in 6 of them, and otherwise offering competitive results.
{ "section_name": [ "Introduction", "Related work", "Proposed Approach", "Experimental Setup", "Results", "Conclusions" ], "paragraphs": [ [ "Sarcastic and ironic expressions are prevalent in social media and, due to the tendency to invert polarity, play an important role in the context of opinion mining, emotion recognition and sentiment analysis BIBREF0 . Sarcasm and irony are two closely related linguistic phenomena, with the concept of meaning the opposite of what is literally expressed at its core. There is no consensus in academic research on the formal definition, both terms are non-static, depending on different factors such as context, domain and even region in some cases BIBREF1 .", "In light of the general complexity of natural language, this presents a range of challenges, from the initial dataset design and annotation to computational methods and evaluation BIBREF2 . The difficulties lie in capturing linguistic nuances, context-dependencies and latent meaning, due to richness of dynamic variants and figurative use of language BIBREF3 .", "The automatic detection of sarcastic expressions often relies on the contrast between positive and negative sentiment BIBREF4 . This incongruence can be found on a lexical level with sentiment-bearing words, as in \"I love being ignored\". In more complex linguistic settings an action or a situation can be perceived as negative, without revealing any affect-related lexical elements. The intention of the speaker as well as common knowledge or shared experience can be key aspects, as in \"I love waking up at 5 am\", which can be sarcastic, but not necessarily. Similarly, verbal irony is referred to as saying the opposite of what is meant and based on sentiment contrast BIBREF5 , whereas situational irony is seen as describing circumstances with unexpected consequences BIBREF6 , BIBREF7 .", "Empirical studies have shown that there are specific linguistic cues and combinations of such that can serve as indicators for sarcastic and ironic expressions. Lexical and morpho-syntactic cues include exclamations and interjections, typographic markers such as all caps, quotation marks and emoticons, intensifiers and hyperboles BIBREF8 , BIBREF9 . In the case of Twitter, the usage of emojis and hashtags has also proven to help automatic irony detection.", "We propose a purely character-based architecture which tackles these challenges by allowing us to use a learned representation that models features derived from morpho-syntactic cues. To do so, we use deep contextualized word representations, which have recently been used to achieve the state of the art on six NLP tasks, including sentiment analysis BIBREF10 . We test our proposed architecture on 7 different irony/sarcasm datasets derived from 3 different data sources, providing state-of-the-art performance in 6 of them and otherwise offering competitive results, showing the effectiveness of our proposal. We make our code available at https://github.com/epochx/elmo4irony." ], [ "Apart from the relevance for industry applications related to sentiment analysis, sarcasm and irony detection has received great traction within the NLP research community, resulting in a variety of methods, shared tasks and benchmark datasets. 
Computational approaches for the classification task range from rule-based systems BIBREF4 , BIBREF11 and statistical methods and machine learning algorithms such as Support Vector Machines BIBREF3 , BIBREF12 , Naive Bayes and Decision Trees BIBREF13 leveraging extensive feature sets, to deep learning-based approaches. In this context, BIBREF14 delivered state-of-the-art results by using an intra-attentional component in addition to a recurrent neural network. Previous work such as the one by BIBREF15 had proposed a convolutional long-short-term memory network (CNN-LSTM-DNN) that also achieved excellent results. A comprehensive survey on automatic sarcasm detection was done by BIBREF16 , while computational irony detection was reviewed by BIBREF17 .", "Further improvements both in terms of classic and deep models came as a result of the SemEval 2018 Shared Task on Irony in English Tweets BIBREF18 . The system that achieved the best results was hybrid, namely, a densely-connected BiLSTM with a multi-task learning strategy, which also makes use of features such as POS tags and lexicons BIBREF19 ." ], [ "The wide spectrum of linguistic cues that can serve as indicators for sarcastic and ironic expressions has usually been exploited for automatic sarcasm or irony detection by modeling them in the form of binary features in traditional machine learning.", "On the other hand, deep models for irony and sarcasm detection, which currently offer state-of-the-art performance, have exploited sequential neural networks such as LSTMs and GRUs BIBREF15 , BIBREF23 on top of distributed word representations. Recently, in addition to using a sequential model, BIBREF14 proposed to use intra-attention to compare elements in a sequence against themselves. This allowed the model to better capture word-to-word level interactions that could also be useful for detecting sarcasm, such as the incongruity phenomenon BIBREF3 . Despite this, all models in the literature rely on word-level representations, which keeps the models from being able to easily capture some of the lexical and morpho-syntactic cues known to denote irony, such as all caps, quotation marks and emoticons, and in Twitter, also emojis and hashtags.", "The usage of a purely character-based input would allow us to directly recover and model these features. Consequently, our architecture is based on Embeddings from Language Model or ELMo BIBREF10 . The ELMo layer allows us to recover a rich 1,024-dimensional dense vector for each word. Using CNNs, each vector is built upon the characters that compose the underlying words. As ELMo also contains a deep bi-directional LSTM on top of these character-derived vectors, each word-level embedding contains contextual information from its surroundings. Concretely, we use a pre-trained ELMo model, obtained using the 1 Billion Word Benchmark which contains about 800M tokens of news crawl data from WMT 2011 BIBREF24 .", "Subsequently, the contextualized embeddings are passed on to a BiLSTM with 2,048 hidden units. We aggregate the LSTM hidden states using max-pooling, which in our preliminary experiments offered us better results, and feed the resulting vector to a 2-layer feed-forward network, where each layer has 512 units. The output of this is then fed to the final layer of the model, which performs the binary classification." ], [ "We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. 
Below we describe each dataset, please see Table TABREF1 below for a summary.", "Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.", "Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.", "Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 .", "In Table TABREF1 , we see a notable difference in terms of size among the Twitter datasets. Given this circumstance, and in light of the findings by BIBREF18 , we are interested in studying how the addition of external soft-annotated data impacts on the performance. Thus, in addition to the datasets introduced before, we use two corpora for augmentation purposes. The first dataset was collected using the Twitter API, targeting tweets with the hashtags #sarcasm or #irony, resulting on a total of 180,000 and 45,000 tweets respectively. On the other hand, to obtain non-sarcastic and non-ironic tweets, we relied on the SemEval 2018 Task 1 dataset BIBREF25 . To augment each dataset with our external data, we first filter out tweets that are not in English using language guessing systems. We later extract all the hashtags in each target dataset and proceed to augment only using those external tweets that contain any of these hashtags. This allows us to, for each class, add a total of 36,835 tweets for the Ptáček corpus, 8,095 for the Riloff corpus and 26,168 for the SemEval-2018 corpus.", "In terms of pre-processing, as in our case the preservation of morphological structures is crucial, the amount of normalization is minimal. Concretely, we forgo stemming or lemmatizing, punctuation removal and lowercasing. We limit ourselves to replacing user mentions and URLs with one generic token respectively. In the case of the SemEval-2018 dataset, an additional step was to remove the hashtags #sarcasm, #irony and #not, as they are the artifacts used for creating the dataset. For tokenizing, we use a variation of the Twokenizer BIBREF26 to better deal with emojis.", "Our models are trained using Adam with a learning rate of 0.001 and a decay rate of 0.5 when there is no improvement on the accuracy on the validation set, which we use to select the best models. We also experimented using a slanted triangular learning rate scheme, which was shown by BIBREF27 to deliver excellent results on several tasks, but in practice we did not obtain significant differences. We experimented with batch sizes of 16, 32 and 64, and dropouts ranging from 0.1 to 0.5. The size of the LSTM hidden layer was fixed to 1,024, based on our preliminary experiments. We do not train the ELMo embeddings, but allow their dropouts to be active during training." ], [ "Table TABREF2 summarizes our results. For each dataset, the top row denotes our baseline and the second row shows our best comparable model. 
Rows with FULL models denote our best single model trained with all the development available data, without any other preprocessing other than mentioned in the previous section. In the case of the Twitter datasets, rows indicated as AUG refer to our the models trained using the augmented version of the corresponding datasets.", "For the case of the SemEval-2018 dataset we use the best performing model from the Shared Task as a baseline, taken from the task description paper BIBREF18 . As the winning system is a voting-based ensemble of 10 models, for comparison, we report results using an equivalent setting. For the Riloff, Ptáček, SC-V1 and SC-V2 datasets, our baseline models are taken directly from BIBREF14 . As their pre-processing includes truncating sentence lengths at 40 and 80 tokens for the Twitter and Dialog datasets respectively, while always removing examples with less than 5 tokens, we replicate those steps and report our results under these settings. Finally, for the Reddit datasets, our baselines are taken from BIBREF21 . Although their models are trained for binary classification, instead of reporting the performance in terms of standard classification evaluation metrics, their proposed evaluation task is predicting which of two given statements that share the same context is sarcastic, with performance measured solely by accuracy. We follow this and report our results.", "In summary, we see our introduced models are able to outperform all previously proposed methods for all metrics, except for the SemEval-2018 best system. Although our approach yields higher Precision, it is not able to reach the given Recall and F1-Score. We note that in terms of single-model architectures, our setting offers increased performance compared to BIBREF19 and their obtained F1-score of 0.674. Moreover, our system does so without requiring external features or multi-task learning. For the other tasks we are able to outperform BIBREF14 without requiring any kind of intra-attention. This shows the effectiveness of using pre-trained character-based word representations, that allow us to recover many of the morpho-syntactic cues that tend to denote irony and sarcasm.", "Finally, our experiments showed that enlarging existing Twitter datasets by adding external soft-labeled data from the same media source does not yield improvements in the overall performance. This complies with the observations made by BIBREF18 . Since we have designed our augmentation tactics to maximize the overlap in terms of topic, we believe the soft-annotated nature of the additional data we have used is the reason that keeps the model from improving further." ], [ "We have presented a deep learning model based on character-level word representations obtained from ELMo. It is able to obtain the state of the art in sarcasm and irony detection in 6 out of 7 datasets derived from 3 different data sources. Our results also showed that the model does not benefit from using additional soft-labeled data in any of the three tested Twitter datasets, showing that manually-annotated data may be needed in order to improve the performance in this way." ] ] }
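As a companion to the Proposed Approach section above, the classifier can be sketched roughly as follows. This is a PyTorch-style illustration, not the authors' implementation; the ELMo vectors are assumed to come from a frozen pre-trained model, and reading the 2,048-unit BiLSTM as 1,024 units per direction is an assumption.

```python
import torch
import torch.nn as nn

class ElmoSarcasmClassifier(nn.Module):
    """Sketch of the classifier: pre-trained ELMo vectors, a BiLSTM, max pooling over time,
    a 2-layer feed-forward network and a binary sigmoid output."""

    def __init__(self, elmo_dim=1024, lstm_hidden=1024, ff_dim=512):
        super().__init__()
        # Bidirectional LSTM; 2 x 1,024 hidden units gives a 2,048-dimensional state per token.
        self.bilstm = nn.LSTM(elmo_dim, lstm_hidden, batch_first=True, bidirectional=True)
        self.ff = nn.Sequential(
            nn.Linear(2 * lstm_hidden, ff_dim), nn.ReLU(),
            nn.Linear(ff_dim, ff_dim), nn.ReLU(),
        )
        self.out = nn.Linear(ff_dim, 1)

    def forward(self, elmo_embeddings):           # (batch, seq_len, 1024), frozen ELMo output
        states, _ = self.bilstm(elmo_embeddings)  # (batch, seq_len, 2 * lstm_hidden)
        pooled = states.max(dim=1).values         # max pooling over time
        return torch.sigmoid(self.out(self.ff(pooled)))   # P(sarcastic / ironic)
```

Only the BiLSTM, feed-forward and output layers would be trained in this sketch; the ELMo weights stay fixed, with their dropouts active during training, as described above.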
{ "question": [ "Do they evaluate only on English?", "What are the 7 different datasets?", "What are the three different sources of data?", "What type of model are the ELMo representations used in?", "Which morphosyntactic features are thought to indicate irony or sarcasm?" ], "question_id": [ "e196e2ce72eb8b2d50732c26e9bf346df6643f69", "46570c8faaeefecc8232cfc2faab0005faaba35f", "982d375378238d0adbc9a4c987d633ed16b7f98f", "bbdb2942dc6de3d384e3a1b705af996a5341031b", "4ec538e114356f72ef82f001549accefaf85e99c" ], "nlp_background": [ "five", "five", "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ "irony", "irony", "irony", "irony", "irony" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset, please see Table TABREF1 below for a summary.", "Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.", "Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.", "Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 .", "In Table TABREF1 , we see a notable difference in terms of size among the Twitter datasets. Given this circumstance, and in light of the findings by BIBREF18 , we are interested in studying how the addition of external soft-annotated data impacts on the performance. Thus, in addition to the datasets introduced before, we use two corpora for augmentation purposes. The first dataset was collected using the Twitter API, targeting tweets with the hashtags #sarcasm or #irony, resulting on a total of 180,000 and 45,000 tweets respectively. On the other hand, to obtain non-sarcastic and non-ironic tweets, we relied on the SemEval 2018 Task 1 dataset BIBREF25 . To augment each dataset with our external data, we first filter out tweets that are not in English using language guessing systems. We later extract all the hashtags in each target dataset and proceed to augment only using those external tweets that contain any of these hashtags. This allows us to, for each class, add a total of 36,835 tweets for the Ptáček corpus, 8,095 for the Riloff corpus and 26,168 for the SemEval-2018 corpus." ], "highlighted_evidence": [ "We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. 
Below we describe each dataset, please see Table TABREF1 below for a summary.", "Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 ", "Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.", "Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.", "Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 .", " To augment each dataset with our external data, we first filter out tweets that are not in English using language guessing systems." ] } ], "annotation_id": [ "f0359b9fa0253f4c525798ade165f7b481f56f79" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "SemEval 2018 Task 3", "BIBREF20", "BIBREF4", "SARC 2.0", "SARC 2.0 pol", "Sarcasm Corpus V1 (SC-V1)", "Sarcasm Corpus V2 (SC-V2)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "FLOAT SELECTED: Table 1: Benchmark datasets: Tweets, Reddit posts and online debates for sarcasm and irony detection.", "Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.", "Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.", "Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 ." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Benchmark datasets: Tweets, Reddit posts and online debates for sarcasm and irony detection.", "Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.", "Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.", "Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC)." ] } ], "annotation_id": [ "18016d1acfcc7b6103afc803290537c3c1f1fd56" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Twitter", "Reddit", "Online Dialogues" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. 
Below we describe each dataset, please see Table TABREF1 below for a summary.", "Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.", "Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.", "Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 ." ], "highlighted_evidence": [ "We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset, please see Table TABREF1 below for a summary.", "Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.", "Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.", "Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 ." ] } ], "annotation_id": [ "e28d5fd6dc9a7a62a4379a2ef6ecd8067f107814" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "A bi-LSTM with max-pooling on top of it", "evidence": [ "The usage of a purely character-based input would allow us to directly recover and model these features. Consequently, our architecture is based on Embeddings from Language Model or ELMo BIBREF10 . The ELMo layer allows to recover a rich 1,024-dimensional dense vector for each word. Using CNNs, each vector is built upon the characters that compose the underlying words. As ELMo also contains a deep bi-directional LSTM on top of this character-derived vectors, each word-level embedding contains contextual information from their surroundings. Concretely, we use a pre-trained ELMo model, obtained using the 1 Billion Word Benchmark which contains about 800M tokens of news crawl data from WMT 2011 BIBREF24 .", "Subsequently, the contextualized embeddings are passed on to a BiLSTM with 2,048 hidden units. We aggregate the LSTM hidden states using max-pooling, which in our preliminary experiments offered us better results, and feed the resulting vector to a 2-layer feed-forward network, where each layer has 512 units. The output of this is then fed to the final layer of the model, which performs the binary classification." 
], "highlighted_evidence": [ "Consequently, our architecture is based on Embeddings from Language Model or ELMo BIBREF10", "Concretely, we use a pre-trained ELMo model, obtained using the 1 Billion Word Benchmark which contains about 800M tokens of news crawl data from WMT 2011 BIBREF24 .", "Subsequently, the contextualized embeddings are passed on to a BiLSTM with 2,048 hidden units. We aggregate the LSTM hidden states using max-pooling, which in our preliminary experiments offered us better results, and feed the resulting vector to a 2-layer feed-forward network, where each layer has 512 units. The output of this is then fed to the final layer of the model, which performs the binary classification." ] } ], "annotation_id": [ "688d0d6d3bd1d868fc1805da56a9bbee0719fade" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "all caps", "quotation marks", "emoticons", "emojis", "hashtags" ], "yes_no": null, "free_form_answer": "", "evidence": [ "On the other hand, deep models for irony and sarcasm detection, which are currently offer state-of-the-art performance, have exploited sequential neural networks such as LSTMs and GRUs BIBREF15 , BIBREF23 on top of distributed word representations. Recently, in addition to using a sequential model, BIBREF14 proposed to use intra-attention to compare elements in a sequence against themselves. This allowed the model to better capture word-to-word level interactions that could also be useful for detecting sarcasm, such as the incongruity phenomenon BIBREF3 . Despite this, all models in the literature rely on word-level representations, which keeps the models from being able to easily capture some of the lexical and morpho-syntactic cues known to denote irony, such as all caps, quotation marks and emoticons, and in Twitter, also emojis and hashtags." ], "highlighted_evidence": [ "Despite this, all models in the literature rely on word-level representations, which keeps the models from being able to easily capture some of the lexical and morpho-syntactic cues known to denote irony, such as all caps, quotation marks and emoticons, and in Twitter, also emojis and hashtags." ] } ], "annotation_id": [ "4b83c2f3ddd9bea522c8164c3ca418c289cda628" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] } ] }
{ "caption": [ "Table 1: Benchmark datasets: Tweets, Reddit posts and online debates for sarcasm and irony detection.", "Table 2: Summary of our obtained results." ], "file": [ "3-Table1-1.png", "4-Table2-1.png" ] }
1812.03593
SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering
Conversational question answering (CQA) is a novel QA task that requires understanding of dialogue context. Different from traditional single-turn machine reading comprehension (MRC) tasks, CQA includes passage comprehension, coreference resolution, and contextual understanding. In this paper, we propose an innovative contextualized attention-based deep neural network, SDNet, to fuse context into traditional MRC models. Our model leverages both inter-attention and self-attention to comprehend conversation context and extract relevant information from the passage. Furthermore, we demonstrate a novel method to integrate the latest BERT contextual model. Empirical results show the effectiveness of our model, which sets a new state-of-the-art result on the CoQA leaderboard, outperforming the previous best model by 1.6% F1. Our ensemble model further improves the result by 2.7% F1.
{ "section_name": [ "Introduction", "Approach", "Model Overview", "Encoding layer", "Integration layer", "Output layer", "Experiments", "Conclusions", "Implementation Details" ], "paragraphs": [ [ "Traditional machine reading comprehension (MRC) tasks share the single-turn setting of answering a single question related to a passage. There is usually no connection between different questions and answers to the same passage. However, the most natural way humans seek answers is via conversation, which carries over context through the dialogue flow.", "To incorporate conversation into reading comprehension, recently there are several public datasets that evaluate QA model's efficacy in conversational setting, such as CoQA BIBREF0 , QuAC BIBREF1 and QBLink BIBREF2 . In these datasets, to generate correct responses, models are required to fully understand the given passage as well as the context of previous questions and answers. Thus, traditional neural MRC models are not suitable to be directly applied to this scenario. Existing approaches to conversational QA tasks include BiDAF++ BIBREF3 , FlowQA BIBREF4 , DrQA+PGNet BIBREF0 , which all try to find the optimal answer span given the passage and dialogue history.", "In this paper, we propose SDNet, a contextual attention-based deep neural network for the task of conversational question answering. Our network stems from machine reading comprehension models, but has several unique characteristics to tackle contextual understanding during conversation. Firstly, we apply both inter-attention and self-attention on passage and question to obtain a more effective understanding of the passage and dialogue history. Secondly, SDNet leverages the latest breakthrough in NLP: BERT contextual embedding BIBREF5 . Different from the canonical way of appending a thin layer after BERT structure according to BIBREF5 , we innovatively employed a weighted sum of BERT layer outputs, with locked BERT parameters. Thirdly, we prepend previous rounds of questions and answers to the current question to incorporate contextual information. Empirical results show that each of these components has substantial gains in prediction accuracy.", "We evaluated SDNet on CoQA dataset, which improves the previous state-of-the-art model's result by 1.6% (from 75.0% to 76.6%) overall $F_1$ score. The ensemble model further increase the $F_1$ score to $79.3\\%$ . Moreover, SDNet is the first model ever to pass $80\\%$ on CoQA's in-domain dataset." ], [ "In this section, we propose the neural model, SDNet, for the conversational question answering task, which is formulated as follows. Given a passage $\\mathcal {C}$ , and history question and answer utterances $Q_1, A_1, Q_2, A_2, ..., Q_{k-1}, A_{k-1}$ , the task is to generate response $A_k$ given the latest question $Q_k$ . The response is dependent on both the passage and history utterances.", "To incorporate conversation history into response generation, we employ the idea from DrQA+PGNet BIBREF0 to prepend the latest $N$ rounds of utterances to the current question $Q_k$ . The problem is then converted into a machine reading comprehension task. In other words, the reformulate question is $\\mathcal {Q}_k=\\lbrace Q_{k-N}; A_{k-N}; ..., Q_{k-1}; A_{k-1}; Q_k\\rbrace $ . To differentiate between question and answering, we add symbol $\\langle Q \\rangle $ before each question and $\\langle A \\rangle $ before each answer in the experiment." 
], [ "Encoding layer encodes each token in passage and question into a fixed-length vector, which includes both word embeddings and contextualized embeddings. For contextualized embedding, we utilize the latest result from BERT BIBREF5 . Different from previous work, we fix the parameters in BERT model and use the linear combination of embeddings from different layers in BERT.", "Integration layer uses multi-layer recurrent neural networks (RNN) to capture contextual information within passage and question. To characterize the relationship between passage and question, we conduct word-level attention from question to passage both before and after the RNNs. We employ the idea of history-of-word from FusionNet BIBREF6 to reduce the dimension of output hidden vectors. Furthermore, we conduct self-attention to extract relationship between words at different positions of context and question.", "Output layer computes the final answer span. It uses attention to condense the question into a fixed-length vector, which is then used in a bilinear projection to obtain the probability that the answer should start and end at each position.", "An illustration of our model SDNet is in fig:model." ], [ "We use 300-dim GloVe BIBREF7 embedding and contextualized embedding for each word in context and question. We employ BERT BIBREF5 as contextualized embedding. Instead of adding a scoring layer to BERT structure as proposed in BIBREF5 , we use the transformer output from BERT as contextualized embedding in our encoding layer. BERT generates $L$ layers of hidden states for all BPE tokens BIBREF8 in a sentence/passage and we employ a weighted sum of these hidden states to obtain contextualized embedding. Furthermore, we lock BERT's internal weights, setting their gradients to zero. In ablation studies, we will show that this weighted sum and weight-locking mechanism can significantly boost the model's performance.", "In detail, suppose a word $w$ is tokenized to $s$ BPE tokens $w=\\lbrace b_1, b_2, ..., b_s\\rbrace $ , and BERT generates $L$ hidden states for each BPE token, $\\mathbf {h^l_t}, 1\\le l \\le L, 1\\le t \\le s$ . The contextual embedding $\\operatorname{\\mbox{BERT}}_w$ for word $w$ is then a per-layer weighted sum of average BERT embedding, with weights $\\alpha _1, ..., \\alpha _L$ . $\\operatorname{\\mbox{BERT}}_w = \\sum _{l=1}^L \\alpha _l \\frac{\\sum _{t=1}^s \\mathbf {h}^l_t}{s}\n$ " ], [ "Word-level Inter-Attention. We conduct attention from question to context (passage) based on GloVe word embeddings. Suppose the context word embeddings are $\\lbrace {h}^C_1, ..., {h}^C_m\\rbrace \\subset \\mathbb {R}^d$ , and the question word embeddings are $\\lbrace {h}^Q_1, ..., {h}^Q_n\\rbrace \\subset \\mathbb {R}^d$ . Then the attended vectors from question to context are $\\lbrace \\hat{{h}}^C_1, ..., \\hat{{h}}^C_m\\rbrace $ , defined as, $S_{ij} = \\operatornamewithlimits{ReLU}(Uh^C_i)D\\operatornamewithlimits{ReLU}(Uh^Q_j),$ $\\alpha _{ij} \\propto {exp(S_{ij})},$ ", "where $D\\in \\mathbb {R}^{k\\times k}$ is a diagonal matrix and $U\\in \\mathbb {R}^{d\\times k}$ , $k$ is the attention hidden size.", "To simplify notation, we define the attention function above as $\\mbox{Attn}({A}, {B}, {C})$ , meaning we compute the attention score $\\alpha _{ij}$ based on two sets of vectors ${A}$ and ${B}$ , and use that to linearly combine vector set ${C}$ . 
So the word-level attention above can be simplified as $\\mbox{Attn}(\\lbrace {h}^C_i\\rbrace _{i=1}^m, \\lbrace {h}^Q_i\\rbrace _{i=1}^n, \\lbrace {h}^Q_i\\rbrace _{i=1}^n)$ (a schematic code sketch of this attention function is provided after the Implementation Details).", "For each context word in $\\mathcal {C}$ , we also include a feature vector $f_w$ consisting of a 12-dim POS embedding, an 8-dim NER embedding, a 3-dim exact matching vector $em_i$ indicating whether each context word appears in the question, and a normalized term frequency, following the approach in DrQA BIBREF9 .", "Therefore, the input vector for each context word is $\\tilde{{w}}_i^C=[\\operatorname{GloVe}(w_i^C); \\operatorname{\\mbox{BERT}}_{w_i^C}; \\hat{{h}}^C_i; f_{w_i^C}]$ ; the input vector for each question word is $\\tilde{{w}}_i^Q=[\\operatorname{GloVe}(w_i^Q); \\operatorname{\\mbox{BERT}}_{w_i^Q}]$ .", "RNN. In this component, we use two separate bidirectional RNNs (BiLSTMs BIBREF10 ) to form the contextualized understanding for $\\mathcal {C}$ and $\\mathcal {Q}$ . $\n{h}_1^{C,k}, ..., {h}_m^{C,k} = \\operatorname{\\mbox{BiLSTM}}{({h}_1^{C,k-1}, ..., {h}_m^{C,k-1})},\n$ $\n{h}_1^{Q,k}, ..., {h}_n^{Q,k} = \\operatorname{\\mbox{BiLSTM}}{({h}_1^{Q,k-1}, ..., {h}_n^{Q,k-1})},\n$ ", "where $1\\le k \\le K$ and $K$ is the number of RNN layers. We use variational dropout BIBREF11 for the input vector to each RNN layer, i.e., the dropout mask is shared over different timesteps.", "Question Understanding. For each question word in $\\mathcal {Q}$ , we employ one more layer of RNN to generate a higher level of understanding of the question. $\n{h}_1^{Q,K+1}, ..., {h}_n^{Q,K+1} = \\operatorname{\\mbox{BiLSTM}}{({h}_1^{Q}, ..., {h}_n^{Q})},\n$ $\n{h}_i^{Q} = [{h}_i^{Q,1};...;{h}_i^{Q,K}]\n$ ", "Self-Attention on Question. As the question has integrated previous utterances, the model needs to directly relate previously mentioned concepts with the current question. This is helpful for concept carry-over and coreference resolution. We thus employ self-attention on the question. The formula is the same as word-level attention, except that we are attending the question to itself: $\\lbrace {u}_i^Q\\rbrace _{i=1}^n=\\mbox{Attn}(\\lbrace {h}_i^{Q,K+1}\\rbrace _{i=1}^n, \\lbrace {h}_i^{Q,K+1}\\rbrace _{i=1}^n, \\lbrace {h}_i^{Q,K+1}\\rbrace _{i=1}^n)$ . The final question representation is thus $\\lbrace {u}_i^Q\\rbrace _{i=1}^n$ .", "Multilevel Inter-Attention. After multiple layers of RNN extract different levels of understanding of each word, we conduct multilevel attention from question to context based on all layers of generated representations.", "However, the aggregated dimensions can be very large, which is computationally inefficient. We thus leverage the history-of-word idea from FusionNet BIBREF6 : we use all previous levels to compute attention scores, but only linearly combine RNN outputs.", "In detail, we conduct $K+1$ rounds of multilevel attention from each RNN layer output of the question to the context. 
$\n\\lbrace {m}_i^{(k),C}\\rbrace _{i=1}^m=\\mbox{Attn}(\\lbrace \\mbox{HoW}_i^C\\rbrace _{i=1}^m, \\lbrace \\mbox{HoW}_i^Q\\rbrace _{i=1}^n,\\lbrace {h}_i^{Q,k}\\rbrace _{i=1}^n), 1\\le k \\le K+1\n$ ", "where history-of-word vectors are defined as $\\mbox{HoW}_i^C = [\\operatorname{GloVe}(w_i^C); \\operatorname{\\mbox{BERT}}_{w_i^C}; {h}_i^{C,1}; ...; {h}_i^{C,k}],$ $\\mbox{HoW}_i^Q = [\\operatorname{GloVe}(w_i^Q); \\operatorname{\\mbox{BERT}}_{w_i^Q}; {h}_i^{Q,1}; ...; {h}_i^{Q,k}].$ ", "An additional RNN layer is applied to obtain the contextualized representation ${v}_i^C$ for each word in $\\mathcal {C}$ . $\n{y}_i^C = [{h}_i^{C,1}; ...; {h}_i^{C,k}; {m}_i^{(1),C}; ...; {m}_i^{(K+1),C}],\n$ $\n{v}_1^{C}, ..., {v}_m^{C} = \\operatorname{\\mbox{BiLSTM}}{({y}_1^{C}, ..., {y}_m^{C})},\n$ ", "Self-Attention on Context. Similarly to the question, we conduct self-attention on the context to establish direct correlations between all pairs of words in $\\mathcal {C}$ . Again, we use the history-of-word concept to reduce the output dimension by linearly combining ${v}_i^C$ . $\n{s}_i^C = [\\operatorname{GloVe}(w_i^C); \\operatorname{\\mbox{BERT}}_{w_i^C}; {h}_i^{C,1}; ...; {h}_i^{C,k}; {m}_i^{(1),Q}; ...; {m}_i^{(K+1),Q}; {v}_i^C]\n$ $\\lbrace \\tilde{{v}}_i^C\\rbrace _{i=1}^m=\\mbox{Attn}(\\lbrace {s}_i^C\\rbrace _{i=1}^m, \\lbrace {s}_i^C\\rbrace _{i=1}^m, \\lbrace {v}_i^C\\rbrace _{i=1}^m)$ ", "The self-attention is followed by an additional layer of RNN to generate the final representation of the context: $\\lbrace {u}_i^C\\rbrace _{i=1}^m = \\operatorname{\\mbox{BiLSTM}}{([{v}_1^C; \\tilde{{v}}_1^C], ..., [{v}_m^C; \\tilde{{v}}_m^C])}$ " ], [ "Generating Answer Span. This component generates two scores for each context word, corresponding to the probabilities that the answer starts and ends at this word, respectively.", "Firstly, we condense the question representation into one vector: ${u}^Q=\\sum _i{\\beta _i}{u}_i^Q$ , where $\\beta _i\\propto {\\exp {({w}^T{u}_i^Q)}}$ and ${w}$ is a parametrized vector.", "Secondly, we compute the probability that the answer span should start at the $i$ -th word: $P_i^S\\propto {\\exp {(({u}^Q)^TW_S{u}_i^C)}},$ ", "where $W_S$ is a parametrized matrix. We further fuse the start-position probability into the computation of the end-position probability via a GRU, ${t}^Q = \\operatorname{GRU}{({u}^Q, \\sum _i P_i^S{u}_i^C)}$ . Thus, the probability that the answer span should end at the $i$ -th word is: $P_i^E\\propto {\\exp {(({t}^Q)^TW_E{u}_i^C)}},$ ", "where $W_E$ is another parametrized matrix.", "For the CoQA dataset, the answer could be affirmation “yes”, negation “no” or no answer “unknown”. We separately generate three probabilities corresponding to these three scenarios, $P_Y, P_N, P_U$ , respectively. For instance, to generate the probability that the answer is “yes”, $P_Y$ , we use: $P_i^{Y}\\propto {\\exp {(({u}^Q)^T W_{Y}{u}_i^C)}},$ $P_{Y} = (\\sum _i P_i^{Y}{u}_i^C)^T{w}_{Y},$ ", "where $W_Y$ and ${w}_Y$ are a parametrized matrix and vector, respectively.", "Training. For training, we use all questions/answers for one passage as a batch. The goal is to maximize the probability of the ground-truth answer, including the span start/end positions and the affirmation, negation and no-answer situations. 
Equivalently, we minimize the negative log-likelihood function $\\mathcal {L}$ : $\n\\mathcal {L} = \\sum _k I^S_k(\\mbox{log}(P^S_{i_k^s}) + \\mbox{log}(P^E_{i_k^e})) + I^Y_k\\mbox{log}P^Y_k+I^N_k\\mbox{log}P^N_k + I^U_k\\mbox{log}P^U_k,\n$ ", " where $i_k^s$ and $i_k^e$ are the ground-truth span start and end positions for the $k$ -th question. $I^S_k, I^Y_k, I^N_k, I^U_k$ indicate whether the $k$ -th ground-truth answer is a passage span, “yes”, “no” or “unknown”, respectively. More implementation details are in the Appendix.", "Prediction. During inference, we pick the largest span/yes/no/unknown probability. The span is constrained to have a maximum length of 15 (a schematic code sketch of this output layer is provided after the Implementation Details)." ], [ "We evaluated our model on CoQA BIBREF0 , a large-scale conversational question answering dataset. In CoQA, many questions require understanding of both the passage and previous questions and answers, which poses a challenge to conventional machine reading models. table:coqa summarizes the domain distribution in CoQA. As shown, CoQA contains passages from multiple domains, and the average number of question answering turns is more than 15 per passage. Many questions require contextual understanding to generate the correct answer.", "For each in-domain dataset, 100 passages are in the development set, and 100 passages are in the test set. The remaining in-domain passages are in the training set. The test set also includes all of the out-of-domain passages.", "Baseline models and metrics. We compare SDNet with the following baseline models: PGNet (Seq2Seq with copy mechanism) BIBREF12 , DrQA BIBREF9 , DrQA+PGNet BIBREF0 , BiDAF++ BIBREF3 and FlowQA BIBREF4 . Aligned with the official leaderboard, we use $F_1$ as the evaluation metric, which is the harmonic mean of precision and recall at word level between the predicted answer and the ground truth.", "Results. table:mainresult reports the performance of SDNet and the baseline models. As shown, SDNet achieves significantly better results than the baseline models. In detail, the single SDNet model improves overall $F_1$ by 1.6%, compared with the previous state-of-the-art model on CoQA, FlowQA. The ensemble SDNet model further improves overall $F_1$ score by 2.7%, and it is the first model to achieve over 80% $F_1$ score on the in-domain datasets (80.7%).", "fig:epoch shows the $F_1$ score on the development set over epochs. As seen, SDNet surpasses all but one of the baseline models after the second epoch, and achieves state-of-the-art results after only 8 epochs.", "Ablation Studies. We conduct ablation studies on the SDNet model and display the results in table:ablation. The results show that removing BERT reduces the $F_1$ score on the development set by $7.15\\%$ . Our proposed weighted sum of per-layer outputs from BERT is crucial, boosting performance by $1.75\\%$ compared with using only the last layer's output. This shows that the output from each layer in BERT is useful in downstream tasks. This technique can also be applied to other NLP tasks. Using the BERT-base instead of the BERT-large pretrained model hurts the $F_1$ score by $2.61\\%$ , which demonstrates the superiority of the BERT-large model. Variational dropout and self-attention improve the performance by 0.24% and 0.75%, respectively.", "Contextual history. In SDNet, we utilize conversation history by prepending the previous $N$ rounds of questions and ground-truth answers to the current question. We experimented with the effect of $N$ and show the results in table:context. 
Excluding dialogue history ( $N=0$ ) reduces the $F_1$ score by as much as $8.56\\%$ , showing the importance of contextual information in the conversational QA task. The performance of our model peaks when $N=2$ , which was used in the final SDNet model." ], [ "In this paper, we propose a novel contextual attention-based deep neural network, SDNet, to tackle the conversational question answering task. By leveraging inter-attention and self-attention on the passage and the conversation history, the model is able to comprehend the dialogue flow and fuse it with the digestion of the passage content. Furthermore, we incorporate the latest breakthrough in NLP, BERT, and leverage it in an innovative way. SDNet achieves superior results over previous approaches. On the public dataset CoQA, SDNet outperforms the previous state-of-the-art model by 1.6% in the overall $F_1$ metric.", "Our future work is to apply this model to the open-domain multi-turn QA problem with a large corpus or knowledge base, where the target passage may not be directly available. This will be an even more realistic setting, closer to human question answering." ], [ "We use spaCy for tokenization. As BERT uses BPE as its tokenizer, we perform BPE tokenization on each token generated by spaCy. When a spaCy token corresponds to multiple BPE sub-tokens, we average the BERT embeddings of these BPE sub-tokens as the embedding for the token. We fix the BERT weights and use the BERT-Large-Uncased model.", "During training, we use a dropout rate of 0.4 for BERT layer outputs and 0.3 for other layers. We use variational dropout BIBREF11 , which shares the dropout mask over timesteps in the RNN. We batch the data according to passages, so all questions and answers from the same passage form one batch.", "We use Adamax BIBREF13 as the optimizer, with a learning rate of $\\alpha =0.002$ , $\\beta =(0.9, 0.999)$ and $\\epsilon =10^{-8}$ . We train the model for 30 epochs, with each epoch going over the data once. We clip the gradient at a norm of 10.", "The word-level attention has a hidden size of 300. The flow module has a hidden size of 300. The question self-attention has a hidden size of 300. The RNN for both question and context has $K=2$ layers and each layer has a hidden size of 125. The multilevel attention from question to context has a hidden size of 250. The context self-attention has a hidden size of 250. The final layer of the RNN for context has a hidden size of 125. " ] ] }
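To make the encoding and integration layers of SDNet described above more concrete, the following is a minimal PyTorch sketch of (i) the per-layer weighted sum of frozen BERT hidden states used as the contextualized word embedding and (ii) the attention function Attn(A, B, C) with the ReLU(U·) D ReLU(U·) scoring. This is an illustrative sketch under our own assumptions: module and tensor names are ours, the BERT hidden states are assumed to be precomputed and detached (the paper locks BERT's weights), per-word averaging over BPE sub-tokens is omitted, and the softmax normalization of the layer weights is our choice rather than something stated in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WeightedBERT(nn.Module):
        """BERT_w = sum_l alpha_l * (mean of layer-l hidden states over the word's BPE pieces)."""
        def __init__(self, num_layers: int):
            super().__init__()
            self.alpha = nn.Parameter(torch.zeros(num_layers))  # one learnable scalar per BERT layer

        def forward(self, layer_states: torch.Tensor) -> torch.Tensor:
            # layer_states: (L, batch, seq, hidden), already averaged over BPE sub-tokens
            # and detached, since BERT itself stays frozen.
            weights = F.softmax(self.alpha, dim=0)
            return torch.einsum("l,lbsh->bsh", weights, layer_states)

    class WordAttention(nn.Module):
        """Attn(A, B, C): S_ij = ReLU(U a_i) D ReLU(U b_j); output_i = sum_j alpha_ij c_j."""
        def __init__(self, in_dim: int, attn_dim: int):
            super().__init__()
            self.U = nn.Linear(in_dim, attn_dim, bias=False)
            self.d = nn.Parameter(torch.ones(attn_dim))          # diagonal of D

        def forward(self, A, B, C):
            a, b = F.relu(self.U(A)), F.relu(self.U(B))          # (batch, m, k) and (batch, n, k)
            scores = torch.einsum("bmk,k,bnk->bmn", a, self.d, b)
            alpha = F.softmax(scores, dim=-1)                    # alpha_ij proportional to exp(S_ij)
            return torch.bmm(alpha, C)                           # (batch, m, dim of C)

The same WordAttention module can serve the word-level inter-attention, the question and context self-attentions, and the multilevel (history-of-word) attention, since they differ only in which vector sets play the roles of A, B and C.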
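The answer-span output layer can likewise be sketched as below: the start probabilities come from a bilinear product with a condensed question vector, a GRU cell fuses the expected start representation into that vector before scoring end positions, and decoding picks the best span of at most 15 tokens. The yes/no/unknown heads and batching are omitted for brevity; shapes and names are our assumptions, not the authors' code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpanOutputLayer(nn.Module):
        def __init__(self, ctx_dim: int, q_dim: int, max_span_len: int = 15):
            super().__init__()
            self.w = nn.Linear(q_dim, 1, bias=False)             # question summary weights
            self.W_start = nn.Linear(q_dim, ctx_dim, bias=False)
            self.W_end = nn.Linear(q_dim, ctx_dim, bias=False)
            self.gru = nn.GRUCell(ctx_dim, q_dim)
            self.max_span_len = max_span_len

        def forward(self, U_ctx, U_q):
            # U_ctx: (m, ctx_dim) context words; U_q: (n, q_dim) question words (single example).
            beta = F.softmax(self.w(U_q).squeeze(-1), dim=0)
            u_q = beta @ U_q                                     # condensed question vector
            p_start = F.softmax(U_ctx @ self.W_start(u_q), dim=0)
            start_summary = (p_start @ U_ctx).unsqueeze(0)       # expected start representation
            t_q = self.gru(start_summary, u_q.unsqueeze(0)).squeeze(0)
            p_end = F.softmax(U_ctx @ self.W_end(t_q), dim=0)
            return p_start, p_end

        def decode(self, p_start, p_end):
            # Pick the (start, end) pair with the largest probability product,
            # constrained to spans of at most `max_span_len` tokens.
            best, span = -1.0, (0, 0)
            for i in range(len(p_start)):
                for j in range(i, min(i + self.max_span_len, len(p_end))):
                    score = float(p_start[i] * p_end[j])
                    if score > best:
                        best, span = score, (i, j)
            return span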
{ "question": [ "Is the model evaluated on other datasets?", "Does the model incorporate coreference and entailment?", "Is the incorporation of context separately evaluated?" ], "question_id": [ "40a45d59a2ef7a67c8ab0f2b2d5b43fc85b85498", "b29b5c39575454da9566b3dd27707fced8c6f4a1", "4040f5c9f365f9bc80b56dce944ada85bb8b4ab4" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "research", "research", "research" ], "paper_read": [ "somewhat", "somewhat", "somewhat" ], "search_query": [ "BERT", "BERT", "BERT" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "We evaluated SDNet on CoQA dataset, which improves the previous state-of-the-art model's result by 1.6% (from 75.0% to 76.6%) overall $F_1$ score. The ensemble model further increase the $F_1$ score to $79.3\\%$ . Moreover, SDNet is the first model ever to pass $80\\%$ on CoQA's in-domain dataset." ], "highlighted_evidence": [ "We evaluated SDNet on CoQA dataset" ] } ], "annotation_id": [ "d2f44cadcd69899c3f4c4e6f3489ffc23d413e2f" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution." ], "yes_no": null, "free_form_answer": "", "evidence": [ "Self-Attention on Question. As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution. We thus employ self-attention on question. The formula is the same as word-level attention, except that we are attending a question to itself: $\\lbrace {u}_i^Q\\rbrace _{i=1}^n=\\mbox{Attn}(\\lbrace {h}_i^{Q,K+1}\\rbrace _{i=1}^n, \\lbrace {h}_i^{Q,K+1}\\rbrace _{i=1}^n, \\lbrace {h}_i^{Q,K+1}\\rbrace _{i=1}^n)$ . The final question representation is thus $\\lbrace {u}_i^Q\\rbrace _{i=1}^n$ ." ], "highlighted_evidence": [ "Self-Attention on Question. As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution. We thus employ self-attention on question." ] } ], "annotation_id": [ "308e8da146b4f31b11eb57d603afadb71a0ff2d3" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "180a2d433f1daa2afeda15f17f800b148bf50056" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] } ] }
{ "caption": [ "Table 1: Domain distribution in CoQA dataset.", "Table 2: Model and human performance (% in F1 score) on the CoQA test set.", "Figure 1: F1 score on CoQA dev set over training epochs. For BERT base model, as there is no associated paper, we use the number on test set from the leaderboard.", "Table 3: Ablation study of SDNet on CoQA development dataset.", "Table 4: Performance of SDNet on development set when prepending different number of history questions and answers to the question. The model uses BERT-Large contextual embedding and fixes BERT’s weights." ], "file": [ "5-Table1-1.png", "5-Table2-1.png", "6-Figure1-1.png", "6-Table3-1.png", "7-Table4-1.png" ] }
2003.01769
Phonetic Feedback for Speech Enhancement With and Without Parallel Speech Data
While deep learning systems have gained significant ground in speech enhancement research, these systems have yet to make use of the full potential of deep learning systems to provide high-level feedback. In particular, phonetic feedback is rare in speech enhancement research even though it includes valuable top-down information. We use the technique of mimic loss to provide phonetic feedback to an off-the-shelf enhancement system, and find gains in objective intelligibility scores on CHiME-4 data. This technique takes a frozen acoustic model trained on clean speech to provide valuable feedback to the enhancement model, even in the case where no parallel speech data is available. Our work is one of the first to show intelligibility improvement for neural enhancement systems without parallel speech data, and we show phonetic feedback can improve a state-of-the-art neural enhancement system trained with parallel speech data.
{ "section_name": [ "Introduction", "Related Work", "Related Work ::: Perceptual Loss", "Related Work ::: Enhancement Without Parallel Data", "Mimic Loss for Enhancement", "Experiments", "Experiments ::: Without parallel data", "Experiments ::: With parallel data", "Conclusion" ], "paragraphs": [ [ "Typical speech enhancement techniques focus on local criteria for improving speech intelligibility and quality. Time-frequency prediction techniques use local spectral quality estimates as an objective function; time domain methods directly predict clean output with a potential spectral quality metric BIBREF0. Such techniques have been extremely successful in predicting a speech denoising function, but also require parallel clean and noisy speech for training. The trained systems implicitly learn the phonetic patterns of the speech signal in the coordinated output of time-domain or time-frequency units. However, our hypothesis is that directly providing phonetic feedback can be a powerful additional signal for speech enhancement. For example, many local metrics will be more attuned to high-energy regions of speech, but not all phones of a language carry equal energy in production (compare /v/ to /ae/).", "Our proxy for phonetic intelligibility is a frozen automatic speech recognition (ASR) acoustic model trained on clean speech; the loss functions we incorporate into training encourage the speech enhancement system to produce output that is interpretable to a fixed acoustic model as clean speech, by making the output of the acoustic model mimic its behavior under clean speech. This mimic loss BIBREF1 provides key linguistic insights to the enhancement model about what a recognizable phoneme looks like.", "When no parallel data is available, but transcripts are available, a loss is easily computed against hard senone labels and backpropagated to the enhancement model trained from scratch. Since the clean acoustic model is frozen, the only way for the enhancement model to improve the loss is to make a signal that is more recognizable to the acoustic model. The improvement by this model demonstrates the power of phonetic feedback; very few neural enhancement techniques until now have been able to achieve improvements without parallel data.", "When parallel data is available, mimic loss works by comparing the outputs of the acoustic model on clean speech with the outputs of the acoustic model on denoised speech. This is a more informative loss than the loss against hard senone labels, and is complimentary to local losses. We show that mimic loss can be applied to an off-the-shelf enhancement system and gives an improvement in intelligibility scores. Our technique is agnostic to the enhancement system as long as it is differentiably trainable.", "Mimic loss has previously improved performance on robust ASR tasks BIBREF1, but has not yet demonstrated success at enhancement metrics, and has not been used in a non-parallel setting. We seek to demonstrate these advantages here:", "We show that using hard targets in the mimic loss framework leads to improvements in objective intelligibility metrics when no parallel data is available.", "We show that when parallel data is available, training the state-of-the-art method with mimic loss improves objective intelligibility metrics." ], [ "Speech enhancement is a rich field of work with a huge variety of techniques. 
Spectral feature based enhancement systems have focused on masking approaches BIBREF2, and have gained popularity with deep learning techniques BIBREF3 for ideal ratio mask and ideal binary mask estimation BIBREF4." ], [ "Perceptual losses are a form of knowledge transfer BIBREF5, which is defined as the technique of adding auxiliary information at training time to better inform the trained model. The first perceptual loss was introduced for the task of style transfer BIBREF6. These losses depend on a pre-trained network that can disentangle relevant factors. Two examples are fed through the network to generate a loss at a high level of the network. In style transfer, the perceptual loss ensures that the high-level contents of an image remain the same, while allowing the texture of the image to change.", "For speech-related tasks, a perceptual loss has been used to denoise time-domain speech data BIBREF7, where the loss was called a \"deep feature loss\". The perceiving network was trained for acoustic environment detection and domestic audio tagging. The clean and denoised signals are both fed to this network, and a loss is computed at a higher level.", "Perceptual loss has also been used for spectral-domain data, in the mimic loss framework. This has been used for spectral mapping for robust ASR in BIBREF1 and BIBREF8. The perceiving network in this case is an acoustic model trained with senone targets. Clean and denoised spectral features are fed through the acoustic model, and a loss is computed from the outputs of the network. These works did not evaluate mimic loss for speech enhancement, nor did they develop the framework for use without parallel data." ], [ "One approach for enhancement without parallel data introduces an adversarial loss to generate realistic masks BIBREF9. However, this work is only evaluated for ASR performance, and not for speech enhancement performance.", "For the related task of voice conversion, a sparse representation was used by BIBREF10 to perform conversion without parallel data. This was not evaluated on enhancement or ASR metrics, but would be an interesting approach.", "Several recent works have investigated jointly training the acoustic model with a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13, but these works did not evaluate their system on speech enhancement metrics. Indeed, our internal experiments show that without access to the clean data, joint training severely harms performance on these metrics." ], [ "As noted before, we build on the work by Pandey and Wang that denoises the speech signal in the time domain, but computes a mapping loss on the spectral magnitudes of the clean and denoised speech samples. This is possible because the STFT operation for computing the spectral features is fully differentiable. This framework for enhancement lends itself to other spectral processing techniques, such as mimic loss.", "In order to train this off-the-shelf denoiser using the mimic loss objective, we first train an acoustic model on clean spectral magnitudes. The training objective for this model is cross-entropy loss against hard senone targets. Crucially, the weights of the acoustic model are frozen during the training of the enhancement model. This prevents passing information from the enhancement model to the acoustic model in a manner other than by producing a signal that behaves like clean speech. 
This is in contrast to joint training, where the weights of the acoustic model are updated at the same time as the denoising model weights, which usually leads to a degradation in enhancement metrics.", "Without parallel speech examples, we apply the mimic loss framework by using hard senone targets instead of soft targets. The loss against these hard targets is cross-entropy loss ($L_{CE}$). The senone labels can be gathered from a hard alignment of the transcripts with the noisy or denoised features; the process does not require clean speech samples. Since this method only has access to phone alignments and not clean spectra, we do not expect it to improve the speech quality, but expect it to improve intelligibility.", "We also ran experiments on different formats for the mimic loss when parallel data is available. Setting the mapping losses to be $L_1$ was determined to be most effective by Pandey and Wang. For the mimic loss, we tried both teacher-student learning with $L_1$ and $L_2$ losses, and knowledge-distillation with various temperature parameters on the softmax outputs. We found that using $L_1$ loss on the pre-softmax outputs performed the best, likely due to the fact that the other losses are also $L_1$. When the loss types are different, one loss type usually comes to dominate, but each loss serves an important purpose here.", "We provide an example of the effects of mimic loss, both with and without parallel data, by showing the log-mel filterbank features, seen in Figure FIGREF6. A set of relatively high-frequency and low-magnitude features is seen in the highlighted portion of the features. Since local metrics tend to emphasize regions of high energy differences, they miss this important phonetic information. However, in the mimic-loss-trained systems, this information is retained." ], [ "For all experiments, we use the CHiME-4 corpus, a popular corpus for robust ASR experiments, though it has not often been used for enhancement experiments. During training, we randomly select a channel for each example each epoch, and we evaluate our enhancement results on channel 5 of et05.", "Before training the enhancement system, we train the acoustic model used for mimic loss on the clean spectral magnitudes available in CHiME-4. Our architecture is a Wide-ResNet-inspired model, that takes a whole utterance and produces a posterior over each frame. The model has 4 blocks of 3 layers, where the blocks have 128, 256, 512, 1024 filters respectively. The first layer of each block has a stride of 2, down-sampling the input. After the convolutional layers, the filters are divided into 16 parts, and each part is fed to a fully-connected layer, so the number of output posterior vectors is the same as the input frames. This is an utterance-level version of the model in BIBREF8.", "In the case of parallel data, the best results were obtained by training the network for only a few epochs (we used 5). However, when using hard targets, we achieved better results from using the fully-converged network. We suspect that the outputs of the converged network more closely reflect the one-hot nature of the senone labels, which makes training easier for the enhancement model when hard targets are used. On the other hand, only lightly training the acoustic model generates softer targets when parallel data is available.", "For our enhancement model, we began with the state-of-the-art framework introduced by Pandey and Wang in BIBREF0, called AECNN. 
We reproduce the architecture of their system, replacing the PReLU activations with leaky ReLU activations, since the performance is similar, but the leaky ReLU network has fewer parameters." ], [ "We first train this network without the use of parallel data, using only the senone targets, and starting from random weights in the AECNN. In Table TABREF8 we see results for enhancement without parallel data: the cross-entropy loss with senone targets given a frozen clean-speech network is enough to improve eSTOI by 4.3 points. This is a surprising improvement in intelligibility given the lack of parallel data, and demonstrates that phonetic information alone is powerful enough to provide improvements to speech intelligibility metrics. The degradation in SI-SDR performance, a measure of speech quality, is expected, given that the denoising model does not have access to clean data, and may corrupt the phase.", "We also compare against joint training of the enhancement model with the acoustic model. This is a common technique for robust ASR, but has not been evaluated for enhancement. With the hard targets, joint training performs poorly on enhancement, due to co-adaptation of the enhancement and acoustic model networks. Freezing the acoustic model network is critical since it requires the enhancement model to produce speech the acoustic model sees as “clean.”" ], [ "In addition to the setting without any parallel data, we show results given parallel data. In Table TABREF10 we demonstrate that training the AECNN framework with mimic loss improves intelligibility over both the model trained with only the time-domain loss (AECNN-T) and the model trained with both time-domain and spectral-domain losses (AECNN-T-SM). We only see a small improvement in the SI-SDR, likely due to the fact that the mimic loss technique is designed to improve the recognizability of the results. In fact, seeing any improvement in SI-SDR at all is a surprising result.", "We also compare against joint training with an identical setup to the mimic setup (i.e. a combination of three losses: teacher-student loss against the clean outputs, spectral magnitude loss, and time-domain loss); schematic sketches of the no-parallel and parallel loss configurations are provided after the conclusion. The jointly trained acoustic model is initialized with the weights of the system trained on clean speech. We find that joint training performs much better on the enhancement metrics in this setup, though still not quite as well as the mimic setup. Compared to the previous experiment without parallel data, the presence of the spectral magnitude and time-domain losses likely keeps the enhancement output more stable during joint training, at the cost of requiring parallel training data." ], [ "We have shown that phonetic feedback is valuable for speech enhancement systems. In addition, we show that our approach to this feedback, the mimic loss framework, is useful in many settings: with and without parallel data, and in both the enhancement and robust ASR scenarios. Using this framework, we show improvement on a state-of-the-art model for speech enhancement. The methodology is agnostic to the enhancement technique, so it may be applicable to other differentiably trained enhancement modules.", "In the future, we hope to address the reduction in speech quality scores when training without parallel data. One approach may be to add a GAN loss to the denoised time-domain signal, which may help with introduced distortions. 
In addition, we could soften the cross-entropy loss to an $L_1$ loss by generating \"prototypical\" posterior distributions for each senone, averaged across the training dataset. Mimic loss as a framework allows for a rich space of future possibilities. To that end, we have made our code available at http://github.com/OSU-slatelab/mimic-enhance." ] ] }
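To illustrate the mimic loss recipe described in this paper for the case without parallel data, the following is a minimal PyTorch sketch of one training step: a clean-speech acoustic model is frozen, and a cross-entropy loss against hard senone alignments is backpropagated through a time-domain enhancement model. Only the overall recipe (frozen acoustic model, hard senone targets, no clean speech needed) comes from the paper; the function and variable names (`enhancer`, `acoustic_model`, `stft_mag`), the STFT settings and the tensor shapes are our own assumptions.

    import torch
    import torch.nn.functional as F

    def stft_mag(wave, n_fft=512, hop=160):
        """Differentiable magnitude spectrogram (window and hop are assumptions)."""
        window = torch.hann_window(n_fft, device=wave.device)
        spec = torch.stft(wave, n_fft, hop_length=hop, window=window, return_complex=True)
        return spec.abs()

    def mimic_step_no_parallel(enhancer, acoustic_model, noisy_wave, senone_labels, optimizer):
        """One update: only the time-domain enhancer is trained; the acoustic model stays frozen."""
        acoustic_model.eval()
        for p in acoustic_model.parameters():
            p.requires_grad_(False)

        denoised = enhancer(noisy_wave)                  # (batch, samples)
        logits = acoustic_model(stft_mag(denoised))      # assumed shape: (batch, frames, n_senones)
        loss = F.cross_entropy(logits.transpose(1, 2), senone_labels)  # hard senone targets

        optimizer.zero_grad()
        loss.backward()                                  # gradients reach only the enhancer
        optimizer.step()
        return loss.item()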
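When parallel clean speech is available, the same frozen acoustic model supplies the mimic term alongside the local mapping losses; the paper reports that an L1 mimic loss on the pre-softmax outputs works best, combined with L1 time-domain and spectral-magnitude losses. The sketch below (reusing the imports and the `stft_mag` helper from the previous sketch) is again only illustrative: the weighting coefficients are placeholders, not values given in the paper.

    def mimic_loss_with_parallel(enhancer, acoustic_model, noisy_wave, clean_wave,
                                 w_time=1.0, w_mag=1.0, w_mimic=1.0):
        """Combined loss: L1 time-domain + L1 spectral magnitude + L1 mimic on pre-softmax outputs."""
        denoised = enhancer(noisy_wave)

        l_time = F.l1_loss(denoised, clean_wave)
        mag_denoised, mag_clean = stft_mag(denoised), stft_mag(clean_wave)
        l_mag = F.l1_loss(mag_denoised, mag_clean)

        with torch.no_grad():                            # teacher pass on clean speech
            teacher_logits = acoustic_model(mag_clean)
        student_logits = acoustic_model(mag_denoised)    # acoustic model remains frozen
        l_mimic = F.l1_loss(student_logits, teacher_logits)

        return w_time * l_time + w_mag * l_mag + w_mimic * l_mimic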
{ "question": [ "Which frozen acoustic model do they use?", "By how much does using phonetic feedback improve state-of-the-art systems?" ], "question_id": [ "7dce1b64c0040500951c864fce93d1ad7a1809bc", "e1b36927114969f3b759cba056cfb3756de474e4" ], "nlp_background": [ "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no" ], "search_query": [ " ", " " ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Several recent works have investigated jointly training the acoustic model with a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13, but these works did not evaluate their system on speech enhancement metrics. Indeed, our internal experiments show that without access to the clean data, joint training severely harms performance on these metrics." ], "highlighted_evidence": [ "Several recent works have investigated jointly training the acoustic model with a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13, but these works did not evaluate their system on speech enhancement metrics. Indeed, our internal experiments show that without access to the clean data, joint training severely harms performance on these metrics." ] } ], "annotation_id": [ "186ba39454e05f9639db6260d2b306a1537e7783" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Improved AECNN-T by 2.1 and AECNN-T-SM BY 0.9", "evidence": [ "In addition to the setting without any parallel data, we show results given parallel data. In Table TABREF10 we demonstrate that training the AECNN framework with mimic loss improves intelligibility over both the model trained with only time-domain loss (AECNN-T), as well as the model trained with both time-domain and spectral-domain losses (AECNN-T-SM). We only see a small improvement in the SI-SDR, likely due to the fact that the mimic loss technique is designed to improve the recognizablity of the results. In fact, seeing any improvement in SI-SDR at all is a surprising result.", "FLOAT SELECTED: Table 2. Speech enhancement scores for the state-of-the-art system trained with the parallel data available in the CHiME4 corpus. Evaluation is done on channel 5 of the simulation et05 data. Mimic loss is applied to the AECNN model trained with time-domain mapping loss only, as well as time-domain and spectral magnitude mapping losses. The joint training system is done with an identical setup to the mimic system with all three losses." ], "highlighted_evidence": [ "In addition to the setting without any parallel data, we show results given parallel data. In Table TABREF10 we demonstrate that training the AECNN framework with mimic loss improves intelligibility over both the model trained with only time-domain loss (AECNN-T), as well as the model trained with both time-domain and spectral-domain losses (AECNN-T-SM). We only see a small improvement in the SI-SDR, likely due to the fact that the mimic loss technique is designed to improve the recognizablity of the results. In fact, seeing any improvement in SI-SDR at all is a surprising result.", "FLOAT SELECTED: Table 2. 
Speech enhancement scores for the state-of-the-art system trained with the parallel data available in the CHiME4 corpus. Evaluation is done on channel 5 of the simulation et05 data. Mimic loss is applied to the AECNN model trained with time-domain mapping loss only, as well as time-domain and spectral magnitude mapping losses. The joint training system is done with an identical setup to the mimic system with all three losses." ] } ], "annotation_id": [ "764b4ad6a436ff5d072579453ba166f41ace98c0" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ] }
{ "caption": [ "Fig. 1. Operations are listed inside shapes, the circles are operations that are not parameterized, the rectangles represent parameterized operations. The gray operations are not trained, meaning the loss is backpropagated without any updates until the front-end denoiser is reached.", "Fig. 2. Comparison of a short segment of the log-mel filterbank features of utterance M06 441C020F STR from the CHiME-4 corpus. The generation procedure for the features are as follows: (a) noisy, (b) clean, (c) non-parallel mimic, (d) local losses, (e) local + mimic loss. Highlighted is a region enhanced by mimic loss but ignored by local losses.", "Table 1. Speech enhancement scores for the state-of-the-art architecture trained from scratch without the parallel clean speech data from the CHiME-4 corpus. Evaluation is done on channel 5 of the simulated et05 data. The joint training is done with an identical setup to the mimic system.", "Table 2. Speech enhancement scores for the state-of-the-art system trained with the parallel data available in the CHiME4 corpus. Evaluation is done on channel 5 of the simulation et05 data. Mimic loss is applied to the AECNN model trained with time-domain mapping loss only, as well as time-domain and spectral magnitude mapping losses. The joint training system is done with an identical setup to the mimic system with all three losses." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png", "4-Table2-1.png" ] }
1910.04006
Assessing the Efficacy of Clinical Sentiment Analysis and Topic Extraction in Psychiatric Readmission Risk Prediction
Predicting which patients are more likely to be readmitted to a hospital within 30 days after discharge is a valuable piece of information in clinical decision-making. Building a successful readmission risk classifier based on the content of Electronic Health Records (EHRs) has proved, however, to be a challenging task. Previously explored features include mainly structured information, such as sociodemographic data, comorbidity codes and physiological variables. In this paper we assess incorporating additional clinically interpretable NLP-based features such as topic extraction and clinical sentiment analysis to predict early readmission risk in psychiatry patients.
{ "section_name": [ "Introduction and Related Work", "Data", "Feature Extraction", "Feature Extraction ::: Structured Features", "Feature Extraction ::: Unstructured Features", "Experiments and Results", "Conclusions", "Acknowledgments" ], "paragraphs": [ [ "Psychotic disorders affect approximately 2.5-4% of the population BIBREF0 BIBREF1. They are one of the leading causes of disability worldwide BIBREF2 and are a frequent cause of inpatient readmission after discharge BIBREF3. Readmissions are disruptive for patients and families, and are a key driver of rising healthcare costs BIBREF4 BIBREF5. Assessing readmission risk is therefore critically needed, as it can help inform the selection of treatment interventions and implement preventive measures.", "Predicting hospital readmission risk is, however, a complex endeavour across all medical fields. Prior work in readmission risk prediction has used structured data (such as medical comorbidity, prior hospitalizations, sociodemographic factors, functional status, physiological variables, etc) extracted from patients' charts BIBREF6. NLP-based prediction models that extract unstructured data from EHR have also been developed with some success in other medical fields BIBREF7. In Psychiatry, due to the unique characteristics of medical record content (highly varied and context-sensitive vocabulary, abundance of multiword expressions, etc), NLP-based approaches have seldom been applied BIBREF8, BIBREF9, BIBREF10 and strategies to study readmission risk factors primarily rely on clinical observation and manual review BIBREF11 BIBREF12, which is effort-intensive, and does not scale well.", "In this paper we aim to assess the suitability of using NLP-based features like clinical sentiment analysis and topic extraction to predict 30-day readmission risk in psychiatry patients. We begin by describing the EHR corpus that was created using in-house data to train and evaluate our models. We then present the NLP pipeline for feature extraction that was used to parse the EHRs in our corpus. Finally, we compare the performances of our model when using only structured clinical variables and when incorporating features derived from free-text narratives." ], [ "The corpus consists of a collection of 2,346 clinical notes (admission notes, progress notes, and discharge summaries), which amounts to 2,372,323 tokens in total (an average of 1,011 tokens per note). All the notes were written in English and extracted from the EHRs of 183 psychosis patients from McLean Psychiatric Hospital in Belmont, MA, all of whom had in their history at least one instance of 30-day readmission.", "The age of the patients ranged from 20 to 67 (mean = 26.65, standard deviation = 8.73). 51% of the patients were male. The number of admissions per patient ranged from 2 to 21 (mean = 4, standard deviation = 2.85). Each admission contained on average 4.25 notes and 4,298 tokens. In total, the corpus contains 552 admissions, and 280 of those (50%) resulted in early readmissions." ], [ "The readmission risk prediction task was performed at the admission level. An admission consists of a collection of all the clinical notes for a given patient written by medical personnel between inpatient admission and discharge. Every admission was labeled as either `readmitted' (i.e. the patient was readmitted within the next 30 days of discharge) or `not readmitted'. 
Therefore, the classification task consists of creating a single feature representation of all the clinical notes belonging to one admission, plus the past medical history and demographic information of the patient, and establishing whether that admission will be followed by a 30-day readmission or not.", "45 clinically interpretable features per admission were extracted as inputs to the readmission risk classifier. These features can be grouped into three categories (see Table TABREF5 for the complete list of features):", "Sociodemographics: gender, age, marital status, etc.", "Past medical history: number of previous admissions, history of suicidality, average length of stay (up until that admission), etc.", "Information from the current admission: length of stay (LOS), suicidal risk, number and length of notes, time of discharge, evaluation scores, etc.", "The Current Admission feature group has the largest number of features, with 29 features included in this group alone. These features can be further stratified into two groups: `structured' clinical features and `unstructured' clinical features." ], [ "Structured features are features that were identified on the EHR using regular expression matching and include rating scores that have been reported in the psychiatric literature as correlated with increased readmission risk, such as Global Assessment of Functioning, Insight and Compliance:", "Global Assessment of Functioning (GAF): The psychosocial functioning of the patient, ranging from 100 (extremely high functioning) to 1 (severely impaired) BIBREF13.", "Insight: The degree to which the patient recognizes and accepts his/her illness (either Good, Fair or Poor).", "Compliance: The ability of the patient to comply with medication and to follow medical advice (either Yes, Partial, or None).", "These features are widely used in clinical practice and evaluate the general state and prognosis of the patient during the patient's evaluation." ], [ "Unstructured features aim to capture the state of the patient in relation to seven risk factor domains (Appearance, Thought Process, Thought Content, Interpersonal, Substance Use, Occupation, and Mood) from the free-text narratives on the EHR. These seven domains have been identified as associated with readmission risk in prior work BIBREF14.", "These unstructured features include: 1) the relative number of sentences in the admission notes that involve each risk factor domain (out of the total number of sentences within the admission) and 2) clinical sentiment scores for each of these risk factor domains, i.e. sentiment scores that evaluate the patient’s psychosocial functioning level (positive, negative, or neutral) with respect to each of these risk factor domains.", "These sentiment scores were automatically obtained through the topic extraction and sentiment analysis pipeline introduced in our prior work BIBREF15 and pretrained on in-house psychiatric EHR text. In our paper we also showed that this automatic pipeline achieves reasonably strong F-scores, with an overall performance of 0.828 F1 for the topic extraction component and 0.5 F1 on the clinical sentiment component.", "The clinical sentiment scores are computed for every note in the admission.
Figure FIGREF4 details the data analysis pipeline that is employed for the feature extraction.", "First, a multilayer perceptron (MLP) classifier is trained on EHR sentences (8,000,000 sentences consisting of 340,000,000 tokens) that are extracted from the Research Patient Data Registry (RPDR), a centralized regional data repository of clinical data from all institutions in the Partners HealthCare network. These sentences are automatically identified and labeled for their respective risk factor domain(s) by using a lexicon of clinician identified domain-related keywords and multiword expressions, and thus require no manual annotation. The sentences are vectorized using the Universal Sentence Encoder (USE), a transformer attention network pretrained on a large volume of general-domain web data and optimized for greater-than-word length sequences.", "Sentences that are marked for one or more of the seven risk factor domains are then passed to a suite of seven clinical sentiment MLP classifiers (one for each risk factor domain) that are trained on a corpus of 3,500 EHR sentences (63,127 tokens) labeled by a team of three clinicians involved in this project. To prevent overfitting to this small amount of training data, the models are designed to be more generalizable through the use of two hidden layers and a dropout rate BIBREF16 of 0.75.", "The outputs of each clinical sentiment model are then averaged across notes to create a single value for each risk factor domain that corresponds to the patient's level of functioning on a -1 to 1 scale (see Figure 2)." ], [ "We tested six different classification models: Stochastic Gradient Descent, Logistic Regression, C-Support Vector, Decision Tree, Random Forest, and MLP. All of them were implemented and fine-tuned using the scikit-learn machine learning toolkit BIBREF17. Because an accurate readmission risk prediction model is designed to be used to inform treatment decisions, it is important in adopting a model architecture that is clinically interpretable and allows for an analysis of the specific contribution of each feature in the input. As such, we include a Random Forest classifier, which we also found to have the best performance out of the six models.", "To systematically evaluate the importance of the clinical sentiment values extracted from the free text in EHRs, we first build a baseline model using the structured features, which are similar to prior studies on readmission risk prediction BIBREF6. We then compare two models incorporating the unstructured features. In the \"Baseline+Domain Sentences\" model, we consider whether adding the counts of sentences per EHR that involve each of the seven risk factor domains as identified by our topic extraction model improved the model performance. In the \"Baseline+Clinical Sentiment\" model, we evaluate whether adding clinical sentiment scores for each risk factor domain improved the model performance. We also experimented with combining both sets of features and found no additional improvement.", "Each model configuration was trained and evaluated 100 times and the features with the highest importance for each iteration were recorded. To further fine-tune our models, we also perform three-fold cross-validated recursive feature elimination 30 times on each of the three configurations and report the performances of the models with the best performing feature sets. 
These can be found in Table TABREF9.", "Our baseline results show that the model trained using only the structured features produces performance equivalent to that reported by prior models for readmission risk prediction across all healthcare fields BIBREF18. The two models that were trained using unstructured features produced better results, and both outperform the baseline. The \"Baseline+Clinical Sentiment\" model produced the best results, resulting in an F1 of 0.72, an improvement of 14.3% over the baseline.", "In order to establish what features were not relevant in the classification task, we performed recursive feature elimination. We identified 13 feature values as not being predictive of readmission (they were eliminated from at least two of the three feature sets without producing a drop in performance), including: all values for marital status (Single, Married, Other, and Unknown), missing values for GAF at admission, GAF score difference between admission & discharge, GAF at discharge, Veteran status, Race, and Insight & Mode of Past Insight values reflecting a clinically positive change (Good and Improving). Poor Insight values, however, are predictive of readmission." ], [ "We have introduced and assessed the efficacy of adding NLP-based features like topic extraction and clinical sentiment features to traditional structured-feature based classification models for early readmission prediction in psychiatry patients. The approach we have introduced is a hybrid machine learning approach that combines deep learning techniques with linear methods to ensure clinical interpretability of the prediction model.", "Results show not only that both the number of sentences per risk domain and the clinical sentiment analysis scores outperform the structured-feature baseline and contribute significantly to better classification results, but also that the clinical sentiment features produce the highest results in all evaluation metrics (F1 = 0.72).", "These results suggest that clinical sentiment features for each of the seven risk domains extracted from free-text narratives further enhance early readmission prediction. In addition, combining state-of-the-art MLP methods has potential utility in generating clinically meaningful features that can be used in downstream linear models with interpretable and transparent results. In future work, we intend to increase the size of the EHR corpus, increase the demographic spread of patients, and extract new features based on clinical expertise to improve our model performance. Additionally, we intend to continue our clinical sentiment annotation project from BIBREF15 to increase the accuracy of that portion of our NLP pipeline." ], [ "This work was supported by a grant from the National Institute of Mental Health (grant no. 5R01MH109687 to Mei-Hua Hall). We would also like to thank the LOUHI 2019 Workshop reviewers for their constructive and helpful comments." ] ] }
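A minimal sketch of the unstructured-feature pipeline described in this record, assuming Universal Sentence Encoder embeddings, a domain-tagging MLP with two hidden layers and 0.75 dropout, and a set of pre-trained per-domain sentiment scorers (`sentiment_models`). Hidden-layer sizes and the multi-label sigmoid output are assumptions not specified in the text.

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

DOMAINS = ["appearance", "thought_process", "thought_content",
           "interpersonal", "substance_use", "occupation", "mood"]

# Universal Sentence Encoder: 512-dimensional sentence embeddings.
use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

def build_domain_tagger(hidden=256, n_domains=len(DOMAINS)):
    """MLP over USE embeddings; one sigmoid output per risk-factor domain."""
    return tf.keras.Sequential([
        tf.keras.layers.Dense(hidden, activation="relu", input_shape=(512,)),
        tf.keras.layers.Dropout(0.75),
        tf.keras.layers.Dense(hidden, activation="relu"),
        tf.keras.layers.Dropout(0.75),
        tf.keras.layers.Dense(n_domains, activation="sigmoid"),
    ])

def admission_sentiment_features(notes, tagger, sentiment_models, thresh=0.5):
    """Average per-domain sentiment over all sentences of one admission.

    notes: list of notes, each a list of sentence strings.
    sentiment_models: dict domain -> pre-trained scorer returning values in
    [-1, 1] (assumed; the paper trains one MLP per domain on labelled sentences).
    """
    scores = {d: [] for d in DOMAINS}
    for sentences in notes:
        emb = use(sentences).numpy()               # (n_sentences, 512)
        probs = tagger.predict(emb, verbose=0)     # (n_sentences, 7)
        for i, d in enumerate(DOMAINS):
            hits = emb[probs[:, i] > thresh]       # sentences tagged for domain d
            if len(hits):
                scores[d].extend(sentiment_models[d].predict(hits, verbose=0).ravel())
    return {d: float(np.mean(v)) if v else 0.0 for d, v in scores.items()}
```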
{ "question": [ "What features are used?", "Do they compare to previous models?", "How do they incorporate sentiment analysis?", "What is the dataset used?", "How do they extract topics?" ], "question_id": [ "186ccc18c6361904bee0d58196e341a719fb31c2", "fd5412e2784acefb50afc3bfae1e087580b90ab9", "c7f087c78768d5c6f3ff26921858186d627fd4fd", "82596190560dc2e2ced2131779730f40a3f3eb8c", "345f65eaff1610deecb02ff785198aa531648e75" ], "nlp_background": [ "five", "five", "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ "Sentiment Analysis", "Sentiment Analysis", "Sentiment Analysis", "Sentiment Analysis", "Sentiment Analysis" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Sociodemographics: gender, age, marital status, etc.", "Past medical history: number of previous admissions, history of suicidality, average length of stay (up until that admission), etc.", "Information from the current admission: length of stay (LOS), suicidal risk, number and length of notes, time of discharge, evaluation scores, etc." ], "yes_no": null, "free_form_answer": "", "evidence": [ "45 clinically interpretable features per admission were extracted as inputs to the readmission risk classifier. These features can be grouped into three categories (See Table TABREF5 for complete list of features):", "Sociodemographics: gender, age, marital status, etc.", "Past medical history: number of previous admissions, history of suicidality, average length of stay (up until that admission), etc.", "Information from the current admission: length of stay (LOS), suicidal risk, number and length of notes, time of discharge, evaluation scores, etc.", "The Current Admission feature group has the most number of features, with 29 features included in this group alone. These features can be further stratified into two groups: `structured' clinical features and `unstructured' clinical features.", "Feature Extraction ::: Structured Features", "Structure features are features that were identified on the EHR using regular expression matching and include rating scores that have been reported in the psychiatric literature as correlated with increased readmission risk, such as Global Assessment of Functioning, Insight and Compliance:", "Global Assessment of Functioning (GAF): The psychosocial functioning of the patient ranging from 100 (extremely high functioning) to 1 (severely impaired) BIBREF13.", "Insight: The degree to which the patient recognizes and accepts his/her illness (either Good, Fair or Poor).", "Compliance: The ability of the patient to comply with medication and to follow medical advice (either Yes, Partial, or None).", "These features are widely-used in clinical practice and evaluate the general state and prognosis of the patient during the patient's evaluation.", "Feature Extraction ::: Unstructured Features", "Unstructured features aim to capture the state of the patient in relation to seven risk factor domains (Appearance, Thought Process, Thought Content, Interpersonal, Substance Use, Occupation, and Mood) from the free-text narratives on the EHR. 
These seven domains have been identified as associated with readmission risk in prior work BIBREF14.", "These unstructured features include: 1) the relative number of sentences in the admission notes that involve each risk factor domain (out of total number of sentences within the admission) and 2) clinical sentiment scores for each of these risk factor domains, i.e. sentiment scores that evaluate the patient’s psychosocial functioning level (positive, negative, or neutral) with respect to each of these risk factor domain." ], "highlighted_evidence": [ "45 clinically interpretable features per admission were extracted as inputs to the readmission risk classifier. These features can be grouped into three categories (See Table TABREF5 for complete list of features):\n\nSociodemographics: gender, age, marital status, etc.\n\nPast medical history: number of previous admissions, history of suicidality, average length of stay (up until that admission), etc.\n\nInformation from the current admission: length of stay (LOS), suicidal risk, number and length of notes, time of discharge, evaluation scores, etc.\n\nThe Current Admission feature group has the most number of features, with 29 features included in this group alone. These features can be further stratified into two groups: `structured' clinical features and `unstructured' clinical features.\n\nFeature Extraction ::: Structured Features\nStructure features are features that were identified on the EHR using regular expression matching and include rating scores that have been reported in the psychiatric literature as correlated with increased readmission risk, such as Global Assessment of Functioning, Insight and Compliance:\n\nGlobal Assessment of Functioning (GAF): The psychosocial functioning of the patient ranging from 100 (extremely high functioning) to 1 (severely impaired) BIBREF13.\n\nInsight: The degree to which the patient recognizes and accepts his/her illness (either Good, Fair or Poor).\n\nCompliance: The ability of the patient to comply with medication and to follow medical advice (either Yes, Partial, or None).\n\nThese features are widely-used in clinical practice and evaluate the general state and prognosis of the patient during the patient's evaluation.\n\nFeature Extraction ::: Unstructured Features\nUnstructured features aim to capture the state of the patient in relation to seven risk factor domains (Appearance, Thought Process, Thought Content, Interpersonal, Substance Use, Occupation, and Mood) from the free-text narratives on the EHR. These seven domains have been identified as associated with readmission risk in prior work BIBREF14.\n\nThese unstructured features include: 1) the relative number of sentences in the admission notes that involve each risk factor domain (out of total number of sentences within the admission) and 2) clinical sentiment scores for each of these risk factor domains, i.e. sentiment scores that evaluate the patient’s psychosocial functioning level (positive, negative, or neutral) with respect to each of these risk factor domain." 
] } ], "annotation_id": [ "658b3303c18a65586574da4f2bbf426bda6d68f7" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "To systematically evaluate the importance of the clinical sentiment values extracted from the free text in EHRs, we first build a baseline model using the structured features, which are similar to prior studies on readmission risk prediction BIBREF6. We then compare two models incorporating the unstructured features. In the \"Baseline+Domain Sentences\" model, we consider whether adding the counts of sentences per EHR that involve each of the seven risk factor domains as identified by our topic extraction model improved the model performance. In the \"Baseline+Clinical Sentiment\" model, we evaluate whether adding clinical sentiment scores for each risk factor domain improved the model performance. We also experimented with combining both sets of features and found no additional improvement." ], "highlighted_evidence": [ "To systematically evaluate the importance of the clinical sentiment values extracted from the free text in EHRs, we first build a baseline model using the structured features, which are similar to prior studies on readmission risk prediction BIBREF6." ] } ], "annotation_id": [ "b244d95d600b9314b69b365e94acec063cbb12ac" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "features per admission were extracted as inputs to the readmission risk classifier" ], "yes_no": null, "free_form_answer": "", "evidence": [ "These unstructured features include: 1) the relative number of sentences in the admission notes that involve each risk factor domain (out of total number of sentences within the admission) and 2) clinical sentiment scores for each of these risk factor domains, i.e. sentiment scores that evaluate the patient’s psychosocial functioning level (positive, negative, or neutral) with respect to each of these risk factor domain.", "These sentiment scores were automatically obtained through the topic extraction and sentiment analysis pipeline introduced in our prior work BIBREF15 and pretrained on in-house psychiatric EHR text. In our paper we also showed that this automatic pipeline achieves reasonably strong F-scores, with an overall performance of 0.828 F1 for the topic extraction component and 0.5 F1 on the clinical sentiment component.", "45 clinically interpretable features per admission were extracted as inputs to the readmission risk classifier. These features can be grouped into three categories (See Table TABREF5 for complete list of features):" ], "highlighted_evidence": [ "These unstructured features include: 1) the relative number of sentences in the admission notes that involve each risk factor domain (out of total number of sentences within the admission) and 2) clinical sentiment scores for each of these risk factor domains, i.e. sentiment scores that evaluate the patient’s psychosocial functioning level (positive, negative, or neutral) with respect to each of these risk factor domain.\n\nThese sentiment scores were automatically obtained through the topic extraction and sentiment analysis pipeline introduced in our prior work BIBREF15 and pretrained on in-house psychiatric EHR text.", "45 clinically interpretable features per admission were extracted as inputs to the readmission risk classifier." 
] } ], "annotation_id": [ "1875fb181ab80cb90655b8845f78e5cf03a68d05" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "EHRs of 183 psychosis patients from McLean Psychiatric Hospital in Belmont, MA" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The corpus consists of a collection of 2,346 clinical notes (admission notes, progress notes, and discharge summaries), which amounts to 2,372,323 tokens in total (an average of 1,011 tokens per note). All the notes were written in English and extracted from the EHRs of 183 psychosis patients from McLean Psychiatric Hospital in Belmont, MA, all of whom had in their history at least one instance of 30-day readmission." ], "highlighted_evidence": [ "The corpus consists of a collection of 2,346 clinical notes (admission notes, progress notes, and discharge summaries), which amounts to 2,372,323 tokens in total (an average of 1,011 tokens per note). All the notes were written in English and extracted from the EHRs of 183 psychosis patients from McLean Psychiatric Hospital in Belmont, MA, all of whom had in their history at least one instance of 30-day readmission." ] } ], "annotation_id": [ "3526cba19058113df1b2da4e06c3080939587174" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " automatically obtained through the topic extraction and sentiment analysis pipeline introduced in our prior work BIBREF15" ], "yes_no": null, "free_form_answer": "", "evidence": [ "These sentiment scores were automatically obtained through the topic extraction and sentiment analysis pipeline introduced in our prior work BIBREF15 and pretrained on in-house psychiatric EHR text. In our paper we also showed that this automatic pipeline achieves reasonably strong F-scores, with an overall performance of 0.828 F1 for the topic extraction component and 0.5 F1 on the clinical sentiment component." ], "highlighted_evidence": [ "These sentiment scores were automatically obtained through the topic extraction and sentiment analysis pipeline introduced in our prior work BIBREF15 and pretrained on in-house psychiatric EHR text." ] } ], "annotation_id": [ "a1483e59b612e5d67a2bca140d3a54c84cb5b6e8" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Table 1: Extracted features by category.", "Figure 1: NLP pipeline for feature extraction.", "Figure 2: Model architecture for USE embedding generation and unstructured feature extraction. Dotted arrows indicate operations that are performed only on sentences marked for 1+ risk factor domain(s). USE top-layer weights are fine-tuned during training.", "Table 2: Results (in ascending order)" ], "file": [ "2-Table1-1.png", "3-Figure1-1.png", "3-Figure2-1.png", "4-Table2-1.png" ] }
1611.04361
Attending to Characters in Neural Sequence Labeling Models
Sequence labeling architectures use word embeddings for capturing similarity, but suffer when handling previously unseen or rare words. We investigate character-level extensions to such models and propose a novel architecture for combining alternative word representations. By using an attention mechanism, the model is able to dynamically decide how much information to use from a word- or character-level component. We evaluated different architectures on a range of sequence labeling datasets, and character-level extensions were found to improve performance on every benchmark. In addition, the proposed attention-based architecture delivered the best results even with a smaller number of trainable parameters.
{ "section_name": [ "Introduction", "Bidirectional LSTM for sequence labeling", "Character-level sequence labeling", "Attention over character features", "Datasets", "Experiment settings", "Results", "Related work", "Conclusion" ], "paragraphs": [ [ " This work is licenced under a Creative Commons Attribution 4.0 International Licence.", "Licence details: http://creativecommons.org/licenses/by/4.0/", "Many NLP tasks, including named entity recognition (NER), part-of-speech (POS) tagging and shallow parsing can be framed as types of sequence labeling. The development of accurate and efficient sequence labeling models is thereby useful for a wide range of downstream applications. Work in this area has traditionally involved task-specific feature engineering – for example, integrating gazetteers for named entity recognition, or using features from a morphological analyser in POS-tagging. Recent developments in neural architectures and representation learning have opened the door to models that can discover useful features automatically from the data. Such sequence labeling systems are applicable to many tasks, using only the surface text as input, yet are able to achieve competitive results BIBREF0 , BIBREF1 .", "Current neural models generally make use of word embeddings, which allow them to learn similar representations for semantically or functionally similar words. While this is an important improvement over count-based models, they still have weaknesses that should be addressed. The most obvious problem arises when dealing with out-of-vocabulary (OOV) words – if a token has never been seen before, then it does not have an embedding and the model needs to back-off to a generic OOV representation. Words that have been seen very infrequently have embeddings, but they will likely have low quality due to lack of training data. The approach can also be sub-optimal in terms of parameter usage – for example, certain suffixes indicate more likely POS tags for these words, but this information gets encoded into each individual embedding as opposed to being shared between the whole vocabulary.", "In this paper, we construct a task-independent neural network architecture for sequence labeling, and then extend it with two different approaches for integrating character-level information. By operating on individual characters, the model is able to infer representations for previously unseen words and share information about morpheme-level regularities. We propose a novel architecture for combining character-level representations with word embeddings using a gating mechanism, also referred to as attention, which allows the model to dynamically decide which source of information to use for each word. In addition, we describe a new objective for model training where the character-level representations are optimised to mimic the current state of word embeddings.", "We evaluate the neural models on 8 datasets from the fields of NER, POS-tagging, chunking and error detection in learner texts. Our experiments show that including a character-based component in the sequence labeling model provides substantial performance improvements on all the benchmarks. In addition, the attention-based architecture achieves the best results on all evaluations, while requiring a smaller number of parameters." 
], [ "We first describe a basic word-level neural network for sequence labeling, following the models described by Lample2016 and Rei2016, and then propose two alternative methods for incorporating character-level information.", "Figure 1 shows the general architecture of the sequence labeling network. The model receives a sequence of tokens $(w_1, ..., w_T)$ as input, and predicts a label corresponding to each of the input tokens. The tokens are first mapped to a distributed vector space, resulting in a sequence of word embeddings $(x_1, ..., x_T)$ . Next, the embeddings are given as input to two LSTM BIBREF2 components moving in opposite directions through the text, creating context-specific representations. The respective forward- and backward-conditioned representations are concatenated for each word position, resulting in representations that are conditioned on the whole sequence: ", "$$\\overrightarrow{h_t} = LSTM(x_t, \\overrightarrow{h_{t-1}})\\hspace{30.0pt}\n\\overleftarrow{h_t} = LSTM(x_t, \\overleftarrow{h_{t+1}})\\hspace{30.0pt}\nh_t = [\\overrightarrow{h_t};\\overleftarrow{h_t}]$$ (Eq. 1) ", "We include an extra narrow hidden layer on top of the LSTM, which proved to be a useful modification based on development experiments. An additional hidden layer allows the model to detect higher-level feature combinations, while constraining it to be small forces it to focus on more generalisable patterns: ", "$$d_t = tanh(W_d h_t)$$ (Eq. 2) ", "where $W_d$ is a weight matrix between the layers, and the size of $d_t$ is intentionally kept small.", "Finally, to produce label predictions, we use either a softmax layer or a conditional random field (CRF, Lafferty2001). The softmax calculates a normalised probability distribution over all the possible labels for each word: ", "$$P(y_t = k | d_t) = \\frac{e^{W_{o,k} d_t}}{\\sum _{\\tilde{k} \\in K} e^{W_{o,\\tilde{k}} d_t}}$$ (Eq. 3) ", "where $P (y_t = k|d_t )$ is the probability of the label of the $t$ -th word ( $y_t$ ) being $k$ , $K$ is the set of all possible labels, and $W_{o,k}$ is the $k$ -th row of output weight matrix $W_o$ . To optimise this model, we minimise categorical crossentropy, which is equivalent to minimising the negative log-probability of the correct labels: ", "$$E = - \\sum _{t=1}^{T} log(P(y_t| d_t))$$ (Eq. 4) ", "Following Huang2015, we can also use a CRF as the output layer, which conditions each prediction on the previously predicted label. In this architecture, the last hidden layer is used to predict confidence scores for the word having each of the possible labels. A separate weight matrix is used to learn transition probabilities between different labels, and the Viterbi algorithm is used to find an optimal sequence of weights. Given that $y$ is a sequence of labels $[y_1, ..., y_T]$ , then the CRF score for this sequence can be calculated as: ", "$$s(y) = \\sum _{t=1}^T A_{t,y_t} + \\sum _{t=0}^T B_{y_t,y_{t+1}}$$ (Eq. 5) ", "$$A_{t,y_t} = W_{o,y_t} d_t$$ (Eq. 6) ", "where $A_{t,y_t}$ shows how confident the network is that the label on the $t$ -th word is $y_t$ . $B_{y_t,y_{t+1}}$ shows the likelihood of transitioning from label $y_t$ to label $y_{t+1}$ , and these values are optimised during training. The output from the model is the sequence of labels with the largest score $s(y)$ , which can be found efficiently using the Viterbi algorithm. 
In order to optimise the CRF model, the loss function maximises the score for the correct label sequence, while minimising the scores for all other sequences: ", "$$E = - s(y) + log \\sum _{\\tilde{y} \\in \\widetilde{Y}} e^{s(\\tilde{y})}$$ (Eq. 7) ", "where $\\widetilde{Y}$ is the set of all possible label sequences." ], [ "Distributed embeddings map words into a space where semantically similar words have similar vector representations, allowing the models to generalise better. However, they still treat words as atomic units and ignore any surface- or morphological similarities between different words. By constructing models that operate over individual characters in each word, we can take advantage of these regularities. This can be particularly useful for handling unseen words – for example, if we have never seen the word cabinets before, a character-level model could still infer a representation for this word if it has previously seen the word cabinet and other words with the suffix -s. In contrast, a word-level model can only represent this word with a generic out-of-vocabulary representation, which is shared between all other unseen words.", "Research into character-level models is still in fairly early stages, and models that operate exclusively on characters are not yet competitive to word-level models on most tasks. However, instead of fully replacing word embeddings, we are interested in combining the two approaches, thereby allowing the model to take advantage of information at both granularity levels. The general outline of our approach is shown in Figure 2 . Each word is broken down into individual characters, these are then mapped to a sequence of character embeddings $(c_1, ..., c_R)$ , which are passed through a bidirectional LSTM: ", "$$\\overrightarrow{h^*_i} = LSTM(c_i, \\overrightarrow{h^*_{i-1}}) \\hspace{30.0pt}\n\\overleftarrow{h^*_i} = LSTM(c_i, \\overleftarrow{h^*_{i+1}})$$ (Eq. 9) ", "We then use the last hidden vectors from each of the LSTM components, concatenate them together, and pass the result through a separate non-linear layer. ", "$$h^* = [\\overrightarrow{h^*_R};\\overleftarrow{h^*_1}] \\hspace{30.0pt}\nm = tanh(W_m h^*)$$ (Eq. 10) ", "where $W_m$ is a weight matrix mapping the concatenated hidden vectors from both LSTMs into a joint word representation $m$ , built from individual characters.", "We now have two alternative feature representations for each word – $x_t$ from Section \"Bidirectional LSTM for sequence labeling\" is an embedding learned on the word level, and $m^{(t)}$ is a representation dynamically built from individual characters in the $t$ -th word of the input text. Following Lample2016, one possible approach is to concatenate the two vectors and use this as the new word-level representation for the sequence labeling model: ", "$$\\widetilde{x} = [x; m]$$ (Eq. 11) ", "This approach, also illustrated in Figure 2 , assumes that the word-level and character-level components learn somewhat disjoint information, and it is beneficial to give them separately as input to the sequence labeler." ], [ "Alternatively, we can have the word embedding and the character-level component learn the same semantic features for each word. 
Instead of concatenating them as alternative feature sets, we specifically construct the network so that they would learn the same representations, and then allow the model to decide how to combine the information for each specific word.", "We first construct the word representation from characters using the same architecture – a bidirectional LSTM operates over characters, and the last hidden states are used to create vector $m$ for the input word. Instead of concatenating this with the word embedding, the two vectors are added together using a weighted sum, where the weights are predicted by a two-layer network: ", "$$z = \\sigma (W^{(3)}_z tanh(W^{(1)}_{z} x + W^{(2)}_{z} m)) \\hspace{30.0pt}\n\\widetilde{x} = z\\cdot x + (1-z) \\cdot m$$ (Eq. 13) ", "where $W^{(1)}_{z}$ , $W^{(2)}_{z}$ and $W^{(3)}_{z}$ are weight matrices for calculating $z$ , and $\\sigma ()$ is the logistic function with values in the range $[0,1]$ . The vector $z$ has the same dimensions as $x$ or $m$ , acting as the weight between the two vectors. It allows the model to dynamically decide how much information to use from the character-level component or from the word embedding. This decision is done for each feature separately, which adds extra flexibility – for example, words with regular suffixes can share some character-level features, whereas irregular words can store exceptions in word embeddings. Furthermore, previously unknown words are able to use character-level regularities whenever possible, and are still able to revert to using the generic OOV token when necessary.", "The main benefits of character-level modeling are expected to come from improved handling of rare and unseen words, whereas frequent words are likely able to learn high-quality word-level embeddings directly. We would like to take advantage of this, and train the character component to predict these word embeddings. Our attention-based architecture requires the learned features in both word representations to align, and we can add an extra constraint to encourage this. During training, we add a term to the loss function that optimises the vector $m$ to be similar to the word embedding $x$ : ", "$$\\widetilde{E} = E + \\sum _{t=1}^{T} g_t (1 - cos(m^{(t)}, x_t)) \\hspace{30.0pt}\ng_t =\n{\\left\\lbrace \\begin{array}{ll}\n0, & \\text{if}\\ w_t = OOV \\\\\n1, & \\text{otherwise}\n\\end{array}\\right.}$$ (Eq. 14) ", "Equation 14 maximises the cosine similarity between $m^{(t)}$ and $x_t$ . Importantly, this is done only for words that are not out-of-vocabulary – we want the character-level component to learn from the word embeddings, but this should exclude the OOV embedding, as it is shared between many words. We use $g_t$ to set this cost component to 0 for any OOV tokens.", "While the character component learns general regularities that are shared between all the words, individual word embeddings provide a way for the model to store word-specific information and any exceptions. Therefore, while we want the character-based model to shift towards predicting high-quality word embeddings, it is not desirable to optimise the word embeddings towards the character-level representations. This can be achieved by making sure that the optimisation is performed only in one direction; in Theano BIBREF3 , the disconnected_grad function gives the desired effect." ], [ "We evaluate the sequence labeling models and character architectures on 8 different datasets.
Table 1 contains information about the number of labels and dataset sizes for each of them." ], [ "For data preprocessing, all digits were replaced with the character '0'. Any words that occurred only once in the training data were replaced by the generic OOV token for word embeddings, but were still used in the character-level components. The word embeddings were initialised with publicly available pretrained vectors, created using word2vec BIBREF12 , and then fine-tuned during model training. For the general-domain datasets we used 300-dimensional vectors trained on Google News; for the biomedical datasets we used 200-dimensional vectors trained on PubMed and PMC. The embeddings for characters were set to length 50 and initialised randomly.", "The LSTM layer size was set to 200 in each direction for both word- and character-level components. The hidden layer $d$ has size 50, and the combined representation $m$ has the same length as the word embeddings. CRF was used as the output layer for all the experiments – we found that this gave the most benefit to tasks with larger numbers of possible labels. Parameters were optimised using AdaDelta BIBREF13 with default learning rate $1.0$ and sentences were grouped into batches of size 64. Performance on the development set was measured at every epoch and training was stopped if performance had not improved for 7 epochs; the best-performing model on the development set was then used for evaluation on the test set. In order to avoid any outlier results due to randomness in the model initialisation, we trained each configuration with 10 different random seeds and present here the averaged results.", "When evaluating on each dataset, we report the measures established in previous work. Token-level accuracy is used for PTB-POS and GENIA-POS; $F_{0.5}$ score over the erroneous words for FCEPUBLIC; the official evaluation script for BC2GM, which allows for alternative correct entity spans; and microaveraged mention-level $F_{1}$ score for the remaining datasets." ], [ "While optimising the hyperparameters for each dataset separately would likely improve individual performance, we conduct more controlled experiments on a task-independent model. Therefore, we use the same hyperparameters from Section \"Experiment settings\" on all datasets, and the development set is only used for the stopping condition. With these experiments, we wish to determine 1) on which sequence labeling tasks character-based models offer an advantage, and 2) which character-based architecture performs better.", "Results for the different model architectures on all 8 datasets are shown in Table 2 . As can be seen, including a character-based component in the sequence labeling architecture improves performance on every benchmark. The NER datasets have the largest absolute improvement – the model is able to learn character-level patterns for names, and also improve the handling of any previously unseen tokens.", "Compared to concatenating the word- and character-level representations, the attention-based character model outperforms the former on all evaluations. The mechanism for dynamically deciding how much character-level information to use allows the model to better handle individual word representations, giving it an advantage in the experiments.
Visualisation of the attention values in Figure 3 shows that the model is actively using character-based features, and the attention areas vary between different words.", "The results of this general tagging architecture are competitive, even when compared to previous work using hand-crafted features. The network achieves 97.27% on PTB-POS compared to 97.55% by Huang2015, and 72.70% on JNLPBA compared to 72.55% by Zhou2004. In some cases, we are also able to beat the previous best results – 87.99% on BC2GM compared to 87.48% by Campos2015, and 41.88% on FCEPUBLIC compared to 41.1% by Rei2016. Lample2016 report a considerably higher result of 90.94% on CoNLL03, indicating that the chosen hyperparameters for the baseline system are suboptimal for this specific task. Compared to the experiments presented here, their model used the IOBES tagging scheme instead of the original IOB, and embeddings pretrained with a more specialised method that accounts for word order.", "It is important to also compare the parameter counts of alternative neural architectures, as this shows their learning capacity and indicates their time requirements in practice. Table 3 contains the parameter counts on three representative datasets. While keeping the model hyperparameters constant, the character-level models require additional parameters for the character composition and character embeddings. However, the attention-based model uses fewer parameters compared to the concatenation approach. When the two representations are concatenated, the overall word representation size is increased, which in turn increases the number of parameters required for the word-level bidirectional LSTM. Therefore, the attention-based character architecture achieves improved results even with a smaller parameter footprint." ], [ "There is a wide range of previous work on constructing and optimising neural architectures applicable to sequence labeling. Collobert2011 described one of the first task-independent neural tagging models using convolutional neural networks. They were able to achieve good results on POS tagging, chunking, NER and semantic role labeling, without relying on hand-engineered features. Irsoy2014a experimented with multi-layer bidirectional Elman-style recurrent networks, and found that the deep models outperformed conditional random fields on the task of opinion mining. Huang2015 described a bidirectional LSTM model with a CRF layer, which included hand-crafted features specialised for the task of named entity recognition. Rei2016 evaluated a range of neural architectures, including convolutional and recurrent networks, on the task of error detection in learner writing. The word-level sequence labeling model described in this paper follows the previous work, combining useful design choices from each of them. In addition, we extended the model with two alternative character-level architectures, and evaluated its performance on 8 different datasets.", "Character-level models have the potential of capturing morpheme patterns, thereby improving generalisation on both frequent and unseen words. In recent years, there has been an increase in research into these models, resulting in several interesting applications. Ling2015b described a character-level neural model for machine translation, performing both encoding and decoding on individual characters. Kim2016 implemented a language model where encoding is performed by a convolutional network and LSTM over characters, whereas predictions are given on the word-level. 
Cao2016 proposed a method for learning both word embeddings and morphological segmentation with a bidirectional recurrent network over characters. There is also research on performing parsing BIBREF14 and text classification BIBREF15 with character-level neural models. Ling2015a proposed a neural architecture that replaces word embeddings with dynamically-constructed character-based representations. We applied a similar method for operating over characters, but combined them with word embeddings instead of replacing them, as this allows the model to benefit from both approaches. Lample2016 described a model where the character-level representation is combined with word embeddings through concatenation. In this work, we proposed an alternative architecture, where the representations are combined using an attention mechanism, and evaluated both approaches on a range of tasks and datasets. Recently, Miyamoto2016 have also described a related method for the task of language modelling, combining characters and word embeddings using gating." ], [ "Developments in neural network research allow for model architectures that work well on a wide range of sequence labeling datasets without requiring hand-crafted data. While word-level representation learning is a powerful tool for automatically discovering useful features, these models still come with certain weaknesses – rare words have low-quality representations, previously unseen words cannot be modeled at all, and morpheme-level information is not shared with the whole vocabulary.", "In this paper, we investigated character-level model components for a sequence labeling architecture, which allow the system to learn useful patterns from sub-word units. In addition to a bidirectional LSTM operating over words, a separate bidirectional LSTM is used to construct word representations from individual characters. We proposed a novel architecture for combining the character-based representation with the word embedding by using an attention mechanism, allowing the model to dynamically choose which information to use from each information source. In addition, the character-level composition function is augmented with a novel training objective, optimising it to predict representations that are similar to the word embeddings in the model.", "The evaluation was performed on 8 different sequence labeling datasets, covering a range of tasks and domains. We found that incorporating character-level information into the model improved performance on every benchmark, indicating that capturing features regarding characters and morphemes is indeed useful in a general-purpose tagging system. In addition, the attention-based model for combining character representations outperformed the concatenation method used in previous work in all evaluations. Even though the proposed method requires fewer parameters, the added ability to control how much character-level information is used for each word has led to improved performance on a range of different tasks." ] ] }
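A sketch of the attention-based combination (Eq. 13) and the auxiliary character-to-word objective (Eq. 14) from this record, rendered in PyTorch; the original implementation used Theano, so this is an illustrative re-implementation rather than the authors' code, and the absence of bias terms simply follows the equations as written.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharWordAttention(nn.Module):
    """Combine a word embedding x with a character-composed vector m (Eq. 13)."""
    def __init__(self, dim):
        super().__init__()
        self.w1 = nn.Linear(dim, dim, bias=False)   # W_z^(1)
        self.w2 = nn.Linear(dim, dim, bias=False)   # W_z^(2)
        self.w3 = nn.Linear(dim, dim, bias=False)   # W_z^(3)

    def forward(self, x, m):
        z = torch.sigmoid(self.w3(torch.tanh(self.w1(x) + self.w2(m))))
        return z * x + (1.0 - z) * m                # feature-wise gated sum

def char_mimic_loss(m, x, in_vocab_mask):
    """Auxiliary term of Eq. 14: g_t * (1 - cos(m_t, x_t)), skipped for OOV tokens.

    x is detached (the PyTorch analogue of Theano's disconnected_grad), so only
    the character component is pulled towards the word embedding.
    """
    cos = F.cosine_similarity(m, x.detach(), dim=-1)
    return ((1.0 - cos) * in_vocab_mask.float()).sum()
```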
{ "question": [ "How does this compare to simple interpolation between a word-level and a character-level language model?" ], "question_id": [ "51d03f0741b72ae242c380266acd2321baf43444" ], "nlp_background": [ "infinity" ], "topic_background": [ "familiar" ], "paper_read": [ "no" ], "search_query": [ "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "18a04982830a968b643cdd93010d2c58933f69f2" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] } ] }
{ "caption": [ "Figure 1: Neural sequence labeling model. Word embeddings are given as input; a bidirectional LSTM produces context-dependent representations; the information is passed through a hidden layer and the output layer. The outputs are either probability distributions for softmax, or confidence scores for CRF.", "Figure 2: Left: concatenation-based character architecture. Right: attention-based character architecture. The dotted lines indicate vector concatenation.", "Table 1: Details for each of the evaluation datasets.", "Table 2: Comparison of word-based and character-based sequence labeling architectures on 8 datasets. The evaluation measure used for each dataset is specified in Section 6.", "Figure 3: Visualisation of attention values for two words, trained on the PTB-POS dataset. Darker blue indicates features with higher weights for the character-level representation. Restructuring was present in the vocabulary, while bankrupting is an OOV.", "Table 3: Comparison of trainable parameters in each of the neural model architectures. # total shows the total number of parameters; # noemb shows the parameter count excluding word embeddings, as only a small fraction of the embeddings are utilised at every iteration." ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "5-Table1-1.png", "6-Table2-1.png", "7-Figure3-1.png", "8-Table3-1.png" ] }
1911.01188
Analysing Coreference in Transformer Outputs
We analyse coreference phenomena in three neural machine translation systems trained with different data settings with or without access to explicit intra- and cross-sentential anaphoric information. We compare system performance on two different genres: news and TED talks. To do this, we manually annotate (the possibly incorrect) coreference chains in the MT outputs and evaluate the coreference chain translations. We define an error typology that aims to go further than pronoun translation adequacy and includes types such as incorrect word selection or missing words. The features of coreference chains in automatic translations are also compared to those of the source texts and human translations. The analysis shows stronger potential translationese effects in machine translated outputs than in human translations.
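The chain properties compared in this analysis — total number of mentions, number of chains, average chain length and the size of the longest chain — reduce to simple counts once chains are annotated; a hypothetical helper, with made-up example chains, might look like this:

```python
def chain_features(chains):
    """chains: list of coreference chains, each a list of mention strings/spans."""
    lengths = [len(c) for c in chains]
    return {
        "n_chains": len(chains),
        "n_mentions": sum(lengths),
        "avg_chain_length": (sum(lengths) / len(chains)) if chains else 0.0,
        "longest_chain": max(lengths, default=0),
    }

# chain_features([["the plan", "it", "it"], ["Mary", "she"]])
# -> {'n_chains': 2, 'n_mentions': 5, 'avg_chain_length': 2.5, 'longest_chain': 3}
```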
{ "section_name": [ "Introduction", "Background and Related Work ::: Coreference", "Background and Related Work ::: Translation studies", "Background and Related Work ::: Coreference in MT", "Systems, Methods and Resources ::: State-of-the-art NMT", "Systems, Methods and Resources ::: State-of-the-art NMT ::: S1", "Systems, Methods and Resources ::: State-of-the-art NMT ::: S2", "Systems, Methods and Resources ::: State-of-the-art NMT ::: S3", "Systems, Methods and Resources ::: Test data under analysis", "Systems, Methods and Resources ::: Manual annotation process", "Results and Analyses ::: Chain features", "Results and Analyses ::: MT quality at system level", "Results and Analyses ::: Error analysis", "Results and Analyses ::: Error analysis ::: Predefined error categories", "Results and Analyses ::: Error analysis ::: Additional error types", "Results and Analyses ::: Error analysis ::: Types of erroneous mentions", "Summary and Conclusions", "Acknowledgments" ], "paragraphs": [ [ "In the present paper, we analyse coreference in the output of three neural machine translation systems (NMT) that were trained under different settings. We use a transformer architecture BIBREF0 and train it on corpora of different sizes with and without the specific coreference information. Transformers are the current state-of-the-art in NMT BIBREF1 and are solely based on attention, therefore, the kind of errors they produce might be different from other architectures such as CNN or RNN-based ones. Here we focus on one architecture to study the different errors produced only under different data configurations.", "Coreference is an important component of discourse coherence which is achieved in how discourse entities (and events) are introduced and discussed. Coreference chains contain mentions of one and the same discourse element throughout a text. These mentions are realised by a variety of linguistic devices such as pronouns, nominal phrases (NPs) and other linguistic means. As languages differ in the range of such linguistic means BIBREF2, BIBREF3, BIBREF4, BIBREF5 and in their contextual restrictions BIBREF6, these differences give rise to problems that may result in incoherent (automatic) translations. We focus on coreference chains in English-German translations belonging to two different genres. In German, pronouns, articles and adjectives (and some nouns) are subject to grammatical gender agreement, whereas in English, only person pronouns carry gender marking. An incorrect translation of a pronoun or a nominal phrase may lead to an incorrect relation in a discourse and will destroy a coreference chain. Recent studies in automatic coreference translation have shown that dedicated systems can lead to improvements in pronoun translation BIBREF7, BIBREF8. However, standard NMT systems work at sentence level, so improvements in NMT translate into improvements on pronouns with intra-sentential antecedents, but the phenomenon of coreference is not limited to anaphoric pronouns, and even less to a subset of them. Document-level machine translation (MT) systems are needed to deal with coreference as a whole. Although some attempts to include extra-sentential information exist BIBREF9, BIBREF10, BIBREF11, BIBREF12, the problem is far from being solved. 
Besides that, some further problems of NMT that do not seem to be related to coreference at first glance (such as translation of unknown words and proper names or the hallucination of additional words) cause coreference-related errors.", "In our work, we focus on the analysis of complete coreference chains, manually annotating them in the three translation variants. We also evaluate them from the point of view of coreference chain translation. The goal of this paper is two-fold. On the one hand, we are interested in various properties of coreference chains in these translations. They include total number of chains, average chain length, the size of the longest chain and the total number of annotated mentions. These features are compared to those of the underlying source texts and also the corresponding human translation reference. On the other hand, we are also interested in the quality of coreference translations. Therefore, we define a typology of errors, and chain members in MT output are annotated as to whether or not they are correct. The main focus is on such errors as gender, number and case of the mentions, but we also consider wrong word selection or missing words in a chain. Unlike previous work, we do not restrict ourselves to pronouns. Our analyses show that there are further errors that are not directly related to coreference but consequently have an influence on the correctness of coreference chains.", "The remainder of the paper is organised as follows. Section SECREF2 introduces the main concepts and presents an overview of related MT studies. Section SECREF3 provides details on the data, systems used and annotation procedures. Section SECREF4 analyses the performance of our transformer systems on coreferent mentions. Finally, we summarise and draw conclusions in Section SECREF5." ], [ "Coreference is related to cohesion and coherence. The latter is the logical flow of inter-related ideas in a text, whereas cohesion refers to the text-internal relationship of linguistic elements that are overtly connected via lexico-grammatical devices across sentences BIBREF13. As stated by BIBREF14, this connectedness of texts implies dependencies between sentences. If these dependencies are neglected in translation, the output text no longer has the property of connectedness that makes a sequence of sentences a text. Coreference expresses identity to a referent mentioned in another textual part (not necessarily in neighbouring sentences), contributing to text connectedness. An addressee follows the mentioned referents and identifies them when they are repeated. Identification of certain referents depends not only on a lexical form, but also on other linguistic means, e.g. articles or modifying pronouns BIBREF15. The use of these is influenced by various factors which can be language-dependent (range of linguistic means available in grammar) and also context-independent (pragmatic situation, genre). Thus, the means of expressing reference differ across languages and genres. This has been shown by some studies in the area of contrastive linguistics BIBREF6, BIBREF3, BIBREF5. Analyses in cross-lingual coreference resolution BIBREF16, BIBREF17, BIBREF18, BIBREF19 show that there are still unsolved problems that should be addressed."
], [ "Differences between languages and genres in the linguistic means expressing reference are important for translation, as the choice of an appropriate referring expression in the target language poses challenges for both human and machine translation. In translation studies, there is a number of corpus-based works analysing these differences in translation. However, most of them are restricted to individual phenomena within coreference. For instance, BIBREF20 analyse abstract anaphors in English-German translations. To our knowledge, they do not consider chains. BIBREF21 in their contrastive analysis of potential coreference chain members in English-German translations, describe transformation patterns that contain different types of referring expressions. However, the authors rely on automatic tagging and parsing procedures and do not include chains into their analysis. The data used by BIBREF4 and BIBREF22 contain manual chain annotations. The authors focus on different categories of anaphoric pronouns in English-Czech translations, though not paying attention to chain features (e.g. their number or size).", "Chain features are considered in a contrastive analysis by BIBREF6. Their study concerns different phenomena in a variety of genres in English and German comparable texts. Using contrastive interpretations, they suggest preferred translation strategies from English into German, i.e. translators should use demonstrative pronouns instead of personal pronouns (e.g. dies/das instead of es/it) when translating from English into German and vice versa. However, corpus-based studies show that translators do not necessarily apply such strategies. Instead, they often preserve the source language anaphor's categories BIBREF20 which results in the shining through effects BIBREF23. Moreover, due to the tendency of translators to explicitly realise meanings in translations that were implicit in the source texts BIBREF24, translations are believed to contain more (explicit) referring expressions, and subsequently, more (and longer) coreference chains.", "Therefore, in our analysis, we focus on the chain features related to the phenomena of shining through and explicitation. These features include number of mentions, number of chains, average chain length and the longest chain size. Machine-translated texts are compared to their sources and the corresponding human translations in terms of these features. We expect to find shining through and explicitation effects in automatic translations." ], [ "As explained in the introduction, several recent works tackle the automatic translation of pronouns and also coreference BIBREF25, BIBREF26 and this has, in part, motivated the creation of devoted shared tasks and test sets to evaluate the quality of pronoun translation BIBREF7, BIBREF27, BIBREF28, BIBREF29.", "But coreference is a wider phenomenon that affects more linguistic elements. Noun phrases also appear in coreference chains but they are usually studied under coherence and consistency in MT. BIBREF30 use topic modelling to extract coherence chains in the source, predict them in the target and then promote them as translations. BIBREF31 use word embeddings to enforce consistency within documents. 
Before these works, several methods to post-process the translations and even including a second decoding pass were used BIBREF32, BIBREF33, BIBREF34, BIBREF35.", "Recent NMT systems that include context deal with both phenomena, coreference and coherence, but usually context is limited to the previous sentence, so chains as a whole are never considered. BIBREF10 encode both a source and a context sentence and then combine them to obtain a context-aware input. The same idea was implemented before by BIBREF36 where they concatenate a source sentence with the previous one to include context. Caches BIBREF37, memory networks BIBREF38 and hierarchical attention methods BIBREF39 allow to use a wider context. Finally, our work is also related to BIBREF40 and BIBREF41 where their oracle translations are similar to the data-based approach we introduce in Section SECREF4." ], [ "Our NMT systems are based on a transformer architecture BIBREF0 as implemented in the Marian toolkit BIBREF42 using the transformer big configuration.", "We train three systems (S1, S2 and S3) with the corpora summarised in Table TABREF5. The first two systems are transformer models trained on different amounts of data (6M vs. 18M parallel sentences as seen in the Table). The third system includes a modification to consider the information of full coreference chains throughout a document augmenting the sentence to be translated with this information and it is trained with the same amount of sentence pairs as S1. A variant of the S3 system participated in the news machine translation of the shared task held at WMT 2019 BIBREF43." ], [ "is trained with the concatenation of Common Crawl, Europarl, a cleaned version of Rapid and the News Commentary corpus. We oversample the latter in order to have a significant representation of data close to the news genre in the final corpus." ], [ "uses the same data as S1 with the addition of a filtered portion of Paracrawl. This corpus is known to be noisy, so we use it to create a larger training corpus but it is diluted by a factor 4 to give more importance to high quality translations." ], [ "S3 uses the same data as S1, but this time enriched with the cross- and intra-sentential coreference chain markup as described below. The information is included as follows.", "Source documents are annotated with coreference chains using the neural annotator of Stanford CoreNLP BIBREF44. The tool detects pronouns, nominal phrases and proper names as mentions in a chain. For every mention, CoreNLP extracts its gender (male, female, neutral, unknown), number (singular, plural, unknown), and animacy (animate, inanimate, unknown). This information is not added directly but used to enrich the single sentence-based MT training data by applying a set of heuristics implemented in DocTrans:", "We enrich pronominal mentions with the exception of \"I\" with the head (main noun phrase) of the chain. 
The head is cleaned by removing articles and Saxon genitives and we only consider heads with less than 4 tokens in order to avoid enriching a word with a full sentence", "We enrich nominal mentions including proper names with the gender of the head", "The head itself is enriched with she/he/it/they depending on its gender and animacy", "The enrichment is done with the addition of tags as shown in the examples:", "I never cook with $<$b_crf$>$ salt $<$e_crf$>$ it.", "", "$<$b_crf$>$ she $<$e_crf$>$ Biles arrived late.", "In the first case heuristic 1 is used, salt is the head of the chain and it is prepended to the pronoun. The second example shows a sentence where heuristic 2 has been used and the proper name Biles has now information about the gender of the person it is referring to.", "Afterwards, the NMT system is trained at sentence level in the usual way. The data used for the three systems is cleaned, tokenised, truecased with Moses scripts and BPEd with subword-nmt using separated vocabularies with 50 k subword units each. The validation set ($news2014$) and the test sets described in the following section are pre-processed in the same way." ], [ "As one of our aims is to compare coreference chain properties in automatic translation with those of the source texts and human reference, we derive data from ParCorFull, an English-German corpus annotated with full coreference chains BIBREF46. The corpus contains ca. 160.7 thousand tokens manually annotated with about 14.9 thousand mentions and 4.7 thousand coreference chains. For our analysis, we select a portion of English news texts and TED talks from ParCorFull and translate them with the three NMT systems described in SECREF4 above. As texts considerably differ in their length, we select 17 news texts (494 sentences) and four TED talks (518 sentences). The size (in tokens) of the total data set under analysis – source (src) and human translations (ref) from ParCorFull and the automatic translations produced within this study (S1, S2 and S3) are presented in Table TABREF20.", "Notably, automatic translations of TED talks contain more words than the corresponding reference translation, which means that machine-translated texts of this type have also more potential tokens to enter in a coreference relation, and potentially indicating a shining through effect. The same does not happen with the news test set." ], [ "The English sources and their corresponding human translations into German were already manually annotated for coreference chains. We follow the same scheme as BIBREF47 to annotate the MT outputs with coreference chains. This scheme allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause). The mentions can be defined further in terms of their cohesive function (antecedent, anaphoric, cataphoric, comparative, substitution, ellipsis, apposition). Antecedents can either be marked as simple or split or as entity or event. The annotation scheme also includes pronoun type (personal, possessive, demonstrative, reflexive, relative) and modifier types of NPs (possessive, demonstrative, definite article, or none for proper names), see BIBREF46 for details. The mentions referring to the same discourse item are linked between each other. We use the annotation tool MMAX2 BIBREF48 which was also used for the annotation of ParCorFull.", "In the next step, chain members are annotated for their correctness. 
For the incorrect translations of mentions, we include the following error categories: gender, number, case, ambiguous and other. The latter category is open, which means that the annotators can add their own error types during the annotation process. With this, the final typology of errors also considered wrong named entity, wrong word, missing word, wrong syntactic structure, spelling error and addressee reference.", "The annotation of machine-translated texts was integrated into a university course on discourse phenomena. Our annotators, well-trained students of linguistics, worked in small groups on the assigned annotation tasks (4-5 texts, i.e. 12-15 translations per group). At the beginning of the annotation process, the categories under analysis were discussed within the small groups and also in the class. The final versions of the annotation were then corrected by the instructor." ], [ "First, we compare the distribution of several chain features in the three MT outputs, their source texts and the corresponding human translations.", "Table TABREF20 shows that, overall, all machine translations contain a greater number of annotated mentions in both news texts and TED talks than in the annotated source (src and src$_{\\rm CoreNLP}$) and reference (ref) texts. Notice that src$_{\\rm CoreNLP}$ —where coreferences are not manually but automatically annotated with CoreNLP— counts also the tokens that the mentions add to the sentences, but not the tags. The larger number of mentions may indicate a strong explicitation effect observed in machine-translated texts. Interestingly, CoreNLP detects a similar number of mentions in both genres, while human annotators clearly marked more chains for TED than for news. Both genres are in fact quite different in nature; whereas only $37\\%$ of the mentions are pronominal in news texts (343 out of 915), the number grows to $58\\%$ for TED (577 out of 989), and this could be an indicator of the difficulty of the genres for NMT systems. There is also a variation in terms of chain number between translations of TED talks and news. While automatic translations of news texts contain more chains than the corresponding human annotated sources and references, machine-translated TED talks contain less chains than the sources and human translations. However, there is not much variation between the chain features of the three MT outputs. The chains are also longer in machine-translated output than in reference translations as can be seen by the number of mentions per chain and the length of the longest chain." ], [ "We evaluate the quality of the three transformer engines with two automatic metrics, BLEU BIBREF49 and METEOR BIBREF50. Table TABREF25 shows the scores in two cases: all, when the complete texts are evaluated and coref, when only the subset of sentences that have been augmented in S3 are considered – 265 out of 494 for news and 239 out of 518 for TED. For news, the best system is that trained on more data, S2; but for TED talks S3 with less data has the best performance.", "The difference between the behaviour of the systems can be related to the different genres. We have seen that news are dominated by nominal mentions while TED is dominated by pronominal ones. Pronouns mostly need coreference information to be properly translated, while noun phrases can be improved simply because more instances of the nouns appear in the training data. 
With this, S3 improves the baseline S1 in +1.1 BLEU points for TED$_{coref}$ but -0.2 BLEU points for news$_{coref}$.", "However, even if the systems differ in the overall performance, the change is not related to the number of errors in coreference chains. Table TABREF25 also reports the number of mistakes in the translation of coreferent mentions. Whereas the number of errors correlates with translation quality (as measured by BLEU) for news$_{coref}$ this is not the case of TED$_{coref}$." ], [ "The total distribution for the 10 categories of errors defined in Section SECREF23 can be seen in Figure FIGREF29. Globally, the proportion of errors due to our closed categories (gender, number, case and ambiguous) is larger for TED talks than for news (see analysis in Section SECREF28). Gender is an issue with all systems and genres which does not get solved by the addition of more data. Additionally, news struggle with wrong words and named entities; for this genre the additional error types (see analysis in Section SECREF30) represent around 60% of the errors of S1/S3 to be compared to the 40% of TED talks." ], [ "0.4em 0.4Within our predefined closed categories (gender, number, case and ambiguous), the gender errors belong to the most frequent errors. They include wrong gender translation of both pronouns, as sie (“her”) instead of ihn (“him”) in example SECREF28 referring to the masculine noun Mindestlohn, and nominal phrases, as der Stasi instead of die Stasi, where a masculine form of the definite article is used instead of a feminine one, in example SECREF28.", ".src: [The current minimum wage] of 7.25 US dollars is a pittance... She wants to raise [it] to 15 dollars an hour.", "S3: [Der aktuelle Mindestlohn] von 7,25 US-Dollar sei Almosen... Sie möchte [sie] auf 15 Dollar pro Stunde erhöhen.", ". src: ...let's have a short look at the history of [the Stasi], because it is really important for understanding [its] self-conception.", "S2: Lassen sie uns... einen kurzen Blick auf die Geschichte [des Stasi] werfen denn es wirklich wichtig, [seine] Selbstauffassung zu verstehen.", "The gender-related errors are common to all the automatic translations. Interestingly, systems S1 and S3 have more problems with gender in translations of TED talks, whereas they do better in translating news, which leads us to assume that this is a data-dependent issue: while the antecedent for news is in the same sentence it is not for TED talks. A closer look at the texts with a high number of gender problems confirms this assumption —they contain references to females who were translated with male forms of nouns and pronouns (e.g. Mannschaftskapitän instead of Mannschaftskapitänin).", "We also observe errors related to gender for the cases of explicitation in translation. Some impersonal English constructions not having direct equivalents in German are translated with personal constructions, which requires an addition of a pronoun. Such cases of explicitation were automatically detected in parallel data in BIBREF21, BIBREF2. They belong to the category of obligatory explicitation, i.e. explicitation dictated by differences in the syntactic and semantic structure of languages, as defined by BIBREF51. An MT system tends to insert a male form instead of a female one even if it's marked as feminine (S3 adds the feminine form she as markup), as illustrated in example SECREF28 where the automatic translation contains the masculine pronoun er (“he”) instead of sie (“she”).", ". 
src: [Biles] earned the first one on Tuesday while serving as the exclamation point to retiring national team coordinator Martha Karolyi's going away party.", "ref: [Biles] holte die erste Medaille am Dienstag, während [sie] auf der Abschiedsfeier der sich in Ruhestand begehenden Mannschaftskoordinatorin Martha Karolyi als Ausrufezeichen diente.", "S2: [Biles] verdiente den ersten am Dienstag, während [er] als Ausrufezeichen für den pensionierten Koordinator der Nationalmannschaft, Martha Karolyi, diente.", "Another interesting case of a problem related to gender is the dependence of the referring expressions on grammatical restrictions in German. In example SECREF28, the source chain contains the pronoun him referring to both a 6-year-old boy and The child. In German, these two nominal phrases have different gender (masculine vs. neutral). The pronoun has grammatical agreement with the second noun of the chain (des Kindes) and not its head (ein 6 Jahre alter Junge).", ". src: Police say [a 6-year-old boy] has been shot in Philadelphia... [The child]'s grandparents identified [him] to CBS Philadelphia as [Mahaj Brown].", "S1: Die Polizei behauptet, [ein 6 Jahre alter Junge] sei in Philadelphia erschossen worden... Die Großeltern [des Kindes] identifizierten [ihn] mit CBS Philadelphia als [Mahaj Brown].", "Case- and number-related errors are less frequent in our data. However, translations of TED talks with S2 contain much more number-related errors than other outputs. Example SECREF28 illustrates this error type which occurs within a sentence. The English source contains the nominal chain in singular the cost – it, whereas the German correspondence Kosten has a plural form and requires a plural pronoun (sie). However, the automatic translation contains the singular pronoun es.", ". src: ...to the point where [the cost] is now below 1,000 dollars, and it's confidently predicted that by the year 2015 [it] will be below 100 dollars...", "S2: bis zu dem Punkt, wo [die Kosten] jetzt unter 1.000 Dollar liegen, und es ist zuversichtlich, dass [es] bis zum Jahr 2015 unter 100 Dollar liegen wird...", "Ambiguous cases often contain a combination of errors or they are difficult to categorise due to the ambiguity of the source pronouns, as the pronoun it in example SECREF28 which may refer either to the noun trouble or even the clause Democracy is in trouble is translated with the pronoun sie (feminine). In case of the first meaning, the pronoun would be correct, but the form of the following verb should be in plural. In case of a singular form, we would need to use a demonstrative pronoun dies (or possibly the personal pronoun es).", ". src: Democracy is in trouble... and [it] comes in part from a deep dilemma...", "S2: Die Demokratie steckt in Schwierigkeiten ... und [sie] rührt teilweise aus einem tiefen Dilemma her..." ], [ "At first glance, the error types discussed in this section do not seem to be related to coreference —a wrong translation of a noun can be traced back to the training data available and the way NMT deals with unknown words. However, a wrong translation of a noun may result in its invalidity to be a referring expression for a certain discourse item. As a consequence, a coreference chain is damaged. We illustrate a chain with a wrong named entity translation in example SECREF30. The source chain contains five nominal mentions referring to an American gymnast Aly Raisman: silver medalist – “Final Five” teammate – Aly Raisman – Aly Raisman – Raisman. 
All the three systems used different names. Example SECREF30 illustrates the translation with S2, where Aly Donovan and Aly Encence were used instead of Aly Raisman, and the mention Raisman disappears completely from the chain.", ". src: Her total of 62.198 was well clear of [silver medalist] and [“Final Five” teammate] [Aly Raisman]...United States' Simone Biles, left, and [Aly Raisman] embrace after winning gold and silver respectively... [Raisman]'s performance was a bit of revenge from four years ago, when [she] tied...", "S2: Ihre Gesamtmenge von 62.198 war deutlich von [Silbermedaillengewinner] und [“Final Five” Teamkollegen] [Aly Donovan]... Die Vereinigten Staaten Simone Biles, links und [Aly Encence] Umarmung nach dem Gewinn von Gold und Silber... Vor vier Jahren, als [sie]...", "Example SECREF30 illustrates translation of the chain The scaling in the opposite direction – that scale. The noun phrases Die Verlagerung in die entgegengesetzte Richtung (“the shift in the opposite direction”) and dieses Ausmaß (“extent/scale”) used in the S1 output do not corefer (cf. Wachstum in die entgegengesetzte Richtung and Wachstum in the reference translation). Notice that these cases with long noun phrases are not tackled by S3 either.", ". src: [The scaling in the opposite direction]...drive the structure of business towards the creation of new kinds of institutions that can achieve [that scale].", "ref: [Wachstum in die entgegengesetzte Richtung]... steuert die Struktur der Geschäfte in Richtung Erschaffung von neuen Institutionen, die [dieses Wachstum] erreichen können.", "S1: [Die Verlagerung in die entgegengesetzte Richtung]... treibt die Struktur der Unternehmen in Richtung der Schaffung neuer Arten von Institutionen, die [dieses Ausmaß] erreichen können." ], [ "Finally, we also analyse the types of the mentions marked as errors. They include either nominal phrases or pronouns. Table TABREF32 shows that there is a variation between the news texts and TED talks in terms of these features. News contain more erroneous nominal phrases, whereas TED talks contain more pronoun-related errors. Whereas both the news and the TED talks have more errors in translating anaphors, there is a higher proportion of erroneous antecedents in the news than in the TED talks.", "It is also interesting to see that S3 reduces the percentage of errors in anaphors for TED, but has a similar performance to S2 on news." ], [ "We analysed coreferences in the translation outputs of three transformer systems that differ in the training data and in whether they have access to explicit intra- and cross-sentential anaphoric information (S3) or not (S1, S2). We see that the translation errors are more dependent on the genre than on the nature of the specific NMT system: whereas news (with mainly NP mentions) contain a majority of errors related to wrong word selection, TED talks (with mainly pronominal mentions) are prone to accumulate errors on gender and number.", "System S3 was specifically designed to solve this issue, but we cannot trace the improvement from S1 to S3 by just counting the errors and error types, as some errors disappear and others emerge: coreference quality and automatic translation quality do not correlate in our analysis on TED talks. 
As a further improvement to address the issue, we could add more parallel data to our training corpus with a higher density of coreference chains such as movie subtitles or parallel TED talks.", "We also characterised the originals and translations according to coreference features such as total number of chains and mentions, average chain length and size of the longest chain. We see how NMT translations increase the number of mentions about $30\\%$ with respect to human references showing even a more marked explicitation effect than human translations do. As future work, we consider a more detailed comparison of the human and machine translations, and analyse the purpose of the additional mentions added by the NMT systems. It would be also interesting to evaluate of the quality of the automatically computed coreferences chains used for S3." ], [ "The annotation work was performed at Saarland University. We thank Anna Felsing, Francesco Fernicola, Viktoria Henn, Johanna Irsch, Kira Janine Jebing, Alicia Lauer, Friederike Lessau and Christina Pollkläsener for performing the manual annotation of the NMT outputs. The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee) and by the German Research Foundation (DFG) as part of SFB 1102 Information Density and Linguistic Encoding. Responsibility for the content of this publication is with the authors." ] ] }
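The coreference-based enrichment used for S3 is the most procedural part of the pipeline described above, so a small illustration may help. The sketch below is not the authors' DocTrans implementation; it is a minimal approximation of heuristics 1-3, assuming mentions and chain heads are already available from an external coreference annotator (the Mention record and the clean_head and enrich helpers are illustrative names, and the tag convention follows the examples given in the systems section).

```python
# Hedged sketch of the S3-style source enrichment (not the original DocTrans code).
# Assumes coreference mentions come from an external annotator such as CoreNLP,
# represented here by a simple Mention record.
import re
from dataclasses import dataclass

@dataclass
class Mention:
    text: str        # surface form as it appears in the sentence
    is_pronoun: bool
    gender: str      # "male" | "female" | "neutral" | "unknown"

def clean_head(head: str) -> str:
    """Heuristic 1: drop articles and Saxon genitives, keep only heads under 4 tokens."""
    tokens = [t for t in head.split() if t.lower() not in {"the", "a", "an"}]
    tokens = [t[:-2] if t.endswith("'s") else t for t in tokens]
    return " ".join(tokens) if len(tokens) < 4 else ""

def enrich(sentence: str, mention: Mention, chain_head: str) -> str:
    """Prepend coreference information to a mention with <b_crf> ... <e_crf> tags."""
    if mention.is_pronoun:
        if mention.text.lower() == "i":          # exception in heuristic 1
            return sentence
        extra = clean_head(chain_head)           # heuristic 1: pronoun gets the chain head
    else:
        # heuristics 2-3 (simplified): nominal mentions get a gendered pronoun tag
        extra = {"male": "he", "female": "she"}.get(mention.gender, "it")
    if not extra:
        return sentence
    tagged = f"<b_crf> {extra} <e_crf> {mention.text}"
    return re.sub(rf"\b{re.escape(mention.text)}\b", tagged, sentence, count=1)

print(enrich("I never cook with it.", Mention("it", True, "neutral"), "the salt"))
# -> I never cook with <b_crf> salt <e_crf> it.
print(enrich("Biles arrived late.", Mention("Biles", False, "female"), "Biles"))
# -> <b_crf> she <e_crf> Biles arrived late.
```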
{ "question": [ "What translationese effects are seen in the analysis?", "What languages are seen in the news and TED datasets?", "How are the coreference chain translations evaluated?", "How are the (possibly incorrect) coreference chains in the MT outputs annotated?", "Which three neural machine translation systems are analyzed?", "Which coreference phenomena are analyzed?" ], "question_id": [ "96c20af8bbef435d0d534d10c42ae15ff2f926f8", "9544cc0244db480217ce9174aa13f1bf09ba0d94", "c97a4a1c0e3d00137a9ae8d6fbb809ba6492991d", "3758669426e8fb55a4102564cf05f2864275041b", "1ebd6f703458eb6690421398c79abf3fc114148f", "15a1df59ed20aa415a4daf0acb256747f6766f77" ], "nlp_background": [ "five", "five", "five", "five", "five", "five" ], "topic_background": [ "", "", "", "", "", "" ], "paper_read": [ "", "", "", "", "", "" ], "search_query": [ "", "", "", "", "", "" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "potentially indicating a shining through effect", "explicitation effect" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Notably, automatic translations of TED talks contain more words than the corresponding reference translation, which means that machine-translated texts of this type have also more potential tokens to enter in a coreference relation, and potentially indicating a shining through effect. The same does not happen with the news test set.", "We also characterised the originals and translations according to coreference features such as total number of chains and mentions, average chain length and size of the longest chain. We see how NMT translations increase the number of mentions about $30\\%$ with respect to human references showing even a more marked explicitation effect than human translations do. As future work, we consider a more detailed comparison of the human and machine translations, and analyse the purpose of the additional mentions added by the NMT systems. It would be also interesting to evaluate of the quality of the automatically computed coreferences chains used for S3." ], "highlighted_evidence": [ "Notably, automatic translations of TED talks contain more words than the corresponding reference translation, which means that machine-translated texts of this type have also more potential tokens to enter in a coreference relation, and potentially indicating a shining through effect. The same does not happen with the news test set.", "We see how NMT translations increase the number of mentions about $30\\%$ with respect to human references showing even a more marked explicitation effect than human translations do." ] } ], "annotation_id": [ "18a26bce183c19040ba5a2cdb5758320a814c111" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "English", "German" ], "yes_no": null, "free_form_answer": "", "evidence": [ "As one of our aims is to compare coreference chain properties in automatic translation with those of the source texts and human reference, we derive data from ParCorFull, an English-German corpus annotated with full coreference chains BIBREF46. The corpus contains ca. 
160.7 thousand tokens manually annotated with about 14.9 thousand mentions and 4.7 thousand coreference chains. For our analysis, we select a portion of English news texts and TED talks from ParCorFull and translate them with the three NMT systems described in SECREF4 above. As texts considerably differ in their length, we select 17 news texts (494 sentences) and four TED talks (518 sentences). The size (in tokens) of the total data set under analysis – source (src) and human translations (ref) from ParCorFull and the automatic translations produced within this study (S1, S2 and S3) are presented in Table TABREF20." ], "highlighted_evidence": [ "As one of our aims is to compare coreference chain properties in automatic translation with those of the source texts and human reference, we derive data from ParCorFull, an English-German corpus annotated with full coreference chains BIBREF46." ] } ], "annotation_id": [ "96c518fcc23dd74945b59dd244a9bb2c70308f16" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "6b5ffd4b8f8b3190c6581d86fae84e0b52848236" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause)", "The mentions referring to the same discourse item are linked between each other.", "chain members are annotated for their correctness" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The English sources and their corresponding human translations into German were already manually annotated for coreference chains. We follow the same scheme as BIBREF47 to annotate the MT outputs with coreference chains. This scheme allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause). The mentions can be defined further in terms of their cohesive function (antecedent, anaphoric, cataphoric, comparative, substitution, ellipsis, apposition). Antecedents can either be marked as simple or split or as entity or event. The annotation scheme also includes pronoun type (personal, possessive, demonstrative, reflexive, relative) and modifier types of NPs (possessive, demonstrative, definite article, or none for proper names), see BIBREF46 for details. The mentions referring to the same discourse item are linked between each other. We use the annotation tool MMAX2 BIBREF48 which was also used for the annotation of ParCorFull.", "In the next step, chain members are annotated for their correctness. For the incorrect translations of mentions, we include the following error categories: gender, number, case, ambiguous and other. The latter category is open, which means that the annotators can add their own error types during the annotation process. With this, the final typology of errors also considered wrong named entity, wrong word, missing word, wrong syntactic structure, spelling error and addressee reference." ], "highlighted_evidence": [ "We follow the same scheme as BIBREF47 to annotate the MT outputs with coreference chains. This scheme allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause). The mentions can be defined further in terms of their cohesive function (antecedent, anaphoric, cataphoric, comparative, substitution, ellipsis, apposition). 
Antecedents can either be marked as simple or split or as entity or event. The annotation scheme also includes pronoun type (personal, possessive, demonstrative, reflexive, relative) and modifier types of NPs (possessive, demonstrative, definite article, or none for proper names), see BIBREF46 for details. The mentions referring to the same discourse item are linked between each other. We use the annotation tool MMAX2 BIBREF48 which was also used for the annotation of ParCorFull.", "In the next step, chain members are annotated for their correctness. For the incorrect translations of mentions, we include the following error categories: gender, number, case, ambiguous and other. The latter category is open, which means that the annotators can add their own error types during the annotation process. With this, the final typology of errors also considered wrong named entity, wrong word, missing word, wrong syntactic structure, spelling error and addressee reference." ] } ], "annotation_id": [ "2987496edf8f363db6268e8e34a9b260c839f838" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "first two systems are transformer models trained on different amounts of data", "The third system includes a modification to consider the information of full coreference chains" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We train three systems (S1, S2 and S3) with the corpora summarised in Table TABREF5. The first two systems are transformer models trained on different amounts of data (6M vs. 18M parallel sentences as seen in the Table). The third system includes a modification to consider the information of full coreference chains throughout a document augmenting the sentence to be translated with this information and it is trained with the same amount of sentence pairs as S1. A variant of the S3 system participated in the news machine translation of the shared task held at WMT 2019 BIBREF43." ], "highlighted_evidence": [ "We train three systems (S1, S2 and S3) with the corpora summarised in Table TABREF5. The first two systems are transformer models trained on different amounts of data (6M vs. 18M parallel sentences as seen in the Table). The third system includes a modification to consider the information of full coreference chains throughout a document augmenting the sentence to be translated with this information and it is trained with the same amount of sentence pairs as S1." ] } ], "annotation_id": [ "b5ac63f145933d694ede904dcf714701c157e889" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "shining through", "explicitation" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Therefore, in our analysis, we focus on the chain features related to the phenomena of shining through and explicitation. These features include number of mentions, number of chains, average chain length and the longest chain size. Machine-translated texts are compared to their sources and the corresponding human translations in terms of these features. We expect to find shining through and explicitation effects in automatic translations." ], "highlighted_evidence": [ "Therefore, in our analysis, we focus on the chain features related to the phenomena of shining through and explicitation." ] } ], "annotation_id": [ "d7757635c3260f24dc0f779f9636d7d6fea36190" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Table 1: Number of lines of the corpora used for training the NMT systems under study. The 2nd and 3rd columns show the amount of oversampling used.", "Table 2: Statistics on coreference features for news and TED texts considered.", "Table 3: BLEU and METEOR (MTR) scores for the 3 systems on our full test set (all) and the subset of sentences where coreference occurrs (coref ). The number of erroneous mentions is shown for comparison.", "Figure 1: Number of errors per system (S1, S2, S3) and genre (news, TED). Notice that the total number of errors differs for each plot, total numbers are reported in Table 3. Labels in Figure (b)–S3 apply to all the chart pies that use the same order and color scale for the different error types defined in Section 4.3.", "Table 4: Percentage of erroneous mentions: antencedent vs. anaphor, and noun phrase vs. pronominal." ], "file": [ "3-Table1-1.png", "5-Table2-1.png", "6-Table3-1.png", "8-Figure1-1.png", "9-Table4-1.png" ] }
1909.08191
Exploring Scholarly Data by Semantic Query on Knowledge Graph Embedding Space
The trends of open science have enabled several open scholarly datasets which include millions of papers and authors. Managing, exploring, and utilizing such large and complicated datasets effectively is challenging. In recent years, the knowledge graph has emerged as a universal data format for representing knowledge about heterogeneous entities and their relationships. The knowledge graph can be modeled by knowledge graph embedding methods, which represent entities and relations as embedding vectors in semantic space and then model the interactions between these embedding vectors. However, the semantic structures in the knowledge graph embedding space are not well-studied, and thus knowledge graph embedding methods are usually used only for knowledge graph completion but not for data representation and analysis. In this paper, we propose to analyze these semantic structures based on the well-studied word embedding space and use them to support data exploration. We also define the semantic queries, which are algebraic operations between the embedding vectors in the knowledge graph embedding space, to solve queries such as similarity and analogy between the entities on the original datasets. We then design a general framework for data exploration by semantic queries and discuss the solutions to some traditional scholarly data exploration tasks. We also propose some new interesting tasks that can be solved based on the uncanny semantic structures of the embedding space.
{ "section_name": [ "Introduction", "Related Work ::: Knowledge graph for scholarly data", "Related Work ::: Knowledge graph embedding", "Related Work ::: Word embedding", "Theoretical analysis", "Theoretical analysis ::: The semantic structures of CP@!START@$ _h $@!END@", "Theoretical analysis ::: Semantic query", "Semantic query framework", "Exploration tasks and semantic queries conversion", "Exploration tasks and semantic queries conversion ::: Similar entities", "Exploration tasks and semantic queries conversion ::: Similar entities with bias", "Exploration tasks and semantic queries conversion ::: Analogy query", "Exploration tasks and semantic queries conversion ::: Analogy browsing", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "In recent years, digital libraries have moved towards open science and open access with several large scholarly datasets being constructed. Most popular datasets include millions of papers, authors, venues, and other information. Their large size and heterogeneous contents make it very challenging to effectively manage, explore, and utilize these datasets. The knowledge graph has emerged as a universal data format for representing knowledge about entities and their relationships in such complicated data. The main part of a knowledge graph is a collection of triples, with each triple $ (h, t, r) $ denoting the fact that relation $ r $ exists between head entity $ h $ and tail entity $ t $. This can also be formalized as a labeled directed multigraph where each triple $ (h, t, r) $ represents a directed edge from node $ h $ to node $ t $ with label $ r $. Therefore, it is straightforward to build knowledge graphs for scholarly data by representing natural connections between scholarly entities with triples such as (AuthorA, Paper1, write) and (Paper1, Paper2, cite).", "Notably, instead of using knowledge graphs directly in some tasks, we can model them by knowledge graph embedding methods, which represent entities and relations as embedding vectors in semantic space, then model the interactions between them to solve the knowledge graph completion task. There are many approaches BIBREF0 to modeling the interactions between embedding vectors resulting in many knowledge graph embedding methods such as ComplEx BIBREF1 and CP$ _h $ BIBREF2. In the case of word embedding methods such as word2vec, embedding vectors are known to contain rich semantic information that enables them to be used in many semantic applications BIBREF3. However, the semantic structures in the knowledge graph embedding space are not well-studied, thus knowledge graph embeddings are only used for knowledge graph completion but remain absent in the toolbox for data analysis of heterogeneous data in general and scholarly data in particular, although they have the potential to be highly effective and efficient. In this paper, we address these issues by providing a theoretical understanding of their semantic structures and designing a general semantic query framework to support data exploration.", "For theoretical analysis, we first analyze the state-of-the-art knowledge graph embedding model CP$ _h $ BIBREF2 in comparison to the popular word embedding model word2vec skipgram BIBREF3 to explain its components and provide understandings to its semantic structures. 
We then define the semantic queries on the knowledge graph embedding spaces, which are algebraic operations between the embedding vectors in the knowledge graph embedding space to solve queries such as similarity and analogy between the entities on the original datasets.", "Based on our theoretical results, we design a general framework for data exploration on scholarly data by semantic queries on knowledge graph embedding space. The main component in this framework is the conversion between the data exploration tasks and the semantic queries. We first outline the semantic query solutions to some traditional data exploration tasks, such as similar paper prediction and similar author prediction. We then propose a group of new interesting tasks, such as analogy query and analogy browsing, and discuss how they can be used in modern digital libraries." ], [ "Knowledge graph has gradually become the standard data format for heterogeneous and complicated datasets BIBREF4. There have been several attempts to build knowledge graph for scholarly data, either adopting the scholarly network directly BIBREF5, or deriving the knowledge graph from some similarity measures BIBREF6 BIBREF7, or constructing the knowledge graph from survey papers BIBREF8. However, they mostly focus on the data format or graph inference aspects of knowledge graph. In this paper, we instead focus on the knowledge graph embedding methods and especially the application of embedding vectors in data exploration." ], [ "For a more in depth survey of knowledge graph embedding methods, please refer to BIBREF0, which defines their architecture, categorization, and interaction mechanisms. In this paper, we only focus on the semantic structures of the state-of-the-art model CP$ _h $ BIBREF2, which is an extension of CP BIBREF9.", "In CP, each entity $ e $ has two embedding vectors $ $ and $ ^{(2)} $ depending on its role in a triple as head or as tail, respectively. CP$ _h $ augments the data by making an inverse triple $ (t, h, r^{(a)}) $ for each existing triple $ (h, t, r) $, where $ r^{(a)} $ is the augmented relation corresponding to $ r $. When maximizing the likelihood by stochastic gradient descent, its score function is the sum:", "where $ , ^{(2)}, , ^{(2)}, , ^{(a)} \\in ^{D} $ are the embedding vectors of $ h $, $ t $, and $ r $, respectively, and the trilinear-product $ \\langle \\cdot , \\cdot , \\cdot \\rangle $ is defined as:", "where $ D $ is the embedding size and $ d $ is the dimension for which $ h_d $, $ t_d $, and $ r_d $ are the scalar entries.", "The validity of each triple is modeled as a Bernoulli distribution and its validity probability is computed by the standard logistic function $ \\sigma (\\cdot ) $ as:" ], [ "The most popular word embedding models in recent years are word2vec variants such as word2vec skipgram BIBREF3, which predicts the context-words $ c_i $ independently given the target-word $ w $, that is:", "In practice, the expensive softmax functions in these multinoulli distributions are avoided by approximating them with negative sampling and solve for the Bernoulli distributions by using the standard logistic function $ \\sigma (\\cdot ) $:", "where $ _{c_i} $ is the context-embedding vector of context-word $ c_i $ and $ _w $ is the word-embedding vector of target-word $ w $." ], [ "Word2vec skipgram and its semantic structures are well-studied both theoretically and empirically BIBREF3. CP$ _h $ is a new state of the art among many knowledge graph embedding models. 
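The display equations for the CP$ _h $ score did not survive extraction above (the embedding symbols are missing), so a small numerical restatement may help. This is not the authors' implementation: it only spells out the trilinear product and the CP$ _h $ sum over the original triple and its augmented (inverse) triple, with random vectors standing in for trained embeddings; the exact pairing of head-role and tail-role vectors is our reading of the description above and should be treated as an assumption.

```python
# Hedged numerical sketch of the CP_h score described above (not the authors' code).
# Each entity has a head-role and a tail-role vector; each relation has a forward
# vector and an augmented (inverse) vector. Embeddings are random, for illustration.
import numpy as np

rng = np.random.default_rng(0)
D = 4                                    # embedding size

def trilinear(a, b, c):
    """<a, b, c> = sum_d a_d * b_d * c_d"""
    return float(np.sum(a * b * c))

h, h2 = rng.normal(size=D), rng.normal(size=D)   # head entity: head-role, tail-role
t, t2 = rng.normal(size=D), rng.normal(size=D)   # tail entity: head-role, tail-role
r, ra = rng.normal(size=D), rng.normal(size=D)   # relation: forward, augmented

# score of (h, t, r): original triple plus its augmented inverse (t, h, r_a)
score = trilinear(h, t2, r) + trilinear(t, h2, ra)
prob = 1.0 / (1.0 + np.exp(-score))              # validity probability via sigmoid
print(round(prob, 3))
```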
We first ground the theoretical basis of CP$ _h $ on word2vec skipgram to explain its components and understand its semantic structures. We then define semantic queries on knowledge graph embedding space." ], [ "We first look at Eq. DISPLAY_FORM8 of word2vec skipgram and consider only one context-word $ c $ for simplicity. We can write the probability in proportional format as:", "Note that the context-word $ c $ and target-word $ w $ are ordered and in word2vec skipgram, the target-word is the central word in a sliding window, e.g., $ w_i $ is the target-word and $ w_{i-k}, \\dots , w_{i-1}, w_{i+1}, \\dots , w_{i+k} $ are context-words. Therefore, the roles in each word pair are symmetric over the whole dataset. When maximizing the likelihood by stochastic gradient descent, we can write the approximate probability of unordered word pair and expand the dot products as:", "where $ _c $ and $ _c $ are the context-embedding and word-embedding vectors of $ c $, respectively, $ _w $ and $ _w $ are the context-embedding and word-embedding vectors of $ w $, respectively, and $ {u_c}_d, {v_c}_d, {u_w}_d $, and $ {v_w}_d $ are their scalar entries, respectively.", "We now return to Eq. DISPLAY_FORM3 of CP$ _h $ to also write the probability in Eq. DISPLAY_FORM5 in proportional format and expand the trilinear products according to Eq. DISPLAY_FORM4 as:", "where $ , ^{(2)} $, $ , ^{(2)} $, $ , ^{(a)} $ are knowledge graph embedding vectors and $ h_d, h^{(2)}_d $, $ t_d, t^{(2)}_d $, $ r_d, r^{(a)}_d $ are the scalar entries.", "Comparing Eq. of word2vec skipgram and Eq. of CP$ _h $, we can see they have essentially the same form and mechanism. Note that the embedding vectors in word2vec skipgram are learned by aligning each target-word to different context-words and vice versa, which is essentially the same for CP$ _h $ by aligning each head entity to different tail entities in different triples and vice versa, with regards to the dimensions weighted by each relation. This result suggests that the semantic structures of CP$ _h $ are similar to those in word2vec skipgram and we can use the head-role-based entity embedding vectors, such as $ $, for semantic applications similarly to word embedding vectors. The tail-role-based entity embedding vectors, such as $ ^{(2)} $, contain almost the same information due to their symmetric roles, thus can be discarded in semantic tasks, which justifies this common practices in word embedding applications BIBREF3." ], [ "We mainly concern with the two following structures of the embedding space.", "Semantic similarity structure: Semantically similar entities are close to each other in the embedding space, and vice versa. This structure can be identified by a vector similarity measure, such as the dot product between two embedding vectors. The similarity between two embedding vectors is computed as:", "Semantic direction structure: There exist semantic directions in the embedding space, by which only one semantic aspect changes while all other aspects stay the same. It can be identified by a vector difference, such as the subtraction between two embedding vectors. The semantic direction between two embedding vectors is computed as:", "The algebraic operations, which include the above dot product and vector subtraction, or their combinations, can be used to approximate some important tasks on the original data. To do this, we first need to convert the data exploration task to the appropriate operations. 
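The two formulas referred to as "computed as:" above were likewise lost in extraction, but the text makes clear they are just the dot product and the vector difference. A minimal sketch with toy embeddings (entity names and dimensionality are illustrative only):

```python
# The similarity and direction structures above reduce to plain vector algebra;
# random vectors stand in for trained knowledge graph embeddings.
import numpy as np

rng = np.random.default_rng(1)
E = {name: rng.normal(size=8) for name in ["AuthorA", "AuthorB", "Paper1"]}

def similarity(a, b):
    """Semantic similarity structure: dot product of two embedding vectors."""
    return float(np.dot(E[a], E[b]))

def direction(a, b):
    """Semantic direction structure: difference of two embedding vectors."""
    return E[a] - E[b]

print(similarity("AuthorA", "Paper1"))
print(direction("AuthorA", "AuthorB")[:3])
```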
We then conduct the operations on the embedding vectors and obtain the results. This process is defined as follows.", "Definition 1 Semantic queries on knowledge graph embedding space are defined as the algebraic operations between the knowledge graph embedding vectors to approximate a given data exploration task on the original dataset." ], [ "Given the theoretical results, here we design a general framework for scholarly data exploration by using semantic queries on knowledge graph embedding space. Figure FIGREF19 shows the architecture of the proposed framework. There are three main components, namely data processing, task processing, and query processing.", "Data processing: with two steps, (1) constructing the knowledge graph from scholarly data by using the scholarly graph directly with entities such as authors, papers, venues, and relations such as author-write-paper, paper-cite-paper, paper-in-venue, and (2) learning the knowledge graph embeddings as in BIBREF0.", "Task processing: converting data exploration tasks to algebraic operations on the embedding space by following task-specific conversion templates. Some important tasks and their conversion templates are discussed in Section SECREF5.", "Query processing: executing semantic query on the embedding space and return results. Note that the algebraic operations on embedding vectors are linear and can be performed in parallel. Therefore, the semantic query is efficient.", "Note that the proposed semantic query framework makes no assumption about the specific knowledge graph embedding models and the induced embedding spaces. Any embedding space that contains rich semantic information such as the listed semantic structures can be applied in this framework." ], [ "Here we present and discuss the semantic queries for some traditional and newly proposed data exploration tasks on scholarly data." ], [ "Tasks Given an entity $ e $ in the knowledge graph, find entities that are similar to $ e $. For example, given AuthorA, find authors, papers, and venues that are similar to AuthorA. Note that the search can be restricted to specific entity types. This is a traditional task in scholarly data exploration, whereas the other tasks below are new.", "Semantic query We can solve this task by looking for the entities with highest similarity to $ e $. For example, the first result is:" ], [ "Tasks Given an entity $ e $ and some positive bias entities $ A = \lbrace a_1, \dots , a_k\rbrace $ known as expected results, find entities that are similar to $ e $ following the bias in $ A $. For example, given AuthorA and some successfully collaborating authors, find other similar authors that may also result in good collaborations with AuthorA.", "Semantic query We can solve this task by looking for the entities with highest similarity to both $ e $ and $ A $. For example, denoting the arithmetic mean of embedding vectors in $ A $ as $ \bar{A} $, the first result is:" ], [ "Tasks Given an entity $ e $, positive bias $ A = \lbrace a_1, \dots , a_k\rbrace $, and negative bias $ B = \lbrace b_1, \dots , b_k\rbrace $, find entities that are similar to $ e $ following the biases in $ A $ and $ B $. The essence of this task is tracing along a semantic direction defined by the positive and negative biases.
For example, start with AuthorA, we can trace along the expertise direction to find authors that are similar to AuthorA but with higher or lower expertise.", "Semantic query We can solve this task by looking for the entities with highest similarity to $ e $ and $ A $ but not $ B $. For example, denoting the arithmetic mean of embedding vectors in $ A $ and $ B $ as $ \\bar{A} $ and $ \\bar{B} $, respectively, note that $ \\bar{A} - \\bar{B} $ defines the semantic direction along the positive and negative biases, the first result is:" ], [ "Tasks This task is an extension of the above analogy query task, by tracing along multiple semantic directions defined by multiple pairs of positive and negative biases. This task can be implemented as an interactive data analysis tool. For example, start with AuthorA, we can trace to authors with higher expertise, then continue tracing to new domains to find all authors similar to AuthorA with high expertise in the new domain. For another example, start with Paper1, we can trace to papers with higher quality, then continue tracing to new domain to look for papers similar to Paper1 with high quality in the new domain.", "Semantic query We can solve this task by simply repeating the semantic query for analogy query with each pair of positive and negative bias. Note that we can also combine different operations in different order to support flexible browsing." ], [ "In this paper, we studied the application of knowledge graph embedding in exploratory data analysis. We analyzed the CP$ _h $ model and provided understandings to its semantic structures. We then defined the semantic queries on knowledge graph embedding space to efficiently approximate some operations on heterogeneous data such as scholarly data. We designed a general framework to systematically apply semantic queries to solve scholarly data exploration tasks. Finally, we outlined and discussed the solutions to some traditional and pioneering exploration tasks emerged from the semantic structures of the knowledge graph embedding space.", "This paper is dedicated to the theoretical foundation of a new approach and discussions of emerging tasks, whereas experiments and evaluations are left for the future work. There are several other promising directions for future research. One direction is to explore new tasks or new solutions of traditional tasks using the proposed method. Another direction is to implement the proposed exploration tasks on real-life digital libraries for online evaluation." ], [ "This work was supported by “Cross-ministerial Strategic Innovation Promotion Program (SIP) Second Phase, Big-data and AI-enabled Cyberspace Technologies” by New Energy and Industrial Technology Development Organization (NEDO).", "1.0" ] ] }
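Since the query expressions after "the first result is:" in the task descriptions above were also stripped during extraction, the snippet below reconstructs them under the natural reading: every task is a nearest-neighbour search under the dot product, with positive biases added to and negative biases subtracted from the query vector. It is a sketch with toy embeddings, not the authors' code, and the entity names are placeholders.

```python
# Reconstruction (under stated assumptions) of the semantic queries for the
# exploration tasks above: similar entities, similar entities with bias, analogy query.
import numpy as np

rng = np.random.default_rng(2)
entities = ["AuthorA", "AuthorB", "AuthorC", "Paper1", "Paper2", "VenueX"]
E = {e: rng.normal(size=16) for e in entities}

def top_k(query_vec, exclude=(), k=3):
    """Entities with the highest dot-product similarity to the query vector."""
    scored = [(e, float(np.dot(E[e], query_vec))) for e in E if e not in exclude]
    return sorted(scored, key=lambda x: -x[1])[:k]

def similar(e):
    """Similar entities: nearest neighbours of e."""
    return top_k(E[e], exclude={e})

def similar_with_bias(e, positives):
    """Similar entities with bias: add the mean of the positive bias set A."""
    q = E[e] + np.mean([E[a] for a in positives], axis=0)
    return top_k(q, exclude={e, *positives})

def analogy(e, positives, negatives):
    """Analogy query: trace the direction mean(A) - mean(B) from e."""
    q = (E[e] + np.mean([E[a] for a in positives], axis=0)
              - np.mean([E[b] for b in negatives], axis=0))
    return top_k(q, exclude={e, *positives, *negatives})

print(similar("AuthorA"))
print(analogy("AuthorA", ["AuthorB"], ["AuthorC"]))
```

Analogy browsing then simply chains these calls, reusing the top result of one query as the starting entity of the next.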
{ "question": [ "What new interesting tasks can be solved based on the uncanny semantic structures of the embedding space?", "What are the uncanny semantic structures of the embedding space?", "What is the general framework for data exploration by semantic queries?", "What data exploration is supported by the analysis of these semantic structures?" ], "question_id": [ "b124137e62178a2bd3b5570d73b1652dfefa2457", "c6aa8a02597fea802890945f0b4be8d631e4d5cd", "bfad30f51ce3deea8a178944fa4c6e8acdd83a48", "dd9883f4adf7be072d314d7ed13fe4518c5500e0" ], "nlp_background": [ "five", "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "irony", "irony", "irony", "irony" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ " analogy query", "analogy browsing" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Based on our theoretical results, we design a general framework for data exploration on scholarly data by semantic queries on knowledge graph embedding space. The main component in this framework is the conversion between the data exploration tasks and the semantic queries. We first outline the semantic query solutions to some traditional data exploration tasks, such as similar paper prediction and similar author prediction. We then propose a group of new interesting tasks, such as analogy query and analogy browsing, and discuss how they can be used in modern digital libraries." ], "highlighted_evidence": [ "We then propose a group of new interesting tasks, such as analogy query and analogy browsing, and discuss how they can be used in modern digital libraries." ] } ], "annotation_id": [ "6a89e4fff251eedb32dbc90572428157ebdc3879" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Semantic similarity structure", "Semantic direction structure" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We mainly concern with the two following structures of the embedding space.", "Semantic similarity structure: Semantically similar entities are close to each other in the embedding space, and vice versa. This structure can be identified by a vector similarity measure, such as the dot product between two embedding vectors. The similarity between two embedding vectors is computed as:", "Semantic direction structure: There exist semantic directions in the embedding space, by which only one semantic aspect changes while all other aspects stay the same. It can be identified by a vector difference, such as the subtraction between two embedding vectors. The semantic direction between two embedding vectors is computed as:" ], "highlighted_evidence": [ "We mainly concern with the two following structures of the embedding space.\n\nSemantic similarity structure: Semantically similar entities are close to each other in the embedding space, and vice versa.", "Semantic direction structure: There exist semantic directions in the embedding space, by which only one semantic aspect changes while all other aspects stay the same. It can be identified by a vector difference, such as the subtraction between two embedding vectors." 
] } ], "annotation_id": [ "6e396414abde796eaf765ac9eb38abb1d9e31edd" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "three main components, namely data processing, task processing, and query processing" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Given the theoretical results, here we design a general framework for scholarly data exploration by using semantic queries on knowledge graph embedding space. Figure FIGREF19 shows the architecture of the proposed framework. There are three main components, namely data processing, task processing, and query processing.", "FLOAT SELECTED: Fig. 1. Architecture of the semantic query framework. Eclipse denotes operation, parallelogram denotes resulting data." ], "highlighted_evidence": [ "Figure FIGREF19 shows the architecture of the proposed framework. There are three main components, namely data processing, task processing, and query processing.", "FLOAT SELECTED: Fig. 1. Architecture of the semantic query framework. Eclipse denotes operation, parallelogram denotes resulting data." ] } ], "annotation_id": [ "32c294b6f0296bd64d1b22ab8fcd0beba10a4c7c" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Task processing: converting data exploration tasks to algebraic operations on the embedding space", "Query processing: executing semantic query on the embedding space and return results" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Task processing: converting data exploration tasks to algebraic operations on the embedding space by following task-specific conversion templates. Some important tasks and their conversion templates are discussed in Section SECREF5.", "Query processing: executing semantic query on the embedding space and return results. Note that the algebraic operations on embedding vectors are linear and can be performed in parallel. Therefore, the semantic query is efficient." ], "highlighted_evidence": [ "Task processing: converting data exploration tasks to algebraic operations on the embedding space by following task-specific conversion templates. Some important tasks and their conversion templates are discussed in Section SECREF5.\n\nQuery processing: executing semantic query on the embedding space and return results. Note that the algebraic operations on embedding vectors are linear and can be performed in parallel. Therefore, the semantic query is efficient." ] } ], "annotation_id": [ "18b78a60710deb95f819804fdfe906e4c9c18df3" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Fig. 1. Architecture of the semantic query framework. Eclipse denotes operation, parallelogram denotes resulting data." ], "file": [ "6-Figure1-1.png" ] }
1910.06701
NumNet: Machine Reading Comprehension with Numerical Reasoning
Numerical reasoning, such as addition, subtraction, sorting and counting, is a critical skill in human reading comprehension, which has not been well considered in existing machine reading comprehension (MRC) systems. To address this issue, we propose a numerical MRC model named NumNet, which utilizes a numerically-aware graph neural network to consider comparison information and perform numerical reasoning over numbers in the question and passage. Our system achieves an EM-score of 64.56% on the DROP dataset, outperforming all existing machine reading comprehension models by considering the numerical relations among numbers.
{ "section_name": [ "Introduction", "Related Work ::: Machine Reading Comprehension", "Related Work ::: Arithmetic Word Problem Solving", "Methodology", "Methodology ::: Framework", "Methodology ::: Framework ::: Encoding Module", "Methodology ::: Framework ::: Reasoning Module", "Methodology ::: Framework ::: Prediction Module", "Methodology ::: Framework ::: Comparison with NAQANet", "Methodology ::: Numerically-aware Graph Construction", "Methodology ::: Numerical Reasoning", "Methodology ::: Numerical Reasoning ::: Initialization", "Methodology ::: Numerical Reasoning ::: One-step Reasoning", "Methodology ::: Numerical Reasoning ::: Multi-step Reasoning", "Experiments ::: Dataset and Evaluation Metrics", "Experiments ::: Baselines", "Experiments ::: Experimental Settings", "Experiments ::: Overall Results", "Experiments ::: Effect of GNN Structure", "Experiments ::: Effect of GNN Layer Number", "Experiments ::: Case Study", "Experiments ::: Error Analysis", "Experiments ::: Discussion", "Conclusion and Future Work", "Acknowledgments", "Appendix: Baseline Enhancements" ], "paragraphs": [ [ "Machine reading comprehension (MRC) aims to infer the answer to a question given the document. In recent years, researchers have proposed lots of MRC models BIBREF0, BIBREF1, BIBREF2, BIBREF3 and these models have achieved remarkable results in various public benchmarks such as SQuAD BIBREF4 and RACE BIBREF5. The success of these models is due to two reasons: (1) Multi-layer architectures which allow these models to read the document and the question iteratively for reasoning; (2) Attention mechanisms which would enable these models to focus on the part related to the question in the document.", "However, most of existing MRC models are still weak in numerical reasoning such as addition, subtraction, sorting and counting BIBREF6, which are naturally required when reading financial news, scientific articles, etc. BIBREF6 proposed a numerically-aware QANet (NAQANet) model, which divides the answer generation for numerical MRC into three types: (1) extracting spans; (2) counting; (3) addition or subtraction over numbers. NAQANet makes a pioneering attempt to answer numerical questions but still does not explicitly consider numerical reasoning.", "To tackle this problem, we introduce a novel model NumNet that integrates numerical reasoning into existing MRC models. A key problem to answer questions requiring numerical reasoning is how to perform numerical comparison in MRC systems, which is crucial for two common types of questions:", "(1) Numerical Comparison: The answers of the questions can be directly obtained via performing numerical comparison, such as sorting and comparison, in the documents. For example, in Table TABREF1, for the first question, if the MRC system knows the fact that “$49>47>36>31>22$”, it could easily extract that the second longest field goal is 47-yard.", "(2) Numerical Condition: The answers of the questions cannot be directly obtained through simple numerical comparison in the documents, but often require numerical comparison for understanding the text. For example, for the second question in Table TABREF1, an MRC system needs to know which age group made up more than 7% of the population to count the group number.", "Hence, our NumNet model considers numerical comparing information among numbers when answering numerical questions. 
As shown in Figure FIGREF3, NumNet first encodes both the question and passages through an encoding module consisting of convolution layers, self-attention layers and feed-forward layers as well as a passage-question attention layer. After that, we feed the question and passage representations into a numerically-aware graph neural network (NumGNN) to further integrate the comparison information among numbers into their representations. Finally, we utilize the numerically-aware representation of passages to infer the answer to the question.", "The experimental results on a public numerical MRC dataset DROP BIBREF6 show that our NumNet model achieves significant and consistent improvement as compared to all baseline methods by explicitly performing numerical reasoning over numbers in the question and passage. In particular, we show that our model could effectively deal with questions requiring sorting with multi-layer NumGNN. The source code of our paper is available at https://github.com/ranqiu92/NumNet." ], [ "Machine reading comprehension (MRC) has become an important research area in NLP. In recent years, researchers have published a large number of annotated MRC datasets such as CNN/Daily Mail BIBREF7, SQuAD BIBREF4, RACE BIBREF5, TriviaQA BIBREF8 and so on. With the blooming of available large-scale MRC datasets, a great number of neural network-based MRC models have been proposed to answer questions for a given document including Attentive Reader BIBREF9, BiDAF BIBREF3, Interactive AoA Reader BIBREF2, Gated Attention Reader BIBREF1, R-Net BIBREF10, DCN BIBREF11, QANet BIBREF12, and achieve promising results in most existing public MRC datasets.", "Despite the success of neural network-based MRC models, researchers began to analyze the data and rethink to what extent we have solved the problem of MRC. Some works BIBREF0, BIBREF13, BIBREF14 classify the reasoning skills required to answer the questions into the following types: (1) Exact matching/Paraphrasing; (2) Summary; (3) Logic reasoning; (4) Utilizing external knowledge; (5) Numerical reasoning. They found that most existing MRC models are focusing on dealing with the first three types of questions. However, all these models suffer from problems when answering the questions requiring numerical reasoning. To the best of our knowledge, our work is the first one that explicitly incorporates numerical reasoning into the MRC system. The most relevant work to ours is NAQANet BIBREF6, which adapts the output layer of QANet BIBREF12 to support predicting answers based on counting and addition/subtraction over numbers. However, it does not consider numerical reasoning explicitly during encoding or inference." ], [ "Recently, understanding and solving arithmetic word problems (AWP) has attracted the growing interest of NLP researchers. BIBREF15 proposed a simple method to address arithmetic word problems, but mostly focusing on subsets of problems which only require addition and subtraction. After that, BIBREF16 proposed an algorithmic approach which could handle arithmetic word problems with multiple steps and operations. BIBREF17 further formalized the AWP problem as that of generating and scoring equation trees via integer linear programming. BIBREF18 and BIBREF19 proposed sequence to sequence solvers for the AWP problems, which are capable of generating unseen expressions and do not rely on sophisticated manual features. 
BIBREF20 leveraged deep Q-network to solve the AWP problems, achieving a good balance between effectiveness and efficiency. However, all the existing AWP systems are only trained and validated on small benchmark datasets. BIBREF21 found that the performance of these AWP systems sharply degrades on larger datasets. Moreover, from the perspective of NLP, MRC problems are more challenging than AWP since the passages in MRC are mostly real-world texts which require more complex skills to be understood. Above all, it is nontrivial to adapt most existing AWP models to the MRC scenario. Therefore, we focus on enhancing MRC models with numerical reasoning abilities in this work." ], [ "In this section, we will introduce the framework of our model NumNet and provide the details of the proposed numerically-aware graph neural network (NumGNN) for numerical reasoning." ], [ "An overview of our model NumNet is shown in Figure FIGREF3. We compose our model with encoding module, reasoning module and prediction module. Our major contribution is the reasoning module, which leverages a NumGNN between the encoding module and prediction module to explicitly consider the numerical comparison information and perform numerical reasoning. As NAQANet has been shown effective for handling numerical MRC problem BIBREF6, we leverage it as our base model and mainly focus on the design and integration of the NumGNN in this work." ], [ "Without loss of generality, we use the encoding components of QANet and NAQANet to encode the question and passage into vector-space representations. Formally, the question $Q$ and passage $P$ are first encoded as:", "and then the passage-aware question representation and the question-aware passage representation are computed as:", "where $\\texttt {QANet-Emb-Enc}(\\cdot )$ and $\\texttt {QANet-Att}(\\cdot )$ denote the “stacked embedding encoder layer” and “context-query attention layer” of QANet respectively. The former consists of convolution, self-attention and feed-forward layers. The latter is a passage-question attention layer. $\\bar{\\mathbf {Q}}$ and $\\bar{\\mathbf {P}}$ are used by the following components." ], [ "First we build a heterogeneous directed graph $\\mathcal {G}=(\\mathbf {V};\\mathbf {E})$, whose nodes ($\\mathbf {V}$) are corresponding to the numbers in the question and passage, and edges ($\\mathbf {E}$) are used to encode numerical relationships among the numbers. The details will be explained in Sec. SECREF19.", "Then we perform reasoning on the graph based on a graph neural network, which can be formally denoted as:", "where $\\mathbf {W}^M$ is a shared weight matrix, $\\mathbf {U}$ is the representations of the nodes corresponding to the numbers, $\\texttt {QANet-Mod-Enc}(\\cdot )$ is the “model encoder layer” defined in QANet which is similar to $\\texttt {QANet-Emb-Enc}(\\cdot )$, and the definition of $\\texttt {Reasoning}(\\cdot )$ will be given in Sec. SECREF23.", "Finally, as $\\mathbf {U}$ only contains the representations of numbers, to tackle span-style answers containing non-numerical words, we concatenate $\\mathbf {U}$ with $\\mathbf {M}^P$ to produce numerically-aware passage representation $\\mathbf {M}_0$. Formally,", "where $[\\cdot ;\\cdot ]$ denotes matrix concatenation, $\\mathbf {W}[k]$ denotes the $k$-th column of a matrix $\\mathbf {W}$, $\\mathbf {0}$ is a zero vector, $I(i)$ denotes the node index corresponding to the passage word $w_i^p$ which is a number, $\\mathbf {W}_0$ is a weight matrix, and $\\mathbf {b}_0$ is a bias vector." 
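The numerically-aware passage representation $\mathbf {M}_0$ described above (number positions take the corresponding column of $\mathbf {U}$, all other positions a zero vector, followed by the $\mathbf {W}_0$/$\mathbf {b}_0$ projection) can be sketched as follows; the tensor shapes, the explicit position loop, and the absence of a nonlinearity after the projection are assumptions made for illustration.

```python
# Sketch with assumed shapes: align number-node vectors U back to passage positions
# and project the concatenation, as in the M0 construction above. Whether a
# nonlinearity follows the projection is not shown in this record.
import torch

def numeric_passage_repr(M_p, U, number_positions, W0):
    """
    M_p: (L, d) passage token representations; U: (n, d) number-node representations;
    number_positions: dict {passage position i -> node index I(i)};
    W0: torch.nn.Linear(2 * d, d), whose bias plays the role of b0.
    """
    aligned = torch.zeros_like(M_p)            # zero vector for non-number tokens
    for i, node_idx in number_positions.items():
        aligned[i] = U[node_idx]               # U[I(i)] for tokens that are numbers
    return W0(torch.cat([aligned, M_p], dim=-1))
```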
], [ "Following NAQANet BIBREF6, we divide the answers into four types and use a unique output layer to calculate the conditional answer probability $\\Pr (\\text{answer}|\\text{type})$ for each type :", "Passage span: The answer is a span of the passage, and the answer probability is defined as the product of the probabilities of the start and end positions.", "Question span: The answer is a span of the question, and the answer probability is also defined as the product of the probabilities of the start and end positions.", "Count: The answer is obtained by counting, and it is treated as a multi-class classification problem over ten numbers (0-9), which covers most of the Count type answers in the DROP dataset.", "Arithmetic expression: The answer is the result of an arithmetic expression. The expression is obtained in three steps: (1) extract all numbers from the passage; (2) assign a sign (plus, minus or zero) for each number; (3) sum the signed numbers .", "Meanwhile, an extra output layer is also used to predict the probability $\\Pr (\\text{type})$ of each answer type. At training time, the final answer probability is defined as the joint probability over all feasible answer types, i.e., $\\sum _{\\text{type}}\\Pr (\\text{type})\\Pr (\\text{answer}|\\text{type})$. Here, the answer type annotation is not required and the probability $\\Pr (\\text{type})$ is learnt by the model. At test time, the model first selects the most probable answer type greedily and then predicts the best answer accordingly.", "Without loss of generality, we leverage the definition of the five output layers in BIBREF6, with $\\mathbf {M_0}$ and $\\mathbf {Q}$ as inputs. Please refer to the paper for more details due to space limitation." ], [ "The major difference between our model and NAQANet is that NAQANet does not have the reasoning module, i.e., $\\mathbf {M}_0$ is simply set as $\\mathbf {M}^P$. As a result, numbers are treated as common words in NAQANet except in the prediction module, thus NAQANet may struggle to learn the numerical relationships between numbers, and potentially cannot well generalize to unseen numbers. However, as discussed in Sec. SECREF1, the numerical comparison is essential for answering questions requiring numerical reasoning. In our model, the numerical relationships are explicitly represented with the topology of the graph and a NumGNN is used to perform numerical reasoning. Therefore, our NumNet model can handle questions requiring numerical reasoning more effectively, which is verified by the experiments in Sec. SECREF4." ], [ "We regard all numbers from the question and passage as nodes in the graph for reasoning . The set of nodes corresponding to the numbers occurring in question and passage are denoted as $\\mathbf {V}^Q$ and $\\mathbf {V}^P$ respectively. 
And we denote all the nodes as $\\mathbf {V}=\\mathbf {V}^Q\\cup \\mathbf {V}^P$, and the number corresponding to a node $v\\in \\mathbf {V}$ as $n(v)$.", "Two sets of edges are considered in this work:", "Greater Relation Edge ($\\overrightarrow{\\mathbf {E}}$): For two nodes $v_i, v_j\\in \\mathbf {V}$, a directed edge $\\overrightarrow{e}_{ij}=(v_i, v_j)$ pointing from $v_i$ to $v_j$ will be added to the graph if $n(v_i)>n(v_j)$, which is denoted as solid arrow in Figure FIGREF3.", "Lower or Equal Relation Edge ($\\overleftarrow{\\mathbf {E}}$): For two nodes $v_i, v_j\\in \\mathbf {V}$, a directed edge $\\overleftarrow{e}_{ij}=(v_j, v_i)$ will be added to the graph if $n(v_i)\\le n(v_j)$, which is denoted as dashed arrow in Figure FIGREF3.", "Theoretically, $\\overrightarrow{\\mathbf {E}}$ and $\\overleftarrow{\\mathbf {E}}$ are complement to each other . However, as a number may occur several times and represent different facts in a document, we add a distinct node for each occurrence in the graph to prevent potential ambiguity. Therefore, it is more reasonable to use both $\\overrightarrow{\\mathbf {E}}$ and $\\overleftarrow{\\mathbf {E}}$ in order to encode the equal information among nodes." ], [ "As we built the graph $\\mathcal {G}=(\\mathbf {V},\\mathbf {E})$, we leverage NumGNN to perform reasoning, which is corresponding to the function $\\texttt {Reasoning}(\\cdot )$ in Eq. DISPLAY_FORM10. The reasoning process is as follows:" ], [ "For each node $v^P_i\\in \\mathbf {V}^P$, its representation is initialized as the corresponding column vector of $\\mathbf {M}^P$. Formally, the initial representation is $\\mathbf {v}_i^P=\\mathbf {M}^P[I^P(v_i^P)]$, where $I^P(v^P_i)$ denotes the word index corresponding to $v_i^P$. Similarly, the initial representation $\\mathbf {v}_j^Q$ for a node $v^Q_j\\in \\mathbf {V}^Q$ is set as the corresponding column vector of $\\mathbf {M}^Q$. We denote all the initial node representations as $\\mathbf {v}^0=\\lbrace \\mathbf {v}_i^P\\rbrace \\cup \\lbrace \\mathbf {v}_j^Q\\rbrace $." ], [ "Given the graph $\\mathcal {G}$ and the node representations $\\mathbf {v}$, we use a GNN to perform reasoning in three steps:", "(1) Node Relatedness Measure: As only a few numbers are relevant for answering a question generally, we compute a weight for each node to by-pass irrelevant numbers in reasoning. Formally, the weight for node $v_i$ is computed as:", "where $\\mathbf {W}_v$ is a weight matrix, and $b_v$ is a bias.", "(2) Message Propagation: As the role a number plays in reasoning is not only decided by itself, but also related to the context, we propagate messages from each node to its neighbors to help to perform reasoning. As numbers in question and passage may play different roles in reasoning and edges corresponding to different numerical relations should be distinguished, we use relation-specific transform matrices in the message propagation. 
Formally, we define the following propagation function for calculating the forward-pass update of a node:", "where $\\widetilde{\\mathbf {v}}^{\\prime }_i$ is the message representation of node $v_i$, $\\texttt {r}_{ji}$ is the relation assigned to edge $e_{ji}$, $\\mathbf {W}^{\\texttt {r}_{ji}}$ are relation-specific transform matrices, and $\\mathcal {N}_i=\\lbrace j|(v_j,v_i)\\in \\mathbf {E}\\rbrace $ is the neighbors of node $v_i$.", "For each edge $e_{ji}$, $\\texttt {r}_{ji}$ is determined by the following two attributes:", "Number relation: $>$ or $\\le $;", "Node types: the two nodes of the edge corresponding to two numbers that: (1) both from the question ($\\text{q-q}$); (2) both from the passage ($\\text{p-p}$); (3) from the question and the passage respectively ($\\text{q-p}$); (4) from the passage and the question respectively ($\\text{p-q}$).", "Formally, $\\texttt {r}_{ij}\\in \\lbrace >,\\le \\rbrace \\times \\lbrace \\text{q-q},\\text{p-p},\\text{q-p},\\text{p-q}\\rbrace $.", "(3) Node Representation Update: As the message representation obtained in the previous step only contains information from the neighbors, it needs to be fused with the node representation to combine with the information carried by the node itself, which is performed as:", "where $\\mathbf {W}_f$ is a weight matrix, and $\\mathbf {b}_f$ is a bias vector.", "We denote the entire one-step reasoning process (Eq. DISPLAY_FORM26-DISPLAY_FORM30) as a single function", "As the graph $\\mathcal {G}$ constructed in Sec. SECREF19 has encoded the numerical relations via its topology, the reasoning process is numerically-aware." ], [ "By single-step reasoning, we can only infer relations between adjacent nodes. However, relations between multiple nodes may be required for certain tasks, e.g., sorting. Therefore, it is essential to perform multi-step reasoning, which can be done as follows:", "where $t\\ge 1$. Suppose we perform $K$ steps of reasoning, $\\mathbf {v}^K$ is used as $\\mathbf {U}$ in Eq. DISPLAY_FORM10." ], [ "We evaluate our proposed model on DROP dataset BIBREF6, which is a public numerical MRC dataset. The DROP dataset is constructed by crowd-sourcing, which asks the annotators to generate question-answer pairs according to the given Wikipedia passages, which require numerical reasoning such as addition, counting, or sorting over numbers in the passages. There are $77,409$ training samples, $9,536$ development samples and $9,622$ testing samples in the dataset.", "In this paper, we adopt two metrics including Exact Match (EM) and numerically-focused F1 scores to evaluate our model following BIBREF6. The numerically-focused F1 is set to be 0 when the predicted answer is mismatched for those questions with the numeric golden answer." 
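The numerically-aware graph and the one-step reasoning described in the sections above can be sketched together. The record elides the exact propagation and fusion equations, so the relatedness-gated, relation-specific message passing with weighted-mean aggregation and ReLU fusion below is a plausible stand-in rather than the authors' exact formulas; the relation naming and the orientation of the $\le$ edges are likewise simplified.

```python
# Sketch (assumptions labeled): the numerically-aware graph and one NumGNN reasoning
# step. Relation names ("gt"/"le" x node-type pairs), the weighted-mean aggregation,
# and the ReLU fusion are illustrative stand-ins for the elided equations.
import torch
import torch.nn as nn

def build_number_graph(question_numbers, passage_numbers):
    """question_numbers / passage_numbers: list of values, one per number occurrence."""
    nodes = [("q", v) for v in question_numbers] + [("p", v) for v in passage_numbers]
    edges = []
    for i, (side_i, v_i) in enumerate(nodes):
        for j, (side_j, v_j) in enumerate(nodes):
            if i == j:
                continue
            rel = ("gt" if v_i > v_j else "le") + "_" + side_i + side_j  # e.g. "gt_qp"
            edges.append((i, j, rel))
    return nodes, edges

RELATIONS = [f"{num}_{a}{b}" for num in ("gt", "le") for a in "qp" for b in "qp"]

class NumGNNStep(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.alpha = nn.Linear(dim, 1)                                   # relatedness gate
        self.rel = nn.ModuleDict({r: nn.Linear(dim, dim, bias=False) for r in RELATIONS})
        self.fuse = nn.Linear(2 * dim, dim)                              # W_f, b_f

    def forward(self, v, edges):
        # v: (n, dim) number-node representations initialized from M^Q / M^P columns
        alpha = torch.sigmoid(self.alpha(v)).squeeze(-1)                 # (n,)
        msg = torch.zeros_like(v)
        deg = torch.zeros(v.size(0), device=v.device)
        for src, dst, r in edges:
            msg[dst] = msg[dst] + alpha[src] * self.rel[r](v[src])       # relation-specific W^r
            deg[dst] += 1
        msg = msg / deg.clamp(min=1).unsqueeze(-1)
        return torch.relu(self.fuse(torch.cat([v, msg], dim=-1)))        # fuse with node repr
```

Repeated application of this step realizes the multi-step reasoning $\mathbf {v}^t = \texttt {step}(\mathbf {v}^{t-1})$ described above; the record sets the number of steps $K$ to 3 by default.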
], [ "For comparison, we select several public models as baselines including semantic parsing models:", "[topsep=2pt, itemsep=0pt]", "Syn Dep BIBREF6, the neural semantic parsing model (KDG) BIBREF22 with Stanford dependencies based sentence representations;", "OpenIE BIBREF6, KDG with open information extraction based sentence representations;", "SRL BIBREF6, KDG with semantic role labeling based sentence representations;", "and traditional MRC models:", "[topsep=2pt, itemsep=0pt]", "BiDAF BIBREF3, an MRC model which utilizes a bi-directional attention flow network to encode the question and passage;", "QANet BIBREF12, which utilizes convolutions and self-attentions as the building blocks of encoders to represent the question and passage;", "BERT BIBREF23, a pre-trained bidirectional Transformer-based language model which achieves state-of-the-art performance on lots of public MRC datasets recently;", "and numerical MRC models:", "[topsep=2pt, itemsep=0pt]", "NAQANet BIBREF6, a numerical version of QANet model.", "NAQANet+, an enhanced version of NAQANet implemented by ourselves, which further considers real number (e.g. “2.5”), richer arithmetic expression, data augmentation, etc. The enhancements are also used in our NumNet model and the details are given in the Appendix." ], [ "In this paper, we tune our model on the development set and use a grid search to determine the optimal parameters. The dimensions of all the representations (e.g., $\\mathbf {Q}$, $\\mathbf {P}$, $\\mathbf {M}^Q$, $\\mathbf {M}^P$, $\\mathbf {U}$, $\\mathbf {M}_0^{\\prime }$, $\\mathbf {M}_0$ and $\\mathbf {v}$) are set to 128. If not specified, the reasoning step $K$ is set to 3. Since other parameters have little effect on the results, we simply follow the settings used in BIBREF6.", "We use the Adam optimizer BIBREF24 with $\\beta _1=0.8$, $\\beta _2=0.999$, $\\epsilon =10^{-7}$ to minimize the objective function. The learning rate is $5 \\times 10^{-4}$, L2 weight decay $\\lambda $ is $10^{-7}$ and the maximum norm value of gradient clipping is 5. We also apply exponential moving average with a decay rate $0.9999$ on all trainable variables. The model is trained with a batch size of 16 for 40 epochs. Passages and questions are trimmed to 400 and 50 tokens respectively during training, and trimmed to $1,000$ and 100 tokens respectively during prediction ." ], [ "The performance of our NumNet model and other baselines on DROP dataset are shown in Table TABREF47. From the results, we can observe that:", "(1) Our NumNet model achieves better results on both the development and testing sets on DROP dataset as compared to semantic parsing-based models, traditional MRC models and even numerical MRC models NAQANet and NAQANet+. The reason is that our NumNet model can make full use of the numerical comparison information over numbers in both question and passage via the proposed NumGNN module.", "(2) Our implemented NAQANet+ has a much better performance compared to the original version of NAQANet. It verifies the effectiveness of our proposed enhancements for baseline." ], [ "In this part, we investigate the effect of different GNN structures on the DROP development set. The results are shown in Table TABREF51. The “Comparison”, “Number” and “ALL” are corresponding to the comparing question subset , the number-type answer subset, and the entire development set, respectively . If we replace the proposed numerically-aware graph (Sec. 
SECREF19) with a fully connected graph, our model falls back to a traditional GNN, denoted as “GNN” in the table. Moreover, “- question num” denotes that the numbers in the question are not included in the graph, and “- $\le $ type edge” and “- $>$ type edge” denote that edges of $\le $ and $>$ types are not adopted, respectively.", "As shown in Table TABREF51, our proposed NumGNN leads to statistically significant improvements compared to traditional GNN on both EM and F1 scores, especially for comparing questions. It indicates that considering the comparing information over numbers could effectively help the numerical reasoning for comparing questions. Moreover, we find that the numbers in the question are often related to the numerical reasoning for answering the question, thus considering numbers in questions in NumGNN achieves better performance. And the results also justify that encoding “greater relation” and “lower or equal relation” simultaneously in the graph also benefits our model." ], [ "The number of NumGNN layers represents the numerical reasoning ability of our models. A $K$-layer version has the ability for $K$-step numerical inference. In this part, we additionally perform experiments to understand the effect of the number of NumGNN layers. From Figure FIGREF52, we could observe that:", "(1) The 2-layer version of NumNet achieves the best performance for the comparing questions. From careful analysis, we find that most comparing questions only require at most 2-step reasoning (e.g., “Who was the second oldest player in the MLB, Clemens or Franco?”), and therefore the 3-layer version of NumNet is more complex but brings no gains for these questions.", "(2) The performance of our NumNet model on the overall development set is improved consistently as the number of GNN layers increases. The reason is that some of the numerical questions require reasoning over many numbers in the passage, which could benefit from the multi-step reasoning ability of multi-layer GNN. However, further investigation shows that the performance gain is not stable when $K\ge 4$. We believe it is due to the intrinsic over-smoothing problem of GNNs BIBREF25." ], [ "We further give some examples to show why incorporating comparing information over numbers in the passage could help numerical reasoning in MRC in Table TABREF53. For the first case, we observe that NAQANet+ gives a wrong prediction, and we find that NAQANet+ will give the same prediction for the question “Which age group is smaller: under the age of 18 or 18 and 24?”. The reason is that NAQANet+ cannot distinguish which one is larger for $10.1\%$ and $56.2\%$. For the second case, NAQANet+ cannot recognize that the second longest field goal is 22 yards and also gives a wrong prediction. For these two cases, our NumNet model could give the correct answer through numerical reasoning, which indicates the effectiveness of our NumNet model." ], [ "To investigate how well our NumNet model handles sorting/comparison questions and better understand the remaining challenges, we perform an error analysis on a random sample of NumNet predictions.
We find that:", "(1) Our NumNet model can answer about 76% of sorting/comparison questions correctly, which indicates that our NumNet model has achieved numerical reasoning ability to some extend.", "(2) Among the incorrectly answered sorting/comparison questions, the most ones (26%) are those whose golden answers are multiple nonadjacent spans (row 1 in Table TABREF54), and the second most ones (19%) are those involving comparison with an intermediate number that does not literally occur in the document/question but has to be derived from counting or arithmetic operation (row 1 in Table TABREF54)." ], [ "By combining the numerically-aware graph and the NumGNN together, our NumNet model achieves the numerical reasoning ability. On one hand, the numerically-aware graph encodes numbers as nodes and relationships between them as the edges, which is required for numerical comparison. On the other hand, through one-step reasoning, our NumGNN could perform comparison and identify the numerical condition. After multiple-step reasoning, our NumGNN could further perform sorting.", "However, since the numerically-aware graph is pre-defined, our NumNet is not applicable to the case where an intermediate number has to be derived (e.g., from arithmetic operation) in the reasoning process, which is a major limitation of our model." ], [ "Numerical reasoning skills such as addition, subtraction, sorting and counting are naturally required by machine reading comprehension (MRC) problems in practice. Nevertheless, these skills are not taken into account explicitly for most existing MRC models. In this work, we propose a numerical MRC model named NumNet which performs explicit numerical reasoning while reading the passages. To be specific, NumNet encodes the numerical relations among numbers in the question and passage into a graph as its topology, and leverages a numerically-aware graph neural network to perform numerical reasoning on the graph. Our NumNet model outperforms strong baselines with a large margin on the DROP dataset. In the future, we will explore the following directions: (1)As we use a pre-defined reasoning graph in our model, it is incapable of handling reasoning process which involves intermediate numbers that not presented in the graph. How to incorporate dynamic graph into our model is an interesting problem. (2) Compared with methods proposed for arithmetic word problems (AWPs), our model has better natural language understanding ability. However, the methods for AWPs can handle much richer arithmetic expressions. Therefore, how to combine both of their abilities to develop a more powerful numerical MRC model is an interesting future direction. (3) Symbolic reasoning plays a crucial role in human reading comprehension. Our work integrates numerical reasoning, which is a special case of symbolic reasoning, into traditional MRC systems. How to incorporate more sophisticated symbolic reasoning abilities into MRC systems is also a valuable future direction." ], [ "We would like to thank all anonymous reviewers for their insightful comments, and thank Yan Zhang for her help on improving the presentation of Figure FIGREF3." 
], [ "The major enhancements leveraged by our implemented NAQANet+ model include:", "(1) “real number”: Unlike NAQANet only considers integer numbers, we also consider real numbers.", "(2) “richer arithmetic expression”: We conceptually append an extra number “100” to the passage to support arithmetic expressions like “100-25”, which is required for answering questions such as “How many percent were not American?”.", "(3) “passage-preferred”: If an answer is both a span of the question and the passage, we only propagate gradients through the output layer for processing “Passage span” type answers.", "(4) “data augmentation”: The original questions in the DROP dataset are generated by crowdsourced workers. For the comparing questions which contain answer candidates, we observe that the workers frequently only change the incorrect answer candidate to generate a new question. For example, “How many from the census is bigger: Germans or English?” whose golden answer is “Germans” is modified to “How many from the census is bigger: Germans or Irish?”. This may introduce undesired inductive bias to the model. Therefore, we propose to augment the training dataset with new questions automatically generated by swapping the candidate answers, e.g., “How many from the census is bigger: English or Germans?” is added to the training dataset.", "We further conduct ablation studies on the enhancements. And the validation scores on the development set are shown in Table TABREF59. As can be seen from Table TABREF59:", "(1) The uses of real number and richer arithmetic expression are crucial for answering numerical questions: both EM and F1 drop drastically by up to $15-21$ points if they are removed.", "(2) The passage-preferred strategy and data augmentation are also necessary components that contribute significant improvements for those comparing questions." ] ] }
{ "question": [ "what are the existing models they compared with?" ], "question_id": [ "81669c550d32d756f516dab5d2b76ff5f21c0f36" ], "nlp_background": [ "" ], "topic_background": [ "" ], "paper_read": [ "" ], "search_query": [ "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Syn Dep", "OpenIE", "SRL", "BiDAF", "QANet", "BERT", "NAQANet", "NAQANet+" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Experiments ::: Baselines", "For comparison, we select several public models as baselines including semantic parsing models:", "BiDAF BIBREF3, an MRC model which utilizes a bi-directional attention flow network to encode the question and passage;", "QANet BIBREF12, which utilizes convolutions and self-attentions as the building blocks of encoders to represent the question and passage;", "BERT BIBREF23, a pre-trained bidirectional Transformer-based language model which achieves state-of-the-art performance on lots of public MRC datasets recently;", "and numerical MRC models:", "NAQANet BIBREF6, a numerical version of QANet model.", "NAQANet+, an enhanced version of NAQANet implemented by ourselves, which further considers real number (e.g. “2.5”), richer arithmetic expression, data augmentation, etc. The enhancements are also used in our NumNet model and the details are given in the Appendix.", "Syn Dep BIBREF6, the neural semantic parsing model (KDG) BIBREF22 with Stanford dependencies based sentence representations;", "OpenIE BIBREF6, KDG with open information extraction based sentence representations;", "SRL BIBREF6, KDG with semantic role labeling based sentence representations;", "and traditional MRC models:" ], "highlighted_evidence": [ "Experiments ::: Baselines\nFor comparison, we select several public models as baselines including semantic parsing models:", "BiDAF BIBREF3, an MRC model which utilizes a bi-directional attention flow network to encode the question and passage;\n\nQANet BIBREF12, which utilizes convolutions and self-attentions as the building blocks of encoders to represent the question and passage;\n\nBERT BIBREF23, a pre-trained bidirectional Transformer-based language model which achieves state-of-the-art performance on lots of public MRC datasets recently;\n\nand numerical MRC models:", "NAQANet BIBREF6, a numerical version of QANet model.\n\nNAQANet+, an enhanced version of NAQANet implemented by ourselves, which further considers real number (e.g. “2.5”), richer arithmetic expression, data augmentation, etc.", "Syn Dep BIBREF6, the neural semantic parsing model (KDG) BIBREF22 with Stanford dependencies based sentence representations;\n\nOpenIE BIBREF6, KDG with open information extraction based sentence representations;\n\nSRL BIBREF6, KDG with semantic role labeling based sentence representations;\n\nand traditional MRC models:" ] } ], "annotation_id": [ "18d1d14f23928f54049c4ee69aeb964fb56ebf7e" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Table 1: Example questions from the DROP dataset which require numerical comparison. We highlight the relevant parts in the passage to infer the answer.", "Figure 1: The framework of our NumNet model. Our model consists of an encoding module, a reasoning module and a prediction module. The numerical relations between numbers are encoded with the topology of the graph. For example, the edge pointing from “6” to “5” denotes “6” is greater than “5”. And the reasoning module leverages a numerically-aware graph neural network to perform numerical reasoning on the graph. As numerical comparison is modeled explicitly in our model, it is more effective for answering questions requiring numerical reasoning such as addition, counting, or sorting over numbers.", "Table 2: Overall results on the development and test set. The evaluation metrics are calculated as the maximum over a golden answer set. All the results except “NAQANet+” and “NumNet” are obtained from (Dua et al., 2019).", "Table 3: Performance with different GNN structure. “Comparison”, “Number” and “ALL” denote the comparing question subset, the number-type answer subset, and the entire development set, respectively.", "Figure 2: Effect of GNN layer numbers (# L).", "Table 4: Cases from the DROP dataset. We demonstrate the predictions of NAQANet+ and our NumNet model. Note that the two models only output the arithmetic expressions but we also provide their results for clarity.", "Table 5: Typical error examples. Row 1: the answer is multiple nonadjacent spans; Row 2: Intermediate numbers are involved in reasoning.", "Table 6: Baseline enhancements ablation." ], "file": [ "2-Table1-1.png", "3-Figure1-1.png", "7-Table2-1.png", "7-Table3-1.png", "7-Figure2-1.png", "8-Table4-1.png", "8-Table5-1.png", "10-Table6-1.png" ] }
1904.11942
Contextualized Word Embeddings Enhanced Event Temporal Relation Extraction for Story Understanding
Learning causal and temporal relationships between events is an important step towards deeper story and commonsense understanding. Though there are abundant datasets annotated with event relations for story comprehension, many have no empirical results associated with them. In this work, we establish strong baselines for event temporal relation extraction on two under-explored story narrative datasets: Richer Event Description (RED) and Causal and Temporal Relation Scheme (CaTeRS). To the best of our knowledge, these are the first results reported on these two datasets. We demonstrate that neural network-based models can outperform some strong traditional linguistic feature-based models. We also conduct comparative studies to show the contribution of adopting contextualized word embeddings (BERT) for event temporal relation extraction from stories. Detailed analyses are offered to better understand the results.
{ "section_name": [ "Introduction", "Models", "Data", "Implementation Details", "Result and Analysis", "Temporal Relation Data", "Feature-based Models", "Neural Network Model", "Conclusion", "Acknowledgement" ], "paragraphs": [ [ "Event temporal relation understanding is a major component of story/narrative comprehension. It is an important natural language understanding (NLU) task with broad applications to downstream tasks such as story understanding BIBREF0 , BIBREF1 , BIBREF2 , question answering BIBREF3 , BIBREF4 , and text summarization BIBREF5 , BIBREF6 .", "The goal of event temporal relation extraction is to build a directed graph where nodes correspond to events, and edges reflect temporal relations between the events. Figure FIGREF1 illustrates an example of such a graph for the text shown above. Different types of edges specify different temporal relations: the event assassination is before slaughtered, slaughtered is included in rampage, and the relation between rampage and war is vague.", "Modeling event temporal relations is crucial for story/narrative understanding and storytelling, because a story is typically composed of a sequence of events BIBREF7 . Several story corpora are thus annotated with various event-event relations to understand commonsense event knowledge. CaTeRS BIBREF8 is created by annotating 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. RED BIBREF9 contains annotations of rich relations between event pairs for storyline understanding, including co-reference and partial co-reference relations, temporal; causal, and sub-event relations.", "Despite multiple productive research threads on temporal and causal relation modeling among events BIBREF10 , BIBREF11 , BIBREF12 and event relation annotation for story understanding BIBREF8 , the intersection of these two threads seems flimsy. To the best of our knowledge, no event relation extraction results have been reported on CaTeRS and RED.", "We apply neural network models that leverage recent advances in contextualized embeddings (BERT BIBREF13 ) to event-event relation extraction tasks for CaTeRS and RED. Our goal in this paper is to increase understanding of how well the state-of-the-art event relation models work for story/narrative comprehension.", "In this paper, we report the first results of event temporal relation extraction on two under-explored story comprehension datasets: CaTeRS and RED. We establish strong baselines with neural network models enhanced by recent breakthrough of contextualized embeddings, BERT BIBREF13 . We summarize the contributions of the paper as follows:" ], [ "We investigate both neural network-based models and traditional feature-based models. We briefly introduce them in this section." ], [ "is created by annotating 1600 sentences of 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. CaTeRS contains both temporal and causal relations in an effort to understand and predict commonsense relations between events.", "As demonstrated in Table TABREF16 , we split all stories into 220 training and 80 test. We do not construct the development set because the dataset is small. Note that some relations have compounded labels such as “CAUSE_BEFORE”, “ENABLE_BEFORE”, etc. We only take the temporal portion of the annotations.", "annotates a wide range of relations of event pairs including their coreference and partial coreference relations, and temporal, causal and subevent relationships. 
We split data according to the standard train, development, test sets, and only focus on the temporal relations.", "The common issue of these two datasets is that they are not densely annotated – not every pair of events is annotated with a relation. We provide one way to handle negative (unannotated) pairs in this paper. When constructing negative examples, we take all event pairs that occur within the same or neighboring sentences with no annotations, labeling them as “NONE”. The negative to positive samples ratio is 1.00 and 11.5 for CaTeRS and RED respectively. Note that RED data has much higher negative ratio (as shown in Table TABREF16 ) because it contains longer articles, more complicated sentence structures, and richer entity types than CaTeRS where all stories consist of 5 (mostly short) sentences.", "In both the development and test sets, we add all negative pairs as candidates for the relation prediction. During training, the number of negative pairs we add is based on a hyper-parameter that we tune to control the negative-to-positive sample ratio.", "To justify our decision of selecting negative pairs within the same or neighboring sentences, we show the distribution of distances across positive sentence pairs in Table TABREF18 . Although CaTeRS data has pair distance more evenly distributed than RED, we observe that the vast majority (85.87% and 93.99% respectively) of positive pairs have sentence distance less than or equal to one.", "To handle negative pairs that are more than two sentences away, we automatically predict all out-of-window pairs as “NONE”. This means that some positive pairs will be automatically labeled as negative pairs. Since the percentage of out-of-window positive pairs is small, we believe the impact on performance is small. We can investigate expanding the prediction window in future research, but the trade-off is that we will get more negative pairs that are hard to predict." ], [ "CAEVO consists of both linguistic-rule-based sieves and feature-based trainable sieves. We train CAEVO sieves with our train set and evaluate them on both dev and test sets. CAEVO is an end-to-end system that automatically annotates both events and relations. In order to resolve label annotation mismatch between CAEVO and our gold data, we create our own final input files to CAEVO system. Default parameter settings are used when running the CAEVO system.", "In an effort of building a general model and reducing the number of hand-crafted features, we leverage pre-trained (GloVe 300) embeddings in place of linguistic features. The only linguistic feature we use in our experiment is token distance. We notice in our experiments that hidden layer size, dropout ratio and negative sample ratio impact model performance significantly. We conduct grid search to find the best hyper-parameter combination according to the performance of the development set.", "Note that since the CaTeRS data is small and there is no standard train, development, and test splits, we conduct cross-validation on training data to choose the best hyper-parameters and predict on test. For RED data, the standard train, development, test splits are used.", "As we mentioned briefly in the introduction, using BERT output as word embeddings could provide an additional performance boost in our NN architecture. We pre-process our raw data by feeding original sentences into a pre-trained BERT model and output the last layer of BERT as token representations. 
In this experiment, we fix the negative sample ratio according to the result obtained from the previous step and only search for the best hidden layer size and dropout ratio." ], [ "Table TABREF25 contains the best hyper-parameters and Table TABREF26 contains micro-average F1 scores for both datasets on dev and test sets. We only consider positive pairs, i.e. correct predictions on NONE pairs are excluded for evaluation. In general, the baseline model CAEVO is outperformed by both NN models, and NN model with BERT embedding achieves the greatest performance. We now provide more detailed analysis and discussion for each dataset." ], [ "Collecting dense TempRel corpora with event pairs fully annotated has been reported challenging since annotators could easily overlook some pairs BIBREF18 , BIBREF19 , BIBREF10 . TimeBank BIBREF20 is an example with events and their relations annotated sparsely. TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences. However, densely annotated datasets are relatively small both in terms of number of documents and event pairs, which restricts the complexity of machine learning models used in previous research." ], [ "The series of TempEval competitions BIBREF21 , BIBREF22 , BIBREF23 have attracted many research interests in predicting event temporal relations. Early attempts by BIBREF24 , BIBREF21 , BIBREF25 , BIBREF26 only use pair-wise classification models. State-of-the-art local methods, such as ClearTK BIBREF27 , UTTime BIBREF28 , and NavyTime BIBREF29 improve on earlier work by feature engineering with linguistic and syntactic rules. As we mention in the Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It's widely used as the baseline for evaluating TB-Dense data. We adopt it as our baseline for evaluating CaTeRS and RED datasets. Additionally, several models BramsenDLB2006, ChambersJ2008, DoLuRo12, NingWuRo18, P18-1212 have successfully incorporated global inference to impose global prediction consistency such as temporal transitivity." ], [ "Neural network-based methods have been employed for event temporal relation extraction BIBREF14 , BIBREF15 , BIBREF16 , BIBREF12 which achieved impressive results. However, the dataset they focus on is TB-Dense. We have explored neural network models on CaTeRS and RED, which are more related to story narrative understanding and generation.", "In our NN model, we also leverage Bidrectional Encoder Representations from Transformers (BERT) BIBREF30 which has shown significant improvement in many NLP tasks by allowing fine-tuning of pre-trained language representations. Unlike the Generative Pre-trained Transformer (OpenAI GPT) BIBREF31 , BERT uses a biderctional Transformer BIBREF32 instead of a unidirectional (left-to-right) Transformer to incorporate context from both directions. As mentioned earlier, we do not fine-tune BERT in our experiments and simply leverage the last layer as our contextualized word representations." ], [ "We established strong baselines for two story narrative understanding datasets: CaTeRS and RED. We have shown that neural network-based models can outperform feature-based models with wide margins, and we conducted an ablation study to show that contextualized representation learning can boost performance of NN models. Further research can focus on more systematic study or build stronger NN models over the same datasets used in this work. 
Exploring possibilities to directly apply temporal relation extraction to enhance performance of story generation systems is another promising research direction." ], [ "We thank the anonymous reviewers for their constructive comments, as well as the members of the USC PLUS lab for their early feedback. This work is supported by Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA)." ] ] }
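The negative-pair construction described in the data section (unannotated event pairs within the same or neighboring sentences labeled "NONE", pairs farther apart automatically predicted as "NONE") can be sketched as follows; the data structures are illustrative, and the training-time downsampling of negatives to a tuned negative-to-positive ratio is omitted.

```python
# Sketch (assumptions labeled): building candidate event pairs with a sentence-distance
# window, labeling unannotated in-window pairs "NONE" and treating out-of-window pairs
# as automatic "NONE" predictions (so a few gold positives may be lost, as noted above).
from itertools import combinations

def build_pairs(events, gold_relations, max_sent_dist=1):
    """
    events: list of (event_id, sentence_index)
    gold_relations: dict mapping (event_id_a, event_id_b) -> relation label
    """
    candidates, auto_none = [], []
    for (e1, s1), (e2, s2) in combinations(events, 2):
        label = gold_relations.get((e1, e2)) or gold_relations.get((e2, e1))
        if abs(s1 - s2) <= max_sent_dist:
            candidates.append((e1, e2, label or "NONE"))   # in-window: classify
        else:
            auto_none.append((e1, e2))                     # out-of-window: predict NONE
    return candidates, auto_none
```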
{ "question": [ "Do they report results only on English data?", "What conclusions do the authors draw from their detailed analyses?", "Do the BERT-based embeddings improve results?", "What were the traditional linguistic feature-based models?", "What type of baseline are established for the two datasets?" ], "question_id": [ "b0b1ff2d6515fb40d74a4538614a0db537e020ea", "4266aacb575b4be7dbcdb8616766324f8790763c", "191107cd112f7ee6d19c1dc43177e6899452a2c7", "b0dca7b74934f51ff3da0c074ad659c25d84174d", "601e58a3d2c03a0b4cd627c81c6228a714e43903" ], "nlp_background": [ "five", "five", "five", "five", "five" ], "topic_background": [ "", "", "", "", "" ], "paper_read": [ "", "", "", "", "" ], "search_query": [ "", "", "", "", "" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "6885d7f38c0abf2ed42e6de35f34d5ef99508ac3" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "neural network-based models can outperform feature-based models with wide margins", "contextualized representation learning can boost performance of NN models" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We established strong baselines for two story narrative understanding datasets: CaTeRS and RED. We have shown that neural network-based models can outperform feature-based models with wide margins, and we conducted an ablation study to show that contextualized representation learning can boost performance of NN models. Further research can focus on more systematic study or build stronger NN models over the same datasets used in this work. Exploring possibilities to directly apply temporal relation extraction to enhance performance of story generation systems is another promising research direction." ], "highlighted_evidence": [ "We have shown that neural network-based models can outperform feature-based models with wide margins, and we conducted an ablation study to show that contextualized representation learning can boost performance of NN models." ] } ], "annotation_id": [ "18d92d5dfa8f46708dbe64174f5cc796f519c2ec" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Table TABREF25 contains the best hyper-parameters and Table TABREF26 contains micro-average F1 scores for both datasets on dev and test sets. We only consider positive pairs, i.e. correct predictions on NONE pairs are excluded for evaluation. In general, the baseline model CAEVO is outperformed by both NN models, and NN model with BERT embedding achieves the greatest performance. We now provide more detailed analysis and discussion for each dataset." ], "highlighted_evidence": [ "In general, the baseline model CAEVO is outperformed by both NN models, and NN model with BERT embedding achieves the greatest performance." 
] } ], "annotation_id": [ "335e327cd95fbc441cd1c70e930e8067707d8676" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "CAEVO" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The series of TempEval competitions BIBREF21 , BIBREF22 , BIBREF23 have attracted many research interests in predicting event temporal relations. Early attempts by BIBREF24 , BIBREF21 , BIBREF25 , BIBREF26 only use pair-wise classification models. State-of-the-art local methods, such as ClearTK BIBREF27 , UTTime BIBREF28 , and NavyTime BIBREF29 improve on earlier work by feature engineering with linguistic and syntactic rules. As we mention in the Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It's widely used as the baseline for evaluating TB-Dense data. We adopt it as our baseline for evaluating CaTeRS and RED datasets. Additionally, several models BramsenDLB2006, ChambersJ2008, DoLuRo12, NingWuRo18, P18-1212 have successfully incorporated global inference to impose global prediction consistency such as temporal transitivity." ], "highlighted_evidence": [ "As we mention in the Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It's widely used as the baseline for evaluating TB-Dense data. We adopt it as our baseline for evaluating CaTeRS and RED datasets." ] } ], "annotation_id": [ "a0ee39ef826fad3e8c93117657c36e0c1f583659" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "CAEVO" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The series of TempEval competitions BIBREF21 , BIBREF22 , BIBREF23 have attracted many research interests in predicting event temporal relations. Early attempts by BIBREF24 , BIBREF21 , BIBREF25 , BIBREF26 only use pair-wise classification models. State-of-the-art local methods, such as ClearTK BIBREF27 , UTTime BIBREF28 , and NavyTime BIBREF29 improve on earlier work by feature engineering with linguistic and syntactic rules. As we mention in the Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It's widely used as the baseline for evaluating TB-Dense data. We adopt it as our baseline for evaluating CaTeRS and RED datasets. Additionally, several models BramsenDLB2006, ChambersJ2008, DoLuRo12, NingWuRo18, P18-1212 have successfully incorporated global inference to impose global prediction consistency such as temporal transitivity." ], "highlighted_evidence": [ "As we mention in the Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It's widely used as the baseline for evaluating TB-Dense data. We adopt it as our baseline for evaluating CaTeRS and RED datasets." ] } ], "annotation_id": [ "b51e9beb914a8bd0af00931fd7c0e2384a68dcfa" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: An example paragraph with its (partial) temporal graphs. Some events are removed for clarity.", "Figure 2: Deep neural network architecture for event relation prediction", "Table 2: Token sentence distance breakdown. 0: a pair of events in the same sentence; 1: a pair of events in the neighboring sentence (2 sentence span); 2: a pair of events in 3 sentence span, etc.", "Table 1: Data overview: the number of documents in CaTeRS refers to the number of stories. “Negative” denotes negative pairs (missing annotations) within two sentence span we construct for the whole dataset.", "Table 3: Best hyper-parameters: C: controls for the strength of L1 penalty; balanced: is a binary indicator of whether training on “balanced” labels; max iter: early stopping criteria.", "Table 4: F1 Scores on development and test set for the two datasets. Note for CaTeRS data, we didn’t conduct cross-validation on CAEVO, but instead train the model with default parameter settings. Hence the dev performance doesn’t apply here.", "Figure 3: NN model (with GloVe embedding) performance with different negative sample ratio for CaTeRS.", "Table 5: NN performances with GloVe and BERT embeddings respectively.", "Figure 4: NN model (with GloVe embedding) performance with different negative sample ratio for RED.", "Table 6: Examples of temporal relations misclassified with GloVe embedding but correct with BERT embedding." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "3-Table2-1.png", "3-Table1-1.png", "4-Table3-1.png", "4-Table4-1.png", "4-Figure3-1.png", "5-Table5-1.png", "5-Figure4-1.png", "6-Table6-1.png" ] }
1705.02394
Learning Representations of Emotional Speech with Deep Convolutional Generative Adversarial Networks
Automatically assessing emotional valence in human speech has historically been a difficult task for machine learning algorithms. The subtle changes in the voice of the speaker that are indicative of positive or negative emotional states are often "overshadowed" by voice characteristics relating to emotional intensity or emotional activation. In this work we explore a representation learning approach that automatically derives discriminative representations of emotional speech. In particular, we investigate two machine learning strategies to improve classifier performance: (1) utilization of unlabeled data using a deep convolutional generative adversarial network (DCGAN), and (2) multitask learning. Within our extensive experiments we leverage a multitask annotated emotional corpus as well as a large unlabeled meeting corpus (around 100 hours). Our speaker-independent classification experiments show that, in particular, the use of unlabeled data improves the performance of the classifiers, and both fully supervised baseline approaches are outperformed considerably. We improve the classification of emotional valence on a discrete 5-point scale to 43.88% and on a 3-point scale to 49.80%, which is competitive with state-of-the-art performance.
{ "section_name": [ "Introduction", "Related Work", "Multitask Deep Convolutional Generative Adversarial Network", "Data Corpus", "Experimental Design", "Results", "Conclusions" ], "paragraphs": [ [ "Machine Learning, in general, and affective computing, in particular, rely on good data representations or features that have a good discriminatory faculty in classification and regression experiments, such as emotion recognition from speech. To derive efficient representations of data, researchers have adopted two main strategies: (1) carefully crafted and tailored feature extractors designed for a particular task BIBREF0 and (2) algorithms that learn representations automatically from the data itself BIBREF1 . The latter approach is called Representation Learning (RL), and has received growing attention in the past few years and is highly reliant on large quantities of data. Most approaches for emotion recognition from speech still rely on the extraction of standard acoustic features such as pitch, shimmer, jitter and MFCCs (Mel-Frequency Cepstral Coefficients), with a few notable exceptions BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . In this work we leverage RL strategies and automatically learn representations of emotional speech from the spectrogram directly using a deep convolutional neural network (CNN) architecture.", "To learn strong representations of speech we seek to leverage as much data as possible. However, emotion annotations are difficult to obtain and scarce BIBREF6 . We leverage the USC-IEMOCAP dataset, which comprises of around 12 hours of highly emotional and partly acted data from 10 speakers BIBREF7 . However, we aim to improve the learned representations of emotional speech with unlabeled speech data from an unrelated meeting corpus, which consists of about 100 hours of data BIBREF8 . While the meeting corpus is qualitatively quite different from the highly emotional USC-IEMOCAP data, we believe that the learned representations will improve through the use of these additional data. This combination of two separate data sources leads to a semi-supervised machine learning task and we extend the CNN architecture to a deep convolutional generative neural network (DCGAN) that can be trained in an unsupervised fashion BIBREF9 .", "Within this work, we particularly target emotional valence as the primary task, as it has been shown to be the most challenging emotional dimension for acoustic analyses in a number of studies BIBREF10 , BIBREF11 . Apart from solely targeting valence classification, we further investigate the principle of multitask learning. In multitask learning, a set of related tasks are learned (e.g., emotional activation), along with a primary task (e.g., emotional valence); both tasks share parts of the network topology and are hence jointly trained, as depicted in Figure FIGREF4 . It is expected that data for the secondary task models information, which would also be discriminative in learning the primary task. In fact, this approach has been shown to improve generalizability across corpora BIBREF12 .", "The remainder of this paper is organized as follows: First we introduce the DCGAN model and discuss prior work, in Section SECREF2 . Then we describe our specific multitask DCGAN model in Section SECREF3 , introduce the datasets in Section SECREF4 , and describe our experimental design in Section SECREF5 . Finally, we report our results in Section SECREF6 and discuss our findings in Section SECREF7 ." 
], [ "The proposed model builds upon previous results in the field of emotion recognition, and leverages prior work in representation learning.", "Multitask learning has been effective in some prior experiments on emotion detection. In particular, Xia and Liu proposed a multitask model for emotion recognition which, like the investigated model, has activation and valence as targets BIBREF13 . Their work uses a Deep Belief Network (DBN) architecture to classify the emotion of audio input, with valence and activation as secondary tasks. Their experiments indicate that the use of multitask learning produces improved unweighted accuracy on the emotion classification task. Like Xia and Liu, the proposed model uses multitask learning with valence and activation as targets. Unlike them, however, we are primarily interested not in emotion classification, but in valence classification as a primary task. Thus, our multitask model has valence as a primary target and activation as a secondary target. Also, while our experiments use the IEMOCAP database like Xia and Liu do, our method of speaker split differs from theirs. Xia and Liu use a leave-one-speaker-out cross validation scheme with separate train and test sets that have no speaker overlap. This method lacks a distinct validation set; instead, they validate and test on the same set. Our experimental setup, on the other hand, splits the data into distinct train, validation, and test sets, still with no speaker overlap. This is described in greater detail in Section SECREF5 .", "The unsupervised learning part of the investigated model builds upon an architecture known as the deep convolutional generative adversarial network, or DCGAN. DCGAN consists of two components, known as the generator and discriminator, which are trained against each other in a minimax setup. The generator learns to map samples from a random distribution to output matrices of some pre-specified form. The discriminator takes an input which is either a generator output or a “real” sample from a dataset. The discriminator learns to classify the input as either generated or real BIBREF9 .", "For training, the discriminator uses a cross entropy loss function based on how many inputs were correctly classified as real and how many were correctly classified as generated. The cross entropy loss between true labels INLINEFORM0 and predictions INLINEFORM1 is defined as: DISPLAYFORM0 ", "Where INLINEFORM0 is the learned vector of weights, and INLINEFORM1 is the number of samples. For purposes of this computation, labels are represented as numerical values of 1 for real and 0 for generated. Then, letting INLINEFORM2 represent the discriminator's predictions for all real inputs, the cross entropy for correct predictions of “real” simplifies to: INLINEFORM3 ", "Because in this case the correct predictions are all ones. Similarly, letting INLINEFORM0 represent the discriminator's predictions for all generated inputs, the cross entropy for correct predictions of “generated” simplifies to: INLINEFORM1 ", "Because here the correct predictions are all zeroes. The total loss for the discriminator is given by the sum of the previous two terms INLINEFORM0 .", "The generator also uses a cross entropy loss, but its loss is defined in terms of how many generated outputs got incorrectly classified as real: INLINEFORM0 ", "Thus, the generator's loss gets lower the better it is able to produce outputs that the discriminator thinks are real. 
This leads the generator to eventually produce outputs that look like real samples of speech given sufficient training iterations." ], [ "The investigated multitask model is based upon the DCGAN architecture described in Section SECREF2 and is implemented in TensorFlow. For emotion classification a fully connected layer is attached to the final convolutional layer of the DCGAN's discriminator. The output of this layer is then fed to two separate fully connected layers, one of which outputs a valence label and the other of which outputs an activation label. This setup is shown visually in Figure FIGREF4 . Through this setup, the model is able to take advantage of unlabeled data during training by feeding it through the DCGAN layers in the model, and is also able to take advantage of multitask learning and train the valence and activation outputs simultaneously.", "In particular, the model is trained by iteratively running the generator, discriminator, valence classifier, and activation classifier, and back-propagating the error for each component through the network. The loss functions for the generator and discriminator are unaltered, and remain as shown in Section SECREF2 . Both the valence classifier and activation classifier use cross entropy loss as in Equation EQREF2 .", "Since the valence and activation classifiers share layers with the discriminator the model learns features and convolutional filters that are effective for the tasks of valence classification, activation classification, and discriminating between real and generated samples." ], [ "Due to the semi-supervised nature of the proposed Multitask DCGAN model, we utilize both labeled and unlabeled data. For the unlabeled data, we use audio from the AMI BIBREF8 and IEMOCAP BIBREF7 datasets. For the labeled data, we use audio from the IEMOCAP dataset, which comes with labels for activation and valence, both measured on a 5-point Likert scale from three distinct annotators. Although IEMOCAP provides per-word activation and valence labels, in practice these labels do not generally change over time in a given audio file, and so for simplicity we label each audio clip with the average valence and activation. Since valence and activation are both measured on a 5-point scale, the labels are encoded in 5-element one-hot vectors. For instance, a valence of 5 is represented with the vector INLINEFORM0 . The one-hot encoding can be thought of as a probability distribution representing the likelihood of the correct label being some particular value. Thus, in cases where the annotators disagree on the valence or activation label, this can be represented by assigning probabilities to multiple positions in the label vector. For instance, a label of 4.5 conceptually means that the “correct” valence is either 4 or 5 with equal probability, so the corresponding vector would be INLINEFORM1 . These “fuzzy labels” have been shown to improve classification performance in a number of applications BIBREF14 , BIBREF15 . It should be noted here that we had generally greater success with this fuzzy label method than training the neural network model on the valence label directly, i.e. classification task vs. regression.", "Pre-processing. Audio data is fed to the network models in the form of spectrograms. The spectrograms are computed using a short time Fourier transform with window size of 1024 samples, which at the 16 kHz sampling rate is equivalent to 64 ms. Each spectrogram is 128 pixels high, representing the frequency range 0-11 kHz. 
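As a rough illustration of the spectrogram front end just described, the NumPy sketch below computes a Hann-windowed log-magnitude short-time Fourier transform with a 1024-sample window. The hop length, the log compression, and the omission of the reduction to 128 frequency bins are our simplifications for illustration, not details taken from the paper.

```python
import numpy as np

def log_spectrogram(signal, win_size=1024, hop=256):
    """Magnitude log-spectrogram via a Hann-windowed STFT.

    win_size=1024 matches the 64 ms window at 16 kHz described in the text;
    the hop length and log compression are assumptions made for this sketch.
    """
    window = np.hanning(win_size)
    n_frames = 1 + (len(signal) - win_size) // hop
    frames = np.stack([signal[i * hop: i * hop + win_size] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))   # (n_frames, win_size // 2 + 1)
    return np.log1p(spec).T                      # (freq_bins, time), image-like layout

# Toy usage: one second of random "audio" at 16 kHz.
audio = np.random.randn(16000)
print(log_spectrogram(audio).shape)
```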
Due to the varying lengths of the IEMOCAP audio files, the spectrograms vary in width, which poses a problem for the batching process of the neural network training. To compensate for this, the model randomly crops a region of each input spectrogram. The crop width is determined in advance. To ensure that the selected crop region contains at least some data (i.e. is not entirely silence), cropping occurs using the following procedure: a random word in the transcript of the audio file is selected, and the corresponding time range is looked up. A random point within this time range is selected, which is then treated as the center line of the crop. The crop is then made using the region defined by the center line and crop width.", "Early on, we found that there is a noticeable imbalance in the valence labels for the IEMOCAP data, in that the labels skew heavily towards the neutral (2-3) range. In order to prevent the model from overfitting to this distribution during training, we normalize the training data by oversampling underrepresented valence data, such that the overall distribution of valence labels is more even." ], [ "Investigated Models. We investigate the impact of both unlabeled data for improved emotional speech representations and multitask learning on emotional valence classification performance. To this end, we compared four different neural network models:", "The BasicCNN represents a “bare minimum” valence classifier and thus sets a lower bound for expected performance. Comparison with MultitaskCNN indicates the effect of the inclusion of a secondary task, i.e. emotional activation recognition. Comparison with BasicDCGAN indicates the effect of the incorporation of unlabeled data during training.", "For fairness, the architectures of all three baselines are based upon the full MultitaskDCGAN model. BasicDCGAN for example is simply the MultitaskDCGAN model with the activation layer removed, while the two fully supervised baselines were built by taking the convolutional layers from the discriminator component of the MultitaskDCGAN, and adding fully connected layers for valence and activation output. Specifically, the discriminator contains four convolutional layers; there is no explicit pooling but the kernel stride size is 2 so image size gets halved at each step. Thus, by design, all four models have this same convolutional structure. This is to ensure that potential performance gains do not stem from a larger complexity or higher number of trainable weights within the DCGAN models, but rather stem from improved representations of speech.", "Experimental Procedure. The parameters for each model, including batch size, filter size (for convolution), and learning rate, were determined by randomly sampling different parameter combinations, training the model with those parameters, and computing accuracy on a held-out validation set. For each model, we kept the parameters that yield the best accuracy on the held-out set. This procedure ensures that each model is fairly represented during evaluation. Our hyper-parameters included crop width of the input signal INLINEFORM0 , convolutional layer filter sizes INLINEFORM1 (where INLINEFORM2 is the selected crop width and gets divided by 8 to account for each halving of image size from the 3 convolutional layers leading up to the last one), number of convolutional filters INLINEFORM3 (step size 4), batch size INLINEFORM4 , and learning rates INLINEFORM5 . 
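The random parameter search just described can be sketched as below. The search-space values are purely hypothetical placeholders (the paper's actual ranges are only given symbolically above), and `train_and_validate` stands in for a full training run that returns held-out validation accuracy.

```python
import random

# Hypothetical search space -- illustrative only, not the paper's ranges.
SEARCH_SPACE = {
    "crop_width": [128, 256, 384],
    "num_filters": [16, 32, 48, 64],
    "batch_size": [32, 64, 128],
    "learning_rate": [1e-4, 5e-4, 1e-3],
}

def random_search(train_and_validate, n_trials=20, seed=0):
    """Keep the randomly sampled configuration with the best validation accuracy."""
    rng = random.Random(seed)
    best_config, best_acc = None, float("-inf")
    for _ in range(n_trials):
        config = {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}
        acc = train_and_validate(config)
        if acc > best_acc:
            best_config, best_acc = config, acc
    return best_config, best_acc

# Dummy stand-in objective, just to show the calling convention.
best, acc = random_search(lambda cfg: random.random(), n_trials=5)
```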
Identified parameters per model are shown in Table TABREF9 .", "For evaluation, we utilized a 5-fold leave-one-session-out validation. Each fold leaves one of the five sessions in the labeled IEMOCAP data out of the training set entirely. From this left-out conversation, one speaker's audio is used as a validation set, while the other speaker's audio is used as a test set.", "For each fold, the evaluation procedure is as follows: the model being evaluated is trained on the training set, and after each full pass through the training set, accuracy is computed on the validation set. This process continues until the accuracy on the validation set is found to no longer increase; in other words, we locate a local maximum in validation accuracy. To increase the certainty that this local maximum is truly representative of the model's best performance, we continue to run more iterations after a local maximum is found, and look for 5 consecutive iterations with lower accuracy values. If, in the course of these 5 iterations, a higher accuracy value is found, that is treated as the new local maximum and the search restarts from there. Once a best accuracy value is found in this manner, we restore the model's weights to those of the iteration corresponding to the best accuracy, and evaluate the accuracy on the test set.", "We evaluated each model on all 5 folds using the methodology described above, recording test accuracies for each fold.", "Evaluation Strategy. We collected several statistics about our models' performances. We were primarily interested in the unweighted per-class accuracy. In addition, we converted the network's output from probability distributions back into numerical labels by taking the expected value; that is: INLINEFORM0 where INLINEFORM1 is the model's prediction, in its original form as a 5 element vector probability distribution. We then used this to compute the Pearson correlation ( INLINEFORM2 measure) between predicted and actual labels.", "Some pre-processing was needed to obtain accurate measures. In particular, in cases where human annotators were perfectly split on what the correct label for a particular sample should be, both possibilities should be accepted as correct predictions. For instance, if the correct label is 4.5 (vector form INLINEFORM0 ), a correct prediction could be either 4 or 5 (i.e. the maximum index in the output vector should either be 4 or 5).", "The above measures are for a 5-point labeling scale, which is how the IEMOCAP data is originally labeled. However, prior experiments have evaluated performance on valence classification on a 3-point scale BIBREF16 . The authors provide an example of this, with valence levels 1 and 2 being pooled into a single “negative” category, valence level 3 remaining untouched as the “neutral” category, and valence levels 4 and 5 being pooled into a single “positive” category. Thus, to allow for comparison with our models, we also report results on such a 3-point scale. We construct these results by taking the results for 5 class comparison and pooling them as just described." ], [ "Table TABREF10 shows the unweighted per-class accuracies and Pearson correlation coeffecients ( INLINEFORM0 values) between actual and predicted labels for each model. All values shown are average values across the test sets for all 5 folds.", "Results indicate that the use of unsupervised learning yields a clear improvement in performance. 
Both BasicDCGAN and MultitaskDCGAN have considerably better accuracies and linear correlations compared to the fully supervised CNN models. This is a strong indication that the use of large quantities of task-unrelated speech data improved the filter learning in the CNN layers of the DCGAN discriminator.", "Multitask learning, on the other hand, does not appear to have any positive impact on performance. Comparing the two CNN models, the addition of multitask learning actually appears to impair performance, with MultitaskCNN doing worse than BasicCNN in all three metrics. The difference is smaller when comparing BasicDCGAN and MultitaskDCGAN, and may not be enough to decidedly conclude that the use of multitask learning has a net negative impact there, but certainly there is no indication of a net positive impact. The observed performance of both the BasicDCGAN and MultitaskDCGAN using 3-classes is comparable to the state-of-the-art, with 49.80% compared to 49.99% reported in BIBREF16 . It needs to be noted that in BIBREF16 data from the test speaker's session partner was utilized in the training of the model. Our models in contrast are trained on only four of the five sessions as discussed in SECREF5 . Further, the here presented models are trained on the raw spectrograms of the audio and no feature extraction was employed whatsoever. This representation learning approach is employed in order to allow the DCGAN component of the model to train on vast amounts of unsupervised speech data.", "We further report the confusion matrix of the best performing model BasicDCGAN in Table TABREF11 . It is noted that the “negative” class (i.e., the second row) is classified the best. However, it appears that this class is picked more frequently by the model resulting in high recall = 0.7701 and low precision = 0.3502. The class with the highest F1 score is “very positive” (i.e., the last row) with INLINEFORM0 . The confusion of “very negative” valence with “very positive” valence in the top right corner is interesting and has been previously observed BIBREF4 ." ], [ "We investigated the use of unsupervised and multitask learning to improve the performance of an emotional valence classifier. Overall, we found that unsupervised learning yields considerable improvements in classification accuracy for the emotional valence recognition task. The best performing model achieves 43.88% in the 5-class case and 49.80% in the 3-class case with a significant Pearson correlation between continuous target label and prediction of INLINEFORM0 ( INLINEFORM1 ). There is no indication that multitask learning provides any advantage.", "The results for multitask learning are somewhat surprising. It may be that the valence and activation classification tasks are not sufficiently related for multitask learning to yield improvements in accuracy. Alternatively, a different neural network architecture may be needed for multitask learning to work. Further, the alternating update strategy employed in the present work might not have been the optimal strategy for training. The iterative swapping of target tasks valence/activation might have created instabilities in the weight updates of the backpropagation algorithm. There may yet be other explanations; further investigation may be warranted.", "Lastly, it is important to note that this model's performance is only approaching state-of-the-art, which employs potentially better suited sequential classifiers such as Long Short-term Memory (LSTM) networks BIBREF17 . 
However, basic LSTMs are not suited to learning from entirely unsupervised data, which we leveraged for the proposed DCGAN models. For future work, we hope to adapt the technique of using unlabeled data to sequential models, including LSTMs. We expect that combining our work here with the advantages of sequential models may result in further performance gains that are more competitive with today's leading models and potentially outperform them. For the purposes of this investigation, the key takeaway is that the use of unsupervised learning yields clear performance gains on the emotional valence classification task, and that this represents a technique that may be adapted to other models to achieve even higher classification accuracies." ] ] }
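For reference, here is a small self-contained sketch of the label handling used in the experiments above: the "fuzzy" one-hot encoding of fractional ratings, the expected-value decoding used for the Pearson correlation, and the 5-to-3 class pooling. The helper names are ours and the snippet is illustrative rather than the authors' code.

```python
import numpy as np

def fuzzy_onehot(label, n_classes=5):
    """Encode a (possibly fractional) 1-5 rating as a probability vector.

    e.g. 5.0 -> [0, 0, 0, 0, 1] and 4.5 -> [0, 0, 0, 0.5, 0.5], as in the paper.
    """
    vec = np.zeros(n_classes)
    lo = int(np.floor(label)) - 1      # 0-based index of the lower neighbouring class
    hi = int(np.ceil(label)) - 1
    vec[hi] = label - np.floor(label)
    vec[lo] += 1.0 - vec[hi]
    return vec

def expected_label(probs):
    """Convert a predicted distribution back to a numeric 1-5 label (expected value)."""
    return float(np.dot(np.arange(1, len(probs) + 1), probs))

def pool_to_3_classes(five_class):
    """Pool a discrete 1-5 valence class: 1-2 -> negative, 3 -> neutral, 4-5 -> positive."""
    return "negative" if five_class <= 2 else ("neutral" if five_class == 3 else "positive")

print(fuzzy_onehot(4.5))                          # [0.  0.  0.  0.5 0.5]
print(expected_label([0.1, 0.1, 0.2, 0.3, 0.3]))  # 3.6
```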
{ "question": [ "What model achieves state of the art performance on this task?", "Which multitask annotated corpus is used?", "What are the tasks in the multitask learning setup?", "What are the subtle changes in voice which have been previously overshadowed?" ], "question_id": [ "a0fbf90ceb520626b80ff0f9160b3cd5029585a5", "e8ca81d5b36952259ef3e0dbeac7b3a622eabe8e", "e75685ef5f58027be44f42f30cb3988b509b2768", "1df24849e50fcf22f0855e0c0937c1288450ed5c" ], "nlp_background": [ "five", "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "BIBREF16" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Multitask learning, on the other hand, does not appear to have any positive impact on performance. Comparing the two CNN models, the addition of multitask learning actually appears to impair performance, with MultitaskCNN doing worse than BasicCNN in all three metrics. The difference is smaller when comparing BasicDCGAN and MultitaskDCGAN, and may not be enough to decidedly conclude that the use of multitask learning has a net negative impact there, but certainly there is no indication of a net positive impact. The observed performance of both the BasicDCGAN and MultitaskDCGAN using 3-classes is comparable to the state-of-the-art, with 49.80% compared to 49.99% reported in BIBREF16 . It needs to be noted that in BIBREF16 data from the test speaker's session partner was utilized in the training of the model. Our models in contrast are trained on only four of the five sessions as discussed in SECREF5 . Further, the here presented models are trained on the raw spectrograms of the audio and no feature extraction was employed whatsoever. This representation learning approach is employed in order to allow the DCGAN component of the model to train on vast amounts of unsupervised speech data." ], "highlighted_evidence": [ "The observed performance of both the BasicDCGAN and MultitaskDCGAN using 3-classes is comparable to the state-of-the-art, with 49.80% compared to 49.99% reported in BIBREF16 " ] } ], "annotation_id": [ "4e3bafddef0d3aae7052c46f138704c9fa36f926" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "IEMOCAP" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Due to the semi-supervised nature of the proposed Multitask DCGAN model, we utilize both labeled and unlabeled data. For the unlabeled data, we use audio from the AMI BIBREF8 and IEMOCAP BIBREF7 datasets. For the labeled data, we use audio from the IEMOCAP dataset, which comes with labels for activation and valence, both measured on a 5-point Likert scale from three distinct annotators. Although IEMOCAP provides per-word activation and valence labels, in practice these labels do not generally change over time in a given audio file, and so for simplicity we label each audio clip with the average valence and activation. Since valence and activation are both measured on a 5-point scale, the labels are encoded in 5-element one-hot vectors. For instance, a valence of 5 is represented with the vector INLINEFORM0 . 
The one-hot encoding can be thought of as a probability distribution representing the likelihood of the correct label being some particular value. Thus, in cases where the annotators disagree on the valence or activation label, this can be represented by assigning probabilities to multiple positions in the label vector. For instance, a label of 4.5 conceptually means that the “correct” valence is either 4 or 5 with equal probability, so the corresponding vector would be INLINEFORM1 . These “fuzzy labels” have been shown to improve classification performance in a number of applications BIBREF14 , BIBREF15 . It should be noted here that we had generally greater success with this fuzzy label method than training the neural network model on the valence label directly, i.e. classification task vs. regression." ], "highlighted_evidence": [ "For the labeled data, we use audio from the IEMOCAP dataset, which comes with labels for activation and valence, both measured on a 5-point Likert scale from three distinct annotators." ] } ], "annotation_id": [ "1c7da7e435814944f3f70b56334f8cc4448bf0f9" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "set of related tasks are learned (e.g., emotional activation)", "primary task (e.g., emotional valence)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Within this work, we particularly target emotional valence as the primary task, as it has been shown to be the most challenging emotional dimension for acoustic analyses in a number of studies BIBREF10 , BIBREF11 . Apart from solely targeting valence classification, we further investigate the principle of multitask learning. In multitask learning, a set of related tasks are learned (e.g., emotional activation), along with a primary task (e.g., emotional valence); both tasks share parts of the network topology and are hence jointly trained, as depicted in Figure FIGREF4 . It is expected that data for the secondary task models information, which would also be discriminative in learning the primary task. In fact, this approach has been shown to improve generalizability across corpora BIBREF12 ." ], "highlighted_evidence": [ "In multitask learning, a set of related tasks are learned (e.g., emotional activation), along with a primary task (e.g., emotional valence); both tasks share parts of the network topology and are hence jointly trained, as depicted in Figure FIGREF4 ." ] } ], "annotation_id": [ "18f174f0ae07045819bfe1cc85342091b9f9483b" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "a68edc3710ac5281c550f040940ff34e6c353f9e" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Fig. 1. Visual representation of the deep convolutional generative adversarial network with multitask valence and activation classifier.", "Table 1. Final parameters used for each model as found by random parameter search", "Table 2. Evaluation metrics for all four models, averaged across 5 test folds. Speaker-independent unweighted accuracies in % for both 5-class and 3-class valence performance as well as Pearson correlation ρ are reported.", "Table 3. Confusion matrix for 5-class valence classification with the BasicDCGAN model. Predictions are reported in columns and actual targets in rows. Valence classes are sorted from very negative to very positive. These classes correspond to the numeric labels 1 through 5." ], "file": [ "2-Figure1-1.png", "3-Table1-1.png", "4-Table2-1.png", "4-Table3-1.png" ] }
1806.09103
Subword-augmented Embedding for Cloze Reading Comprehension
Representation learning is the foundation of machine reading comprehension. In state-of-the-art models, deep learning methods broadly use word-level and character-level representations. However, the character is not naturally the minimal linguistic unit. In addition, with a simple concatenation of character and word embeddings, previous models actually give a suboptimal solution. In this paper, we propose to use subwords rather than characters for word embedding enhancement. We also empirically explore different augmentation strategies on subword-augmented embedding to enhance the cloze-style reading comprehension model (reader). In detail, we present a reader that uses subword-level representations to augment word embeddings, together with a short list to handle rare words effectively. A thorough examination is conducted to evaluate the overall performance and generalization ability of the proposed reader. Experimental results show that the proposed approach helps the reader significantly outperform state-of-the-art baselines on various public datasets.
{ "section_name": [ "Introduction", "The Subword-augmented Word Embedding", "BPE Subword Segmentation", "Subword-augmented Word Embedding", "Attention Module", "Dataset and Settings", "Main Results", "Merging Times of BPE", "Filter Mechanism", "Subword-Augmented Representations", "Machine Reading Comprehension", "Augmented Word Embedding", "Conclusion" ], "paragraphs": [ [ "This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/", "A recent hot challenge is to train machines to read and comprehend human languages. Towards this end, various machine reading comprehension datasets have been released, including cloze-style BIBREF0 , BIBREF1 , BIBREF2 and user-query types BIBREF3 , BIBREF4 . Meanwhile, a number of deep learning models are designed to take up the challenges, most of which focus on attention mechanism BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . However, how to represent word in an effective way remains an open problem for diverse natural language processing tasks, including machine reading comprehension for different languages. Particularly, for a language like Chinese with a large set of characters (typically, thousands of), lots of which are semantically ambiguous, using either word-level or character-level embedding alone to build the word representations would not be accurate enough. This work especially focuses on a cloze-style reading comprehension task over fairy stories, which is highly challenging due to diverse semantic patterns with personified expressions and reference.", "In real practice, a reading comprehension model or system which is often called reader in literatures easily suffers from out-of-vocabulary (OOV) word issues, especially for the cloze-style reading comprehension tasks when the ground-truth answers tend to include rare words or named entities (NE), which are hardly fully recorded in the vocabulary. This is more challenging in Chinese. There are over 13,000 characters in Chinese while there are only 26 letters in English without regard to punctuation marks. If a reading comprehension system cannot effectively manage the OOV issues, the performance will not be semantically accurate for the task.", "Commonly, words are represented as vectors using either word embedding or character embedding. For the former, each word is mapped into low dimensional dense vectors from a lookup table. Character representations are usually obtained by applying neural networks on the character sequence of the word, and their hidden states are obtained to form the representation. Intuitively, word-level representation is good at catching global context and dependency relationships between words, while character embedding helps for dealing with rare word representation.", "However, the minimal meaningful unit below word usually is not character, which motivates researchers to explore the potential unit (subword) between character and word to model sub-word morphologies or lexical semantics. In fact, morphological compounding (e.g. sunshine or playground) is one of the most common and productive methods of word formation across human languages, which inspires us to represent word by meaningful sub-word units. Recently, researchers have started to work on morphologically informed word embeddings BIBREF11 , BIBREF12 , aiming at better capturing syntactic, lexical and morphological information. 
With ready subwords, we do not have to work with characters, and segmentation could be stopped at the subword-level to reach a meaningful representation.", "In this paper, we present various simple yet accurate subword-augmented embedding (SAW) strategies and propose SAW Reader as an instance. Specifically, we adopt subword information to enrich word embedding and survey different SAW operations to integrate word-level and subword-level embedding for a fine-grained representation. To ensure adequate training of OOV and low-frequency words, we employ a short list mechanism. Our evaluation will be performed on three public Chinese reading comprehension datasets and one English benchmark dataset for showing our method is also effective in multi-lingual case." ], [ "The concerned reading comprehension task can be roughly categorized as user-query type and cloze-style according to the answer form. Answers in the former are usually a span of texts while in the cloze-style task, the answers are words or phrases which lets the latter be the harder-hit area of OOV issues, inspiring us to select the cloze-style as our testbed for SAW strategies. Our preliminary study shows even for the advanced word-character based GA reader, OOV answers still account for nearly 1/5 in the error results. This also motivates us to explore better representations to further performance improvement.", "The cloze-style task in this work can be described as a triple INLINEFORM0 , where INLINEFORM1 is a document (context), INLINEFORM2 is a query over the contents of INLINEFORM3 , in which a word or phrase is the right answer INLINEFORM4 . This section will introduce the proposed SAW Reader in the context of cloze-style reading comprehension. Given the triple INLINEFORM5 , the SAW Reader will be built in the following steps." ], [ "Word in most languages usually can be split into meaningful subword units despite of the writing form. For example, “indispensable\" could be split into the following subwords: INLINEFORM0 .", "In our implementation, we adopt Byte Pair Encoding (BPE) BIBREF13 which is a simple data compression technique that iteratively replaces the most frequent pair of bytes in a sequence by a single, unused byte. BPE allows for the representation of an open vocabulary through a fixed-size vocabulary of variable-length character sequences, making it a very suitable word segmentation strategy for neural network models.", "The generalized framework can be described as follows. Firstly, all the input sequences (strings) are tokenized into a sequence of single-character subwords, then we repeat,", "Count all bigrams under the current segmentation status of all sequences.", "Find the bigram with the highest frequency and merge them in all the sequences. Note the segmentation status is updating now.", "If the merging times do not reach the specified number, go back to 1, otherwise the algorithm ends.", "In BIBREF14 , BPE is adopted to segment infrequent words into sub-word units for machine translation. However, there is a key difference between the motivations for subword segmentation. We aim to refine the word representations by using subwords, for both frequent and infrequent words, which is more generally motivated. To this end, we adaptively tokenize words in multi-granularity by controlling the merging times." ], [ "Our subwords are also formed as character n-grams, do not cross word boundaries. 
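To make the merge loop described above concrete, here is a minimal pure-Python sketch of BPE learning over a toy word-frequency dictionary. It omits the end-of-word marker and any efficiency tricks, and the function and variable names are illustrative rather than taken from the released implementation; the number of merges plays the role of the "merging times" that controls segmentation granularity.

```python
import collections

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merge operations from a {word: frequency} dictionary.

    Each word starts as a sequence of single characters (merges never cross
    word boundaries); at every step the most frequent adjacent pair is merged.
    """
    vocab = {tuple(word): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = collections.Counter()
        for symbols, freq in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = best[0] + best[1]
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges, vocab

# Toy corpus: a handful of merges already yields multi-character subword units.
merges, segmented = learn_bpe({"lower": 5, "lowest": 3, "newer": 6, "wider": 2}, num_merges=10)
print(merges[:5])
print(segmented)
```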
After using unsupervised segmentation methods to split each word into a subword sequence, an augmented embedding (AE) is to straightforwardly integrate word embedding INLINEFORM0 and subword embedding INLINEFORM1 for a given word INLINEFORM2 . INLINEFORM3 ", " where INLINEFORM0 denotes the detailed integration operation. In this work, we investigate concatenation (concat), element-wise summation (sum) and element-wise multiplication (mul). Thus, each document INLINEFORM1 and query INLINEFORM2 is represented as INLINEFORM3 matrix where INLINEFORM4 denotes the dimension of word embedding and INLINEFORM5 is the number of words in the input.", "Subword embedding could be useful to refine the word embedding in a finer-grained way, we also consider improving word representation from itself. For quite a lot of words, especially those rare ones, their word embedding is extremely hard to learn due to the data sparse issue. Actually, if all the words in the dataset are used to build the vocabulary, the OOV words from the test set will not obtain adequate training. If they are initiated inappropriately, either with relatively high or low weights, they will harm the answer prediction. To alleviate the OOV issues, we keep a short list INLINEFORM0 for specific words. INLINEFORM1 ", "If INLINEFORM0 is in INLINEFORM1 , the immediate word embedding INLINEFORM2 is indexed from word lookup table INLINEFORM3 where INLINEFORM4 denotes the size (recorded words) of lookup table. Otherwise, it will be represented as the randomly initialized default word (denoted by a specific mark INLINEFORM5 ). Note that, this is intuitively like “guessing” the possible unknown words (which will appear during test) from the vocabulary during training and only the word embedding of the OOV words will be replaced by INLINEFORM6 while their subword embedding INLINEFORM7 will still be processed using the original word. In this way, the OOV words could be tuned sufficiently with expressive meaning after training. During test, the word embedding of unknown words would not severely bias its final representation. Thus, INLINEFORM8 ( INLINEFORM9 ) can be rewritten as INLINEFORM10 ", "In our experiments, the short list is determined according to the word frequency. Concretely, we sort the vocabulary according to the word frequency from high to low. A frequency filter ratio INLINEFORM0 is set to filter out the low-frequency words (rare words) from the lookup table. For example, INLINEFORM1 =0.9 means the least frequent 10% words are replaced with the default UNK notation.", "The subword embedding INLINEFORM0 is generated by taking the final outputs of a bidirectional gated recurrent unit (GRU) BIBREF15 applied to the embeddings from a lookup table of subwords. The structure of GRU used in this paper are described as follows. INLINEFORM1 ", " where INLINEFORM0 denotes the element-wise multiplication. INLINEFORM1 and INLINEFORM2 are the reset and update gates respectively, and INLINEFORM3 are the hidden states. A bi-directional GRU (BiGRU) processes the sequence in both forward and backward directions. Subwords of each word are successively fed to forward GRU and backward GRU to obtain the internal features of two directions. The output for each input is the concatenation of the two vectors from both directions: INLINEFORM4 . Then, the output of BiGRUs is passed to a fully connected layer to obtain the final subword embedding INLINEFORM5 . 
INLINEFORM6 " ], [ "Our attention module is based on the Gated attention Reader (GA Reader) proposed by BIBREF9 . We choose this model due to its simplicity with comparable performance so that we can focus on the effectiveness of SAW strategies. This module can be described in the following two steps. After augmented embedding, we use two BiGRUs to get contextual representations of the document and query respectively, where the representation of each word is formed by concatenating the forward and backward hidden states. INLINEFORM0 ", " For each word INLINEFORM0 in INLINEFORM1 , we form a word-specific representation of the query INLINEFORM2 using soft attention, and then adopt element-wise product to multiply the query representation with the document word representation. INLINEFORM3 ", " where INLINEFORM0 denotes the multiplication operator to model the interactions between INLINEFORM1 and INLINEFORM2 . Then, the document contextual representation INLINEFORM3 is gated by query representation.", "Suppose the network has INLINEFORM0 layers. At each layer, the document representation INLINEFORM1 is updated through above attention learning. After going through all the layers, our model comes to answer prediction phase. We use all the words in the document to form the candidate set INLINEFORM2 . Let INLINEFORM3 denote the INLINEFORM4 -th intermediate output of query representation INLINEFORM5 and INLINEFORM6 represent the full output of document representation INLINEFORM7 . The probability of each candidate word INLINEFORM8 as being the answer is predicted using a softmax layer over the inner-product between INLINEFORM9 and INLINEFORM10 . INLINEFORM11 ", " where vector INLINEFORM0 denotes the probability distribution over all the words in the document. Note that each word may occur several times in the document. Thus, the probabilities of each candidate word occurring in different positions of the document are summed up for final prediction. INLINEFORM1 ", " where INLINEFORM0 denotes the set of positions that a particular word INLINEFORM1 occurs in the document INLINEFORM2 . The training objective is to maximize INLINEFORM3 where INLINEFORM4 is the correct answer.", "Finally, the candidate word with the highest probability will be chosen as the predicted answer. INLINEFORM0 ", "Different from recent work employing complex attention mechanisms BIBREF5 , BIBREF7 , BIBREF16 , our attention mechanism is much more simple with comparable performance so that we can focus on the effectiveness of SAW strategies." ], [ "To verify the effectiveness of our proposed model, we conduct multiple experiments on three Chinese Machine Reading Comprehension datasets, namely CMRC-2017 BIBREF17 , People's Daily (PD) and Children Fairy Tales (CFT) BIBREF2 . In these datasets, a story containing consecutive sentences is formed as the Document and one of the sentences is either automatically or manually selected as the Query where one token is replaced by a placeholder to indicate the answer to fill in. Table TABREF8 gives data statistics. Different from the current cloze-style datasets for English reading comprehension, such as CBT, Daily Mail and CNN BIBREF0 , the three Chinese datasets do not provide candidate answers. Thus, the model has to find the correct answer from the entire document.", "Besides, we also use the Children's Book Test (CBT) dataset BIBREF1 to test the generalization ability in multi-lingual case. 
We only focus on subsets where the answer is either a common noun (CN) or NE which is more challenging since the answer is likely to be rare words. We evaluate all the models in terms of accuracy, which is the standard evaluation metric for this task.", "Throughout this paper, we use the same model setting to make fair comparisons. According to our preliminary experiments, we report the results based on the following settings. The default integration strategy is element-wise product. Word embeddings were 200 INLINEFORM0 and pre-trained by word2vec BIBREF18 toolkit on Wikipedia corpus. Subword embedding were 100 INLINEFORM1 and randomly initialized with the uniformed distribution in the interval [-0:05; 0:05]. Our model was implemented using the Theano and Lasagne Python libraries. We used stochastic gradient descent with ADAM updates for optimization BIBREF19 . The batch size was 64 and the initial learning rate was 0.001 which was halved every epoch after the second epoch. We also used gradient clipping with a threshold of 10 to stabilize GRU training BIBREF20 . We use three attention layers for all experiments. The GRU hidden units for both the word and subword representation were 128. The default frequency filter proportion was 0.9 and the default merging times of BPE was 1,000. We also apply dropout between layers with a dropout rate of 0.5 ." ], [ "[7]http://www.hfl-tek.com/cmrc2017/leaderboard.html", "Table TABREF17 shows our results on CMRC-2017 dataset, which shows that our SAW Reader (mul) outperforms all other single models on the test set, with 7.57% improvements compared with Attention Sum Reader (AS Reader) baseline. Although WHU's model achieves the best besides our model on the valid set with only 0.75% below ours, their result on the test set is lower than ours by 2.27%, indicating our model has a satisfactory generalization ability.", "We also list different integration operations for word and subword embeddings. Table TABREF19 shows the comparisons. From the results, we can see that Word + BPE outperforms Word + Char which indicates subword embedding works essentially. We also observe that mul outperforms the other two operations, concat and sum. This reveals that mul might be more informative than concat and sum operations. The superiority might be due to element-wise product being capable of modeling the interactions and eliminating distribution differences between word and subword embedding. Intuitively, this is also similar to endow subword-aware “attention” over the word embedding. In contrast, concatenation operation may cause too high dimension, which leads to serious over-fitting issues, and sum operation is too simple to prevent from detailed information losing.", "Since there is no training set for CFT dataset, our model is trained on PD training set. Note that the CFT dataset is harder for the machine to answer because the test set is further processed by human evaluation, and may not be accordance with the pattern of PD dataset. The results on PD and CFT datasets are listed in Table TABREF20 . As we see that, our SAW Reader significantly outperforms the CAS Reader in all types of testing, with improvements of 7.0% on PD and 8.8% on CFT test sets, respectively. Although the domain and topic of PD and CFT datasets are quite different, the results indicate that our model also works effectively for out-of-domain learning.", "To verify if our method can only work for Chinese, we also evaluate the effectiveness of the proposed method on benchmark English dataset. 
We use CBT dataset as our testbed to evaluate the performance. For a fair comparison, we simply set the same parameters as before. Table TABREF22 shows the results. We observe that our model outperforms most of the previously public works, with 2.4 % gains on the CBT-NE test set compared with GA Reader which adopts word and character embedding concatenation. Our SAW Reader also achieves comparable performance with FG Reader who adopts neural gates to combine word-level and character-level representations with assistance of extra features including NE, POS and word frequency while our model is much simpler and faster. This result shows our SAW Reader is not restricted to Chinese reading comprehension, but also for other languages." ], [ "The vocabulary size could seriously involve the segmentation granularity. For BPE segmentation, the resulted subword vocabulary size is equal to the merging times plus the number of single-character types. To have an insight of the influence, we adopt merge times from 0 to 20 INLINEFORM0 , and conduct quantitative study on CMRC-2017 for BPE segmentation. Figure FIGREF25 shows the results. We observe that when the vocabulary size is 1 INLINEFORM1 , the models could obtain the best performance. The results indicate that for a task like reading comprehension the subwords, being a highly flexible grained representation between character and word, tends to be more like characters instead of words. However, when the subwords completely fall into characters, the model performs the worst. This indicates that the balance between word and character is quite critical and an appropriate grain of character-word segmentation could essentially improve the word representation." ], [ "To investigate the impact of the short list to the model performance, we conduct quantitative study on the filter ratio from [0.1, 0.2, ..., 1]. The results on the CMRC-2017 dataset are depicted in Figure FIGREF25 . As we can see that when INLINEFORM0 our SAW reader can obtain the best performance, showing that building the vocabulary among all the training set is not optimal and properly reducing the frequency filter ratio can boost the accuracy. This is partially attributed to training the model from the full vocabulary would cause serious over-fitting as the rare words representations can not obtain sufficient tuning. If the rare words are not initialized properly, they would also bias the whole word representations. Thus a model without OOV mechanism will fail to precisely represent those inevitable OOV words from test sets." ], [ "In text understanding tasks, if the ground-truth answer is OOV word or contains OOV word(s), the performance of deep neural networks would severely drop due to the incomplete representation, especially for cloze-style reading comprehension task where the answer is only one word or phrase. In CMRC-2017, we observe questions with OOV answers (denoted as “OOV questions\") account for 17.22% in the error results of the best Word + Char embedding based model. With BPE subword embedding, 12.17% of these “OOV questions\" could be correctly answered. This shows the subword representations could be essentially useful for modeling rare and unseen words.", "To analyze the reading process of SAW Reader, we draw the attention distributions at intermediate layers as shown in Figure FIGREF28 . 
We observe that the salient candidates in the document come into focus after the pair-wise matching of document and query, and that the right answer (“The mole”) obtains a high weight at the very beginning. After attention learning, the key evidence for the answer is collected and irrelevant parts are ignored. This shows that our SAW Reader is effective at selecting the vital points at the fundamental embedding layer, guiding the attention layers to collect more relevant pieces." ], [ "Recently, many deep learning models have been proposed for reading comprehension BIBREF16 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF9 , BIBREF26 , BIBREF27 . Notably, Chen2016A conducted an in-depth and thoughtful examination of the comprehension task based on an attentive neural network and an entity-centric classifier, with a careful analysis based on a handful of features. kadlec2016text proposed the Attention Sum Reader (AS Reader) that uses attention to directly pick the answer from the context, which is motivated by the Pointer Network BIBREF28 . Instead of summing the query-to-document attention, GA Reader BIBREF9 defined an element-wise product to endow attention on each word of the document using the entire query representation, building query-specific representations of words in the document for accurate answer selection. Wang2017Gated employed gated self-matching networks (R-net), matching the passage against itself to refine the passage representation with information from the whole passage. Cui2016Attention introduced an “attention-over-attention” mechanism (AoA) where query-to-document and document-to-query attention are mutually attentive and interact with each other." ], [ "Distributed word representation plays a fundamental role in neural models BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . Recently, character embeddings have been widely used to enrich word representations BIBREF37 , BIBREF21 , BIBREF38 , BIBREF39 . Yang2016Words explored a fine-grained gating mechanism (FG Reader) to dynamically combine word-level and character-level representations based on properties of the words. However, this method is computationally complex and it is not end-to-end, requiring extra labels such as NE and POS tags. Seo2016Bidirectional concatenated the character and word embeddings to feed a two-layer Highway Network.", "Beyond machine reading comprehension, character embedding has also benefited other natural language processing tasks, such as word segmentation BIBREF40 , machine translation BIBREF38 , tagging BIBREF41 , BIBREF42 and language modeling BIBREF43 , BIBREF44 . However, character embedding only shows marginal improvement due to a lack of internal semantics. Lexical, syntactic and morphological information has also been considered to improve word representations BIBREF12 , BIBREF45 . Bojanowski2016Enriching proposed to learn representations for character INLINEFORM0 -gram vectors and represent words as the sum of the INLINEFORM1 -gram vectors. Avraham2017The built a model inspired by BIBREF46 , who used morphological tags instead of INLINEFORM2 -grams. They jointly trained their morphological and semantic embeddings, implicitly assuming that morphological and semantic information should live in the same space. However, the subwords derived from linguistic knowledge, typically morphological suffixes, prefixes or stems, may not be suitable for different kinds of languages and tasks.
Sennrich2015Neural introduced the byte pair encoding (BPE) compression algorithm into neural machine translation to enable open-vocabulary translation by encoding rare and unknown words as subword units. In contrast, we consider refining the word representations for both frequent and infrequent words from a computational perspective. Our proposed subword-augmented embedding approach is more general and can be adopted to enhance the representation of each word by adaptively altering the segmentation granularity in multiple NLP tasks." ], [ "This paper presents an effective neural architecture, called subword-augmented word embedding, to enhance the model performance for the cloze-style reading comprehension task. The proposed SAW Reader uses subword embedding to enhance the word representation and limits the word frequency spectrum to train rare words efficiently. With the help of the short list, the model size is also reduced and training is sped up. Unlike most existing works, which introduce either complex attentive architectures or many manual features, our model is much simpler yet effective. Achieving state-of-the-art performance on multiple benchmarks, the proposed reader has proved effective for learning joint representations at both the word and subword level and for alleviating OOV difficulties." ] ] }
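To make the BPE segmentation that the SAW Reader builds on more concrete, below is a minimal sketch of the merge-learning loop. It omits practical details such as end-of-word markers and frequency thresholds, and the function name is hypothetical; the default of 1,000 merges mirrors the setting reported earlier.

```python
from collections import Counter

def learn_bpe(words, num_merges=1000):
    """Learn BPE merge operations from a list of word tokens (sketch)."""
    vocab = {tuple(w): f for w, f in Counter(words).items()}   # words as symbol tuples
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):             # count adjacent symbol pairs
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)                       # most frequent pair
        merges.append(best)
        new_vocab = {}
        for symbols, freq in vocab.items():                    # apply the merge everywhere
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = new_vocab.get(tuple(out), 0) + freq
        vocab = new_vocab
    return merges
```

The number of merges directly controls the subword granularity discussed in the analysis above: zero merges degenerate to characters, while a very large number approaches whole words.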
{ "question": [ "how are rare words defined?", "which public datasets were used?", "what are the baselines?" ], "question_id": [ "859e0bed084f47796417656d7a68849eb9cb324f", "04e90c93d046cd89acef5a7c58952f54de689103", "f513e27db363c28d19a29e01f758437d7477eb24" ], "nlp_background": [ "", "", "" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "low-frequency words" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In our experiments, the short list is determined according to the word frequency. Concretely, we sort the vocabulary according to the word frequency from high to low. A frequency filter ratio INLINEFORM0 is set to filter out the low-frequency words (rare words) from the lookup table. For example, INLINEFORM1 =0.9 means the least frequent 10% words are replaced with the default UNK notation." ], "highlighted_evidence": [ "A frequency filter ratio INLINEFORM0 is set to filter out the low-frequency words (rare words) from the lookup table" ] } ], "annotation_id": [ "60c9b737810c6bf6d0978eadbb33409f3b4734ff" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "CMRC-2017", "People's Daily (PD)", "Children Fairy Tales (CFT) ", "Children's Book Test (CBT)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "To verify the effectiveness of our proposed model, we conduct multiple experiments on three Chinese Machine Reading Comprehension datasets, namely CMRC-2017 BIBREF17 , People's Daily (PD) and Children Fairy Tales (CFT) BIBREF2 . In these datasets, a story containing consecutive sentences is formed as the Document and one of the sentences is either automatically or manually selected as the Query where one token is replaced by a placeholder to indicate the answer to fill in. Table TABREF8 gives data statistics. Different from the current cloze-style datasets for English reading comprehension, such as CBT, Daily Mail and CNN BIBREF0 , the three Chinese datasets do not provide candidate answers. Thus, the model has to find the correct answer from the entire document.", "Besides, we also use the Children's Book Test (CBT) dataset BIBREF1 to test the generalization ability in multi-lingual case. We only focus on subsets where the answer is either a common noun (CN) or NE which is more challenging since the answer is likely to be rare words. We evaluate all the models in terms of accuracy, which is the standard evaluation metric for this task." ], "highlighted_evidence": [ "To verify the effectiveness of our proposed model, we conduct multiple experiments on three Chinese Machine Reading Comprehension datasets, namely CMRC-2017 BIBREF17 , People's Daily (PD) and Children Fairy Tales (CFT) BIBREF2 ", "Besides, we also use the Children's Book Test (CBT) dataset BIBREF1 to test the generalization ability in multi-lingual case." 
] } ], "annotation_id": [ "5aaa03e0f41c9ea0f27c3e28b771d586e12ba858" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "AS Reader, GA Reader, CAS Reader", "evidence": [ "Table TABREF17 shows our results on CMRC-2017 dataset, which shows that our SAW Reader (mul) outperforms all other single models on the test set, with 7.57% improvements compared with Attention Sum Reader (AS Reader) baseline. Although WHU's model achieves the best besides our model on the valid set with only 0.75% below ours, their result on the test set is lower than ours by 2.27%, indicating our model has a satisfactory generalization ability.", "FLOAT SELECTED: Table 2: Accuracy on CMRC-2017 dataset. Results marked with † are from the latest official CMRC2017 Leaderboard 7. The best results are in bold face.", "FLOAT SELECTED: Table 3: Case study on CMRC-2017." ], "highlighted_evidence": [ "Table TABREF17 shows our results on CMRC-2017 dataset, which shows that our SAW Reader (mul) outperforms all other single models on the test set, with 7.57% improvements compared with Attention Sum Reader (AS Reader) baseline", "FLOAT SELECTED: Table 2: Accuracy on CMRC-2017 dataset. Results marked with † are from the latest official CMRC2017 Leaderboard 7. The best results are in bold face.", "FLOAT SELECTED: Table 3: Case study on CMRC-2017." ] } ], "annotation_id": [ "a701def54b5c6fae9f04b640fde1eb6fae682fe0" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] } ] }
{ "caption": [ "Figure 1: Architecture of the proposed Subword-augmented Embedding Reader (SAW Reader).", "Table 1: Data statistics of CMRC-2017, PD and CFT.", "Table 2: Accuracy on CMRC-2017 dataset. Results marked with † are from the latest official CMRC2017 Leaderboard 7. The best results are in bold face.", "Table 3: Case study on CMRC-2017.", "Table 5: Accuracy on CBT dataset. Results marked with ‡ are of previously published works (Dhingra et al., 2017; Cui et al., 2016; Yang et al., 2017).", "Figure 2: Case study of the subword vocabulary size of BPE.", "Figure 3: Quantitative study on the influence of the short list.", "Figure 4: Pair-wise attention visualization." ], "file": [ "2-Figure1-1.png", "4-Table1-1.png", "6-Table2-1.png", "7-Table3-1.png", "8-Table5-1.png", "8-Figure2-1.png", "8-Figure3-1.png", "9-Figure4-1.png" ] }
1911.13087
Kurdish (Sorani) Speech to Text: Presenting an Experimental Dataset
We present an experimental dataset, the Basic Dataset for Sorani Kurdish Automatic Speech Recognition (BD-4SK-ASR), which we used in the first attempt at developing automatic speech recognition for Sorani Kurdish. The objective of the project was to develop a system that could automatically recognize simple sentences based on the vocabulary used in grades one to three of the primary schools in the Kurdistan Region of Iraq. We used CMUSphinx as our experimental environment. We developed a dataset to train the system. The dataset is publicly available for non-commercial use under the CC BY-NC-SA 4.0 license.
{ "section_name": [ "Introduction", "Related work", "The BD-4SK-ASR Dataset", "The BD-4SK-ASR Dataset ::: Phoeset", "The BD-4SK-ASR Dataset ::: Filler phones", "The BD-4SK-ASR Dataset ::: The File IDs", "The BD-4SK-ASR Dataset ::: The Transcription", "The BD-4SK-ASR Dataset ::: The Corpus", "The BD-4SK-ASR Dataset ::: The Narration Files", "The BD-4SK-ASR Dataset ::: The Language Model", "Conclusion" ], "paragraphs": [ [ "Kurdish language processing requires endeavor by interested researchers and scholars to overcome with a large gap which it has regarding the resource scarcity. The areas that need attention and the efforts required have been addressed in BIBREF0.", "The Kurdish speech recognition is an area which has not been studied so far. We were not able to retrieve any resources in the literature regarding this subject.", "In this paper, we present a dataset based on CMUShpinx BIBREF1 for Sorani Kurdish. We call it a Dataset for Sorani Kurdish Automatic Speech Recognition (BD-4SK-ASR). Although other technologies are emerging, CMUShpinx could still be used for experimental studies.", "The rest of this paper is organized as follows. Section SECREF2 reviews the related work. Section SECREF3 presents different parts of the dataset, such as the dictionary, phoneset, transcriptions, corpus, and language model. Finally, Section SECREF4 concludes the paper and suggests some areas for future work." ], [ "The work on Automatic Speech Recognition (ASR) has a long history, but we could not retrieve any literature on Kurdish ASR at the time of compiling this article. However, the literature on ASR for different languages is resourceful. Also, researchers have widely used CMUSphinx for ASR though other technologies have been emerging in recent years BIBREF1.", "We decided to use CMUSphinx because we found it a proper and well-established environment to start Kurdish ASR." ], [ "To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences.", "In the following sections, we present the available items in the dataset. The dataset ia available on https://github.com/KurdishBLARK/BD-4SK-ASR." ], [ "The phoneset includes 34 phones for Sorani Kurdish. A sample of the file content is given below.", "R", "RR", "S", "SIL", "SH", "T", "V", "W", "WW", "Figure FIGREF3 shows the Sorani letters in Persian-Arabic script, the suggested phoneme (capital English letters), and an example of the transformation of words in the developed corpus." ], [ "The filler phone file usually contains fillers in spoken sentences. In our basic sentences, we have only considered silence. Therefore it only includes three lines to indicate the possible pauses at the beginning and end of the sentences and also after each word." ], [ "This file includes the list of files in which the narrated sentences have been recorded. The recorded files are in wav formats. However, in the file IDs, the extension is omitted. A sample of the file content is given below. The test directory is the directory in which the files are located.", "test/T1-1-50-01", "test/T1-1-50-02", "test/T1-1-50-03", "test/T1-1-50-04", "test/T1-1-50-05", "test/T1-1-50-06" ], [ "This file contains the transcription of each sentence based on the phoneset along with the file ID in which the equivalent narration has been saved. 
The following is a sample of the content of the file.", "<s> BYR RRAAMAAN DAARISTAANA AMAANAY </s> (T1-1-50-18)", "<s> DWWRA HAWLER CHIRAAYA SARDAAN NABWW </s> (T1-1-50-19)", "<s> SAALL DYWAAR QWTAABXAANA NACHIN </s> (T1-1-50-20)", "<s> XWENDIN ANDAAMAANY GASHA </s> (T1-1-50-21)", "<s> NAMAAM WRYAA KIRD PSHWWDAA </s> (T1-1-50-22)", "<s> DARCHWWY DAKAN DAKAWET </s> (T1-1-50-23)", "<s> CHAND BIRAAT MAQAST </s> (T1-1-50-24)", "<s> BAAXCHAKAY DAAYK DARCHWWY </s> (T1-1-50-25)", "<s> RROZH JWAAN DAKAWET ZYAANYAAN </s> (T1-1-50-26)", "" ], [ "The corpus includes 2000 sentences. These sentences are random renderings of 200 sentences, which we have taken from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. The reason that we have taken only 200 sentences is to have a smaller dictionary and also to increase the repetition of each word in the narrated speech. We transformed the corpus sentences, which are in Persian-Arabic script, into a format that complies with the suggested phones for the related Sorani letters (see Section SECREF6)." ], [ "Two thousand narration files were created. We used Audacity to record the narrations. We used a normal laptop in a quiet room and minimized the background noise. However, we could not manage to avoid the noise of the fan of the laptop. A single speaker narrated the 2000 sentences, which took several days. We set the Audacity software to a sampling rate of 16 kHz, a 16-bit depth, and a mono (single) channel. The noise reduction was set to 6 dB, the sensitivity to 4.00, and the frequency smoothing to 0." ], [ "We created the language model from the transcriptions. The model was created using CMUSphinx, with a (fixed) discount mass of 0.5 and backoffs computed using the ratio method. The model includes 283 unigrams, 5337 bigrams, and 6935 trigrams." ], [ "We presented a dataset, BD-4SK-ASR, that can be used for training and developing an acoustic model for Automatic Speech Recognition in the CMUSphinx environment for Sorani Kurdish. The Kurdish books of grades one to three of primary schools in the Kurdistan Region of Iraq were used to extract 200 sample sentences. The dataset includes the dictionary, the phoneset, the transcriptions of the corpus sentences using the suggested phones, the recorded narrations of the sentences, and the acoustic model. The dataset can be used to start experiments on Sorani Kurdish ASR.", "As mentioned before, research and development on Kurdish ASR require a huge amount of effort. A variety of areas must be explored, and various resources must be collected and developed. The multi-dialect characteristic of Kurdish makes these tasks rather demanding. To participate in these efforts, we are interested in expanding Kurdish ASR by developing a larger dataset based on larger Sorani corpora, working on the other Kurdish dialects, and using new environments for ASR such as Kaldi." ] ] }
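To make the file formats described above concrete, the following is a minimal sketch that writes a CMUSphinx-style file-ID list and the matching transcription file from (file ID, phone string) pairs. The output file names are assumptions for illustration; the only requirement reflected here is that the two files stay line-aligned and that each transcription line carries the utterance between <s> and </s> followed by the bare file ID in parentheses, as in the samples above.

```python
def write_sphinx_files(entries, prefix="bd4sk"):
    """Write .fileids and .transcription files in the format shown above (sketch)."""
    with open(prefix + ".fileids", "w", encoding="utf-8") as f_ids, \
         open(prefix + ".transcription", "w", encoding="utf-8") as f_tr:
        for file_id, text in entries:
            f_ids.write(file_id + "\n")                      # e.g. test/T1-1-50-18
            utt_id = file_id.split("/")[-1]                  # drop the directory part
            f_tr.write("<s> %s </s> (%s)\n" % (text, utt_id))

write_sphinx_files([("test/T1-1-50-18", "BYR RRAAMAAN DAARISTAANA AMAANAY")])
```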
{ "question": [ "What are the results of the experiment?", "How was the dataset collected?", "What is the size of the dataset?", "How many different subjects does the dataset contain?", "How many annotators participated?", "How long is the dataset?" ], "question_id": [ "eb5ed1dd26fd9adb587d29225c7951a476c6ec28", "0828cfcf0e9e02834cc5f279a98e277d9138ffd9", "7b2de0109b68f78afa9e6190c82ca9ffaf62f9bd", "482ac96ff675975227b6d7058b9b87aeab6f81fe", "3f3c09c1fd542c1d9acf197957c66b79ea1baf6e", "0a82534ec6e294ab952103f11f56fd99137adc1f" ], "nlp_background": [ "", "", "", "five", "five", "five" ], "topic_background": [ "", "", "", "familiar", "familiar", "familiar" ], "paper_read": [ "", "", "", "no", "no", "no" ], "search_query": [ "dataset", "dataset", "dataset", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "They were able to create a language model from the dataset, but did not test.", "evidence": [ "The BD-4SK-ASR Dataset ::: The Language Model", "We created the language from the transcriptions. The model was created using CMUSphinx in which (fixed) discount mass is 0.5, and backoffs are computed using the ratio method. The model includes 283 unigrams, 5337 bigrams, and 6935 trigrams." ], "highlighted_evidence": [ "The BD-4SK-ASR Dataset ::: The Language Model\nWe created the language from the transcriptions. The model was created using CMUSphinx in which (fixed) discount mass is 0.5, and backoffs are computed using the ratio method. The model includes 283 unigrams, 5337 bigrams, and 6935 trigrams." ] } ], "annotation_id": [ "da0fc36116ec0c88876ce022a0c985ce91bedf28" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "extracted text from Sorani Kurdish books of primary school and randomly created sentences", "evidence": [ "To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences." ], "highlighted_evidence": [ "To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences." ] } ], "annotation_id": [ "19041484d3b9b018a43dda76cda73c122af29409" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "2000 sentences" ], "yes_no": null, "free_form_answer": "", "evidence": [ "To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences." ], "highlighted_evidence": [ "To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences." 
] } ], "annotation_id": [ "1e944910b2cf0cf2f08c17a36761cd1f98e8ce6d" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "e23a5aaf6306bad1b6967aff6e406cbf8971b298" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "1", "evidence": [ "Two thousand narration files were created. We used Audacity to record the narrations. We used a normal laptop in a quiet room and minimized the background noise. However, we could not manage to avoid the noise of the fan of the laptop. A single speaker narrated the 2000 sentences, which took several days. We set the Audacity software to have a sampling rate of 16, 16-bit bit rate, and a mono (single) channel. The noise reduction db was set to 6, the sensitivity to 4.00, and the frequency smoothing to 0." ], "highlighted_evidence": [ "A single speaker narrated the 2000 sentences, which took several days. " ] } ], "annotation_id": [ "6e7a28a48be66a416bfa8421a6d91bb2f601935f" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "2000" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The corpus includes 2000 sentences. Theses sentence are random renderings of 200 sentences, which we have taken from Sorani Kurdish books of the grades one to three of the primary school in the Kurdistan Region of Iraq. The reason that we have taken only 200 sentences is to have a smaller dictionary and also to increase the repetition of each word in the narrated speech. We transformed the corpus sentences, which are in Persian-Arabic script, into the format which complies with the suggested phones for the related Sorani letters (see Section SECREF6)." ], "highlighted_evidence": [ "The corpus includes 2000 sentences. " ] } ], "annotation_id": [ "b84d0ea71de51ad7340cb2e31a1f903ae9c0fe52" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ] }
{ "caption": [ "Figure 1: The Sorani sounds along with their phoneme representation." ], "file": [ "3-Figure1-1.png" ] }
1608.04917
Cohesion and Coalition Formation in the European Parliament: Roll-Call Votes and Twitter Activities
We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014--2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective. We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns.
{ "section_name": [ "Abstract", "Introduction", "Related work", "Methods", "Co-voting measured by agreement", "A network-based measure of co-voting", "Measuring cohesion and coalitions on Twitter", "Cohesion of political groups", "Coalitions in the European Parliament", "Discussion", "Conclusions", "Acknowledgments" ], "paragraphs": [ [ "We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014–2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective.", "We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns." ], [ "Social-media activities often reflect phenomena that occur in other complex systems. By observing social networks and the content propagated through these networks, we can describe or even predict the interplay between the observed social-media activities and another complex system that is more difficult, if not impossible, to monitor. There are numerous studies reported in the literature that successfully correlate social-media activities to phenomena like election outcomes BIBREF0 , BIBREF1 or stock-price movements BIBREF2 , BIBREF3 .", "In this paper we study the cohesion and coalitions exhibited by political groups in the Eighth European Parliament (2014–2019). We analyze two entirely different aspects of how the Members of the European Parliament (MEPs) behave in policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting (i.e., endorsing) behavior.", "We use two diverse datasets in the analysis: the roll-call votes and the Twitter data. A roll-call vote (RCV) is a vote in the parliament in which the names of the MEPs are recorded along with their votes. The RCV data is available as part of the minutes of the parliament's plenary sessions. From this perspective, cohesion is seen as the tendency to co-vote (i.e., cast the same vote) within a group, and a coalition is formed when members of two or more groups exhibit a high degree of co-voting on a subject. The second dataset comes from Twitter. 
It captures the retweeting behavior of MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between the groups) from a completely different perspective.", "With over 300 million monthly active users and 500 million tweets posted daily, Twitter is one of the most popular social networks. Twitter allows its users to post short messages (tweets) and to follow other users. A user who follows another user is able to read his/her public tweets. Twitter also supports other types of interaction, such as user mentions, replies, and retweets. Of these, retweeting is the most important activity as it is used to share and endorse content created by other users. When a user retweets a tweet, the information about the original author as well as the tweet's content are preserved, and the tweet is shared with the user's followers. Typically, users retweet content that they agree with and thus endorse the views expressed by the original tweeter.", "We apply two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's INLINEFORM0 BIBREF4 which measures the agreement among observers, or voters in our case. The second one is based on Exponential Random Graph Models (ERGM) BIBREF5 . In contrast to the former, ERGM is a network-based approach and is often used in social-network analyses. Even though these two methodologies come with two different sets of techniques and are based on different assumptions, they provide consistent results.", "The main contributions of this paper are as follows:", "(i) We give general insights into the cohesion of political groups in the Eighth European Parliament, both overall and across different policy areas.", "(ii) We explore whether coalitions are formed in the same way for different policy areas.", "(iii) We explore to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns.", "(iv) We employ two statistically sound methodologies and examine the extent to which the results are sensitive to the choice of methodology. While the results are mostly consistent, we show that the difference are due to the different treatment of non-attending and abstaining MEPs by INLINEFORM0 and ERGM.", "The most novel and interesting aspect of our work is the relationship between the co-voting and the retweeting patterns. The increased use of Twitter by MEPs on days with a roll-call vote session (see Fig FIGREF1 ) is an indicator that these two processes are related. In addition, the force-based layouts of the co-voting network and the retweet network reveal a very similar structure on the left-to-center side of the political spectrum (see Fig FIGREF2 ). They also show a discrepancy on the far-right side of the spectrum, which calls for a more detailed analysis." ], [ "In this paper we study and relate two very different aspects of how MEPs behave in policy-making processes. First, we look at their co-voting behavior, and second, we examine their retweeting patterns. Thus, we draw related work from two different fields of science. On one hand, we look at how co-voting behavior is analyzed in the political-science literature and, on the other, we explore how Twitter is used to better understand political and policy-making processes. The latter has been more thoroughly explored in the field of data mining (specifically, text mining and network analysis).", "To the best of our knowledge, this is the first paper that studies legislative behavior in the Eighth European Parliament. 
The legislative behavior of the previous parliaments was thoroughly studied by Hix, Attina, and others BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . These studies found that voting behavior is determined to a large extent—and when viewed over time, increasingly so—by affiliation to a political group, as an organizational reflection of the ideological position. The authors found that the cohesion of political groups in the parliament has increased, while nationality has been less and less of a decisive factor BIBREF12 . The literature also reports that a split into political camps on the left and right of the political spectrum has recently replaced the `grand coalition' between the two big blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the parliament. The authors conclude that coalitions are to a large extent formed along the left-to-right axis BIBREF12 .", "In this paper we analyze the roll-call vote data published in the minutes of the parliament's plenary sessions. For a given subject, the data contains the vote of each MEP present at the respective sitting. Roll-call vote data from the European Parliament has already been extensively studied by other authors, most notably by Hix et al. BIBREF10 , BIBREF13 , BIBREF11 . To be able to study the cohesion and coalitions, authors like Hix, Attina, and Rice BIBREF6 , BIBREF13 , BIBREF14 defined and employed a variety of agreement measures. The most prominent measure is the Agreement Index proposed by Hix et al. BIBREF13 . This measure computes the agreement score from the size of the majority class for a particular vote. The Agreement Index, however, exhibits two drawbacks: (i) it does not account for co-voting by chance, and (ii) without a proper adaptation, it does not accommodate the scenario in which the agreement is to be measured between two different political groups.", "We employ two statistically sound methodologies developed in two different fields of science. The first one is based on Krippendorff's INLINEFORM0 BIBREF4 . INLINEFORM1 is a measure of the agreement among observers, coders, or measuring instruments that assign values to items or phenomena. It compares the observed agreement to the agreement expected by chance. INLINEFORM2 is used to measure the inter- and self-annotator agreement of human experts when labeling data, and the performance of classification models in machine learning scenarios BIBREF15 . In addition to INLINEFORM3 , we employ Exponential Random Graph Models (ERGM) BIBREF5 . In contrast to the former, ERGM is a network-based approach, often used in social-network analyses. ERGM can be employed to investigate how different network statistics (e.g., number of edges and triangles) or external factors (e.g., political group membership) govern the network-formation process.", "The second important aspect of our study is related to analyzing the behavior of participants in social networks, specifically Twitter. Twitter is studied by researchers to better understand different political processes, and in some cases to predict their outcomes. Eom et al. BIBREF1 consider the number of tweets by a party as a proxy for the collective attention to the party, explore the dynamics of the volume, and show that this quantity contains information about an election's outcome. Other studies BIBREF16 reach similar conclusions. Conover et al. 
BIBREF17 predicted the political alignment of Twitter users in the run-up to the 2010 US elections based on content and network structure. They analyzed the polarization of the retweet and mention networks for the same elections BIBREF18 . Borondo et al. BIBREF19 analyzed user activity during the Spanish presidential elections. They additionally analyzed the 2012 Catalan elections, focusing on the interplay between the language and the community structure of the network BIBREF20 . Most existing research, as Larsson points out BIBREF21 , focuses on the online behavior of leading political figures during election campaigns.", "This paper continues our research on communities that MEPs (and their followers) form on Twitter BIBREF22 . The goal of our research was to evaluate the role of Twitter in identifying communities of influence when the actual communities are known. We represent the influence on Twitter by the number of retweets that MEPs “receive”. We construct two networks of influence: (i) core, which consists only of MEPs, and (ii) extended, which also involves their followers. We compare the detected communities in both networks to the groups formed by the political, country, and language membership of MEPs. The results show that the detected communities in the core network closely match the political groups, while the communities in the extended network correspond to the countries of residence. This provides empirical evidence that analyzing retweet networks can reveal real-world relationships and can be used to uncover hidden properties of the networks.", "Lazer BIBREF23 highlights the importance of network-based approaches in political science in general by arguing that politics is a relational phenomenon at its core. Some researchers have adopted the network-based approach to investigate the structure of legislative work in the US Congress, including committee and sub-committee membership BIBREF24 , bill co-sponsoring BIBREF25 , and roll-call votes BIBREF26 . More recently, Dal Maso et al. BIBREF27 examined the community structure with respect to political coalitions and government structure in the Italian Parliament. Scherpereel et al. BIBREF28 examined the constituency, personal, and strategic characteristics of MEPs that influence their tweeting behavior. They suggested that Twitter's characteristics, like immediacy, interactivity, spontaneity, personality, and informality, are likely to resonate with political parties across Europe. By fitting regression models, the authors find that MEPs from incohesive groups have a greater tendency to retweet.", "In contrast to most of these studies, we focus on the Eighth European Parliament, and more importantly, we study and relate two entirely different behavioral aspects, co-voting and retweeting. The goal of this research is to better understand the cohesion and coalition formation processes in the European Parliament by quantifying and comparing the co-voting patterns and social behavior." ], [ "In this section we present the methods to quantify cohesion and coalitions from the roll-call votes and Twitter activities." ], [ "We first show how the co-voting behaviour of MEPs can be quantified by a measure of the agreement between them. We treat individual RCVs as observations, and MEPs as independent observers or raters. When they cast the same vote, there is a high level of agreement, and when they vote differently, there is a high level of disagreement. 
We define cohesion as the level of agreement within a political group, a coalition as a voting agreement between political groups, and opposition as a disagreement between different groups.", "There are many well-known measures of agreement in the literature. We selected Krippendorff's Alpha-reliability ( INLINEFORM0 ) BIBREF4 , which is a generalization of several specialized measures. It works for any number of observers, and is applicable to different variable types and metrics (e.g., nominal, ordered, interval, etc.). In general, INLINEFORM1 is defined as follows: INLINEFORM2 ", "where INLINEFORM0 is the actual disagreement between observers (MEPs), and INLINEFORM1 is disagreement expected by chance. When observers agree perfectly, INLINEFORM2 INLINEFORM3 , when the agreement equals the agreement by chance, INLINEFORM4 INLINEFORM5 , and when the observers disagree systematically, INLINEFORM6 INLINEFORM7 .", "The two disagreement measures are defined as follows: INLINEFORM0 ", " INLINEFORM0 ", "The arguments INLINEFORM0 , and INLINEFORM1 are defined below and refer to the values in the coincidence matrix that is constructed from the RCVs data. In roll-call votes, INLINEFORM2 (and INLINEFORM3 ) is a nominal variable with two possible values: yes and no. INLINEFORM4 is a difference function between the values of INLINEFORM5 and INLINEFORM6 , defined as: INLINEFORM7 ", "The RCVs data has the form of a reliability data matrix: INLINEFORM0 ", "where INLINEFORM0 is the number of RCVs, INLINEFORM1 is the number of MEPs, INLINEFORM2 is the number of votes cast in the voting INLINEFORM3 , and INLINEFORM4 is the actual vote of an MEP INLINEFORM5 in voting INLINEFORM6 (yes or no).", "A coincidence matrix is constructed from the reliability data matrix, and is in general a INLINEFORM0 -by- INLINEFORM1 square matrix, where INLINEFORM2 is the number of possible values of INLINEFORM3 . In our case, where only yes/no votes are relevant, the coincidence matrix is a 2-by-2 matrix of the following form: INLINEFORM4 ", "A cell INLINEFORM0 accounts for all coincidences from all pairs of MEPs in all RCVs where one MEP has voted INLINEFORM1 and the other INLINEFORM2 . INLINEFORM3 and INLINEFORM4 are the totals for each vote outcome, and INLINEFORM5 is the grand total. The coincidences INLINEFORM6 are computed as: INLINEFORM7 ", "where INLINEFORM0 is the number of INLINEFORM1 pairs in vote INLINEFORM2 , and INLINEFORM3 is the number of MEPs that voted in INLINEFORM4 . When computing INLINEFORM5 , each pair of votes is considered twice, once as a INLINEFORM6 pair, and once as a INLINEFORM7 pair. The coincidence matrix is therefore symmetrical around the diagonal, and the diagonal contains all the equal votes.", "The INLINEFORM0 agreement is used to measure the agreement between two MEPs or within a group of MEPs. When applied to a political group, INLINEFORM1 corresponds to the cohesion of the group. The closer INLINEFORM2 is to 1, the higher the agreement of the MEPs in the group, and hence the higher the cohesion of the group.", "We propose a modified version of INLINEFORM0 to measure the agreement between two different groups, INLINEFORM1 and INLINEFORM2 . In the case of a voting agreement between political groups, high INLINEFORM3 is interpreted as a coalition between the groups, whereas negative INLINEFORM4 indicates political opposition.", "Suppose INLINEFORM0 and INLINEFORM1 are disjoint subsets of all the MEPs, INLINEFORM2 , INLINEFORM3 . 
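To make the computation above concrete, the following is a minimal sketch of the within-group case for binary yes/no votes, following the standard form alpha = 1 - D_o / D_e that the definitions above describe. It would be applied to the vote columns of one political group's MEPs; the between-group variant, whose derivation continues just below, additionally restricts the pairs to MEPs from different groups. Names are hypothetical.

```python
import numpy as np

def krippendorff_alpha_binary(votes):
    """Nominal Krippendorff's alpha for a yes/no reliability data matrix (sketch).

    votes: array of shape (n_rcv, n_mep) with 1 (yes), 0 (no), or np.nan
    for MEPs who did not attend or abstained.
    """
    o = np.zeros((2, 2))                                   # coincidence matrix
    for row in votes:
        cast = row[~np.isnan(row)].astype(int)
        m = len(cast)
        if m < 2:
            continue                                       # a vote with < 2 ballots adds no pairs
        counts = np.bincount(cast, minlength=2)
        for c in (0, 1):
            for k in (0, 1):
                pairs = counts[c] * counts[k] - (counts[c] if c == k else 0)
                o[c, k] += pairs / (m - 1)                 # ordered pairs, hence symmetric
    n_c = o.sum(axis=1)                                    # per-value totals
    n = n_c.sum()
    d_obs = (o[0, 1] + o[1, 0]) / n                        # observed disagreement
    d_exp = 2.0 * n_c[0] * n_c[1] / (n * (n - 1))          # disagreement expected by chance
    return 1.0 - d_obs / d_exp
```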
The respective number of votes cast by both group members in vote INLINEFORM4 is INLINEFORM5 and INLINEFORM6 . The coincidences are then computed as: INLINEFORM7 ", "where the INLINEFORM0 pairs come from different groups, INLINEFORM1 and INLINEFORM2 . The total number of such pairs in vote INLINEFORM3 is INLINEFORM4 . The actual number INLINEFORM5 of the pairs is multiplied by INLINEFORM6 so that the total contribution of vote INLINEFORM7 to the coincidence matrix is INLINEFORM8 ." ], [ "In this section we describe a network-based approach to analyzing the co-voting behavior of MEPs. For each roll-call vote we form a network, where the nodes in the network are MEPs, and an undirected edge between two MEPs is formed when they cast the same vote.", "We are interested in the factors that determine the cohesion within political groups and coalition formation between political groups. Furthermore, we investigate to what extent communication in a different social context, i.e., the retweeting behavior of MEPs, can explain the co-voting of MEPs. For this purpose we apply an Exponential Random Graph Model BIBREF5 to individual roll-call vote networks, and aggregate the results by means of the meta-analysis.", "ERGMs allow us to investigate the factors relevant for the network-formation process. Network metrics, as described in the abundant literature, serve to gain information about the structural properties of the observed network. A model investigating the processes driving the network formation, however, has to take into account that there can be a multitude of alternative networks. If we are interested in the parameters influencing the network formation we have to consider all possible networks and measure their similarity to the originally observed network. The family of ERGMs builds upon this idea.", "Assume a random graph INLINEFORM0 , in the form of a binary adjacency matrix, made up of a set of INLINEFORM1 nodes and INLINEFORM2 edges INLINEFORM3 where, similar to a binary choice model, INLINEFORM4 if the nodes INLINEFORM5 are connected and INLINEFORM6 if not. Since network data is by definition relational and thus violates assumptions of independence, classical binary choice models, like logistic regression, cannot be applied in this context. Within an ERGM, the probability for a given network is modelled by DISPLAYFORM0 ", " where INLINEFORM0 is the vector of parameters and INLINEFORM1 is the vector of network statistics (counts of network substructures), which are a function of the adjacency matrix INLINEFORM2 . INLINEFORM3 is a normalization constant corresponding to the sample of all possible networks, which ensures a proper probability distribution. Evaluating the above expression allows us to make assertions if and how specific nodal attributes influence the network formation process. These nodal attributes can be endogenous (dyad-dependent parameters) to the network, like the in- and out-degrees of a node, or exogenous (dyad-independent parameters), as the party affiliation, or the country of origin in our case.", "An alternative formulation of the ERGM provides the interpretation of the coefficients. We introduce the change statistic, which is defined as the change in the network statistics when an edge between nodes INLINEFORM0 and INLINEFORM1 is added or not. 
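For reference, the model just described in words corresponds to the standard ERGM form below; this is a reconstruction of the elided display, with theta the coefficient vector, g(y) the vector of network statistics, and kappa(theta) the normalizing constant over all possible networks. The derivation of the change statistic continues in the text that follows.

```latex
\[
  P_{\theta}(Y = y) = \frac{\exp\!\big(\theta^{\top} g(y)\big)}{\kappa(\theta)},
  \qquad
  \kappa(\theta) = \sum_{y'} \exp\!\big(\theta^{\top} g(y')\big).
\]
```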
If INLINEFORM2 and INLINEFORM3 denote the vectors of counts of network substructures when the edge is added or not, the change statistics is defined as follows: INLINEFORM4 ", "With this at hand it can be shown that the distribution of the variable INLINEFORM0 , conditional on the rest of the graph INLINEFORM1 , corresponds to: INLINEFORM2 ", "This implies on the one hand that the probability depends on INLINEFORM0 via the change statistic INLINEFORM1 , and on the other hand, that each coefficient within the vector INLINEFORM2 represents an increase in the conditional log-odds ( INLINEFORM3 ) of the graph when the corresponding element in the vector INLINEFORM4 increases by one. The need to condition the probability on the rest of the network can be illustrated by a simple example. The addition (removal) of a single edge alters the network statistics. If a network has only edges INLINEFORM5 and INLINEFORM6 , the creation of an edge INLINEFORM7 would not only add an additional edge but would also alter the count for other network substructures included in the model. In this example, the creation of the edge INLINEFORM8 also increases the number of triangles by one. The coefficients are transformed into probabilities with the logistic function: INLINEFORM9 ", "For example, in the context of roll-call votes, the probability that an additional co-voting edge is formed between two nodes (MEPs) of the same political group is computed with that equation. In this context, the nodematch (nodemix) coefficients of the ERGM (described in detail bellow) therefore refer to the degree of homophilous (heterophilous) matching of MEPs with regard to their political affiliation, or, expressed differently, the propensity of MEPs to co-vote with other MEPs of their respective political group or another group.", "A positive coefficient reflects an increased chance that an edge between two nodes with respective properties, like group affiliation, given all other parameters unchanged, is formed. Or, put differently, a positive coefficient implies that the probability of observing a network with a higher number of corresponding pairs relative to the hypothetical baseline network, is higher than to observe the baseline network itself BIBREF31 . For an intuitive interpretation, log-odds value of 0 corresponds to the even chance probability of INLINEFORM0 . Log-odds of INLINEFORM1 correspond to an increase of probability by INLINEFORM2 , whereas log-odds of INLINEFORM3 correspond to a decrease of probability by INLINEFORM4 .", "The computational challenges of estimating ERGMs is to a large degree due to the estimation of the normalizing constant. The number of possible networks is already extremely large for very small networks and the computation is simply not feasible. Therefore, an appropriate sample has to be found, ideally covering the most probable areas of the probability distribution. For this we make use of a method from the Markov Chain Monte Carlo (MCMC) family, namely the Metropolis-Hastings algorithm.", "The idea behind this algorithm is to generate and sample highly weighted random networks departing from the observed network. The Metropolis-Hastings algorithm is an iterative algorithm which samples from the space of possible networks by randomly adding or removing edges from the starting network conditional on its density. If the likelihood, in the ERGM context also denoted as weights, of the newly generated network is higher than that of the departure network it is retained, otherwise it is discarded. 
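To illustrate the coefficient interpretation given above (the description of the sampling procedure continues below), here is a small sketch that turns ERGM coefficients and the change statistics of a candidate edge into a conditional edge probability via the logistic function. The numeric values are made up for illustration and are not estimates from this study.

```python
import numpy as np

def edge_probability(theta, delta_g):
    """Conditional probability of an edge given the rest of the graph (sketch)."""
    log_odds = float(np.dot(theta, delta_g))   # linear predictor from change statistics
    return 1.0 / (1.0 + np.exp(-log_odds))     # logistic transformation

print(edge_probability([0.0], [1.0]))          # 0.5: even chance at log-odds 0
print(edge_probability([1.0], [1.0]))          # ~0.73: log-odds +1
print(edge_probability([-1.0], [1.0]))         # ~0.27: log-odds -1
```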
In the former case, the algorithm starts anew from the newly generated network; otherwise, the departure network is used again. Repeating this procedure sufficiently often and summing the weights associated with the stored (sampled) networks allows us to compute an approximation of the denominator in equation EQREF18 (the normalizing constant).", "The algorithm starts sampling from the originally observed network INLINEFORM0 . The optimization of the coefficients is done simultaneously, likewise using the Metropolis-Hastings algorithm. At the beginning, starting values have to be supplied. For the study at hand we used the “ergm” library from the statistical R software package BIBREF5 , which implements the Gibbs-sampling algorithm BIBREF32 , a special case of the Metropolis-Hastings algorithm outlined above.", "In order to answer our question about the importance of the factors that drive the network-formation process in the roll-call co-voting network, the ERGM is specified with the following parameters:", "nodematch country: This parameter adds one network statistic to the model, i.e., the number of edges INLINEFORM0 where INLINEFORM1 . The coefficient indicates the homophilous mixing behavior of MEPs with respect to their country of origin. In other words, this coefficient indicates how relevant nationality is in the formation of edges in the co-voting network.", "nodematch national party: This parameter adds one network statistic to the model: the number of edges INLINEFORM0 with INLINEFORM1 . The coefficient indicates the homophilous mixing behavior of the MEPs with regard to their party affiliation at the national level. In the context of this study, this coefficient can be interpreted as an indicator of within-party cohesion at the national level.", "nodemix EP group: This parameter adds one network statistic for each pair of European political groups. These coefficients shed light on the degree of coalition formation between different groups, as well as on within-group cohesion. Given that there are nine groups in the European Parliament, this parameter adds in total 81 statistics to the model.", "edge covariate Twitter: This parameter corresponds to a square matrix with the same dimensions as the adjacency matrix of the network, containing the number of mutual retweets between the MEPs. It provides insight into the extent to which communication in one social context (Twitter) can explain cooperation in another social context (co-voting in RCVs).", "An ERGM as specified above is estimated for each of the 2535 roll-call votes. Each roll-call vote is thereby interpreted as a binary network and as an independent study. It is assumed that a priori each MEP could possibly form an edge with each other MEP in the context of a roll-call vote. No assumptions are made about the presence or absence of individual MEPs in a voting session. In other words, the dimensions of the adjacency matrix (the node set), and therefore the distribution from which new networks are drawn, are kept constant over all RCVs and therefore for every ERGM. The ERGM results therefore implicitly generalize to the case where potentially all MEPs are present and could be voting. Not voting is incorporated implicitly through the disconnectedness of a node.", "The coefficients of the 2535 roll-call vote studies are aggregated by means of a meta-analysis approach proposed by Lubbers BIBREF33 and Snijders et al. BIBREF34 . We are interested in average effect sizes of different matching patterns over different topics and overall.
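As a simplified illustration of the aggregation step introduced above, the sketch below pools per-RCV coefficients using inverse-variance weights. The full meta-analysis model specified below additionally allows for normally distributed class-specific deviations around the average effect, so this is only the fixed-effect core of the idea; names are hypothetical.

```python
import numpy as np

def pooled_effect(coefs, std_errs):
    """Inverse-variance weighted average of per-RCV ERGM coefficients (sketch)."""
    coefs, std_errs = np.asarray(coefs), np.asarray(std_errs)
    w = 1.0 / std_errs ** 2                    # weights: inverse of the estimation variances
    mean = np.sum(w * coefs) / w.sum()         # pooled average effect
    se = np.sqrt(1.0 / w.sum())                # standard error of the pooled estimate
    return mean, se
```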
Considering the number of RCVs, it seems straightforward to interpret the different RCV networks as multiplex networks and collapse them into one weighted network, which could then be analysed by means of a valued ERGM BIBREF35 . There are, however, two reasons why we chose the meta-analysis approach instead. First, aggregating the RCV data results into an extremely dense network, leading to severe convergence (degeneracy) problems for the ERGM. Second, the RCV data contains information about the different policy areas the individual votes were about. Since we are interested in how the coalition formation in the European Parliament differs over different areas, a method is needed that allows for an ex-post analysis of the corresponding results. We therefore opted for the meta-analysis approach by Lubbers and Snijders et al. This approach allows us to summarize the results by decomposing the coefficients into average effects and (class) subject-specific deviations. The different ERGM runs for each RCV are thereby regarded as different studies with identical samples that are combined to obtain a general overview of effect sizes. The meta-regression model is defined as: INLINEFORM0 ", "Here INLINEFORM0 is a parameter estimate for class INLINEFORM1 , and INLINEFORM2 is the average coefficient. INLINEFORM3 denotes the normally distributed deviation of the class INLINEFORM4 with a mean of 0 and a variance of INLINEFORM5 . INLINEFORM6 is the estimation error of the parameter value INLINEFORM7 from the ERGM. The meta-analysis model is fitted by an iterated, weighted, least-squares model in which the observations are weighted by the inverse of their variances. For the overall nodematch between political groups, we weighted the coefficients by group sizes. The results from the meta analysis can be interpreted as if they stemmed from an individual ERGM run.", "In our study, the meta-analysis was performed using the RSiena library BIBREF36 , which implements the method proposed by Lubbers and Snijders et al. BIBREF33 , BIBREF34 ." ], [ "The retweeting behavior of MEPs is captured by their retweet network. Each MEP active on Twitter is a node in this network. An edge in the network between two MEPs exists when one MEP retweeted the other. The weight of the edge is the number of retweets between the two MEPs. The resulting retweet network is an undirected, weighted network.", "We measure the cohesion of a political group INLINEFORM0 as the average retweets, i.e., the ratio of the number of retweets between the MEPs in the group INLINEFORM1 to the number of MEPs in the group INLINEFORM2 . The higher the ratio, the more each MEP (on average) retweets the MEPs from the same political group, hence, the higher the cohesion of the political group. The definition of the average retweeets ( INLINEFORM3 ) of a group INLINEFORM4 is: INLINEFORM5 ", "This measure of cohesion captures the aggregate retweeting behavior of the group. If we consider retweets as endorsements, a larger number of retweets within the group is an indicator of agreement between the MEPs in the group. It does not take into account the patterns of retweeting within the group, thus ignoring the social sub-structure of the group. This is a potentially interesting direction and we leave it for future work.", "We employ an analogous measure for the strength of coalitions in the retweet network. 
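A minimal sketch of the retweet-based measures follows: the within-group average retweets just defined and the analogous between-group coalition strength defined in the next sentence of the text. The data structures and names are hypothetical; edges are assumed to be undirected and listed once per MEP pair.

```python
from collections import defaultdict

def average_retweets(retweet_edges, group_of):
    """Within-group cohesion and between-group coalition strength (sketch).

    retweet_edges: {(mep_a, mep_b): number_of_retweets}, one entry per pair
    group_of:      {mep: political_group}
    """
    group_size = defaultdict(int)
    for g in group_of.values():
        group_size[g] += 1
    within, between = defaultdict(float), defaultdict(float)
    for (a, b), n_rt in retweet_edges.items():
        ga, gb = group_of[a], group_of[b]
        if ga == gb:
            within[ga] += n_rt                                  # retweets inside the group
        else:
            between[tuple(sorted((ga, gb)))] += n_rt            # retweets across the two groups
    within_avg = {g: rt / group_size[g] for g, rt in within.items()}
    between_avg = {(g, h): rt / (group_size[g] + group_size[h])
                   for (g, h), rt in between.items()}
    return within_avg, between_avg
```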
The coalition strength between two groups INLINEFORM0 and INLINEFORM1 is the ratio of the number of retweets from one group to the other (but not within groups) INLINEFORM2 to the total number of MEPs in both groups, INLINEFORM3 . The definition of the average retweeets ( INLINEFORM4 ) between groups INLINEFORM5 and INLINEFORM6 is: INLINEFORM7 " ], [ "In this section we first report on the level of cohesion of the European Parliament's groups by analyzing the co-voting through the agreement and ERGM measures. Next, we explore two important policy areas, namely Economic and monetary system and State and evolution of the Union. Finally, we analyze the cohesion of the European Parliament's groups on Twitter.", "Existing research by Hix et al. BIBREF10 , BIBREF13 , BIBREF11 shows that the cohesion of the European political groups has been rising since the 1990s, and the level of cohesion remained high even after the EU's enlargement in 2004, when the number of MEPs increased from 626 to 732.", "We measure the co-voting cohesion of the political groups in the Eighth European Parliament using Krippendorff's Alpha—the results are shown in Fig FIGREF30 (panel Overall). The Greens-EFA have the highest cohesion of all the groups. This finding is in line with an analysis of previous compositions of the Fifth and Sixth European Parliaments by Hix and Noury BIBREF11 , and the Seventh by VoteWatch BIBREF37 . They are closely followed by the S&D and EPP. Hix and Noury reported on the high cohesion of S&D in the Fifth and Sixth European Parliaments, and we also observe this in the current composition. They also reported a slightly less cohesive EPP-ED. This group split in 2009 into EPP and ECR. VoteWatch reports EPP to have cohesion on a par with Greens-EFA and S&D in the Seventh European Parliament. The cohesion level we observe in the current European Parliament is also similar to the level of Greens-EFA and S&D.", "The catch-all group of the non-aligned (NI) comes out as the group with the lowest cohesion. In addition, among the least cohesive groups in the European Parliament are the Eurosceptics EFDD, which include the British UKIP led by Nigel Farage, and the ENL whose largest party are the French National Front, led by Marine Le Pen. Similarly, Hix and Noury found that the least cohesive groups in the Seventh European Parliament are the nationalists and Eurosceptics. The Eurosceptic IND/DEM, which participated in the Sixth European Parliament, transformed into the current EFDD, while the nationalistic UEN was dissolved in 2009.", "We also measure the voting cohesion of the European Parliament groups using an ERGM, a network-based method—the results are shown in Fig FIGREF31 (panel Overall). The cohesion results obtained with ERGM are comparable to the results based on agreement. In this context, the parameters estimated by the ERGM refer to the matching of MEPs who belong to the same political group (one parameter per group). The parameters measure the homophilous matching between MEPs who have the same political affiliation. A positive value for the estimated parameter indicates that the co-voting of MEPs from that group is greater than what is expected by chance, where the expected number of co-voting links by chance in a group is taken to be uniformly random. 
A negative value indicates that there are fewer co-voting links within a group than expected by chance.", "Even though INLINEFORM0 and ERGM compute scores relative to what is expected by chance, they refer to different interpretations of chance. INLINEFORM1 's concept of chance is based on the number of expected pair-wise co-votes between MEPs belonging to a group, knowing the votes of these MEPs on all RCVs. ERGM's concept of chance is based on the number of expected pair-wise co-votes between MEPs belonging to a group on a given RCV, knowing the network-related properties of the co-voting network on that particular RCV. The main difference between INLINEFORM2 and ERGM, though, is the treatment of non-voting and abstaining MEPs. INLINEFORM3 considers only the yes/no votes, and consequently, agreements by the voting MEPs of the same groups are considerably higher than co-voting by chance. ERGM, on the other hand, always considers all MEPs, and non-voting and abstaining MEPs are treated as disconnected nodes. The level of co-voting by chance is therefore considerably lower, since there is often a large fraction of MEPs that do not attend or abstain.", "As with INLINEFORM0 , Greens-EFA, S&D, and EPP exhibit the highest cohesion, even though their ranking is permuted when compared to the ranking obtained with INLINEFORM1 . At the other end of the scale, we observe the same situation as with INLINEFORM2 . The non-aligned members NI have the lowest cohesion, followed by EFDD and ENL.", "The only place where the two methods disagree is the level of cohesion of GUE-NGL. The Alpha attributes GUE-NGL a rather high level of cohesion, on a par with ALDE, whereas the ERGM attributes it a much lower cohesion. The reason for this difference is the relatively high abstention rate of GUE-NGL. Whereas the overall fraction of non-attending and abstaining MEPs across all RCVs and all political groups is 25%, the GUE-NGL abstention rate is 34%. This is reflected in an above-average cohesion by INLINEFORM0 , where only yes/no votes are considered, and in a relatively lower, below-average cohesion by ERGM. In the latter case, the non-attendance is interpreted as non-cohesive voting of the political group as a whole.", "In addition to the overall cohesion, we also focus on two selected policy areas. The cohesion of the political groups related to these two policy areas is shown in the first two panels in Fig FIGREF30 ( INLINEFORM0 ) and Fig FIGREF31 (ERGM).", "The most important observation is that the level of cohesion of the political groups is very stable across different policy areas. These results are corroborated by both methodologies. Similar to the overall cohesion, the most cohesive political groups are the S&D, Greens-EFA, and EPP. The least cohesive group is the NI, followed by the ENL and EFDD. The two methodologies agree on the level of cohesion for all the political groups, except for GUE-NGL, due to a lower attendance rate.", "We determine the cohesion of political groups on Twitter by using the average number of retweets between MEPs within the same group. The results are shown in Fig FIGREF33 . The right-wing ENL and EFDD come out as the most cohesive groups, while all the other groups have a far lower average number of retweets. MEPs from ENL and EFDD post by far the largest number of retweets (over 240), and at the same time over 94% of their retweets are directed to MEPs from the same group. Moreover, these two groups stand out in the way the retweets are distributed within the group. 
A large portion of the retweets of EFDD (1755) goes to Nigel Farage, the leader of the group. Likewise, a very large portion of the retweets of ENL (2324) goes to Marine Le Pen, the leader of the group. Farage and Le Pen are by far the two most retweeted MEPs, with the third one having only 666 retweets." ], [ "Coalition formation in the European Parliament is largely determined by ideological positions, reflected in the degree of cooperation of parties at the national and European levels. The observation of ideological inclinations in the coalition formation within the European Parliament was already made by other authors BIBREF11 and is confirmed in this study. The basic patterns of coalition formation in the European Parliament can already be seen in the co-voting network in Fig FIGREF2 A. It is remarkable that the degree of attachment between the political groups, which indicates the degree of cooperation in the European Parliament, nearly exactly corresponds to the left-to-right seating order.", "The liberal ALDE seems to have an intermediary role between the left and right parts of the spectrum in the parliament. Between the far-left (GUE-NGL) and center-left (S&D) groups, this function seems to be occupied by Greens-EFA. The non-aligned members NI, as well as the Eurosceptic EFDD and ENL, seem to alternately tip the balance on both poles of the political spectrum. Being ideologically more inclined to vote with other conservative and right-wing groups (EPP, ECR), they sometimes also cooperate with the extreme left-wing group (GUE-NGL) with which they share their Euroscepticism as a common denominator.", "Figs FIGREF36 and FIGREF37 give a more detailed understanding of the coalition formation in the European Parliament. Fig FIGREF36 displays the degree of agreement or cooperation between political groups measured by Krippendorff's INLINEFORM0 , whereas Fig FIGREF37 is based on the result from the ERGM. We first focus on the overall results displayed in the right-hand plots of Figs FIGREF36 and FIGREF37 .", "The strongest degrees of cooperation are observed, with both methods, between the two major parties (EPP and S&D) on the one hand, and the liberal ALDE on the other. Furthermore, we see a strong propensity for Greens-EFA to vote with the Social Democrats (5th strongest coalition by INLINEFORM0 , and 3rd by ERGM) and the GUE-NGL (3rd strongest coalition by INLINEFORM1 , and 5th by ERGM). These results underline the role of ALDE and Greens-EFA as intermediaries for the larger groups to achieve a majority. Although the two largest groups together have 405 seats, and thus significantly more than the 376 votes needed for a simple majority, the degree of cooperation between the two major groups is ranked only as the fourth strongest by both methods. This suggests that these two political groups find it easier to negotiate deals with smaller counterparts than with the other large group. This observation was also made by Hix et al. BIBREF12 , who noted that alignments on the left and right of the political spectrum have in recent years replaced the “Grand Coalition” between the two large blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the parliament.", "Next, we focus on the coalition formation within the two selected policy areas. The area State and Evolution of the Union is dominated by cooperation between the two major groups, S&D and EPP, as well as ALDE. 
We also observe a high degree of cooperation between groups that are generally regarded as integration friendly, like Greens-EFA and GUE-NGL. We see, particularly in Fig FIGREF36 , a relatively high degree of cooperation between groups considered as Eurosceptic, like ECR, EFDD, ENL, and the group of non-aligned members.", "The dichotomy between supporters and opponents of European integration is even more pronounced within the policy area Economic and Monetary System. In fact, we take a closer look specifically at these two areas because they are, at the same time, both contentious and important. Both methods rank the cooperation between S&D and EPP on the one hand, and ALDE on the other, as the strongest.", "We also observe a certain degree of unanimity among the Eurosceptic and right-wing groups (EFDD, ENL, and NI) in this policy area. This seems plausible, as these groups were (especially in the aftermath of the global financial crisis and the subsequent European debt crisis) in fierce opposition to further payments to financially troubled member states. However, we also observe a number of strong coalitions that might, at first glance, seem unusual, specifically involving the left-wing group GUE-NGL on the one hand, and the right-wing EFDD, ENL, and NI on the other. These links also show up in the network plot in Fig FIGREF2 A. This might be attributable to a certain degree of Euroscepticism on both sides: rooted in criticism of capitalism on the left, and at least partly a raison d'être on the right. Hix et al. BIBREF11 discovered this pattern as well, and proposed an additional explanation—these coalitions also relate to a form of government-opposition dynamic that is rooted at the national level, but is reflected in voting patterns at the European level.", "In general, we observe two main differences between the INLINEFORM0 and ERGM results: the baseline cooperation as estimated by INLINEFORM1 is higher, and the ordering of coalitions from the strongest to the weakest is not exactly the same. The reason is the same as for the cohesion, namely the different treatment of non-voting and abstaining MEPs. When they are ignored, as by INLINEFORM2 , the baseline level of inter-group co-voting is higher. When non-attending and abstaining are treated as voting differently, as by ERGM, it is considerably more difficult to achieve co-voting coalitions, especially when, on average, 25% of MEPs do not attend or abstain. Groups with higher non-attendance rates, such as GUE-NGL (34%) and NI (40%), are less likely to form coalitions, and therefore have relatively lower ERGM coefficients (Fig FIGREF37 ) than INLINEFORM3 scores (Fig FIGREF36 ).", "The first insight into coalition formation on Twitter can be observed in the retweet network in Fig FIGREF2 B. The ideological left-to-right alignment of the political groups is reflected in the retweet network. Fig FIGREF40 shows the strength of the coalitions on Twitter, as estimated by the number of retweets between MEPs from different groups. The strongest coalitions are formed between the right-wing groups EFDD and ECR, as well as ENL and NI. At first, this might come as a surprise, since these groups do not form strong coalitions in the European Parliament, as can be seen in Figs FIGREF36 and FIGREF37 . On the other hand, the MEPs from these groups are very active Twitter users. As previously stated, MEPs from ENL and EFDD post the largest number of retweets. Moreover, 63% of the retweets that ENL MEPs direct outside their own group go to NI. 
This effect is even more pronounced with MEPs from EFDD, whose retweets of ECR account for 74% of their retweets of other groups.", "In addition to these strong coalitions on the right wing, we find coalition patterns to be very similar to the voting coalitions observed in the European Parliament, seen in Figs FIGREF36 and FIGREF37 . The strongest coalitions, which come immediately after the right-wing coalitions, are between Greens-EFA on the one hand, and GUE-NGL and S&D on the other, as well as ALDE on the one hand, and EPP and S&D on the other. These results corroborate the role of ALDE and Greens-EFA as intermediaries in the European Parliament, not only in the legislative process, but also in the debate on social media.", "To better understand the formation of coalitions in the European Parliament and on Twitter, we examine the strongest cooperation between political groups at three different thresholds. For co-voting coalitions in the European Parliament we choose a high threshold of INLINEFORM0 , a medium threshold of INLINEFORM1 , and a negative threshold of INLINEFORM2 (which corresponds to strong oppositions). In this way we observe the overall patterns of coalition and opposition formation in the European Parliament and in the two specific policy areas. For cooperation on Twitter, we choose a high threshold of INLINEFORM3 , a medium threshold of INLINEFORM4 , and a very low threshold of INLINEFORM5 .", "The strongest instances of cooperation in the European Parliament over all policy areas are shown in Fig FIGREF42 G. It comes as no surprise that the strongest cooperation is within the groups (on the diagonal). Moreover, we again observe GUE-NGL, S&D, Greens-EFA, ALDE, and EPP as the most cohesive groups. In Fig FIGREF42 H, we observe coalitions forming along the diagonal, which represents the seating order in the European Parliament. Within this pattern, we observe four blocks of coalitions: on the left, between GUE-NGL, S&D, and Greens-EFA; in the center, between S&D, Greens-EFA, ALDE, and EPP; on the center-right, between ALDE, EPP, and ECR; and finally, on the far right, between ECR, EFDD, ENL, and NI. Fig FIGREF42 I shows the strongest opposition between groups that systematically disagree in voting. The strongest disagreements are between left- and right-aligned groups, but not between the left-most and right-most groups, in particular, between GUE-NGL and ECR, but also between S&D and Greens-EFA on one side, and ENL and NI on the other.", "In the area of Economic and monetary system we see a strong cooperation between EPP and S&D (Fig FIGREF42 A), which is on a par with the cohesion of the most cohesive groups (GUE-NGL, S&D, Greens-EFA, ALDE, and EPP), and is above the cohesion of the other groups. As pointed out in the section “sec:coalitionpolicy”, there is a strong separation in two blocks between supporters and opponents of European integration, which is even more clearly observed in Fig FIGREF42 B. On one hand, we observe cooperation between S&D, ALDE, EPP, and ECR, and on the other, cooperation between GUE-NGL, Greens-EFA, EFDD, ENL, and NI. This division in blocks is seen again in Fig FIGREF42 C, which shows the strongest disagreements. Here, we observe two blocks composed of S&D, EPP, and ALDE on one hand, and GUE-NGL, EFDD, ENL, and NI on the other, which are in strong opposition to each other.", "In the area of State and Evolution of the Union we again observe a strong division in two blocks (see Fig FIGREF42 E). 
This is different from the Economic and monetary system, however, where we observe a far-left and far-right cooperation; here the division is along the traditional left-right axis.", "The patterns of coalitions forming on Twitter closely resemble those in the European Parliament. In Fig FIGREF42 J we see that the strongest degrees of cooperation on Twitter are within the groups. The only group with low cohesion is the NI, whose members have only seven retweets between them. The coalitions on Twitter follow the seating order in the European Parliament remarkably well (see Fig FIGREF42 K). What is striking is that the same blocks form on the left, in the center, and on the center-right, both in the European Parliament and on Twitter. The largest difference between the coalitions in the European Parliament and on Twitter is on the far-right, where we observe ENL and NI as isolated blocks.", "The results shown in Fig FIGREF44 quantify the extent to which communication in one social context (Twitter) can explain cooperation in another social context (co-voting in the European Parliament). A positive value indicates that the matching behavior in the retweet network is similar to the one in the co-voting network, specific to an individual policy area. On the other hand, a negative value implies a negative “correlation” between the retweeting and co-voting of MEPs in the two different contexts.", "The bars in Fig FIGREF44 correspond to the coefficients from the edge covariate terms of the ERGM, describing the relationship between the retweeting and co-voting behavior of MEPs. The coefficients are aggregated for individual policy areas by means of a meta-analysis.", "Overall, we observe a positive correlation between retweeting and co-voting, which is significantly different from zero. The strongest positive correlations are in the areas Area of freedom, security and justice, External relations of the Union, and Internal markets. Weaker, but still positive, correlations are observed in the areas Economic, social and territorial cohesion, European citizenship, and State and evolution of the Union. The only exception, with a significantly negative coefficient, is the area Economic and monetary system. This implies that in the area Economic and monetary system we observe a significant deviation from the usual co-voting patterns. Results from section “sec:coalitionpolicy” confirm that this is indeed the case. Especially noteworthy are the coalitions between GUE-NGL and Greens-EFA on the left wing, and EFDD and ENL on the right wing. In the section “sec:coalitionpolicy” we interpret these results as a combination of Euroscepticism on both sides, motivated on the left by a skeptical attitude towards the market orientation of the EU, and on the right by a reluctance to give up national sovereignty." ], [ "We study cohesion and coalitions in the Eighth European Parliament by analyzing, on the one hand, MEPs' co-voting tendencies and, on the other, their retweeting behavior.", "We reveal that the most cohesive political group in the European Parliament, when it comes to co-voting, is Greens-EFA, closely followed by S&D and EPP. This is consistent with what VoteWatch BIBREF37 reported for the Seventh European Parliament. The non-aligned (NI) come out as the least cohesive group, followed by the Eurosceptic EFDD. Hix and Noury BIBREF11 also report that nationalists and Eurosceptics form the least cohesive groups in the Sixth European Parliament. 
We reaffirm most of these results with both of the employed methodologies. The only point where the two methodologies disagree is in the level of cohesion for the left-wing GUE-NGL, which is portrayed by ERGM as a much less cohesive group, due to its relatively lower attendance rate. The level of cohesion of the political groups is quite stable across different policy areas and similar conclusions apply.", "On Twitter we can see results that are consistent with the RCV results for the left-to-center political spectrum. The exception, which clearly stands out, is the right-wing groups ENL and EFDD that seem to be the most cohesive ones. This is the direct opposite of what was observed in the RCV data. We speculate that this phenomenon can be attributed to the fact that European right-wing groups, on a European but also on a national level, rely to a large degree on social media to spread their narratives critical of European integration. We observed the same phenomenon recently during the Brexit campaign BIBREF38 . In our interpretation, Brexit was “won” to some extent due to these social media activities, which are practically non-existent among the pro-EU political groups. The fact that ENL and EFDD are the least cohesive groups in the European Parliament can be attributed to their political focus. It seems more important for the group to agree on its anti-EU stance and to call for independence and sovereignty, and much less important to agree on other issues put forward in the parliament.", "The basic pattern of coalition formation, with respect to co-voting, can already be seen in Fig FIGREF2 A: the force-based layout almost completely corresponds to the seating order in the European Parliament (from the left- to the right-wing groups). A more thorough examination shows that the strongest cooperation can be observed, for both methodologies, between EPP, S&D, and ALDE, where EPP and S&D are the two largest groups, while the liberal ALDE plays the role of an intermediary in this context. On the other hand, the role of an intermediary between the far-left GUE-NGL and its center-left neighbor, S&D, is played by the Greens-EFA. These three parties also form a strong coalition in the European Parliament. On the far right of the spectrum, the non-aligned, EFDD, and ENL form another coalition. This behavior was also observed by Hix et al. BIBREF12 , who state that alignments on the left and right have in recent years replaced the “Grand Coalition” between the two large blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the European Parliament. When looking at the policy area Economic and monetary system, we see the same coalitions. However, interestingly, EFDD, ENL, and NI often co-vote with the far-left GUE-NGL. This can be attributed to a certain degree of Euroscepticism on both sides: as a criticism of capitalism, on one hand, and as the main political agenda, on the other. This pattern was also discovered by Hix et al. BIBREF12 , who argued that these coalitions emerge from a form of government-opposition dynamics, rooted at the national level, but also reflected at the European level.", "When studying coalitions on Twitter, the strongest coalitions can be observed on the right of the spectrum (between EFDD, ECR, ENL, and NI). This is, yet again, in contrast to what was observed in the RCV data. The reason lies in the anti-EU messages they tend to collectively spread (retweet) across the network. 
This behavior forms strong retweet ties, not only within, but also between, these groups. For example, apart from MEPs of their own group, MEPs of EFDD mainly retweet MEPs from ECR. In contrast to these right-wing coalitions, we find the other coalitions to be consistent with what is observed in the RCV data. The strongest coalitions on the left-to-center part of the axis are those between GUE-NGL, Greens-EFA, and S&D, and between S&D, ALDE, and EPP. These results reaffirm the role of Greens-EFA and ALDE as intermediaries, not only in the European Parliament but also in the debates on social media.", "Last, but not least, with the ERGM methodology we measure the extent to which the retweet network can explain the co-voting activities in the European Parliament. We compute this for each policy area separately and also over all RCVs. We conclude that the retweet network indeed matches the co-voting behavior, with the exception of one specific policy area. In the area Economic and monetary system, the links in the (overall) retweet network do not match the links in the co-voting network. Moreover, the negative coefficients imply a radically different formation of coalitions in the European Parliament. This is consistent with the results in Figs FIGREF36 and FIGREF37 (the left-hand panels), and is also observed in Fig FIGREF42 (the top charts). From these figures we see that in this particular case, the coalitions are also formed between the right-wing groups and the far-left GUE-NGL. As already explained, we attribute this to the degree of Euroscepticism that these groups share on this particular policy issue." ], [ "In this paper we analyze (co-)voting patterns and social behavior of members of the European Parliament, as well as the interaction between these two systems. More precisely, we analyze a set of 2535 roll-call votes as well as the tweets and retweets of the MEPs in the period from October 2014 to February 2016. The results indicate a considerable level of correlation between these two complex systems. This is consistent with previous findings of Cherepnalkoski et al. BIBREF22 , who reconstructed the adherence of MEPs to their respective political or national group solely from their retweeting behavior.", "We employ two different methodologies to quantify the co-voting patterns: Krippendorff's INLINEFORM0 and ERGM. They were developed in different fields of research, use different techniques, and are based on different assumptions, but in general they yield consistent results. However, there are some differences which have consequences for the interpretation of the results.", " INLINEFORM0 is a measure of agreement, designed as a generalization of several specialized measures, that can compare different numbers of observations, in our case roll-call votes. It only considers yes/no votes. Absence and abstention by MEPs are ignored. Its baseline ( INLINEFORM1 ), i.e., co-voting by chance, is computed from the yes/no votes of all MEPs on all RCVs.", "ERGMs are used in social-network analyses to determine factors influencing the edge formation process. In our case an edge between two MEPs is formed when they cast the same yes/no vote within an RCV. It is assumed that a priori each MEP can form a link with any other MEP. No assumptions about the presence or absence of individual MEPs in a voting session are made. Each RCV is analyzed as a separate binary network. The node set is thereby kept constant for each RCV network. 
While the ERGM departs from the originally observed network, where MEPs who did not vote or abstained appear as isolated nodes, links between these nodes are possible within the network sampling process, which is part of the ERGM optimization process. The results of several RCVs are aggregated by means of the meta-analysis approach. The baseline (ERGM coefficients INLINEFORM0 ), i.e., co-voting by chance, is computed from a large sample of randomly generated networks.", "These two different baselines have to be taken into account when interpreting the results of INLINEFORM0 and ERGM. In a typical voting session, 25% of the MEPs are missing or abstaining. When assessing the cohesion of political groups, all INLINEFORM1 values are well above the baseline, and the average INLINEFORM2 . The average ERGM cohesion coefficients, on the other hand, are around the baseline. The difference is even more pronounced for groups with higher non-attendance/abstention rates like GUE-NGL (34%) and NI (40%). When assessing the strength of coalitions between pairs of groups, INLINEFORM3 values are balanced around the baseline, while the ERGM coefficients are mostly negative. The ordering of coalitions from the strongest to the weakest is therefore different when groups with high non-attendance/abstention rates are involved.", "The choice of the methodology to assess cohesion and coalitions is not obvious. Roll-call voting is used for decisions which demand a simple majority only. One might, however, argue that non-attendance/abstention corresponds to a no vote, or that absence is used strategically. Also, the importance of individual votes, i.e., how high the subject is on the agenda of a political group, affects their attendance, and consequently the perception of their cohesion and the potential to act as a reliable coalition partner." ], [ "This work was supported in part by the EC projects SIMPOL (no. 610704) and DOLFINS (no. 640772), and by the Slovenian ARRS programme Knowledge Technologies (no. P2-103)." ] ] }
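As referenced in the Methods above, the following minimal Python sketch illustrates the two retweet-based measures: the cohesion of a political group as the average number of within-group retweets per MEP, and the coalition strength between two groups as the number of cross-group retweets per MEP of either group. The edge-list representation, function names, and variable names are illustrative assumptions, not the authors' code.

```python
from itertools import combinations

def avg_retweets_within(group, weights):
    """Cohesion of a political group: the total number of retweets exchanged
    between MEPs of the group, divided by the number of MEPs in the group.
    `group` is a set of MEP ids; `weights` maps a frozenset {u, v} to the
    number of retweets between MEPs u and v in the undirected retweet network."""
    total = sum(weights.get(frozenset((u, v)), 0) for u, v in combinations(group, 2))
    return total / len(group)

def avg_retweets_between(group_a, group_b, weights):
    """Coalition strength between two (disjoint) groups: retweets from one
    group to the other, excluding within-group retweets, divided by the
    total number of MEPs in both groups."""
    total = sum(weights.get(frozenset((u, v)), 0) for u in group_a for v in group_b)
    return total / (len(group_a) + len(group_b))
```

Both functions only aggregate edge weights, so, as noted in the Methods, they deliberately ignore how retweets are distributed inside a group.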
{ "question": [ "Do the authors mention any possible confounds in their study?", "What is the relationship between the co-voting and retweeting patterns?", "Does the analysis find that coalitions are formed in the same way for different policy areas?", "What insights does the analysis give about the cohesion of political groups in the European parliament?", "Do they authors account for differences in usage of Twitter amongst MPs into their model?", "Did the authors examine if any of the MEPs used the disclaimer that retweeting does not imply endorsement on their twitter profile?" ], "question_id": [ "938688871913862c9f8a28b42165237b7324e0de", "4170ed011b02663f5b1b1a3c1f0415b7abfaa85d", "fd08dc218effecbe5137a7e3b73d9e5e37ace9c1", "a85c2510f25c7152940b5ac4333a80e0f91ade6e", "fa572f1f3f3ce6e1f9f4c9530456329ffc2677ca", "5e057e115f8976bf9fe70ab5321af72eb4b4c0fc" ], "nlp_background": [ "five", "five", "five", "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no" ], "search_query": [ "twitter", "twitter", "twitter", "twitter", "twitter", "twitter" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "On Twitter we can see results that are consistent with the RCV results for the left-to-center political spectrum. The exception, which clearly stands out, is the right-wing groups ENL and EFDD that seem to be the most cohesive ones. This is the direct opposite of what was observed in the RCV data. We speculate that this phenomenon can be attributed to the fact that European right-wing groups, on a European but also on a national level, rely to a large degree on social media to spread their narratives critical of European integration. We observed the same phenomenon recently during the Brexit campaign BIBREF38 . Along our interpretation the Brexit was “won” to some extent due to these social media activities, which are practically non-existent among the pro-EU political groups. The fact that ENL and EFDD are the least cohesive groups in the European Parliament can be attributed to their political focus. It seems more important for the group to agree on its anti-EU stance and to call for independence and sovereignty, and much less important to agree on other issues put forward in the parliament." ], "highlighted_evidence": [ "On Twitter we can see results that are consistent with the RCV results for the left-to-center political spectrum. The exception, which clearly stands out, is the right-wing groups ENL and EFDD that seem to be the most cohesive ones. This is the direct opposite of what was observed in the RCV data. We speculate that this phenomenon can be attributed to the fact that European right-wing groups, on a European but also on a national level, rely to a large degree on social media to spread their narratives critical of European integration." 
] } ], "annotation_id": [ "191e5f8266a54d93bcfa718bbc817bae04d8c2c0" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "we observe a positive correlation between retweeting and co-voting", "strongest positive correlations are in the areas Area of freedom, security and justice, External relations of the Union, and Internal markets", "Weaker, but still positive, correlations are observed in the areas Economic, social and territorial cohesion, European citizenship, and State and evolution of the Union", "significantly negative coefficient, is the area Economic and monetary system" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Overall, we observe a positive correlation between retweeting and co-voting, which is significantly different from zero. The strongest positive correlations are in the areas Area of freedom, security and justice, External relations of the Union, and Internal markets. Weaker, but still positive, correlations are observed in the areas Economic, social and territorial cohesion, European citizenship, and State and evolution of the Union. The only exception, with a significantly negative coefficient, is the area Economic and monetary system. This implies that in the area Economic and monetary system we observe a significant deviation from the usual co-voting patterns. Results from section “sec:coalitionpolicy”, confirm that this is indeed the case. Especially noteworthy are the coalitions between GUE-NGL and Greens-EFA on the left wing, and EFDD and ENL on the right wing. In the section “sec:coalitionpolicy” we interpret these results as a combination of Euroscepticism on both sides, motivated on the left by a skeptical attitude towards the market orientation of the EU, and on the right by a reluctance to give up national sovereignty." ], "highlighted_evidence": [ "Overall, we observe a positive correlation between retweeting and co-voting, which is significantly different from zero. The strongest positive correlations are in the areas Area of freedom, security and justice, External relations of the Union, and Internal markets. Weaker, but still positive, correlations are observed in the areas Economic, social and territorial cohesion, European citizenship, and State and evolution of the Union. The only exception, with a significantly negative coefficient, is the area Economic and monetary system. This implies that in the area Economic and monetary system we observe a significant deviation from the usual co-voting patterns." ] } ], "annotation_id": [ "5a6c013d5c21b353c894be70adf4eab084cda0f8" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "In the area of Economic and monetary system we see a strong cooperation between EPP and S&D (Fig FIGREF42 A), which is on a par with the cohesion of the most cohesive groups (GUE-NGL, S&D, Greens-EFA, ALDE, and EPP), and is above the cohesion of the other groups. As pointed out in the section “sec:coalitionpolicy”, there is a strong separation in two blocks between supporters and opponents of European integration, which is even more clearly observed in Fig FIGREF42 B. On one hand, we observe cooperation between S&D, ALDE, EPP, and ECR, and on the other, cooperation between GUE-NGL, Greens-EFA, EFDD, ENL, and NI. This division in blocks is seen again in Fig FIGREF42 C, which shows the strongest disagreements. 
Here, we observe two blocks composed of S&D, EPP, and ALDE on one hand, and GUE-NGL, EFDD, ENL, and NI on the other, which are in strong opposition to each other.", "In the area of State and Evolution of the Union we again observe a strong division in two blocks (see Fig FIGREF42 E). This is different to the Economic and monetary system, however, where we observe a far-left and far-right cooperation, where the division is along the traditional left-right axis.", "The patterns of coalitions forming on Twitter closely resemble those in the European Parliament. In Fig FIGREF42 J we see that the strongest degrees of cooperation on Twitter are within the groups. The only group with low cohesion is the NI, whose members have only seven retweets between them. The coalitions on Twitter follow the seating order in the European Parliament remarkably well (see Fig FIGREF42 K). What is striking is that the same blocks form on the left, center, and on the center-right, both in the European Parliament and on Twitter. The largest difference between the coalitions in the European Parliament and on Twitter is on the far-right, where we observe ENL and NI as isolated blocks." ], "highlighted_evidence": [ "As pointed out in the section “sec:coalitionpolicy”, there is a strong separation in two blocks between supporters and opponents of European integration, which is even more clearly observed in Fig FIGREF42 B.", "In the area of State and Evolution of the Union we again observe a strong division in two blocks (see Fig FIGREF42 E). This is different to the Economic and monetary system, however, where we observe a far-left and far-right cooperation, where the division is along the traditional left-right axis.\n\nThe patterns of coalitions forming on Twitter closely resemble those in the European Parliament." ] } ], "annotation_id": [ "750e4331e64911ba3d97694c2591cbcd2e9c74f9" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Greens-EFA, S&D, and EPP exhibit the highest cohesion", "non-aligned members NI have the lowest cohesion, followed by EFDD and ENL", "two methods disagree is the level of cohesion of GUE-NGL" ], "yes_no": null, "free_form_answer": "", "evidence": [ "As with INLINEFORM0 , Greens-EFA, S&D, and EPP exhibit the highest cohesion, even though their ranking is permuted when compared to the ranking obtained with INLINEFORM1 . At the other end of the scale, we observe the same situation as with INLINEFORM2 . The non-aligned members NI have the lowest cohesion, followed by EFDD and ENL.", "The only place where the two methods disagree is the level of cohesion of GUE-NGL. The Alpha attributes GUE-NGL a rather high level of cohesion, on a par with ALDE, whereas the ERGM attributes them a much lower cohesion. The reason for this difference is the relatively high abstention rate of GUE-NGL. Whereas the overall fraction of non-attending and abstaining MEPs across all RCVs and all political groups is 25%, the GUE-NGL abstention rate is 34%. This is reflected in an above average cohesion by INLINEFORM0 where only yes/no votes are considered, and in a relatively lower, below average cohesion by ERGM. In the later case, the non-attendance is interpreted as a non-cohesive voting of a political groups as a whole." ], "highlighted_evidence": [ "As with INLINEFORM0 , Greens-EFA, S&D, and EPP exhibit the highest cohesion, even though their ranking is permuted when compared to the ranking obtained with INLINEFORM1 . 
At the other end of the scale, we observe the same situation as with INLINEFORM2 . The non-aligned members NI have the lowest cohesion, followed by EFDD and ENL.", "The only place where the two methods disagree is the level of cohesion of GUE-NGL. The Alpha attributes GUE-NGL a rather high level of cohesion, on a par with ALDE, whereas the ERGM attributes them a much lower cohesion." ] } ], "annotation_id": [ "634d1e70fec7f88b0d4078ce84daa0fc2689e54f" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "The retweeting behavior of MEPs is captured by their retweet network. Each MEP active on Twitter is a node in this network. An edge in the network between two MEPs exists when one MEP retweeted the other. The weight of the edge is the number of retweets between the two MEPs. The resulting retweet network is an undirected, weighted network.", "We measure the cohesion of a political group INLINEFORM0 as the average retweets, i.e., the ratio of the number of retweets between the MEPs in the group INLINEFORM1 to the number of MEPs in the group INLINEFORM2 . The higher the ratio, the more each MEP (on average) retweets the MEPs from the same political group, hence, the higher the cohesion of the political group. The definition of the average retweeets ( INLINEFORM3 ) of a group INLINEFORM4 is: INLINEFORM5" ], "highlighted_evidence": [ "The retweeting behavior of MEPs is captured by their retweet network. Each MEP active on Twitter is a node in this network. An edge in the network between two MEPs exists when one MEP retweeted the other. The weight of the edge is the number of retweets between the two MEPs", "We measure the cohesion of a political group INLINEFORM0 as the average retweets, i.e., the ratio of the number of retweets between the MEPs in the group INLINEFORM1 to the number of MEPs in the group INLINEFORM2 " ] } ], "annotation_id": [ "3dd62fd49bb198984e924711d1e89fee857442b0" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "8256a726a22f10271869427510d4eb87bc4ff9ce" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Fig 1. Timeline of Twitter activity of MEPs and roll-call voting sessions. Twitter activity is represented by the total number of tweets posted by all MEPs on a given day. The volume is standardized and the solid blue line represents the standard score. The peaks in Twitter activity that are above two standard deviations are highlighted in red. The shaded regions correspond to days with roll-call voting sessions.", "Fig 2. Networks of roll-call votes and retweets. (A) Co-voting agreement within and between political groups. (B) Average retweets within and between political groups.", "Table 1. Distribution of Members of the European Parliament (MEPs) by political groups. There is the number of MEPs in each group, the number of MEPs with Twitter accounts, and the number of MEPs in the retweet network. The numbers are cumulative for the period from October 1, 2014 to February 29, 2016.", "Table 2. Distribution of roll-call votes (RCVs) by policy areas. The time period is from October 1, 2014 to February 29, 2016.", "Fig 3. Cohesion of political groups in terms of RCVs as measured by Krippendorff’s Alpha. There are two selected policy areas (the left-hand panels), and overall cohesion across all policy areas (the right-hand panel). The Alpha-agreement of 1 indicates perfect co-voting agreement, and 0 indicates co-voting by chance. The overall average Alpha across all nine political groups is 0.7.", "Fig 4. Cohesion of political groups in terms of RCVs as measured by ERGM. There are two selected policy areas (the left-hand panels), and overall cohesion across all policy areas (the right-hand panel). The coefficients of 0 indicate baseline, i.e., an even chance of the same votes within a group. Logodds of +1 (−1) correspond to an increase (decrease) of probability for 0.23 of co-voting together.", "Fig 5. Cohesion of political groups as estimated by retweeting within groups. The average number of retweets within the nine groups is 99.", "Fig 6. Coalitions between political groups in terms of RCVs as measured by Krippendorff’s Alpha. For nine groups, there are 36 pairs for which we measure their (dis)agreement. Positive values of Alpha indicate co-voting agreement, negative values correspond to systematic disagreement, while Alpha = 0 indicates co-voting by chance. The average Alpha across all 36 pairs is 0.02, close to co-voting by chance. Note, however, that Alpha considers just yes/no votes, and ignores abstentions.", "Fig 7. Coalitions between political groups in terms of RCVs as measured by ERGM. There are 36 possible pairwise coalitions of the nine groups. The coefficients of 0 indicate baseline, i.e., an even chance (0.5) of the two groups to co-vote together. For most group pairs, the probability to co-vote is lower than the chance: log-odds of −1 correspond to the probability of 0.27, log-odds of −2 to the probability of 0.12, and log-odds of −3 to the probability of 0.05. Note that ERGM does take abstentions into account, and therefore the baseline of co-voting by chance is considerably higher than for Alpha.", "Fig 8. Coalitions between political groups as estimated by retweeting between members of different groups. The average number of retweets between different political groups is 0.8.", "Fig 9. Formation of coalitions (Alpha > 0) and oppositions (Alpha < 0) in terms of the co-voting agreement (RCV). In the case of Twitter (the bottom charts), coalitions are indicated by higher average retweets (RT), and oppositions by lower average retweets.", "Fig 10. 
Influence of retweeting on the co-voting of MEPs as computed by ERGM. The influence is mostly positive, except for one policy area (Economic and monetary system), but very low. The ERGM coefficients of 0.01 correspond to an increase of probability from the chance of 0.5 to 0.503." ], "file": [ "3-Figure1-1.png", "3-Figure2-1.png", "6-Table1-1.png", "7-Table2-1.png", "14-Figure3-1.png", "14-Figure4-1.png", "16-Figure5-1.png", "17-Figure6-1.png", "18-Figure7-1.png", "19-Figure8-1.png", "21-Figure9-1.png", "22-Figure10-1.png" ] }
1711.02013
Neural Language Modeling by Jointly Learning Syntax and Lexicon
We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the structure information to form better semantic representations and to improve language modeling. Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic information. On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation. In this paper, we propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network. Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word/character-level language modeling tasks.
{ "section_name": [ "Introduction", "Related Work", "Motivation", "Modeling Local Structure", "Parsing Network", "Reading Network", "Predict Network", "Experiments", "Character-level Language Model", "Word-level Language Model", "Unsupervised Constituency Parsing", "Conclusion", "Acknowledgement" ], "paragraphs": [ [ "Linguistic theories generally regard natural language as consisting of two part: a lexicon, the complete set of all possible words in a language; and a syntax, the set of rules, principles, and processes that govern the structure of sentences BIBREF0 . To generate a proper sentence, tokens are put together with a specific syntactic structure. Understanding a sentence also requires lexical information to provide meanings, and syntactical knowledge to correctly combine meanings. Current neural language models can provide meaningful word represent BIBREF1 , BIBREF2 , BIBREF3 . However, standard recurrent neural networks only implicitly model syntax, thus fail to efficiently use structure information BIBREF4 .", "Developing a deep neural network that can leverage syntactic knowledge to form a better semantic representation has received a great deal of attention in recent years BIBREF5 , BIBREF4 , BIBREF6 . Integrating syntactic structure into a language model is important for different reasons: 1) to obtain a hierarchical representation with increasing levels of abstraction, which is a key feature of deep neural networks and of the human brain BIBREF7 , BIBREF8 , BIBREF9 ; 2) to capture complex linguistic phenomena, like long-term dependency problem BIBREF4 and the compositional effects BIBREF5 ; 3) to provide shortcut for gradient back-propagation BIBREF6 .", "A syntactic parser is the most common source for structure information. Supervised parsers can achieve very high performance on well constructed sentences. Hence, parsers can provide accurate information about how to compose word semantics into sentence semantics BIBREF5 , or how to generate the next word given previous words BIBREF10 . However, only major languages have treebank data for training parsers, and it request expensive human expert annotation. People also tend to break language rules in many circumstances (such as writing a tweet). These defects limit the generalization capability of supervised parsers.", "Unsupervised syntactic structure induction has been among the longstanding challenges of computational linguistic BIBREF11 , BIBREF12 , BIBREF13 . Researchers are interested in this problem for a variety of reasons: to be able to parse languages for which no annotated treebanks exist BIBREF14 ; to create a dependency structure to better suit a particular NLP application BIBREF10 ; to empirically argue for or against the poverty of the stimulus BIBREF15 , BIBREF16 ; and to examine cognitive issues in language learning BIBREF17 .", "In this paper, we propose a novel neural language model: Parsing-Reading-Predict Networks (PRPN), which can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to form a better language model. With our model, we assume that language can be naturally represented as a tree-structured graph. The model is composed of three parts:", "We evaluate our model on three tasks: word-level language modeling, character-level language modeling, and unsupervised constituency parsing. The proposed model achieves (or is close to) the state-of-the-art on both word-level and character-level language modeling. 
The model's unsupervised parsing outperforms some strong baseline models, demonstrating that the structure found by our model is similar to the intrinsic structure provided by human experts." ], [ "The idea of introducing some structures, especially trees, into language understanding to help a downstream task has been explored in various ways. For example, BIBREF5 , BIBREF4 learn a bottom-up encoder, taking as input a parse tree supplied from an external parser. There are also models that are able to infer a tree at test time, while still needing a supervised signal on the tree structure during training, for example BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . Moreover, BIBREF22 did an in-depth analysis of recursive models that are able to learn tree structure without being exposed to any grammar trees. Our model is also able to infer tree structure in an unsupervised setting, but different from theirs, it is a recurrent network that implicitly models tree structure through attention.", "Apart from the approach of using recursive networks to capture structures, there is another line of research which tries to learn recurrent features at multiple scales, which can be dated back to the 1990s (e.g. BIBREF23 , BIBREF24 , BIBREF25 ). The NARX RNN BIBREF25 is another example which used a feed-forward net taking different inputs with predefined time delays to model long-term dependencies. More recently, BIBREF26 also used multiple layers of recurrent networks with different pre-defined updating frequencies. Instead, our model tries to learn the structure from data, rather than predefining it. In that respect, BIBREF6 relates to our model since it proposes a hierarchical multi-scale structure with binary gates controlling intra-layer connections, and the gating mechanism is learned from data too. The difference is that their gating mechanism controls the updates of higher layers directly, while ours controls it softly through an attention mechanism.", "In terms of language modeling, syntactic language modeling can be dated back to BIBREF27 . BIBREF28 , BIBREF29 have also proposed language models with a top-down parsing mechanism. Recently BIBREF30 , BIBREF31 have introduced neural networks into this space. They learn both a discriminative and a generative model with top-down parsing, trained with a supervision signal from parsed sentences in the corpus. There are also dependency-based approaches using neural networks, including BIBREF32 , BIBREF33 , BIBREF34 .", "Parsers are also related to our work since they also infer the grammatical tree structure of a given sentence. For example, SPINN BIBREF35 is a shift-reduce parser that uses an LSTM as its composition function. The transition classifier in SPINN is trained with supervision on the Stanford PCFG Parser BIBREF36 output. Unsupervised parsers are more aligned with what our model is doing. BIBREF12 presented a generative model for the unsupervised learning of dependency structures. BIBREF11 is a generative distributional model for the unsupervised induction of natural language syntax which explicitly models constituent yields and contexts. We compare our parsing quality with the aforementioned two papers in Section SECREF43 ." ], [ "Suppose we have a sequence of tokens INLINEFORM0 governed by the tree structure shown in Figure FIGREF4 . The leaves INLINEFORM1 are observed tokens. Node INLINEFORM2 represents the meaning of the constituent formed by its leaves INLINEFORM3 , where INLINEFORM4 and INLINEFORM5 stand for the leftmost child and rightmost child. 
Root INLINEFORM6 represents the meaning of the whole sequence. Arrows represent the dependency relations between nodes. The underlying assumption is that each node depends only on its parent and its left siblings.", "Directly modeling the tree structure is a challenging task, usually requiring supervision to learn BIBREF4 . In addition, relying on tree structures can result in a model that is not sufficiently robust to face ungrammatical sentences BIBREF37 . In contrast, recurrent models provide a convenient way to model sequential data, with the current hidden state depending only on the last hidden state. This makes models more robust when facing nonconforming sequential data, but such models neglect the real dependency relations that dominate the structure of natural language sentences.", "In this paper, we use skip connections to integrate structured dependency relations into a recurrent neural network. In other words, the current hidden state depends not only on the last hidden state, but also on previous hidden states that have a direct syntactic relation to the current one.", "Figure FIGREF5 shows the structure of our model. The non-leaf node INLINEFORM0 is represented by a set of hidden states INLINEFORM1 , where INLINEFORM2 is the leftmost descendant leaf and INLINEFORM3 is the rightmost one. Arrows show skip connections built by our model according to the latent structure. Skip connections are controlled by gates INLINEFORM4 . In order to define INLINEFORM5 , we introduce a latent variable INLINEFORM6 to represent the local structural context of INLINEFORM7 :", "and gates are defined as: DISPLAYFORM0 ", "Given this architecture, the sibling dependency relation is modeled by at least one skip connection. The skip connections directly feed information forward and pass gradients backward. The parent-to-child relation is implicitly modeled by the skip-connection relations between nodes.", "The model recurrently updates the hidden states according to: DISPLAYFORM0 ", "and the probability distribution for the next word is approximated by: DISPLAYFORM0 ", " where INLINEFORM0 are gates that control the skip connections. Both INLINEFORM1 and INLINEFORM2 have a structured attention mechanism that takes INLINEFORM3 as input and forces the model to focus on the most related information. Since INLINEFORM4 is an unobserved latent variable, we explain an approximation for INLINEFORM5 in the next section. The structured attention mechanism is explained in section SECREF21 ." ], [ "In this section we give a probabilistic view on how to model the local structure of language. A detailed elaboration for this section is given in the Appendix . At time step INLINEFORM0 , INLINEFORM1 represents the probability of choosing one out of INLINEFORM2 possible local structures. We propose to model the distribution by the Stick-Breaking Process: DISPLAYFORM0 ", "The formula can be understood by noting that, after the time steps INLINEFORM0 have had their probabilities assigned, INLINEFORM1 is the remaining probability and INLINEFORM2 is the portion of that remaining probability that we assign to time step INLINEFORM3 . Variable INLINEFORM4 is parametrized in the next section.", "As shown in the Appendix , the expectation of the gate value INLINEFORM0 is the Cumulative Distribution Function (CDF) of INLINEFORM1 . Thus, we can replace the discrete gate value by its expectation: DISPLAYFORM0 ", "With these relaxations, Eq. EQREF9 and EQREF10 can be approximated by using a soft gating vector to update the hidden state and predict the next token."
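To make the stick-breaking relaxation above concrete, the following sketch turns a vector of per-position weights into the distribution over local structures and into the soft gates given by its CDF. The sweep direction, the name of the weight vector, and the exact indexing convention are illustrative assumptions; in the model these weights are parametrized through the syntactic distances introduced in the next section.

```python
import torch

def stick_breaking_gates(a):
    """a[..., i] is the fraction of the remaining probability mass assigned to
    previous position i, processed from the most recent position (i = t-1)
    backwards to the oldest (i = 0).  Returns:
      p[..., i]: stick-breaking probability of choosing position i
      g[..., i]: the CDF of p, used as the expectation of the soft gates
    """
    t = a.shape[-1]
    shares = []
    remaining = torch.ones_like(a[..., 0])
    for i in range(t - 1, -1, -1):            # sweep from nearest to farthest
        shares.append(remaining * a[..., i])   # portion of the remaining stick
        remaining = remaining * (1.0 - a[..., i])
    # mass not assigned within the window simply stays unallocated in this sketch
    p = torch.stack(shares[::-1], dim=-1)      # reorder back to positions 0 .. t-1
    g = torch.cumsum(p, dim=-1)                # expected value of the binary gates
    return p, g
```

If the weight of the nearest position is close to 1, nearly all of the probability mass (and hence the gate pattern) concentrates on that position, matching the intuition that the closest constituent-beginning word should dominate.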
], [ "In Eq. EQREF12 , INLINEFORM0 is the portion of the remaining probability that we assign to position INLINEFORM1 . Because the stick-breaking process should assign high probability to INLINEFORM2 , which is the closest constituent-beginning word. The model should assign large INLINEFORM3 to words beginning new constituents. While INLINEFORM4 itself is a constituent-beginning word, the model should assign large INLINEFORM5 to words beginning larger constituents. In other words, the model will consider longer dependency relations for the first word in constituent. Given the sentence in Figure FIGREF4 , at time step INLINEFORM6 , both INLINEFORM7 and INLINEFORM8 should be close to 1, and all other INLINEFORM9 should be close to 0.", "In order to parametrize INLINEFORM0 , our basic hypothesis is that words in the same constituent should have a closer syntactic relation within themselves, and that this syntactical proximity can be represented by a scalar value. From the tree structure point of view, the shortest path between leafs in same subtree is shorter than the one between leafs in different subtree.", "To model syntactical proximity, we introduce a new feature Syntactic Distance. For a sentence with length INLINEFORM0 , we define a set of INLINEFORM1 real valued scalar variables INLINEFORM2 , with INLINEFORM3 representing a measure of the syntactic relation between the pair of adjacent words INLINEFORM4 . INLINEFORM5 could be the last word in previous sentence or a padding token. For time step INLINEFORM6 , we want to find the closest words INLINEFORM7 , that have larger syntactic distance than INLINEFORM8 . Thus INLINEFORM9 can be defined as: DISPLAYFORM0 ", "where INLINEFORM0 . INLINEFORM1 is the temperature parameter that controls the sensitivity of INLINEFORM2 to the differences between distances.", "The Syntactic Distance has some nice properties that both allow us to infer a tree structure from it and be robust to intermediate non-valid tree structures that the model may encounter during learning. In Appendix and we list these properties and further explain the meanings of their values.", " BIBREF38 shows that it's possible to identify the beginning and ending words of a constituent using local information. In our model, the syntactic distance between a given token (which is usually represented as a vector word embedding INLINEFORM0 ) and its previous token INLINEFORM1 , is provided by a convolutional kernel over a set of consecutive previous tokens INLINEFORM2 . This convolution is depicted as the gray triangles shown in Figure FIGREF20 . Each triangle here represent 2 layers of convolution. Formally, the syntactic distance INLINEFORM3 between token INLINEFORM4 and INLINEFORM5 is computed by DISPLAYFORM0 DISPLAYFORM1 ", "where INLINEFORM0 , INLINEFORM1 are the kernel parameters. INLINEFORM2 and INLINEFORM3 can be seen as another convolutional kernel with window size 1, convolved over INLINEFORM4 's. Here the kernel window size INLINEFORM5 determines how far back into the history node INLINEFORM6 can reach while computing its syntactic distance INLINEFORM7 . Thus we call it the look-back range.", "Convolving INLINEFORM0 and INLINEFORM1 on the whole sequence with length INLINEFORM2 yields a set of distances. For the tokens in the beginning of the sequence, we simply pad INLINEFORM3 zero vectors to the front of the sequence in order to get INLINEFORM4 outputs." 
], [ "The Reading Network generate new states INLINEFORM0 considering on input INLINEFORM1 , previous memory states INLINEFORM2 , and gates INLINEFORM3 , as shown in Eq. EQREF9 .", "Similar to Long Short-Term Memory-Network (LSTMN) BIBREF39 , the Reading Network maintains the memory states by maintaining two sets of vectors: a hidden tape INLINEFORM0 , and a memory tape INLINEFORM1 , where INLINEFORM2 is the upper bound for the memory span. Hidden states INLINEFORM3 is now represented by a tuple of two vectors INLINEFORM4 . The Reading Network captures the dependency relation by a modified attention mechanism: structured attention. At each step of recurrence, the model summarizes the previous recurrent states via the structured attention mechanism, then performs a normal LSTM update, with hidden and cell states output by the attention mechanism.", "At each time step INLINEFORM0 , the read operation attentively links the current token to previous memories with a structured attention layer: DISPLAYFORM0 ", " where, INLINEFORM0 is the dimension of the hidden state. Modulated by the gates in Eq. EQREF13 , the structured intra-attention weight is defined as: DISPLAYFORM0 ", " This yields a probability distribution over the hidden state vectors of previous tokens. We can then compute an adaptive summary vector for the previous hidden tape and memory denoting by INLINEFORM0 and INLINEFORM1 : DISPLAYFORM0 ", "Structured attention provides a way to model the dependency relations shown in Figure FIGREF4 .", "The Reading Network takes INLINEFORM0 , INLINEFORM1 and INLINEFORM2 as input, computes the values of INLINEFORM3 and INLINEFORM4 by the LSTM recurrent update BIBREF40 . Then the write operation concatenates INLINEFORM5 and INLINEFORM6 to the end of hidden and memory tape." ], [ "Predict Network models the probability distribution of next word INLINEFORM0 , considering on hidden states INLINEFORM1 , and gates INLINEFORM2 . Note that, at time step INLINEFORM3 , the model cannot observe INLINEFORM4 , a temporary estimation of INLINEFORM5 is computed considering on INLINEFORM6 : DISPLAYFORM0 ", "From there we compute its corresponding INLINEFORM0 and INLINEFORM1 for Eq. EQREF10 . We parametrize INLINEFORM2 function as: DISPLAYFORM0 ", " where INLINEFORM0 is an adaptive summary of INLINEFORM1 , output by structured attention controlled by INLINEFORM2 . INLINEFORM3 could be a simple feed-forward MLP, or more complex architecture, like ResNet, to add more depth to the model." ], [ "We evaluate the proposed model on three tasks, character-level language modeling, word-level language modeling, and unsupervised constituency parsing." ], [ "From a character-level view, natural language is a discrete sequence of data, where discrete symbols form a distinct and shallow tree structure: the sentence is the root, words are children of the root, and characters are leafs. However, compared to word-level language modeling, character-level language modeling requires the model to handle longer-term dependencies. We evaluate a character-level variant of our proposed language model over a preprocessed version of the Penn Treebank (PTB) and Text8 datasets.", "When training, we use truncated back-propagation, and feed the final memory position from the previous batch as the initial memory of next one. At the beginning of training and test time, the model initial hidden states are filled with zero. 
Optimization is performed with Adam using learning rate INLINEFORM0 , weight decay INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 . We carry out gradient clipping with maximum norm 1.0. The learning rate is multiplied by 0.1 whenever validation performance does not improve over 2 checkpoints. These checkpoints are performed at the end of each epoch. We also apply layer normalization BIBREF41 to the Reading Network and batch normalization to the Predict Network and parsing network. For all of the character-level language modeling experiments, we apply the same procedure, varying only the number of hidden units, mini-batch size and dropout rate.", "We process the Penn Treebank dataset BIBREF42 by following the procedure introduced in BIBREF43 . For character-level PTB, the Reading Network has two recurrent layers and the Predict Network has one residual block. The hidden state size is 1024 units. The input and output embedding sizes are 128 and are not shared. The look-back range is INLINEFORM0 , the temperature parameter is INLINEFORM1 , and the upper bound of the memory span is INLINEFORM2 . We use a batch size of 64 and truncated back-propagation with 100 timesteps. The dropout values used on input/output embeddings, between recurrent layers, and on recurrent states were (0, 0.25, 0.1), respectively.", "In Figure FIGREF32 , we visualize the syntactic distance estimated by the Parsing Network while reading three different sequences from the PTB test set. We observe that the syntactic distance tends to be higher between the last character of a word and a space, which is a reasonable breakpoint for separating words. In other words, if the model sees a space, it will attend over all previous steps. If the model sees a letter, it will attend no further than the last space token. The model autonomously learned to avoid inter-word attention connections and to use the hidden states of space (separator) tokens to summarize previous information. This is strong evidence that the model can capture the latent structure of the data. As a result, our model achieves state-of-the-art performance and significantly outperforms baseline models. It is worth noting that HM-LSTM BIBREF6 also induces a similar structure from data in an unsupervised manner, but the discrete operations in HM-LSTM make its training procedure more complicated than ours." ], [ "Compared to character-level language modeling, word-level language modeling needs to deal with complex syntactic structures and various linguistic phenomena, but it has fewer long-term dependencies. We evaluate the word-level variant of our language model on preprocessed versions of the Penn Treebank (PTB) BIBREF42 and Text8 BIBREF49 datasets.", "We apply the same procedure and hyper-parameters as in the character-level language model, except that optimization is performed with Adam with INLINEFORM0 . This turns off the exponential moving average for estimates of the means of the gradients BIBREF50 . We also adapt the number of hidden units, mini-batch size and dropout rate according to the different tasks.", "We process the Penn Treebank dataset BIBREF43 by following the procedure introduced in BIBREF51 . For word-level PTB, the Reading Network has two recurrent layers and the Predict Network does not have a residual block. The hidden state size is 1200 units and the input and output embedding sizes are 800 and are shared BIBREF52 , BIBREF53 . The look-back range is INLINEFORM0 , the temperature parameter is INLINEFORM1 , and the upper bound of the memory span is INLINEFORM2 . We use a batch size of 64 and truncated back-propagation with 35 time-steps.
The dropout values used on input/output embeddings, between recurrent layers, and on recurrent states were (0.7, 0.5, 0.5), respectively.", "The Text8 dataset contains 17M training tokens and has a vocabulary size of 44k words. The dataset is partitioned into a training set (first 99M characters) and a development set (last 1M characters) that is used to report performance. As this dataset contains various articles from Wikipedia, the longer-term information (such as the current topic) plays a bigger role than in the PTB experiments BIBREF61 . We apply the same procedure and hyper-parameters as in character-level PTB, except that we use a batch size of 128. The dropout values used on input/output embeddings, between recurrent layers, and on recurrent states were (0.4, 0.2, 0.2), respectively.", "As shown in Table TABREF39 , our results are comparable to state-of-the-art methods. Since we do not have the same computational resources used in BIBREF50 to tune hyper-parameters at large scale, we expect that our model could achieve better performance after an aggressive hyperparameter tuning process. As shown in Table TABREF42 , our method outperforms baseline methods. It is worth noting that the continuous cache pointer can also be applied to the output of our Predict Network without modification. Visualizations of tree structures generated from the learned PTB language model are included in Appendix . In Table TABREF40 , we show the test perplexity for different variants of PRPN, each of which removes part of the model. By removing the Parsing Network, we observe a significant drop in performance. This stands as empirical evidence regarding the benefit of having structural information to control attention." ], [ "The unsupervised constituency parsing task compares the tree structures inferred by the model with those annotated by human experts. The experiment is performed on the WSJ10 dataset. WSJ10 consists of the 7422 sentences in the Penn Treebank Wall Street Journal section that contain 10 words or fewer after the removal of punctuation and null elements. Evaluation was done by checking whether proposed constituent spans are also in the Treebank parse, measuring the unlabeled F1 ( INLINEFORM0 ) of unlabeled constituent precision and recall. Constituents which could not be gotten wrong (those of span one and those spanning entire sentences) were discarded. Given the mechanism discussed in Section SECREF14 , our model generates a binary tree. Although standard constituency parse trees are not limited to binary trees, previous unsupervised constituency parsing models also generate binary trees BIBREF11 , BIBREF13 . Our model is compared with several baseline methods, which are explained in Appendix .", "Different from the previous experimental settings, the model treats each sentence independently at training and test time. When training, we feed one batch of sentences at each iteration. In a batch, shorter sentences are padded with 0. At the beginning of the iteration, the model's initial hidden states are filled with zeros. When testing, we feed sentences to the model one by one, then use the gate values output by the model to recursively combine tokens into constituents, as described in Appendix .", "Table TABREF44 summarizes the results. Our model significantly outperforms the RANDOM baseline, indicating a high consistency with human annotation. Our model also shows comparable performance with the CCM model. In fact, our Parsing Network and CCM both focus on the relation between successive tokens.
As described in Section SECREF14 , our model computes the syntactic distance between all successive pairs of tokens, and our parsing algorithm then recursively assembles tokens into constituents according to the learned distances. CCM also recursively models the probability that a contiguous subsequence of a sentence is a constituent. Thus, one can understand why our model is outperformed by the DMV+CCM and UML-DOP models. The DMV+CCM model has extra information from a dependency parser. The UML-DOP approach captures both contiguous and non-contiguous lexical dependencies BIBREF13 ." ], [ "In this paper, we propose a novel neural language model that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. We introduce a new neural parsing network, the Parsing-Reading-Predict Network, that can make differentiable parsing decisions. We use a new structured attention mechanism to control skip connections in a recurrent neural network. Hence, the induced syntactic structure information can be used to improve the model's performance. Via this mechanism, the gradient can be directly back-propagated from the language model loss function into the neural Parsing Network. The proposed model achieves (or is close to) the state of the art on both word-level and character-level language modeling tasks. Experiments also show that the inferred syntactic structure is highly correlated with human expert annotations." ], [ "The authors would like to thank Timothy J. O'Donnell and Chris Dyer for the helpful discussions." ] ] }
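To make the parsing procedure referenced above more tangible, here is a minimal sketch of how constituents might be assembled greedily from learned syntactic distances; the function and variable names are illustrative, and this is not the paper's exact appendix algorithm:

```python
def build_tree(tokens, distances):
    """Recursively split the token sequence at the largest syntactic distance.
    distances[i] is the learned distance between tokens[i] and tokens[i+1]."""
    if len(tokens) <= 1:
        return tokens[0] if tokens else None
    split = max(range(len(distances)), key=distances.__getitem__)
    left = build_tree(tokens[: split + 1], distances[:split])
    right = build_tree(tokens[split + 1:], distances[split + 1:])
    return (left, right)

print(build_tree(["the", "cat", "sat", "down"], [0.2, 0.9, 0.3]))
# (('the', 'cat'), ('sat', 'down'))
```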
{ "question": [ "How do they show their model discovers underlying syntactic structure?", "Which dataset do they experiment with?", "How do they measure performance of language model tasks?" ], "question_id": [ "d824f837d8bc17f399e9b8ce8b30795944df0d51", "2ff3898fbb5954aa82dd2f60b37dd303449c81ba", "3070d6d6a52aa070f0c0a7b4de8abddd3da4f056" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "By visualizing syntactic distance estimated by the parsing network", "evidence": [ "In Figure FIGREF32 , we visualize the syntactic distance estimated by the Parsing Network, while reading three different sequences from the PTB test set. We observe that the syntactic distance tends to be higher between the last character of a word and a space, which is a reasonable breakpoint to separate between words. In other words, if the model sees a space, it will attend on all previous step. If the model sees a letter, it will attend no further then the last space step. The model autonomously discovered to avoid inter-word attention connection, and use the hidden states of space (separator) tokens to summarize previous information. This is strong proof that the model can understand the latent structure of data. As a result our model achieve state-of-the-art performance and significantly outperform baseline models. It is worth noting that HM-LSTM BIBREF6 also unsupervisedly induce similar structure from data. But discrete operations in HM-LSTM make their training procedure more complicated then ours." ], "highlighted_evidence": [ "In Figure FIGREF32 , we visualize the syntactic distance estimated by the Parsing Network, while reading three different sequences from the PTB test set. We observe that the syntactic distance tends to be higher between the last character of a word and a space, which is a reasonable breakpoint to separate between words. ", "The model autonomously discovered to avoid inter-word attention connection, and use the hidden states of space (separator) tokens to summarize previous information. This is strong proof that the model can understand the latent structure of data." ] } ], "annotation_id": [ "22b7cf887e3387634b67deae37c4d197a85c1f98" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Penn Treebank", "Text8", "WSJ10" ], "yes_no": null, "free_form_answer": "", "evidence": [ "From a character-level view, natural language is a discrete sequence of data, where discrete symbols form a distinct and shallow tree structure: the sentence is the root, words are children of the root, and characters are leafs. However, compared to word-level language modeling, character-level language modeling requires the model to handle longer-term dependencies. We evaluate a character-level variant of our proposed language model over a preprocessed version of the Penn Treebank (PTB) and Text8 datasets.", "The unsupervised constituency parsing task compares hte tree structure inferred by the model with those annotated by human experts. The experiment is performed on WSJ10 dataset. 
WSJ10 is the 7422 sentences in the Penn Treebank Wall Street Journal section which contained 10 words or less after the removal of punctuation and null elements. Evaluation was done by seeing whether proposed constituent spans are also in the Treebank parse, measuring unlabeled F1 ( INLINEFORM0 ) of unlabeled constituent precision and recall. Constituents which could not be gotten wrong (those of span one and those spanning entire sentences) were discarded. Given the mechanism discussed in Section SECREF14 , our model generates a binary tree. Although standard constituency parsing tree is not limited to binary tree. Previous unsupervised constituency parsing model also generate binary trees BIBREF11 , BIBREF13 . Our model is compared with the several baseline methods, that are explained in Appendix ." ], "highlighted_evidence": [ "We evaluate a character-level variant of our proposed language model over a preprocessed version of the Penn Treebank (PTB) and Text8 datasets.", "The unsupervised constituency parsing task compares hte tree structure inferred by the model with those annotated by human experts. The experiment is performed on WSJ10 dataset." ] } ], "annotation_id": [ "19730a5d76cf81f3614aa41243672f3eab75e322" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "BPC, Perplexity", "evidence": [ "In Table TABREF39 , our results are comparable to the state-of-the-art methods. Since we do not have the same computational resource used in BIBREF50 to tune hyper-parameters at large scale, we expect that our model could achieve better performance after an aggressive hyperparameter tuning process. As shown in Table TABREF42 , our method outperform baseline methods. It is worth noticing that the continuous cache pointer can also be applied to output of our Predict Network without modification. Visualizations of tree structure generated from learned PTB language model are included in Appendix . In Table TABREF40 , we show the value of test perplexity for different variants of PRPN, each variant remove part of the model. By removing Parsing Network, we observe a significant drop of performance. This stands as empirical evidence regarding the benefit of having structure information to control attention.", "FLOAT SELECTED: Table 1: BPC on the Penn Treebank test set", "Word-level Language Model" ], "highlighted_evidence": [ "In Table TABREF40 , we show the value of test perplexity for different variants of PRPN, each variant remove part of the model. ", "FLOAT SELECTED: Table 1: BPC on the Penn Treebank test set", "Word-level Language Model" ] } ], "annotation_id": [ "42cdfe407e54aa6c90a61c0943fb456f7f75d7b8" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Figure 1: Hard arrow represents syntactic tree structure and parent-to-child dependency relation, dash arrow represents dependency relation between siblings", "Figure 2: Proposed model architecture, hard line indicate valid connection in Reading Network, dash line indicate valid connection in Predict Network.", "Figure 3: Convolutional network for computing syntactic distance. Gray triangles represent 2 layers of convolution, d0 to d7 are the syntactic distance output by each of the kernel position. The blue bars indicate the amplitude of di’s, and yi’s are the inferred constituents.", "Figure 4: Syntactic distance estimated by Parsing Network. The model is trained on PTB dataset at the character level. Each blue bar is positioned between two characters, and represents the syntactic distance between them. From these distances we can infer a tree structure according to Section 4.2.", "Table 1: BPC on the Penn Treebank test set", "Table 2: PPL on the Penn Treebank test set", "Table 3: Ablation test on the Penn Treebank. “- Parsing Net” means that we remove Parsing Network and replace Structured Attention with normal attention mechanism; “- Reading Net Attention” means that we remove Structured Attention from Reading Network, that is equivalent to replace Reading Network with a normal 2-layer LSTM; “- Predict Net Attention” means that we remove Structured Attention from Predict Network, that is equivalent to have a standard projection layer; “Our 2-layer LSTM” is equivalent to remove Parsing Network and remove Structured Attention from both Reading and Predict Network.", "Table 4: PPL on the Text8 valid set", "Table 5: Parsing Performance on the WSJ10 dataset", "Figure 5: Syntactic structures of two different sentences inferred from {di} given by Parsing Network." ], "file": [ "3-Figure1-1.png", "3-Figure2-1.png", "6-Figure3-1.png", "7-Figure4-1.png", "8-Table1-1.png", "8-Table2-1.png", "9-Table3-1.png", "9-Table4-1.png", "10-Table5-1.png", "15-Figure5-1.png" ] }
1909.00183
Extracting information from free text through unsupervised graph-based clustering: an application to patient incident records
The large volume of text in electronic healthcare records often remains underused due to a lack of methodologies to extract interpretable content. Here we present an unsupervised framework for the analysis of free text that combines text-embedding with paragraph vectors and graph-theoretical multiscale community detection. We analyse text from a corpus of patient incident reports from the National Health Service in England to find content-based clusters of reports in an unsupervised manner and at different levels of resolution. Our unsupervised method extracts groups with high intrinsic textual consistency and compares well against categories hand-coded by healthcare personnel. We also show how to use our content-driven clusters to improve the supervised prediction of the degree of harm of the incident based on the text of the report. Finally, we discuss future directions to monitor reports over time, and to detect emerging trends outside pre-existing categories.
{ "section_name": [ "Introduction", "Introduction ::: Data description", "Graph-based framework for text analysis and clustering", "Graph-based framework for text analysis and clustering ::: Text Preprocessing", "Graph-based framework for text analysis and clustering ::: Text Vector Embedding", "Graph-based framework for text analysis and clustering ::: Similarity graph of documents from text similarities", "Graph-based framework for text analysis and clustering ::: Multiscale Graph Partitioning", "Graph-based framework for text analysis and clustering ::: Visualisation and interpretation of the results", "Graph-based framework for text analysis and clustering ::: Quantitative benchmarking of topic clusters", "Graph-based framework for text analysis and clustering ::: Supervised Classification for Degree of Harm", "Application to the clustering of hospital incident text reports", "Application to the clustering of hospital incident text reports ::: Markov Stability extracts content clusters at different levels of granularity", "Application to the clustering of hospital incident text reports ::: Robustness of the results and comparison with other methods", "Using free-text descriptions to predict the degree of harm of patient safety incidents with a supervised classifier", "Using free-text descriptions to predict the degree of harm of patient safety incidents with a supervised classifier ::: Supervised classification of degree of harm", "Discussion" ], "paragraphs": [ [ "", "The vast amounts of data collected by healthcare providers in conjunction with modern data analytics present a unique opportunity to improve the quality and safety of medical care for patient benefit BIBREF1. Much recent research in this area has been on personalised medicine, with the aim to deliver improved diagnostic and treatment through the synergistic integration of datasets at the level of the individual. A different source of healthcare data pertains to organisational matters. In the United Kingdom, the National Health Service (NHS) has a long history of documenting the different aspects of healthcare provision, and is currently in the process of making available properly anonymised datasets, with the aim of leveraging advanced analytics to improve NHS services.", "One such database is the National Reporting and Learning System (NRLS), a repository of patient safety incident reports from the NHS in England and Wales set up in 2003, which now contains over 13 million records. The incidents are reported under standardised categories and contain both organisational and spatio-temporal information (structured data) and a substantial component of free text (unstructured data) where incidents are described in the `voice' of the person reporting. The incidents are wide ranging: from patient accidents to lost forms or referrals; from delays in admission or discharge to serious untoward incidents, such as retained foreign objects after operations. The review and analysis of such data provides critical insight into complex processes in healthcare with a view towards service improvement.", "Although statistical analyses are routinely performed on the structured data (dates, locations, hand-coded categories, etc), free text is typically read manually and often ignored in practice, unless a detailed review of a case is undertaken because of the severity of harm that resulted. 
These limitations are due to a lack of methodologies that can provide content-based groupings across the large volume of reports submitted nationally for organisational learning. Automatic categorisation of incidents from free text would sidestep human error and difficulties in assigning incidents to a priori pre-defined lists in the reporting system. Such tools can also offer unbiased insight into the root cause analysis of incidents that could improve the safety and quality of care and efficiency of healthcare services.", "In this work, we showcase an algorithmic methodology that detects content-based groups of records in an unsupervised manner, based only on the free (unstructured) textual descriptions of the incidents. To do so, we combine deep neural-network high-dimensional text-embedding algorithms with graph-theoretical methods for multiscale clustering. Specifically, we apply the framework of Markov Stability (MS), a multiscale community detection algorithm, to sparsified graphs of documents obtained from text vector similarities. Our method departs both from traditional natural language processing tools, which have generally used bag-of-words (BoW) representation of documents and statistical methods based on Latent Dirichlet Allocation (LDA) to cluster documents BIBREF2, and from more recent approaches that have used deep neural network based language models, but have used k-means clustering without a graph-based analysis BIBREF3. Previous applications of network theory to text analysis have included the work of Lanchichinetti and co-workers BIBREF4, who proposed a probabilistic graph construction analysed with the InfoMap algorithm BIBREF5; however, their community detection was carried out at a single-scale and the BoW representation of text lacks the power of text embeddings. The application of multiscale community detection allows us to find groups of records with consistent content at different levels of resolution; hence the content categories emerge from the textual data, rather than from pre-designed classifications. The obtained results can help mitigate human error or effort in finding the right category in complex classification trees. We illustrate in our analysis the insight gained from this unsupervised, multi-resolution approach in this specialised corpus of medical records.", "As an additional application, we use machine learning methods for the prediction of the degree of harm of incidents directly from the text in the NRLS incident reports. Although the degree of harm is recorded by the reporting person for every event, this information can be unreliable as reporters have been known to game the system, or to give different answers depending on their professional status BIBREF6. Previous work on predicting the severity of adverse events BIBREF7, BIBREF8 used reports submitted to the Advanced Incident Management System by Australian public hospitals, and used BoW and Support Vector Machines (SVMs) to detect extreme-risk events. Here we demonstrate that publicly reported measures derived from NHS Staff Surveys can help select ground truth labels that allow supervised training of machine learning classifiers to predict the degree of harm directly from text embeddings. Further, we show that the unsupervised clusters of content derived with our method improve the classification results significantly.", "An a posteriori manual labelling by three clinicians agree with our predictions based purely on text almost as much as with the original hand-coded labels. 
These results indicate that incidents can be automatically classified according to their degree of harm based only on their textual descriptions, and underlines the potential of automatic document analysis to help reduce human workload." ], [ "The full dataset includes more than 13 million confidential reports of patient safety incidents reported to the National Reporting and Learning System (NRLS) between 2004 and 2016 from NHS trusts and hospitals in England and Wales. Each record has more than 170 features, including organisational details (e.g., time, trust code and location), anonymised patient information, medication and medical devices, among many other details. In most records, there is also a detailed description of the incident in free text, although the quality of the text is highly variable.", "The records are manually classified by operators according to a two-level system of incident types. The top level contains 15 categories including general classes such as `Patient accident', `Medication', `Clinical assessment', `Documentation', `Admissions/Transfer' or `Infrastructure', alongside more specific groups such as `Aggressive behaviour', `Patient abuse', `Self-harm' or `Infection control'.", "Each record is also labelled based on the degree of harm to the patients as one of: `No Harm', `Low Harm', `Moderate Harm', `Severe Harm' or `Death'. These degrees are precisely defined by the WHO BIBREF9 and the NHS BIBREF10." ], [ "Our framework combines text-embedding, geometric graph construction and multi-resolution community detection to identify, rather than impose, content-based clusters from free, unstructured text in an unsupervised manner.", "Figure FIGREF2 shows a summary of our pipeline. First, we pre-process each document to transform text into consecutive word tokens, with words in their most normalised forms and some words removed if they have no distinctive meaning when used out of context BIBREF11, BIBREF12. We then train a paragraph vector model using the Document to Vector (Doc2Vec) framework BIBREF13 on the full set (13 million) of pre-processed text records. (Training a vector model on smaller sets of 1 million records also produces good results as seen in Table TABREF5). This training step of the text model is only done once.", "The trained Doc2Vec model is subsequently used to infer high-dimensional vector descriptions for the text of each document in our target analysis set. We then compute a matrix containing all the pairwise (cosine) similarities between the Doc2Vec document vectors. This similarity matrix can be thought of as the adjacency matrix of a full, weighted graph with documents as nodes and edges weighted by their similarity. We sparsify this graph to the union of a minimum spanning tree and a k-Nearest Neighbors (MST-kNN) graph BIBREF14, a geometric construction that removes less important similarities but preserves global connectivity for the graph and, hence, for the dataset. The MST-kNN graph is then analysed with Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18, a multi-resolution graph partitioning method that identifies relevant subgraphs (i.e., clusters of documents) at different levels of granularity. MS uses a diffusive process on the graph to reveal the multiscale organisation at different resolutions without the need to choose a priori the number or type of clusters.", "The partitions found by MS across levels of resolution are analysed a posteriori through visualisations and quantitative scores. 
The visualisations include: (i) word clouds to summarise the main content; (ii) graph layouts; and (iii) Sankey diagrams and contingency tables that capture correspondences between partitions. The quantitative scores include: (i) the intrinsic topic coherence (measured by the pairwise mutual information BIBREF19, BIBREF20); and (ii) the similarity to hand-coded categories (measured by the normalised mutual information BIBREF21).", "Our framework also covers prediction of the degree of harm (DoH) caused to the patient using text embeddings and the unsupervised cluster assignments obtained from our multiscale graph partitioning. To perform this task, we use the hand-coded DoH from the NRLS to train three commonly used classifiers BIBREF22, BIBREF23 (Ridge, Support Vector Machine with a linear kernel, Random Forest) to predict the DoH using TF-iDF and Doc2Vec embeddings of the text and our MS cluster assignments. The classifiers are then evaluated in predicting the DoH using cross-validation.", "We now explain the steps of the methodological pipeline in more detail." ], [ "Text preprocessing is important to enhance the performance of text embedding techniques. We applied standard preprocessing to the raw text of all 13 million records in our corpus, as follows. We divide our documents into individual word tokens using the NLTK library BIBREF11 and remove punctuation and digit-only tokens. We then apply word stemming using the Porter algorithm BIBREF12, BIBREF24. If the Porter method cannot find a stemmed version for a token, we apply the Snowball algorithm BIBREF25. Finally, we remove any stop-words (common words with low semantic content) using NLTK's stop-word list. Although pre-processing reduces some of the syntactic information, it consolidates the semantic information of the vocabulary. We note that the incident descriptions contain typos and acronyms, which have been left uncorrected to avoid manual intervention or the use of spell checkers, so as to mimic as closely as possible a realistic scenario." ], [ "Computational text analysis relies on a mathematical representation of the base units of text (character $n$-grams, words or documents). Since our methodology is unsupervised, we avoid the use of labelled data, in contrast to supervised or semi-supervised classification methods BIBREF26, BIBREF27. In our work, we use a representation of text documents as vectors following recent developments in the field.", "Traditionally, bag-of-words (BoW) methods represented documents as vectors of word frequencies weighted by inverse document frequency (TF-iDF). Such methods provide a statistical description of documents but they do not carry information about the order or proximity of words to each other and hence disregard semantic or syntactic relationships between words. In addition, BoW representations carry little information content as they tend to be high-dimensional and very sparse, due to the large size of word dictionaries and low frequencies of many terms.", "Recently, deep neural network language models have successfully overcome the limitations of BoW methods by incorporating neighbourhoods in the mathematical description of each term. Distributed Bag of Words (DBOW), better known as Doc2Vec BIBREF13, is a form of Paragraph Vectors (PV) which creates a model that represents any word sequence (i.e. sentences, paragraphs, documents) as $d$-dimensional vectors, where $d$ is user-defined (typically $d=300$).
Training a Doc2Vec model starts with a random $d$-dimensional vector assignment for each document in the corpus. A stochastic gradient descent algorithm iterates over the corpus with the objective of predicting a randomly sampled set of words from each document by using only the document's $d$-dimensional vector BIBREF13. The objective function being optimised by PV-DBOW is similar to the skip-gram model in Refs. BIBREF28, BIBREF29. Doc2Vec has been shown BIBREF30 to capture both semantic and syntactic characterisations of the input text, and outperforms BoW-based models such as LDA BIBREF2.", "Benchmarking the Doc2Vec training: Here, we use the Gensim Python library BIBREF31 to train the PV-DBOW model. The Doc2Vec training was repeated several times with a variety of training hyper-parameters (chosen based on our own numerical experiments and the general guidelines provided by BIBREF32) in order to optimise the output. To characterise the usability and quality of models, we trained Doc2Vec models using text corpora of different sizes and content with different sets of hyper-parameters. . In particular, we checked the effect of corpus size by training Doc2Vec models on the full 13 million NRLS records and on randomly sampled subsets of 1 million and 2 million records.", "Since our target analysis has heavy medical content and specific use of words, we also tested the importance of the training corpus by generating an additional Doc2Vec model using a set of 5 million articles from the English Wikipedia representing standard, generic English usage, which works well in the analysis of news articles BIBREF33.", "The results in Table TABREF5 show that training on the highly specific text from the NRLS records is an important ingredient in the successful vectorisation of the documents, as shown by the degraded performance for the Wikipedia model across a variety of training hyper-parameters. On the other hand, reducing the size of the corpus from 13 million to 1 million records did not affect the benchmarking dramatically. This robustness of the results to the size of the training corpus was confirmed further with the use of more detailed metrics, as discussed below in Section SECREF27 (see e.g., Figure FIGREF29).", "Based on our benchmarking, henceforth we use the Doc2Vec model trained on the 13+ million NRLS records with the following hyper-parameters: {training method = dbow, number of dimensions for feature vectors size = 300, number of epochs = 10, window size = 15, minimum count = 5, number of negative samples = 5, random down-sampling threshold for frequent words = 0.001 }. As an indication of computational cost, the training of this model takes approximately 11 hours (run in parallel with 7 threads) on shared servers." ], [ "Once the Doc2Vec model is trained, we use it to infer a vector for each record in our analysis subset and construct $\\hat{S}$, a similarity matrix between the vectors by: computing the matrix of cosine similarities between all pairs of records, $S_\\text{cos}$; transforming it into a distance matrix $D_{cos} = 1-S_{cos}$; applying element-wise max norm to obtain $\\hat{D}=\\Vert D_{cos}\\Vert _{max}$; and normalising the similarity matrix $\\hat{S} = 1-\\hat{D}$ which has elements in the interval $[0,1]$.", "This similarity matrix can be thought of as the adjacency matrix of a fully connected weighted graph. 
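For illustration, here is a minimal sketch of this embedding and similarity-matrix step, assuming the Gensim and scikit-learn APIs; the toy reports and the simple tokeniser stand in for the NRLS corpus and the stemming pipeline described earlier:

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for the anonymised incident reports (the real corpus has 13M records).
reports = [
    "patient fell out of bed during the night shift",
    "medication dose omitted due to missing prescription chart",
    "delay in transfer of patient from the emergency department",
    "wrong dose of medication administered to patient",
]
corpus = [TaggedDocument(simple_preprocess(t), [i]) for i, t in enumerate(reports)]

# PV-DBOW (dm=0) with the hyper-parameters quoted above; min_count is lowered
# here only so that the toy corpus is not filtered away.
model = Doc2Vec(corpus, dm=0, vector_size=300, window=15, min_count=1,
                negative=5, sample=0.001, epochs=10, workers=7)

# Infer vectors for the analysis subset and build the normalised similarity matrix.
vectors = np.array([model.infer_vector(simple_preprocess(t)) for t in reports])
S_cos = cosine_similarity(vectors)
D_hat = (1.0 - S_cos) / (1.0 - S_cos).max()   # element-wise max norm of the distances
S_hat = 1.0 - D_hat                            # similarities rescaled to [0, 1]
print(np.round(S_hat, 2))
```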
However, such a graph contains many edges with small weights reflecting the fact that in high-dimensional noisy data even the least similar nodes present a substantial degree of similarity. Indeed, such weak similarities are in most cases redundant and can be explained through stronger pairwise similarities. These weak, redundant edges obscure the graph structure, as shown by the diffuse visualisation in Figure FIGREF7A.", "To reveal the graph structure, we sparsify the similarity matrix to obtain a MST-kNN graph BIBREF14 based on a geometric heuristic that preserves the global connectivity of the graph while retaining details about the local geometry of the dataset. The MST-kNN algorithm starts by computing the minimum spanning tree (MST) of the full matrix $\\hat{D}$, i.e., the tree with $(N-1)$ edges connecting all nodes in the graph with minimal sum of edge weights (distances). The MST is computed using the Kruskal algorithm implemented in SciPy BIBREF34. To this MST, we add edges connecting each node to its $k$ nearest nodes (kNN) if they are not already in the MST. Here $k$ is an user-defined parameter that regulates the sparsity of the resulting graph. The binary adjacency matrix of the MST-kNN graph is Hadamard-multiplied with $\\hat{S}$ to give the adjacency matrix $A$ of the weighted, undirected sparsified graph.", "The network visualisations in Figure FIGREF7 give an intuitive picture of the effect of sparsification as $k$ is decreased. If $k$ is very small, the graph is very sparse but not robust to noise. As $k$ is increased, the local similarities between documents induce the formation of dense subgraphs (which appear closer in the graph visualisation layout). When the number of neighbours becomes too large, the local structure becomes diffuse and the subgraphs lose coherence, signalling the degradation of the local graph structure. Relatively sparse graphs that preserve important edges and global connectivity of the dataset (guaranteed here by the MST) have computational advantages when using community detection algorithms.", "Although we use here the MST-kNN construction due to its simplicity and robustness, network inference, graph sparsification and graph construction from data is an active area of research, and several alternatives exist based on different heuristics, e.g., Graphical Lasso BIBREF35, Planar Maximally Filtered Graph BIBREF36, spectral sparsification BIBREF37, or the Relaxed Minimum Spanning Tree (RMST) BIBREF38. We have experimented with some of those methods and obtained comparable results. A detailed comparison of sparsification methods as well as the choice of distance in defining the similarity matrix $\\hat{S}$ is left for future work." ], [ "Community detection encompasses various graph partitioning approaches which aim to find `good' partitions into subgraphs (or communities) according to different cost functions, without imposing the number of communities a priori BIBREF39. The notion of community depends on the choice of cost function. Commonly, communities are taken to be subgraphs whose nodes are connected strongly within the community with relatively weak inter-community edges. Such structural notion is related to balanced cuts. Other cost functions are posed in terms of transitions inside and outside of the communities, usually as one-step processes BIBREF5. 
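Returning briefly to the graph construction step, a minimal sketch of the MST-kNN sparsification described above, assuming SciPy's csgraph utilities (details such as tie-breaking may differ from the implementation used in the paper):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_knn_graph(D_hat, S_hat, k=13):
    """Union of the minimum spanning tree of the distance matrix D_hat and each
    node's k nearest neighbours, weighted by the similarities S_hat = 1 - D_hat."""
    n = D_hat.shape[0]
    mst = minimum_spanning_tree(D_hat).toarray() > 0   # guarantees global connectivity
    keep = mst | mst.T
    order = np.argsort(D_hat, axis=1)                  # column 0 is the node itself
    for i in range(n):
        for j in order[i, 1:k + 1]:
            keep[i, j] = keep[j, i] = True             # add the k nearest neighbours
    A = np.where(keep, S_hat, 0.0)                     # Hadamard product with similarities
    np.fill_diagonal(A, 0.0)
    return A
```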
When transition paths of all lengths are considered, the concept of community becomes intrinsically multi-scale, i.e., different partitions are relevant at different time scales leading to a multi-level description dictated by the transition dynamics BIBREF15, BIBREF40, BIBREF16. This leads to the framework of Markov Stability (MS), a dynamics-based, multi-scale community detection methodology, which recovers several well-known heuristics as particular cases BIBREF15, BIBREF17, BIBREF18.", "MS is an unsupervised community detection method that finds robust and stable partitions of a graph (and the associated communities) under the evolution of a continuous-time diffusion process without a priori choice of the number or type of communities or their relative relationships BIBREF15, BIBREF40, BIBREF16, BIBREF41 . In simple terms, MS can be understood by analogy to a drop of ink diffusing on the graph: the ink diffuses homogeneously unless the graph has intrinsic sub-structures, in which case the ink gets transiently contained, over particular time scales, within groups of nodes. The existence of such transients indicates a natural scale to partition the graph along the subgraphs (or communities) where the diffusion is transiently trapped. As the process continues to evolve, the ink diffuses out of those communities but might get transiently contained in other, larger subgraphs, if such multi-level structure exists. By analysing the Markov dynamics over time, MS detects the structure of the graph across scales. If a graph has no natural scales for partitioning, then MS returns no communities. The Markov time $t$ thus acts as a resolution parameter that allows us to extract robust partitions that persist over particular time scales, in an unsupervised manner.", "Mathematically, given the adjacency matrix $A_{N \\times N}$ of the graph obtained as described previously, let us define the diagonal matrix $D=\\text{diag}(\\mathbf {d})$, where $\\mathbf {d}=A \\mathbf {1}$ is the degree vector. The random walk Laplacian matrix is defined as $L_\\text{RW}=I_N-D^{-1}A$, where $I_N$ is the identity matrix of size $N$ and the transition matrix (or kernel) of the associated continuous-time Markov process is $P(t)=e^{-t L_\\text{RW}}, \\, t>0$ BIBREF16. Any partition $\\mathcal {H}$ into $C$ clusters is associated with a binary membership matrix $H_{N \\times C}$ that maps the $N$ nodes into the clusters. Below, we will use the matrix $H$ to denote the corresponding partition $\\mathcal {H}$. We can then compute the $C\\times C$ clustered autocovariance matrix:", "where $\\pi $ is the steady-state distribution of the process and $\\Pi =\\text{diag}(\\pi )$. The element $[R(t,H)]_{\\alpha \\beta }$ quantifies the probability that a random walker starting from community $\\alpha $ at $t=0$ will be in community $\\beta $ at time $t$, minus the probability that this event occurs by chance at stationarity.", "The above definitions allow us to introduce our cost function measuring the goodness of a partition over time $t$, termed the Markov Stability of partition $H$:", "A partition $H$ that maximises $r(t,H)$ is comprised of communities that preserve the flow within themselves over time $t$, since in that case the diagonal elements of $R(t,H)$ will be large and the off-diagonal elements will be small. For details, see BIBREF15, BIBREF40, BIBREF16, BIBREF42.", "Our computational algorithm thus searches for partitions at each Markov time $t$ that maximise $r(t,H)$. 
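For concreteness, the cost function can be evaluated directly for a given partition; a minimal sketch, assuming the standard Markov Stability definitions summarised above, with H the binary node-to-community membership matrix:

```python
import numpy as np
from scipy.linalg import expm

def markov_stability(A, H, t):
    """r(t, H) = trace[ H^T (Pi P(t) - pi pi^T) H ] with P(t) = exp(-t L_RW),
    where A is the weighted adjacency matrix and H the membership matrix."""
    d = A.sum(axis=1)
    pi = d / d.sum()                           # stationary distribution of the random walk
    L_rw = np.eye(len(d)) - A / d[:, None]     # random-walk Laplacian I - D^{-1} A
    P_t = expm(-t * L_rw)                      # transition kernel of the diffusion at time t
    R = H.T @ (np.diag(pi) @ P_t - np.outer(pi, pi)) @ H
    return np.trace(R)
```

The harder part, maximising this score over partitions at each Markov time, is what the Louvain-based optimisation discussed next is used for.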
Although the maximisation of (DISPLAY_FORM11) is an NP-hard problem (hence with no guarantees for global optimality), there are efficient optimisation methods that work well in practice. Our implementation here uses the Louvain Algorithm BIBREF43, BIBREF18 which is efficient and known to give good results when applied to benchmarks. To obtain robust partitions, we run the Louvain algorithm 500 times with different initialisations at each Markov time and pick the best 50 with the highest Markov Stability value $r(t,H)$. We then compute the variation of information BIBREF44 of this ensemble of solutions $VI(t)$, as a measure of the reproducibility of the result under the optimisation. In addition, we search for partitions that are persistent across time $t$, as given by low values of the variation of information between optimised partitions across time $VI(t,t^{\\prime })$. Robust partitions are therefore indicated by Markov times where $VI(t)$ shows a dip and $VI(t,t^{\\prime })$ has an extended plateau with low values, indicating consistency under the optimisation and validity over extended scales BIBREF42, BIBREF16. Below, we apply MS to find partitions across scales of the similarity graph of documents, $A$. The communities detected correspond to groups of documents with similar content at different levels of granularity." ], [ "Graph layouts: We use the ForceAtlas2 BIBREF45 layout algorithm to represent graphs on the plane. This layout assigns a harmonic spring to each edge and finds through iterative rearrangements finds an arrangement on the plane that balances attractive and repulsive forces between nodes. Hence similar nodes tend to appear close together on this layout. We colour the nodes by either hand-coded categories (Figure FIGREF7) or multiscale MS communities (Figure FIGREF21). Spatially coherent colourings on this layout imply good clusters in terms of the similarity graph.", "Tracking membership through Sankey diagrams: Sankey diagrams allow us to visualise the relationship of node membership across different partitions and with respect to the hand-coded categories. Two-layer Sankey diagrams (e.g., Fig. FIGREF22) reflect the correspondence between MS clusters and the hand-coded external categories, whereas we use a multilayer Sankey diagram in Fig. FIGREF21 to present the multi-resolution MS community detection across scales.", "Normalised contingency tables: To capture the relationship between our MS clusters and the hand-coded categories, we also provide a complementary visualisation as z-score heatmaps of normalised contingency tables, e.g., Fig. FIGREF22. This allows us to compare the relative association of content clusters to the external categories at different resolution levels. A quantification of the overall correspondence is also provided by the $NMI$ score in Eq. (DISPLAY_FORM17).", "Word clouds of increased intelligibility through lemmatisation: Our method clusters text documents according to their intrinsic content. This can be understood as a type of topic detection. To visualise the content of clusters, we use Word Clouds as basic, yet intuitive, summaries of information to extract insights and compare a posteriori with hand-coded categories. They can also provide an aid for monitoring results when used by practitioners.", "The stemming methods described in Section SECREF3 truncate words severely to enhance the power of the language processing computational methods by reducing the redundancy in the word corpus. 
Yet when presenting the results back to a human observer, it is desirable to report the cluster content with words that are readily comprehensible. To generate comprehensible word clouds in our a posteriori analyses, we use a text processing method similar to the one described in BIBREF46. Specifically, we use the part of speech (POS) tagging module from NLTK to leave out sentence parts except the adjectives, nouns, and verbs. We also remove less meaningful common verbs such as `be', `have', and `do' and their variations. The remaining words are then lemmatised in order to normalise variations of the same word. Finally, we use the Python library wordcloud to create word clouds with 2 or 3-gram frequency list of common word groups." ], [ "Although our dataset has a classification hand-coded by a human operator, we do not use it in our analysis. Indeed, one of our aims is to explore the relevance of the fixed external classes as compared to content-driven groupings obtained in an unsupervised manner. Therefore we provide a double route to quantify the quality of the clusters by computing two complementary measures: (i) an intrinsic measure of topic coherence, and (ii) a measure of similarity to the external hand-coded categories.", "Topic coherence of text: As an intrinsic measure of consistency of word association, we use the pointwise mutual information ($PMI$) BIBREF19, BIBREF47. The $PMI$ is an information-theoretical score that captures the probability of words being used together in the same group of documents. The $PMI$ score for a pair of words $(w_1,w_2)$ is:", "where the probabilities of the words $P(w_1)$, $P(w_2)$, and of their co-occurrence $P(w_1 w_2)$ are obtained from the corpus. We obtain an aggregate $\\widehat{PMI}$ for the graph partition $C=\\lbrace c_i\\rbrace $ by computing the $PMI$ for each cluster, as the median $PMI$ between its 10 most common words (changing the number of words gives similar results), and computing the weighted average of the $PMI$ cluster scores:", "where $c_i$ denotes the clusters in partition $C$, each with size $n_i$, so that $N=\\sum _{c_i \\in C} n_i$ is the total number of nodes. Here $S_i$ denotes the set of top 10 words for cluster $c_i$.", "The $PMI$ score has been shown to perform well BIBREF19, BIBREF47 when compared to human interpretation of topics on different corpora BIBREF48, BIBREF49, and is designed to evaluate topical coherence for groups of documents, in contrast to other tools aimed at short forms of text. See BIBREF26, BIBREF27, BIBREF50, BIBREF51 for other examples.", "Here, we use the $\\widehat{PMI}$ score to evaluate partitions without any reference to an externally labelled `ground truth'.", "Similarity between the obtained partitions and the hand-coded categories: To quantify how our content-driven unsupervised clusters compare against the external classification, we use the normalised mutual information ($NMI$), a well-known information-theoretical score that quantifies the similarity between clusterings considering correct and incorrect assignments in terms of the information between the clusterings. The NMI between two partitions $C$ and $D$ of the same graph is:", "where $I(C,D)$ is the Mutual Information and $H(C)$ and $H(D)$ are the entropies of the two partitions.", "The $NMI$ is bounded ($0 \\le NMI \\le 1$) and a higher value corresponds to higher similarity of the partitions (i.e., $NMI=1$ when there is perfect agreement between partitions $C$ and $D$). 
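A sketch of how these two scores might be computed in practice; the document-level co-occurrence probabilities used below are one plausible estimator and may differ in detail from the paper's implementation, while the NMI is taken from scikit-learn:

```python
import numpy as np
from itertools import combinations
from collections import Counter
from sklearn.metrics import normalized_mutual_info_score

def cluster_pmi(cluster_docs, top_n=10):
    """Median pairwise PMI between the top_n most common words of one cluster,
    with probabilities estimated from document-level co-occurrence counts."""
    doc_sets = [set(tokens) for tokens in cluster_docs]
    n_docs = len(doc_sets)
    counts = Counter(w for s in doc_sets for w in s)
    top = [w for w, _ in counts.most_common(top_n)]
    pmis = []
    for w1, w2 in combinations(top, 2):
        p12 = sum(1 for s in doc_sets if w1 in s and w2 in s) / n_docs
        if p12 > 0:
            pmis.append(np.log(p12 / ((counts[w1] / n_docs) * (counts[w2] / n_docs))))
    return float(np.median(pmis)) if pmis else 0.0

# Aggregate PMI-hat: weighted average of cluster_pmi over clusters (weights n_i / N).
# Similarity to the hand-coded categories:
# nmi = normalized_mutual_info_score(hand_coded_labels, ms_cluster_labels)
```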
The $NMI$ score is directly related to the V-measure in the computer science literature BIBREF52." ], [ "As a further application of our work, we have carried out a supervised classification task aimed at predicting the degree of harm of an incident directly from the text and the hand-coded features (e.g., external category, medical specialty, location). A one-hot encoding is applied to turn these categorical values into numerical ones. We also checked if using our unsupervised content-driven cluster labels as additional features can improve the performance of the supervised classification.", "The supervised classification was carried out by training on features and text three classifiers commonly applied to text classification tasks BIBREF22, BIBREF23: a Ridge classifier, Support Vector Machines with a linear kernel, and Random Forests. The goal is to predict the degree of harm (DoH) among five possible values (1-5). The classification is carried out with five-fold cross validation, using 80% of the data to train the model and the remaining 20% to test it. As a measure of performance of the classifiers and models, we use the weighted average of the F1 score for all levels of DoH, which takes into account both precision and recall, i.e., both the exactness and completeness of the model." ], [ "We showcase our methodology through the analysis of the text from NRLS patient incident reports. In addition to textual descriptions, the reports are hand-coded upon reporting with up to 170 features per case, including a two-level manual classification of the incidents.", "Here, we only use the text component and apply our graph-based text clustering to a set of 3229 reports from St Mary's Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014. As summarised in Figure FIGREF2, we start by training our Doc2Vec text embedding using the full 13+ million records collected by the NRLS since 2004 (although, as discussed above, a much smaller corpus of NRLS documents can be used). We then infer vectors for our 3229 records, compute the cosine similarity matrix and construct an MST-kNN graph with $k=13$ for our graph-based clustering. (We have confirmed the robustness of the MST-kNN construction in our data for $k>13$ by scanning values of $k \\in [1,50]$, see Section SECREF27). We then applied Markov Stability, a multi-resolution graph partitioning algorithm to the MST-kNN graph. We scan across Markov time ($t \\in [0.01, 100]$ in steps of 0.01). At each $t$, we run 500 independent Louvain optimisations to select the optimal partition found, as well as quantifying the robustness to optimisation by computing the average variation of information $VI(t)$ between the top 50 partitions. Once the full scan across $t$ is finalised, we compute $VI(t,t^{\\prime })$, the variation of information between the optimised partitions found across the scan in Markov time, to select partitions that are robust across scales." ], [ "Figure FIGREF21 presents a summary of our MS analysis. We plot the number of clusters of the optimal partition and the two metrics of variation of information across all Markov times. The existence of a long plateau in $VI(t,t^{\\prime })$ coupled to a dip in $VI(t)$ implies the presence of a partition that is robust both to the optimisation and across Markov time. 
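As a complement to the supervised degree-of-harm task described earlier in this section, here is a minimal scikit-learn sketch; the random placeholder data stands in for the NRLS features and labels:

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 300))     # e.g. Doc2Vec embeddings (+ one-hot features, MS clusters)
y = rng.integers(1, 6, size=500)    # degree of harm, 1-5

classifiers = {
    "ridge": RidgeClassifier(),
    "linear_svm": LinearSVC(),
    "random_forest": RandomForestClassifier(n_estimators=200),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1_weighted")
    print(f"{name}: weighted F1 = {scores.mean():.3f}")
```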
To illustrate the multi-scale features of the method, we choose several of these robust partitions, from finer (44 communities) to coarser (3 communities), obtained at five Markov times and examine their structure and content. The multi-level Sankey diagram summarises the relationship of the partitions across levels.", "The MS analysis of the graph reveals a multi-level structure of partitions, with a strong quasi-hierarchical organisation. We remark that our optimisation does not impose any hierarchical structure a priori, so that the observed consistency of communities across levels is intrinsic to the data and suggests the existence of sub-themes that integrate into larger thematic categories. The unsupervised detection of intrinsic scales by MS enables us to obtain groups of records with high content similarity at different levels of granularity. This capability can be used by practitioners to tune the level of description to their specific needs, and is used below as an aid in our supervised classification task in Section SECREF4.", "To ascertain the relevance of the layers of content found by MS, we examined the five levels of resolution in Figure FIGREF21. For each level, we produced lemmatised word clouds, which we used to generate descriptive content labels for the communities. We then compared a posteriori the content clusters with the hand-coded categories through a Sankey diagram and a contingency table. The results are shown in Figures FIGREF22–FIGREF25 for each of the levels.", "The partition into 44 communities presents content clusters with well-defined characterisations, as shown by the Sankey diagram and the highly clustered structure of the contingency table (Figure FIGREF22). Compared to the 15 hand-coded categories, this 44-community partition provides finer groupings corresponding to specific sub-themes within the generic hand-coded categories. This is apparent in the hand-coded classes `Accidents', `Medication', `Clinical assessment', `Documentation' and `Infrastructure', where a variety of meaningful subtopics are identified (see Fig. FIGREF23 for details). In other cases, however, the content clusters cut across the external categories, e.g., the clusters on labour ward, chemotherapy, radiotherapy and infection control are coherent in content but can belong to several of the external classes. At this level of resolution, our algorithm also identified highly specific topics as separate content clusters, including blood transfusions, pressure ulcer, consent, mental health, and child protection, which have no direct relationship with the external classes provided to the operator.", "Figure FIGREF24A and FIGREF24B present the results for two partitions at medium level of resolution, where the number of communities (12 and 17) is close to that of hand-coded categories (15). As expected from the quasi-hierarchy detected by our multi-resolution analysis, we find that the communities in the 17-way and 12-way partitions emerge from consistent aggregation of the smaller communities in the 44-way partition in Figure FIGREF22. Focussing on the 12-way partition, we see that some of the sub-themes in Figure FIGREF23 are merged into more general topics. An example is Accidents (community 2 in Fig. FIGREF24A), a merger of seven finer communities, which corresponds well with the external category `Patient accidents'. A similar phenomenon is seen for the Nursing cluster (community 1), which falls completely under the external category `Infrastructure'. 
The clusters related to `Medication' similarly aggregate into a larger community (community 3), yet there still remains a smaller, specific community related to Homecare medication (community 12) with distinct content. Other communities, on the other hand, still cut across external categories. This is clearly observable in communities 10 and 11 (Samples/lab tests/forms and Referrals/appointments), which fall naturally across the `Documentation' and `Clinical Assessment' categories. Similarly, community 9 (Patient transfers) sits across the `Admission/Transfer' and `Infrastructure' external categories, due to its relation to nursing and hospital constraints. A substantial proportion of records was hand-coded under the generic `Treatment/Procedure' class, yet MS splits it into content clusters that retain medical coherence, e.g., Radiotherapy (Comm. 4), Blood transfusions (Comm. 7), IV/cannula (Comm. 5), Pressure ulcer (Comm. 8), and the large community Labour ward (Comm. 6).", "The medical specificity of the Radiotherapy, Pressure ulcer and Labour ward clusters means that they are still preserved as separate groups at the next level of coarseness in the 7-way partition (Figure FIGREF25A). The mergers in this case lead to larger communities referring to Medication, Referrals/Forms and Staffing/Patient transfers. Figure FIGREF25B shows the final level of agglomeration into 3 content clusters: records referring to Accidents; a group broadly referring to procedural matters (referrals, forms, staffing, medical procedures) cutting across external categories; and the Labour ward cluster, still on its own as a subgroup with distinctive content.", "This process of agglomeration of content, from sub-themes into larger themes, as a result of the multi-scale hierarchy of MS graph partitions is shown explicitly with word clouds in Figure FIGREF26 for the 17-, 12- and 7-way partitions. Our results show good overall correspondence with the hand-coded categories across resolutions, yet they also reveal complementary categories of incidents not defined in the external classification. The possibility of tuning the granularity afforded by our method can be used to provide a distinct level of resolution in certain areas corresponding to specialised or particular sub-themes." ], [ "We have examined quantitatively the robustness of the results to parametric and methodological choices in different steps of our framework. Specifically, we evaluate the effect of: (i) using Doc2Vec embeddings instead of BoW vectors; (ii) the size of the corpus for training Doc2Vec; (iii) the sparsity of the MST-kNN graph construction. We have also carried out quantitative comparisons to other methods for topic detection and clustering: (i) LDA-BoW, and (ii) several standard clustering methods.", "Doc2Vec provides improved clusters compared to BoW: Compared to standard bag-of-words (BoW) vectors, fixed-size vector embeddings (Doc2Vec) produce lower-dimensional vector representations with higher semantic and syntactic content. Doc2Vec outperforms BoW representations in practical benchmarks of semantic similarity and is less sensitive to hyper-parameters BIBREF30. To quantify the improvement provided by Doc2Vec, we constructed an MST-kNN graph from TF-iDF vectors and ran MS on this TF-iDF similarity graph.
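For the BoW comparison just described, the two document representations can be produced roughly as follows (scikit-learn for TF-iDF, Gensim for Doc2Vec). The toy documents and the hyper-parameter values are placeholders, not the settings used in the study; either matrix can then be fed to the same MST-kNN and Markov Stability pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = ["patient slipped on wet floor", "wrong dose of medication administered"]  # toy examples

# Bag-of-words representation: sparse TF-iDF vectors.
tfidf = TfidfVectorizer(stop_words="english")
X_tfidf = tfidf.fit_transform(docs)                  # shape: (n_docs, vocab_size)

# Doc2Vec representation: dense, fixed-size paragraph vectors.
tagged = [TaggedDocument(words=d.split(), tags=[i]) for i, d in enumerate(docs)]
d2v = Doc2Vec(vector_size=300, window=5, min_count=1, workers=4, epochs=20)
d2v.build_vocab(tagged)
d2v.train(tagged, total_examples=d2v.corpus_count, epochs=d2v.epochs)
X_d2v = [d2v.infer_vector(d.split()) for d in docs]  # shape: (n_docs, 300)
```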
Figure FIGREF28 shows that Doc2Vec outperforms BoW across all resolutions in terms of both $NMI$ and $\\widehat{PMI}$ scores.", "Robustness to the size of the Doc2Vec training dataset : Table TABREF5 indicates a small effect of the size of the training corpus on the Doc2Vec model. To confirm this, we trained two additional Doc2Vec models on sets of 1 million and 2 million records (randomly chosen from the full 13+ million records) and followed the same procedure to construct the MST-kNN graph and carry out the MS analysis. Figure FIGREF29 shows that the performance is affected only mildly by the size of the Doc2Vec training set.", "Robustness to the level of graph sparsification:", "We sparsify the matrix of cosine similarities using the MST-kNN graph construction. The smaller the value of $k$, the sparser the graph. Sparser graphs have computational advantages for community detection algorithms, but too much sparsification degrades the results. Figure FIGREF30 shows the effect of sparsification in the graph construction on the performance of MS clusters. Our results are robust to the choice of $k$, provided it is not too small: both the $NMI$ and $\\widehat{PMI}$ scores reach a similar level for values of $k$ above 13-16. Due to computational efficiency, we favour a relatively small value of $k=13$.", "Comparison of MS partitions to Latent Dirichlet Allocation with Bag-of-Words (LDA-BoW): We have compared the MS results to LDA, a widely used methodology for text analysis. A key difference in LDA is that a different model needs to be trained when the number of topics changes, whereas our MS method produces clusterings at all levels of resolution in one go. To compare the outcomes, we trained five LDA models corresponding to the five MS levels in Figure FIGREF21. Table TABREF31 shows that MS and LDA give partitions that are comparably similar to the hand-coded categories (as measured with $NMI$), with some differences depending on the scale, whereas the MS clusters have higher topic coherence (as given by $\\widehat{PMI}$) across all scales.", "To give an indication of computational cost, we ran both methods on the same servers. Our method takes approximately 13 hours in total (11 hours to train the Doc2Vec model on 13 million records and 2 hours to produce the full MS scan with 400 partitions across all resolutions). The time required to train just the 5 LDA models on the same corpus amounts to 30 hours (with timings ranging from $\\sim $2 hours for the 3 topic LDA model to 12.5 hours for the 44 topic LDA model). This comparison also highlights the conceptual difference between our multi-scale methodology and LDA topic modelling. While LDA computes topics at a pre-determined level of resolution, our method obtains partitions at all resolutions in one sweep of the Markov time, from which relevant partitions are chosen based on their robustness. The MS partitions at all resolutions are available for further investigation if so needed.", "Comparison of MS to other partitioning and community detection algorithms: We have partitioned the same kNN-MST graph using several well-known algorithms readily available in code libraries (i.e., the iGraph module for Python): Modularity Optimisation BIBREF53, InfoMap BIBREF5, Walktrap BIBREF54, Label Propagation BIBREF55, and Multi-resolution Louvain BIBREF43. Note that, in contrast with our multiscale MS analysis, these methods give just one partition at a particular resolution (or two for the Louvain implementation in iGraph). 
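The benchmarking against these one-shot community detection methods can be run along the following lines with python-igraph and scikit-learn. This sketch assumes the MST-kNN graph is available as a weighted igraph object and that the hand-coded category labels are available for the NMI comparison; it is illustrative rather than the exact code used in the study.

```python
import igraph as ig
from sklearn.metrics import normalized_mutual_info_score

def benchmark_partitions(g: ig.Graph, hand_coded_labels):
    """Run several one-shot community detection methods on a weighted graph
    and score each partition against the hand-coded categories with NMI."""
    weights = g.es["weight"]
    partitions = {
        "Modularity (fast greedy)": g.community_fastgreedy(weights=weights).as_clustering(),
        "InfoMap": g.community_infomap(edge_weights=weights),
        "Walktrap": g.community_walktrap(weights=weights).as_clustering(),
        "Label propagation": g.community_label_propagation(weights=weights),
        "Louvain (multilevel)": g.community_multilevel(weights=weights),
    }
    return {name: normalized_mutual_info_score(hand_coded_labels, part.membership)
            for name, part in partitions.items()}
```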
Figure FIGREF32 shows that MS provides improved or equal results to all those other graph partitioning methods for both $NMI$ and $\\widehat{PMI}$ across all scales. Only for very fine resolution (more than 50 clusters) does Infomap, which partitions graphs into small clique-like subgraphs BIBREF40, BIBREF56, provide a slightly improved $NMI$. Therefore, MS finds both relevant and high quality clusterings across all scales by sweeping the Markov time parameter." ], [ "Here we approach the task of training a supervised classifier that predicts the degree of harm of an incident based on other features of the record (such as location, external category, and medical specialty) and on the textual component of the report. To this end, we use the embedded text vectors and MS cluster labels of the records as features to predict the degree of harm to the patient.", "Each NRLS record has more than 170 features filled manually by healthcare staff, including the degree of harm (DoH) to the patient, a crucial assessment of the reported incident. The incident is classified into five levels: 'No harm', 'Low', 'Moderate', 'Severe', and 'Death'. However, the reported DoH is not consistent across hospitals and can be unreliable BIBREF6.", "The lack of reliability of the recorded DoH poses a challenge when training supervised models. Given the size of the dataset, it is not realistic to ask medics to re-evaluate incidents manually. Instead, we use the publicly available `Learning from mistakes league table' based on NHS staff survey data to identify organisations (NHS Trusts) with `outstanding' (O) and `poor reporting culture' (PRC). Our hypothesis is that training our classifiers on records from organisations with better rankings in the league table should lead to improved prediction. If there is a real disparity in the manual classification among organisations, only incidents labelled by O-ranked Trusts should be regarded as a `ground truth'." ], [ "We study NRLS incidents reported between 2015 and 2017 from O-ranked and PRC-ranked Trusts. The 2015-17 NRLS dataset is very unbalanced: there are 2,038,889 “No harm” incidents against only 6,754 “Death” incidents. To tackle this issue, we sample our dataset as recommended by BIBREF8, and randomly select 1,016 records each of `No harm' , `Low', and `Moderate', and 508 records each of `Severe' and `Death' incidents, from each type of Trust. We thus obtain two datasets (O and PRC) consisting of a total of 4,064 incidents each.", "For each dataset (O and PRC), we train three classifiers (Ridge, Support Vector Machine with a linear kernel, and Random Forest) with five-fold cross validation, and we compute the F-1 scores of each fold to evaluate the model performance. We first train models using three categories from the reports: location (L), external hand-coded category (C), and medical specialty (S). We also compute the performance of models trained on text features, both TF-iDF and Doc2Vec. We also study models trained on a mixture of text and categories. Finally, we run Markov Stability as described above to obtain cluster labels for each dataset (O and PRC) at different resolutions (70, 45, 30 and 13 communities). We then evaluate if it is advantageous to include the labels of the MS clusters as additional features.", "Table TABREF34 presents the results of our numerical experiments. Our first observation is that, for this data, SVM with linear kernel has the best performance (similar to Ridge), and Random Forests perform poorly in general. 
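A minimal sketch of this benchmarking setup with scikit-learn is shown below. Here X stands for any of the feature combinations in the table (one-hot categories, TF-iDF, Doc2Vec, with or without MS labels) and y for the recorded degree of harm; LinearSVC is used as a stand-in for an SVM with a linear kernel.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def benchmark_classifiers(X, y, seed=0):
    """Weighted F1 (5-fold CV) for the three classifiers used in the study."""
    models = {
        "Ridge": RidgeClassifier(),
        "SVM (linear kernel)": LinearSVC(),
        "Random Forest": RandomForestClassifier(n_estimators=200, random_state=seed),
    }
    results = {}
    for name, clf in models.items():
        scores = cross_val_score(clf, X, y, cv=5, scoring="f1_weighted")
        results[name] = (np.mean(scores), np.std(scores))
    return results
```

Each feature set is simply passed through this same routine, so the numbers in the table differ only in how X is constructed.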
There are several conclusions from our study. First, there is a consistent difference between the scores of the O and PRC datasets (ranging from 1.7% to 11.2%, with an average of 5.6%), thus confirming our hypothesis that automated classification performs better when training with data from organisations with better rankings in the league table. Second, using text features is highly advantageous in predicting the degree of harm compared to categorical features alone: there is a substantial increase of up to 100% in the F1 score between column 1 (all three categories) and column 2 (TF-iDF). Furthermore, adding categorical features (L, C, or S) to the TF-iDF text features improves the scores only marginally (around 2%), as seen by comparing columns 3–6 with column 2.", "Given the demonstrated importance of text, we studied the effect of using more refined textual features for classification. In columns 7-10, we considered the effect of adding to TF-iDF the MS labels extracted from our text analysis (as described above), and we find a larger improvement of around 7% with respect to mere TF-iDF (column 2). The improvement is larger for finer clusterings into 70 and 45 communities, which contain enough detail to be associated with levels of risk (e.g., type of accident). This supports the value of the multi-resolution groupings we have extracted through our analysis.", "We also studied the impact of using Doc2Vec vectors as features. Interestingly, the comparison between columns 2 and 11 shows that there is only a slight improvement of 2% when using Doc2Vec instead of TF-iDF features for the case of records from O-ranked institutions, but the improvement is 12% for the records from PRC Trusts. This difference suggests that the usage of terms is more precise in O-ranked hospitals, so that the difference between TF-iDF and Doc2Vec features is minimised, while the advantages of the syntactic and semantic reconstruction of the Doc2Vec embedding become more important in the case of PRC Trusts.", "Based on these findings, we build our final model that uses a Support Vector Machine classifier with both Doc2Vec embeddings and the MS labels for 30 content clusters (encoded via a One-Hot encoder) as features. We choose to keep only 30 communities as this performs well when combined with the Doc2Vec embedding (without slowing down the classifier too much). We performed a grid search to optimise the hyperparameters of our model (penalty = 10, tolerance for stopping criterion = 0.0001, linear kernel). For the O-ranked records, our model achieves a weighted F1 score of 0.657, with a 19% improvement with respect to TF-iDF text features and a 107% improvement with respect to categorical features. (For the PRC records, the corresponding improvements are 33% and 215%, respectively.) Note that similar improvements are also obtained for the other classifiers when using Doc2Vec and MS labels as features. It is also worth noting that the difference in the prediction of DoH between PRC and O-ranked records is reduced when using text tools and, specifically, the F1-score of the SVM classifier based on Doc2Vec with MS is almost the same for both datasets. Hence the difference in the quality of the reporting categories can be ameliorated by the use of the textual content of the reports.
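A sketch of how this final model can be assembled is given below: Doc2Vec vectors are concatenated with one-hot encoded MS cluster labels (30 communities) and fed to a linear-kernel SVM tuned by grid search. The hyper-parameter grid is illustrative; the study reports the values that were ultimately selected (penalty 10, tolerance 0.0001).

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def build_final_model(doc2vec_vectors, ms30_labels, degree_of_harm):
    """Linear-kernel SVM on Doc2Vec embeddings + one-hot MS cluster labels."""
    encoder = OneHotEncoder()
    cluster_feats = encoder.fit_transform(np.asarray(ms30_labels).reshape(-1, 1)).toarray()
    X = np.hstack([np.asarray(doc2vec_vectors), cluster_feats])

    grid = {"C": [0.1, 1, 10, 100], "tol": [1e-3, 1e-4]}
    search = GridSearchCV(SVC(kernel="linear"), grid, scoring="f1_weighted", cv=5)
    search.fit(X, degree_of_harm)
    return search.best_estimator_, search.best_score_
```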
We summarise the main comparison of the performance of the SVM classifier based on categorical features, raw text features, and text with content labels for both datasets in Figure FIGREF35.", "Examination of the types of errors and ex novo re-classification by clinicians:", "A further analysis of the confusion matrices used to compute the F1 score reveals that most of the errors of our model are concentrated in the `No harm', `Low harm' and `Moderate harm' categories, whereas fewer errors are incurred in the `Severe harm' and `Death' categories. Therefore, our method is more likely to return false alarms than to miss important and harmful incidents.", "To further evaluate our results, we asked three clinicians to analyse ex novo a randomly chosen sample of 135 descriptions of incidents, and to determine their degree of harm based on the information in the incident report. The sample was selected from the O-ranked dataset and no extra information apart from the text was provided. We then compared the DoH assigned by the clinicians with both the results of our classifier and the recorded DoH in the dataset.", "Remarkably, the agreement rate of the clinicians' assessment with the recorded DoH was low. For example, the agreement in the `No Harm' incidents was only 38%, and in the `Severe' incidents only 49%. In most cases, though, the disparities amounted to switching the DoH by one degree above or below. To reduce this variability, we analysed the outcomes in terms of three larger groups: `No Harm' and `Low Harm' incidents were considered as one outcome; `Moderate Harm' was kept separate; and `Severe Harm' and `Death' were grouped as one outcome, since they both need to be notified to NHS safety managers.", "The results are presented in Table TABREF36. Our classification agrees with the ex novo assessment of the clinicians as well as the pre-existing DoH in the dataset does, but our method has higher agreement in the severe and deadly incidents. These results confirm that our method performs as well as the original annotators but is better at identifying risky events." ], [ "We have applied a multiscale graph partitioning algorithm (Markov Stability) to extract content-based clusters of documents from a textual dataset of incident reports in an unsupervised manner at different levels of resolution. The method uses paragraph vectors to represent the records and analyses the ensuing similarity graph of documents through multi-resolution capabilities to capture clusters without imposing a priori their number or structure. The different levels of resolution found to be relevant can be chosen by the practitioner to suit the requirements of detail for each specific task. For example, the top level categories of the pre-defined classification hierarchy are highly diverse in size, with large groups such as `Patient accident', `Medication', `Clinical assessment', `Documentation', `Admissions/Transfer' or `Infrastructure' alongside small, specific groups such as `Aggressive behaviour', `Patient abuse', `Self-harm' or `Infection control'. Our multi-scale partitioning finds additional subcategories with medical detail within some of the large categories (Fig. FIGREF22 and FIGREF23).", "Our a posteriori analysis showed that the method recovers meaningful clusters of content as measured by the similarity of the groups against the hand-coded categories and by the intrinsic topic coherence of the clusters.
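For reference, the two a posteriori measures can be computed along the following lines: NMI against the hand-coded categories with scikit-learn, and a generic pairwise-PMI topic coherence estimated from document-level co-occurrences. The coherence function below is a common variant written from scratch and may differ in detail from the $\\widehat{PMI}$ score used in the study.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import normalized_mutual_info_score

def nmi_against_categories(cluster_labels, hand_coded_labels):
    """Similarity of a partition to the external hand-coded categories."""
    return normalized_mutual_info_score(hand_coded_labels, cluster_labels)

def pairwise_pmi_coherence(top_words, tokenized_docs, eps=1e-12):
    """Median pairwise PMI of a cluster's top words, estimated from
    document-level co-occurrence counts (a generic coherence variant)."""
    n_docs = len(tokenized_docs)
    doc_sets = [set(doc) for doc in tokenized_docs]
    p = {w: sum(w in d for d in doc_sets) / n_docs for w in top_words}
    pmis = []
    for w1, w2 in combinations(top_words, 2):
        p12 = sum((w1 in d) and (w2 in d) for d in doc_sets) / n_docs
        pmis.append(np.log((p12 + eps) / (p[w1] * p[w2] + eps)))
    return float(np.median(pmis))
```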
The clusters have high medical content, thus providing complementary information to the externally imposed classification categories. Indeed, some of the most relevant and persistent communities emerge because of their highly homogeneous medical content, even if they cannot be mapped to standardised external categories.", "An area of future research will be to confirm whether the finer unsupervised clusters found by our analysis are consistent with a second level in the hierarchy of external categories (Level 2, around 100 categories), which is used less consistently in hospital settings. The use of content-driven classification of reports could also be important within current efforts by the World Health Organisation (WHO) under the framework for the International Classification for Patient Safety (ICPS) BIBREF9 to establish a set of conceptual categories to monitor, analyse and interpret information to improve patient care.", "We have used our clusters within a supervised classifier to predict the degree of harm of an incident based only on free-text descriptions. The degree of harm is an important measure in hospital evaluation and has been shown to depend on the reporting culture of the particular organisation. Overall, our results show that text descriptions complemented by the topic labels extracted by our method give improved performance in this task. The use of such enhanced NLP tools could help improve reporting frequency and quality, in addition to reducing the burden on staff, since most of the necessary information can be retrieved automatically from text descriptions. Further work would aim to add interpretability to the supervised classification BIBREF57, so as to provide medical staff with a clearer view of the outcomes of our method and to encourage its uptake.", "One of the advantages of a free text analytical approach is the provision, in a timely manner, of an intelligible description of incident report categories derived directly from the 'words' of the reporters themselves. Insights from the analysis of such free text entries can add richer information than would otherwise have been obtained from pre-defined classes. Not only could this improve the current state of play, where much of the free text of these reports goes unused, but, by avoiding the strict assignment to pre-defined categories of fixed granularity, free text analysis could open an opportunity for feedback and learning through more nuanced classifications as a complementary axis to existing approaches.", "Currently, local incident reporting systems used by hospitals to submit reports to the NRLS require risk managers to improve data quality, due to errors or uncertainty in categorisation. The application of free text analytical approaches has the potential to free up time from this labour-intensive task, focussing instead on quality improvement derived from the content of the data itself. Additionally, the method allows for the discovery of emerging topics or classes of incidents directly from the data when such events do not fit existing categories, by using methods for anomaly detection to decide whether new topic clusters should be created. This is a direction of future work.", "Further work also includes the use of our method to enable comparisons across healthcare organisations and to monitor changes in their incident reports over time.
Another interesting direction is to provide online classification suggestions to users based on the text they input as an aid with decision support and data collection, which can also help fine-tune the predefined categories. Finally, it would be interesting to test if the use of deep learning algorithms can improve our classification scores.", "We thank Elias Bamis, Zijing Liu and Michael Schaub for helpful discussions. This research was supported by the National Institute for Health Research (NIHR) Imperial Patient Safety Translational Research Centre and NIHR Imperial Biomedical Research Centre. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health. All authors acknowledge support from the EPSRC through award EP/N014529/1 funding the EPSRC Centre for Mathematics of Precision Healthcare." ] ] }
{ "question": [ "How are content clusters used to improve the prediction of incident severity?", "What cluster identification method is used in this paper?" ], "question_id": [ "ee9b95d773e060dced08705db8d79a0a6ef353da", "dbdf13cb4faa1785bdee90734f6c16380459520b" ], "nlp_background": [ "infinity", "infinity" ], "topic_background": [ "familiar", "familiar" ], "paper_read": [ "no", "no" ], "search_query": [ "", "" ], "question_writer": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f", "fa716cd87ce6fd6905e2f23f09b262e90413167f" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "they are used as additional features in a supervised classification task", "evidence": [ "As a further application of our work, we have carried out a supervised classification task aimed at predicting the degree of harm of an incident directly from the text and the hand-coded features (e.g., external category, medical specialty, location). A one-hot encoding is applied to turn these categorical values into numerical ones. We also checked if using our unsupervised content-driven cluster labels as additional features can improve the performance of the supervised classification." ], "highlighted_evidence": [ "As a further application of our work, we have carried out a supervised classification task aimed at predicting the degree of harm of an incident directly from the text and the hand-coded features (e.g., external category, medical specialty, location). ", "We also checked if using our unsupervised content-driven cluster labels as additional features can improve the performance of the supervised classification." ] } ], "annotation_id": [ "82453702db84beeb6427825f2997da5bb04df935" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "A combination of Minimum spanning trees, K-Nearest Neighbors and Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18", "evidence": [ "The trained Doc2Vec model is subsequently used to infer high-dimensional vector descriptions for the text of each document in our target analysis set. We then compute a matrix containing all the pairwise (cosine) similarities between the Doc2Vec document vectors. This similarity matrix can be thought of as the adjacency matrix of a full, weighted graph with documents as nodes and edges weighted by their similarity. We sparsify this graph to the union of a minimum spanning tree and a k-Nearest Neighbors (MST-kNN) graph BIBREF14, a geometric construction that removes less important similarities but preserves global connectivity for the graph and, hence, for the dataset. The MST-kNN graph is then analysed with Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18, a multi-resolution graph partitioning method that identifies relevant subgraphs (i.e., clusters of documents) at different levels of granularity. MS uses a diffusive process on the graph to reveal the multiscale organisation at different resolutions without the need to choose a priori the number or type of clusters." ], "highlighted_evidence": [ "We sparsify this graph to the union of a minimum spanning tree and a k-Nearest Neighbors (MST-kNN) graph BIBREF14, a geometric construction that removes less important similarities but preserves global connectivity for the graph and, hence, for the dataset. 
The MST-kNN graph is then analysed with Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18, a multi-resolution graph partitioning method that identifies relevant subgraphs (i.e., clusters of documents) at different levels of granularity." ] } ], "annotation_id": [ "6af21ecba3913d0642839a78afa05336601103e4" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] } ] }
{ "caption": [ "Fig. 1: Pipeline for data analysis contains training of the text embedding model along with the two methods we showcase in this work. First is the graph-based unsupervised clustering of documents at different levels of resolution to find topic clusters only from the free text descriptions of hospital incident reports from the NRLS database. Second one uses the topic clusters to improve supervised classification performance of degree of harm prediction.", "Table 1: Benchmarking of text corpora used for Doc2Vec training. A Doc2Vec model was trained on three corpora of NRLS records of different sizes and a corpus of Wikipedia articles using a variety of hyper-parameters. The scores represent the quality of the vectors inferred using the corresponding model. Specifically, we calcule centroids for the 15 externally hand-coded categories and select the 100 nearest reports for each centroid. We then report the number of incident reports (out of 1500) correctly assigned to their centroid.", "Fig. 2: Similarity graphs generated from the vectors of a subset of 3229 patient records. Each node represents a record and is coloured according to its hand-coded, external category to aid visualisation but these external categories are not used to produce our content-driven clustering in Figure 3. Layouts for: (a) full, weighted normalised similarity matrix Ŝ without MST-kNN applied, and (b)–(e)MST-kNN graphs generated from the data with increasing sparsity as k is reduced. The structure of the graph is sharpened for intermediate values of k.", "Fig. 3: The top plot presents the results of the Markov Stability algorithm across Markov times, showing the number of clusters of the optimised partition (red), the variation of information V I(t) for the ensemble of optimised solutions at each time (blue) and the variation of Information V I(t, t ′) between the optimised partitions across Markov time (background colourmap). Relevant partitions are indicated by dips of V I(t) and extended plateaux of V I(t, t ′). We choose five levels with different resolutions (from 44 communities to 3) in our analysis. The Sankey diagram below illustrates how the communities of documents (indicated by numbers and colours) map across Markov time scales. The community structure across scales present a strong quasi-hierarchical character—a result of the analysis and the properties of the data, since it is not imposed a priori. The different partitions for the five chosen levels are shown on a graph layout for the document similarity graph created with the MST-kNN algorithm with k = 13. The colours correspond to the communities found by MS indicating content clusters.", "Fig. 4: Summary of the 44-community partition found with the MS algorithm in an unsupervised manner directly from the text of the incident reports. The 44 content communities are compared a posteriori to the 15 hand-coded categories (indicated by names and colours) through a Sankey diagram between communities and categories (left), and through a z-score contingency table (right). We have assigned a descriptive label to the content communities based on their word clouds in Figure 5.", "Fig. 5: Word clouds of the 44-community partition showing the detailed content of the communities found. The word clouds are split into two sub-figures (A) and (B) for ease of visualisation.", "Fig. 6: Summary of: (A) 17-way and (B) 12-way MS content clusters and their correspondence to the external categories.", "Fig. 
7: Summary of MS partitions into (A) 7 communities and (B) 3 communities, showing their correspondence to external hand-coded categories. Some of the MS content clusters have strong medical content (e.g., labour ward, radiotherapy, pressure ulcer) and are not grouped with other procedural records due to their semantic distinctiveness, even to this coarse level of clustering.", "Fig. 8: Word clouds of the MS partitions into 17, 12 and 7 clusters show a multiresolution coarsening in the content following the quasi-hierarchical community structure found in the document similarity graph.", "Fig. 9: Comparison of MS applied to Doc2Vec versus BoW (using TF-iDF) similarity graphs obtained under the same graph construction. (A) Similarity against the externally hand-coded categories measured with N MI; (B) intrinsic topic coherence of the computed clusters measured with P̂MI.", "Fig. 10: Evaluating the effect of the size of the training corpus. (A) Similarity to hand-coded categories (measured with N MI) and (B) Topic Coherence score (measured with P̂MI) of the MS clusters when the Doc2Vec model is trained on: 1 million, 2 million, and the full set of 13 million records. The corpus size does not affect the results.", "Fig. 11: Effect of the sparsification of the MST-kNN graphs on MS clusters. (A) Similarity against the externally hand-coded categories measured with N MI; (B) Intrinsic topic coherence of the computed clusters measured with P̂MI. The clusters have similar quality for values of k above 13-16.", "Table 2: Similarity to hand-coded categories (N MI) and topic coherence (P̂MI) for the five MS resolutions in Figure 3 and their corresponding LDA models.", "Fig. 12: Comparison of MS results versus other common community detection or graph partitioning methods: (A) Similarity against the externally hand-coded categories measured with N MI; (B) intrinsic topic coherence of the computed clusters measured with P̂MI. MS provides high quality clusters across all scales.", "Table 3: Weighted F1-scores for three classifiers (Ridge, SVM with a linear kernel, Random Forest) trained on the O and PRC datasets of incident reports, for different features: non-textual categorical features (L: Localisation; C: Hand-coded Category; S: Medical Specialty); TF-iDF textual features (TF-iDF embedding of text in incident report); Doc2Vec textual features (Doc2Vec embedding of text in incident report); labels of X=70, 45, 30, 13 communities obtained from unsupervised Markov Stability analysis (MS-x). The SVM classifier performs best across the dataset. The classification is better for O-ranked records compared to PRC-ranked records. Text classifiers have highly improved performance compared to purely categorical classifiers. The best classifier is based on Doc2Vec features augmented by MS labels obtained with our unsupervised framework.", "Fig. 13: Performance of the SVM classifier based on categorical features alone, text features (TF-iDF) alone, and text features (Doc2Vec) with content labels (MS30) on both sets of incident reports: the one collecged from Outstanding Trusts (’O’-ranked) and from Trusts with a Poor Reporting Culture (’PRC’-ranked). The inclusion of more sophisticated text and content labels improves prediction and closes the gap in the quality between both sets of records.", "Table 4: The ex novo re-classification by three clinicians of 135 incident reports (chosen at random) is compared to the pre-existing classification in the dataset and the prediction of our model." 
], "file": [ "5-Figure1-1.png", "7-Table1-1.png", "9-Figure2-1.png", "15-Figure3-1.png", "16-Figure4-1.png", "17-Figure5-1.png", "19-Figure6-1.png", "20-Figure7-1.png", "21-Figure8-1.png", "22-Figure9-1.png", "23-Figure10-1.png", "24-Figure11-1.png", "24-Table2-1.png", "25-Figure12-1.png", "27-Table3-1.png", "28-Figure13-1.png", "29-Table4-1.png" ] }
1703.08885
Question Answering from Unstructured Text by Retrieval and Comprehension
Open domain Question Answering (QA) systems must interact with external knowledge sources, such as web pages, to find relevant information. Information sources like Wikipedia, however, are not well structured and difficult to utilize in comparison with Knowledge Bases (KBs). In this work we present a two-step approach to question answering from unstructured text, consisting of a retrieval step and a comprehension step. For comprehension, we present an RNN based attention model with a novel mixture mechanism for selecting answers from either retrieved articles or a fixed vocabulary. For retrieval we introduce a hand-crafted model and a neural model for ranking relevant articles. We achieve state-of-the-art performance on the WikiMovies dataset, reducing the error by 40%. Our experimental results further demonstrate the importance of each of the introduced components.
{ "section_name": [ "Introduction", "WikiMovies Dataset", "Comprehension Model", "Comprehension model detail", "Retrieval Model", "Hand-Crafted Model (r1)", "Learning Model (R2)", "Experiments", "Performance of Retrieval Models", "Benefit of training methods", "Visualization", "Performance in each category", "Analysis of the mixture gate", "Related Work", "Conclusion and Future Work" ], "paragraphs": [ [ "Natural language based consumer products, such as Apple Siri and Amazon Alexa, have found wide spread use in the last few years. A key requirement for these conversational systems is the ability to answer factual questions from the users, such as those about movies, music, and artists.", "Most of the current approaches for Question Answering (QA) are based on structured Knowledge Bases (KB) such as Freebase BIBREF0 and Wikidata BIBREF1 . In this setting the question is converted to a logical form using semantic parsing, which is queried against the KB to obtain the answer BIBREF2 , BIBREF3 . However, recent studies have shown that even large curated KBs, such as Freebase, are incomplete BIBREF4 . Further, KBs support only certain types of answer schemas, and constructing and maintaining them is expensive.", "On the other hand, there is a vast amount of unstructured knowledge available in textual form from web pages such as Wikipedia, and hence an alternative is to directly answer questions from these documents. In this approach, shown in Figure 1 , articles relevant to the question are first selected (retrieval step). Then, the retrieved articles and question are jointly processed to extract the answer (comprehension step). This retrieval based approach has a longer history than the KB based approach BIBREF5 . It can potentially provide a much wider coverage over questions, and is not limited to specific answer schemas. However, there are still gaps in its performance compared to the KB-based approach BIBREF6 . The comprehension step, which requires parsing information from natural language, is the main bottleneck, though suboptimal retrieval can also lead to lower performance.", "Several large-scale datasets introduced recently BIBREF7 , BIBREF8 have facilitated the development of powerful neural models for reading comprehension. These models fall into one of two categories: (1) those which extract answers as a span of text from the document BIBREF9 , BIBREF10 , BIBREF11 (Figure 2 top); (2) those which select the answer from a fixed vocabulary BIBREF12 , BIBREF6 (Figure 2 bottom). Here we argue that depending on the type of question, either (1) or (2) may be more appropriate, and introduce a latent variable mixture model to combine the two in a single end-to-end framework.", "We incorporate the above mixture model in a simple Recurrent Neural Network (RNN) architecture with an attention mechanism BIBREF13 for comprehension. In the second part of the paper we focus on the retrieval step for the QA system, and introduce a neural network based ranking model to select the articles to feed the comprehension model. We evaluate our model on WikiMovies dataset, which consists of 200K questions about movies, along with 18K Wikipedia articles for extracting the answers. KV:16 applied Key-Value Memory Neural Networks (KV-MemNN) to the dataset, achieving 76.2% accuracy. Adding the mixture model for answer selection improves the performance to 85.4%. Further, the ranking model improves both precision and recall of the retrieved articles, and leads to an overall performance of 85.8%." 
], [ "We focus on the WikiMovies dataset, proposed by BIBREF6 . The dataset consists of pairs of questions and answers about movies. Some examples are shown in Table 1 .", "As a knowledge source approximately 18K articles from Wikipedia are also provided, where each article is about a movie. Since movie articles can be very long, we only use the first paragraph of the article, which typically provides a summary of the movie. Formally, the dataset consists of question-answer pairs $\\lbrace (q_j, A_j)\\rbrace _{j=1}^J$ and movie articles $\\lbrace d_k\\rbrace _{k=1}^K$ . Additionally, the dataset includes a list of entities: movie titles, actor names, genres etc. Answers to all the questions are in the entity list. The questions are created by human annotators using SimpleQuestions BIBREF14 , an existing open-domain question answering dataset, and the annotated answers come from facts in two structured KBs: OMDb and MovieLens.", "There are two splits of the dataset. The “Full” dataset consists of 200K pairs of questions and answers. In this dataset, some questions are difficult to answer from Wikipedia articles alone. A second version of the dataset, “Wiki Entity” is constructed by removing those QA pairs where the entities in QAs are not found in corresponding Wikipedia articles. We call these splits WikiMovies-FL and WikiMovies-WE, respectively. The questions are divided into train, dev and test such that the same question template does not appear in different splits. Further, they can be categorized into 13 categories, including movie_to_actors, director_to_movies, etc. The basic statistics of the dataset are summarized in Table 2 .", "We also note that more than 50% of the entities appear less than 5 times in the training set. This makes it very difficult to learn the global statistics of each entity, necessitating the need to use an external knowledge source." ], [ "Our QA system answers questions in two steps, as shown in Figure 1 . The first step is retrieval, where articles relevant to the question are retrieved. The second step is comprehension, where the question and retrieved articles are processed to derive answers.", "In this section we focus on the comprehension model, assuming that relevant articles have already been retrieved and merged into a context document. In the next section, we will discuss approaches for retrieving the articles.", " BIBREF6 , who introduced WikiMovies dataset, used an improved variant of Memory Networks called Key-Value Memory Networks. Instead, we use RNN based network, which has been successfully used in many reading comprehension tasks BIBREF10 , BIBREF9 , BIBREF12 .", "WikiMovies dataset has two notable differences from many of the existing comprehension datasets, such as CNN and SQuAD BIBREF10 , BIBREF9 , BIBREF12 . First, with imperfect retrieval, the answer may not be present in the context. We handle this case by using the proposed mixture model. Second, there may be multiple answers to a question, such as a list of actors. We handle this by optimizing a sum of the cross-entropy loss over all possible answers.", "We also use attention sum architecture proposed by BIBREF10 , which has been shown to give high performance for comprehension tasks. In this approach, attention scores over the context entities are used as the output. We term this the attention distribution $p_{att}$ , defined over the entities in the context. 
The mixture model combines this distribution with another output probability distribution $p_{vocab}$ over all the entities in the vocabulary. The intuition behind this is that named entities (such as actors and directors) can be better handled by the attention part, since there are few global statistics available for these, and other entities (such as languages and genres) can be captured by vocabulary part, for which global statistics can be leveraged." ], [ "Let $\\mathcal {V}$ be the vocabulary consisting of all tokens in the corpus, and $\\mathcal {E}$ be the set of entities in the corpus The question is converted to a sequence of lower cased word ids, $(w_i) \\in \\mathcal {V}$ and a sequence of 0-1 flags for word capitalization, $(c_i) \\in \\lbrace 0,1\\rbrace $ . For each word position $i$ , we also associate an entity id if the i-th word is part of an entity, $e_i \\in \\mathcal {E}$ (see Figure 3 ). Then, the combined embedding of the i-th position is given by ", "$$x_i = W_w(w_i) + W_c(c_i) \\Vert W_e(e_i), \\hspace{7.22743pt} (i=1,\\ldots ,L_q), $$ (Eq. 12) ", "where $\\Vert $ is the concatenation of two vectors, $L_q$ is the number of words in a question $q$ , and $W_w, W_c$ and $W_e$ are embedding matrices. Note that if there are no entities at i-th position, $W_e(e_i)$ is set to zero. The context is composed of up to $M$ movie articles concatenated with a special separation symbol. The contexts are embedded in exactly the same way as questions, sharing the embedding matrices.", "To avoid overfitting, we use another technique called anonymization. We limit the number of columns of $W_e$ to a relatively small number, $n_e$ , and entity ids are mapped to one of $n_e$ columns randomly (without collision). The map is common for each question/context pair but randomized across pairs. The method is similar to the anonymization method used in CNN / Daily Mail datasets BIBREF8 . emergent:16 showed that such a procedure actually helps readers since it adds coreference information to the system.", "Next, the question embedding sequence $(x_i)$ is fed into a bidirectional GRU (BiGRU) BIBREF15 to obtain a fixed length vector $v$ ", "$$v = \\overrightarrow{h}_{q}(L_q) \\Vert \\overleftarrow{h}_{q}(0), $$ (Eq. 13) ", "where $\\overrightarrow{h}_{q}$ and $\\overleftarrow{h}_{q}$ are the final hidden states of forward and backward GRUs respectively.", "The context embedding sequence is fed into another BiGRU, to produce the output $H_c = [h_{c,1}, h_{c,2}, \\ldots h_{c,L_c}]$ , where $L_c$ is the length of the context. An attention score for each word position $i$ is given by ", "$$s_i \\propto \\exp ( v^T h_{c,i} ).$$ (Eq. 14) ", "The probability over the entities in the context is then given by ", "$$p_{att}(e) \\propto \\sum _{i \\in I(e, c)} s_i,$$ (Eq. 15) ", "where $I(e,c)$ is the set of word positions in the entity $e$ within the context $c$ .", "We next define the probability $p_{vocab}$ to be the probability over the complete set of entities in the corpus, given by ", "$$p_{vocab}(e) = {\\rm Softmax}(V u), $$ (Eq. 16) ", "where the vector $u$ is given by $u = \\sum _{i} s_i h_{c, i}$ . Each row of the matrix $V$ is the coefficient vector for an entity in the vocabulary. It is computed similar to Eq. ( 12 ). ", "$$V(e) = \\sum _{w \\in e} W_w(w) + \\sum _{c \\in e} W_c(c) \\Vert W_e(e). $$ (Eq. 
17) ", "The embedding matrices are shared between question and context.", "The final probability that an entity $e$ answers the question is given by the mixture $p(e) = (1-g) p_{att}(e) + g p_{vocab}(e)$ , with the mixture coefficient $g$ defined as ", "$$g = \\sigma (W_g g_0), \\hspace{7.22743pt} g_0 = v^T u \\Vert \\max V u.$$ (Eq. 18) ", "The two components of $g_0$ correspond to the attention part and vocabulary part respectively. Depending on the strength of each, the value of $g$ may be high or low.", "Since there may be multiple answers for a question, we optimize the sum of the probabilities: ", "$$\\textrm {loss} = - \\log \\Big ( \\sum _{a \\in A_j} p(a|q_j,c_j) \\Big ) $$ (Eq. 19) ", "Our overall model is displayed in Figure 4 .", "We note that KV-MemNN BIBREF6 employs “Title encoding” technique, which uses the prior knowledge that movie titles are often in answers. BIBREF6 showed that this technique substantially improves model performance by over 7% for WikiMovies-WE dataset. In our work, on the other hand, we do not use any data specific feature engineering." ], [ "Our QA system answers questions by two steps as in Figure 1 . Accurate retrieval of relevant articles is essential for good performance of the comprehension model, and in this section we discuss three approaches for it. We use up to $M$ articles as context. A baseline approach for retrieval is to select articles which contain at least one entity also present in the question. We identify maximal intervals of words that match entities in questions and articles. Capitalization of words is ignored in this step because some words in the questions are not properly capitalized. Out of these (say $N$ ) articles we can randomly select $M$ . We call this approach (r0). For some movie titles, however, this method retrieves too many articles that are actually not related to questions. For example, there is a movie titled “Love Story” which accidentally picks up the words “love story”. This degrades the performance of the comprehension step. Hence, we describe two more retrieval models – (1) a dataset specific hand-crafted approach, and (2) a general learning based approach." ], [ "In this approach, the $N$ articles retrieved using entity matching are assigned scores based on certain heuristics. If the movie title matches an entity in the question, the article is given a high score, since it is very likely to be relevant. A similar heuristic was also employed in BIBREF6 . In addition, the number of matching entities is also used to score each article. The top $M$ articles based on these scores are selected for comprehension. This hand-crafted approach already gives strong performance for the WikiMovies dataset, however the heuristic for matching article titles may not be appropriate for other QA tasks. Hence we also study a general learning based approach for retrieval." ], [ "The learning model for retrieval is trained by an oracle constructed using distant supervision. Using the answer labels in the training set, we can find appropriate articles that include the information requested in the question. For example, for x_to_movie question type, the answer movie articles are the correct articles to be retrieved. On the other hand, for questions in movie_to_x type, the movie in the question should be retrieved. 
Having collected the labels, we train a retrieval model for classifying a question and article pair as relevant or not relevant.", "Figure 5 gives an overview of the model, which uses a Word Level Attention (WLA) mechanism. First, the question and article are embedded into vector sequences, using the same method as the comprehension model. We do not use anonymization here, to retain simplicity. Otherwise, the anonymization procedure would have to be repeated several times for a potentially large collection of documents. These vector sequences are next fed to a Bi-GRU, to produce the outputs $v$ (for the question) and $H_c$ (for the document), similar to the previous section.", "To classify the article as relevant or not, we introduce a novel attention mechanism to compute the score, ", "$$s = \\sum _{i} ((w \\tilde{v} + b)^T \\tilde{h}_{c,i})^4$$ (Eq. 25) ", "Each term in the sum above corresponds to the match between the query representation and a token in the context. This is passed through a 4-th order non-linearity so that relevant tokens are emphasized more. Next, we compute the probability that the article is relevant using a sigmoid: ", "$$o = \\sigma (w^{\\prime } s + b^{\\prime })$$ (Eq. 27) ", "In the above, $\\tilde{x}$ is the normalized version (by L2-norm) of a vector $x$, and $w, b, w^{\\prime }, b^{\\prime }$ are scalar learnable parameters that control the scales." ], [ "We evaluate the comprehension model on both WikiMovies-FL and WikiMovies-WE datasets. The performance is evaluated using the accuracy of the top hit (single answer) over all possible answers (all entities). This is called the hits@1 metric.", "For the comprehension model, we use embedding dimension 100 and GRU dimension 128. We use up to $M=10$ retrieved articles as context. The order of the articles is randomly shuffled for each training instance to prevent over-fitting. The size of the anonymized entity set $n_e$ is 600, since in most cases the number of entities in a question and context pair is less than 600.", "For training the comprehension model, the Adam BIBREF16 optimization rule is used with batch size 32. We stop the optimization based on dev-set performance, and training takes around 10 epochs. For the WikiMovies-FL (resp. WikiMovies-WE) dataset, each epoch took approximately 4 (resp. 2) hours on an Nvidia GTX1080 GPU.", "For training the retrieval model R2, we use a binary cross entropy objective. Since most articles are not relevant to a question, the ratio of positive and negative samples is tuned to $1:10$. Each epoch for training the retrieval model takes about 40 minutes on an Nvidia GTX1080 GPU." ], [ "We evaluate the retrieval models based on precision and recall of the oracle articles. The evaluation is done on the test set. R@k is the ratio of cases where the highest ranked oracle article is in the top k retrieved articles. P@k is the ratio of oracle articles which are in the top k retrieved results. These numbers are summarized in Table 3. We can see that both (r1) and (R2) significantly outperform (r0), with (R2) doing slightly better. We emphasize that (R2) uses no domain specific knowledge, and can be readily applied to other datasets where articles may not be about specific types of entities.", "We have also tested simpler models based on the inner product of question and article vectors.
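Before turning to the simpler inner-product baselines, here is a PyTorch sketch of the WLA relevance score defined in Eqs. (25) and (27) above. Parameter initialisation, batching and training are omitted, and treating the scalar offset b as an element-wise addition is our reading of the equation rather than something stated in the text.

```python
import torch
import torch.nn.functional as F

def wla_score(v, H_c, w, b, w_out, b_out):
    """v: (d,) question vector; H_c: (L, d) BiGRU states of the article.
    w, b, w_out, b_out: scalar learnable parameters.
    Returns the probability that the article is relevant to the question."""
    v_n = F.normalize(v, dim=0)                  # L2-normalised question vector
    H_n = F.normalize(H_c, dim=1)                # L2-normalised token states
    s = torch.sum((H_n @ (w * v_n + b)) ** 4)    # Eq. (25): 4th-order match scores, summed
    return torch.sigmoid(w_out * s + b_out)      # Eq. (27): relevance probability
```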
In these models, a question $q_j$ and article $d_k$ are converted to vectors $\\Phi (q_j), \\Psi (d_k)$, and the relevance score is given by their inner product: ", "$${\\rm score}(j,k) = \\Phi (q_j)^T \\Psi (d_k).$$ (Eq. 32) ", "From a computational point of view, those models are attractive because we can compute the article vectors offline, and do not need to compute the attention over words in the article. Maximum Inner Product Search algorithms may also be utilized here BIBREF17 , BIBREF18 . However, as shown in the upper block of Table 4, those models perform much worse in terms of scoring. The “Sum of Hidden States” and “Query Free Attention” models are similar to the WLA model, using BiGRUs for question and article. In both of those models, $\\Phi (q)$ is defined in the same way as in the WLA model, Eq. ( 13 ). For the “Sum of Hidden States” model, $\\Psi (d)$ is given by the sum of BiGRU hidden states. This is the same as the proposed model with the fourth-order non-linearity of WLA replaced by a first-order (linear) one. For the “Query Free Attention” model, $\\Psi (d)$ is given by the sum of BiGRU hidden states.", "We compare our model and several ablations with the KV-MemNN model. Table 5 shows the average performance across three evaluations. The (V) “Vocabulary Model” and (A) “Attention Model” are simplified versions of the full (AV) “Attention and Vocabulary Model”, using only $p_{vocab}$ and $p_{att}$, respectively. Using a mixture of $p_{att}$ and $p_{vocab}$ gives the best performance.", "Interestingly, for the WE dataset the Attention model works better. For the FL dataset, on the other hand, it is often impossible to select the answer from the context, and hence the Vocab model works better.", "The number of entities in the full vocabulary is 71K, and some of these are rare. Our intuition to use the Vocab model was to only use it for common entities, and hence we next constructed a smaller vocabulary consisting of all entities which appear at least 10 times in the corpus. This results in a subset vocabulary $\\mathcal {V}_S$ of 2400 entities. Using this vocabulary in the mixture model (AsV) further improves the performance.", "Table 5 also shows a comparison between (r0), (r1), and (R2) in terms of the overall task performance. We can see that improving the quality of retrieved articles benefits the downstream comprehension performance. In line with the results of the previous section, (r1) and (R2) significantly outperform (r0). Among (r1) and (R2), (R2) performs slightly better." ], [ "Table 6 shows the impact of anonymization of entities and shuffling of training articles before the comprehension step, described in Section \"Comprehension Model\".", "Shuffling the context articles before concatenating them works as a data augmentation technique. Entity anonymization helps because without it each entity has one embedding. Since most of the entities appear only a few times in the articles, these embeddings may not be properly trained. Instead, the anonymous embedding vectors are trained to distinguish different entities. This technique is motivated by a similar procedure used in the construction of CNN / Daily Mail BIBREF8 , and discussed in detail in BIBREF19 ." ], [ "Figure 6 shows a test example from the WikiMovies-FL test data. In this case, even though the answers “Hindi” and “English” are not in the context, they are correctly estimated from $p_{vocab}$. Note the high value of $g$ in this case. Figure 7 shows another example of how the mixture model works.
Here the answer is successfully selected from the document instead of the vocabulary. Note the low value of $g$ in this case." ], [ "Table 7 shows the comparison for each category of questions between our model and KV-MemNN for the WikiMovies-WE dataset. We can see that the performance improvements in the movie_to_x category are relatively large. The KV-MemNN model has a dataset specific “Title encoding” feature which helps the model on x_to_movie question types. However, without this feature, performance in other categories is poor." ], [ "The benefit of the mixture model comes from the fact that $p_{att}$ works well for some question types, while $p_{vocab}$ works well for others. Table 8 shows how often for each category $p_{vocab}$ is used ($g > 0.5$) in the AsV model. For question types “Movie to Language” and “Movie to Genre” (the so-called “choice questions”) the number of possible answers is small. In this case, even if the answer can be found in the context, it is easier for the model to select the answer from the external vocabulary, which encodes global statistics about the entities. For other “free questions”, depending on the question type, one approach is better than the other. Our model is able to successfully estimate the latent category and switch the model type by controlling the coefficient $g$." ], [ "hierarchical:16 solve the QA problem by selecting a sentence in the document. They show that joint training of selection and comprehension slightly improves the performance. In our case, joint training is much harder because of the large number of movie articles. Hence we introduce a two-step retrieval and comprehension approach.", "Recently architecture:16 proposed a framework to use the performance on a downstream task (e.g. comprehension) as a signal to guide the learning of a neural network which determines the input to the downstream task (e.g. retrieval). This motivates us to introduce a neural network based approach for both retrieval and comprehension, since in this case the retrieval step can be directly trained to maximize the downstream performance.", "In the context of language modeling, the idea of combining two output probabilities is given in BIBREF20; however, our equation to compute the mixture coefficient is slightly different. More recently, ahn2016neural used a mixture model to predict the next word from either the entire vocabulary, or a set of Knowledge Base facts associated with the text. In this work, we present the first application of such a mixture model to reading comprehension." ], [ "We have developed a QA system using a two-step retrieval and comprehension approach. The comprehension step uses a mixture model to achieve state-of-the-art performance on the WikiMovies dataset, improving previous work by a significant margin.", "We would like to emphasize that our approach has minimal heuristics and does not use dataset specific feature engineering. Efficient retrieval while maintaining representation variation is a challenging problem. While there has been a lot of research on comprehension, little focus has been given to designing neural network based retrieval models. We present a simple such model, and emphasize the importance of this direction of research." ] ] }
{ "question": [ "How can a neural model be used for a retrieval if the input is the entire Wikipedia?" ], "question_id": [ "73e715e485942859e1db75bfb5f35f1d5eb79d2e" ], "nlp_background": [ "five" ], "topic_background": [ "familiar" ], "paper_read": [ "no" ], "search_query": [ "question" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Using the answer labels in the training set, we can find appropriate articles that include the information requested in the question." ], "yes_no": null, "free_form_answer": "", "evidence": [ "The learning model for retrieval is trained by an oracle constructed using distant supervision. Using the answer labels in the training set, we can find appropriate articles that include the information requested in the question. For example, for x_to_movie question type, the answer movie articles are the correct articles to be retrieved. On the other hand, for questions in movie_to_x type, the movie in the question should be retrieved. Having collected the labels, we train a retrieval model for classifying a question and article pair as relevant or not relevant.", "Figure 5 gives an overview of the model, which uses a Word Level Attention (WLA) mechanism. First, the question and article are embedded into vector sequences, using the same method as the comprehension model. We do not use anonymization here, to retain simplicity. Otherwise, the anonymization procedure would have to be repeated several times for a potentially large collection of documents. These vector sequences are next fed to a Bi-GRU, to produce the outputs $v$ (for the question) and $H_c$ (for the document) similar to the previous section." ], "highlighted_evidence": [ "The learning model for retrieval is trained by an oracle constructed using distant supervision. Using the answer labels in the training set, we can find appropriate articles that include the information requested in the question.", "First, the question and article are embedded into vector sequences, using the same method as the comprehension model. We do not use anonymization here, to retain simplicity. Otherwise, the anonymization procedure would have to be repeated several times for a potentially large collection of documents. These vector sequences are next fed to a Bi-GRU, to produce the outputs $v$ (for the question) and $H_c$ (for the document) similar to the previous section." ] } ], "annotation_id": [ "19c585c40215021d5b5084f7bf4bdfa00b1ca013" ], "worker_id": [ "101dbdd2108b3e676061cb693826f0959b47891b" ] } ] }
{ "caption": [ "Figure 1: Overview of a retrieval + comprehension (r+c) QA system. First, movie articles relevant to a question are retrieved. Then, the retrieved articles along with the question are processed to obtain an answer.", "Figure 2: Example of comprehension step from WIKIMOVIES dataset. Top: answer is a span of text in article. Bottom: answer is not explicitly written in article.", "Table 1: Example of questions and answers.", "Table 2: Basic statistics of WIKIMOVIES dataset.", "Figure 3: Example of embedded vectors for a question “who directed the movie Blade Runner?”", "Figure 4: Visualization of our model. A question is encoded to a vector by a BiGRU. With this vector, attention is computed over another BiGRU. Output probabilities patt, pvocab and the mixture coefficient g are computed from those attentions and BiGRU states.", "Figure 5: Overview of retrieval model. Similar to the comprehension model, a question is encoded to a fixed length vector. Attention is computed over the words of the movie article.", "Table 5: Performance (hits@1) comparison over different models and datasets.", "Table 3: Performance of retrieval methods. (WikiMovies-WE)", "Figure 7: Model behavior of a question “Martin Zandvliet directed which movies?” Martin Zandvliet is a writer of Teddy Bear, not a director.", "Table 7: Hits@1 scores for each question type. Our model gets > 80% in all cases but two.", "Table 8: Ratio of the gate being open. (g > 0.5) If the answer is named entity, the model need to select answer from text. Therefore, g = 0. Bold font indicates winning model. Vocabulary Only model wins when g is high." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "2-Table1-1.png", "3-Table2-1.png", "4-Figure3-1.png", "4-Figure4-1.png", "5-Figure5-1.png", "6-Table5-1.png", "6-Table3-1.png", "7-Figure7-1.png", "8-Table7-1.png", "8-Table8-1.png" ] }
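The upper-block baselines discussed in the paper above score a question–article pair as the inner product ${\rm score}(j,k) = \Phi (q_j)^T \Psi (d_k)$, which allows all article vectors to be computed offline and searched with Maximum Inner Product Search. The NumPy sketch below shows that offline/online split; the encoder callables and the brute-force top-k search are placeholders (a production system would typically use an approximate MIPS index instead).

```python
import numpy as np

def precompute_article_vectors(articles, encode_article):
    # Offline: encode every article once; encode_article stands in for Psi(d).
    return np.stack([encode_article(d) for d in articles])     # (num_articles, dim)

def retrieve_top_k(question, article_matrix, encode_question, k=5):
    # Online: one question encoding plus a single matrix-vector product.
    q = encode_question(question)                               # Phi(q), shape (dim,)
    scores = article_matrix @ q                                 # inner products, (num_articles,)
    top = np.argsort(-scores)[:k]                                # brute-force stand-in for MIPS
    return top, scores[top]
```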
1908.06138
UDS-DFKI Submission to the WMT2019 Similar Language Translation Shared Task
In this paper we present the UDS-DFKI system submitted to the Similar Language Translation shared task at WMT 2019. The first edition of this shared task featured data from three pairs of similar languages: Czech and Polish, Hindi and Nepali, and Portuguese and Spanish. Participants could choose to participate in any of these three tracks and submit system outputs in any translation direction. We report the results obtained by our system in translating from Czech to Polish and comment on the impact of out-of-domain test data in the performance of our system. UDS-DFKI achieved competitive performance ranking second among ten teams in Czech to Polish translation.
{ "section_name": [ "Introduction", "Related Work", "Data", "Data ::: Pre-processing", "System Architecture - The Transference Model", "Experiments", "Experiments ::: Experiment Setup", "Experiments ::: Hyper-parameter Setup", "Results", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "The shared tasks organized annually at WMT provide important benchmarks used in the MT community. Most of these shared tasks include English data, which contributes to making English the most resource-rich language in MT and NLP. In the most popular WMT shared task, the News task, for example, MT systems have been trained to translate texts from and to English BIBREF0, BIBREF1.", "This year, we have observed a shift in the dominant role that English plays in the WMT shared tasks. The News task featured for the first time two language pairs which did not include English: German-Czech and French-German. In addition to that, the Similar Language Translation task was organized for the first time at WMT 2019 with the purpose of evaluating the performance of MT systems on three pairs of similar languages from three different language families: Ibero-Romance, Indo-Aryan, and Slavic.", "The Similar Language Translation BIBREF2 task provided participants with training, development, and testing data from the following language pairs: Spanish - Portuguese (Romance languages), Czech - Polish (Slavic languages), and Hindi - Nepali (Indo-Aryan languages). Participants could submit system outputs for any of the three language pairs in any direction. The shared task attracted a good number of participants and the performance of all entries was evaluated using popular MT automatic evaluation metrics, namely BLEU BIBREF3 and TER BIBREF4.", "In this paper we describe the UDS-DFKI system submitted to the WMT 2019 Similar Language Translation task. The system achieved competitive performance and ranked second among ten entries in Czech to Polish translation in terms of BLEU score." ], [ "With the widespread use of MT technology and the commercial and academic success of NMT, there has been more interest in training systems to translate between languages other than English BIBREF5. One reason for this is the growing need for direct translation between pairs of similar languages, and to a lesser extent language varieties, without the use of English as a pivot language. The main challenge is to overcome the limited availability of parallel data by taking advantage of the similarity between languages. Studies have been published on translating between similar languages (e.g. Catalan - Spanish BIBREF5) and language varieties such as European and Brazilian Portuguese BIBREF6, BIBREF7. The study by lakew2018neural tackles training MT systems to translate between language varieties, namely European–Brazilian Portuguese and European–Canadian French, as well as between two pairs of similar languages, Croatian–Serbian and Indonesian–Malay.", "Processing similar languages and language varieties has attracted attention not only in the MT community but in NLP in general. This is evidenced by a number of research papers published in the last few years and the recent iterations of the VarDial evaluation campaign, which featured multiple shared tasks on topics such as dialect detection, morphosyntactic tagging, cross-lingual parsing, and cross-lingual morphological analysis BIBREF8, BIBREF9." ], [ "We used the Czech–Polish dataset provided by the WMT 2019 Similar Language Translation task organizers for our experiments. 
The released parallel dataset consists of out-of-domain (or general-domain) data only, and it differs substantially from the released development set, which is part of a TED corpus. The parallel data includes Europarl v9, Wiki-titles v1, and JRC-Acquis. We combine all the released data and prepare a large out-of-domain dataset." ], [ "The out-of-domain data is noisy for our purposes, so we apply methods for cleaning. We perform the following two steps: (i) we use the cleaning process described in Pal:2015:WMT, and (ii) we execute the Moses BIBREF10 corpus cleaning scripts with minimum and maximum number of tokens set to 1 and 100, respectively. After cleaning, we perform punctuation normalization, and then we use the Moses tokenizer to tokenize the out-of-domain corpus with the `no-escape' option. Finally, we apply true-casing.", "The cleaned version of the released data, i.e., the General corpus containing 1,394,319 sentences, is sorted based on the score in Equation DISPLAY_FORM2. Thereafter, we split the entire dataset (1,394,319 sentences) into two sets; we use the first 1,000 for validation and the remaining as training data. The released development set (Dev) is used as test data for our experiment. It should be noted that we exclude the 1,000 sentences from the General corpus which are ranked at the top (i.e., as more in-domain like) during the data selection process.", "We prepare two parallel training sets from the aforementioned training data: (i) transference500K (presented next), which contains 500,000 parallel sentences collected through the data selection method BIBREF11 and which are very similar to the in-domain data (in our case, the development set), and (ii) transferenceALL, utilizing all the released out-of-domain data sorted by Equation DISPLAY_FORM2.", "The transference500K training set is prepared using in-domain (development set) bilingual cross-entropy difference for data selection, as described in Axelrod:2011. The difference in cross-entropy is computed based on two language models (LM): a domain-specific LM ($lm_{i}$) is estimated from the in-domain corpus (containing 2050 sentences) and the out-of-domain LM ($lm_{o}$) is estimated from the General corpus. We rank the General corpus by assigning a score to each of the individual sentences which is the sum of the cross-entropy ($H$) differences. For a $j^{th}$ sentence pair ${src}_j$–${trg}_j$, the score is calculated based on Equation DISPLAY_FORM2.", "" ], [ "Our transference model extends the original transformer model to multi-encoder based transformer architecture. The transformer architecture BIBREF12 is built solely upon such attention mechanisms completely replacing recurrence and convolutions. The transformer uses positional encoding to encode the input and output sequences, and computes both self- and cross-attention through so-called multi-head attentions, which are facilitated by parallelization. We use multi-head attention to jointly attend to information at different positions from different representation subspaces.", "The first encoder ($enc_1$) of our model encodes word form information of the source ($f_w$), and a second sub-encoder ($enc_2$) encodes sub-word (byte-pair-encoding) information of the source ($f_s$). Additionally, a second encoder ($enc_{1 \\rightarrow 2}$) takes the encoded representation from $enc_1$, combines this with the self-attention-based encoding of $f_s$ ($enc_2$), and prepares a representation for the decoder ($dec_{e}$) via cross-attention. 
Our second encoder ($enc_{1 \\rightarrow 2}$) can be viewed as the decoding block of a transformer-based NMT model, however without masking. The intuition behind our architecture is to generate better representations via both self- and cross-attention and to further facilitate the learning capacity of the feed-forward layer in the decoder block. In our transference model, one self-attended encoder for $f_w$, $\\mathbf {f_w}$ = $(w_1, w_2, \\ldots , w_k)$, returns a sequence of continuous representations, $enc_{1}$, and a second self-attended sub-encoder for $f_s$, $\\mathbf {f_s}$ = $(s_1, s_2, \\ldots , s_l)$, returns another sequence of continuous representations, $enc_{2}$. Self-attention at this point provides the advantage of aggregating information from all of the words, including $f_w$ and $f_s$, and successively generates a new representation per word informed by the entire $f_w$ and $f_s$ context. The internal $enc_{2}$ representation performs cross-attention over $enc_{1}$ and prepares a final representation ($enc_{1 \\rightarrow 2}$) for the decoder ($dec_{e}$). The decoder generates the output $\\mathbf {e}$ = $(e_1, e_2, \\ldots , e_n)$ in sequence, one word at a time from left to right, by attending to previously generated words as well as the final representations ($enc_{1 \\rightarrow 2}$) generated by the encoder.", "We use the scaled dot-product attention mechanism (like Vaswani:NIPS2017) for both self- and cross-attention, as defined in Equation DISPLAY_FORM3, where $Q$, $K$ and $V$ are query, key and value, respectively, and $d_k$ is the dimension of $K$.", "The multi-head attention mechanism in the transformer network maps the Q, K, and V matrices by using different linear projections. Then $h$ parallel heads are employed to focus on different parts in V. The $i^{th}$ multi-head attention is denoted by $head_i$ in Equation DISPLAY_FORM4. $head_i$ is linearly learned by three projection parameter matrices: $W_i^Q,W_i^K \\in R^{d_{model} \\times d_k}$, $W_i^V \\in R^{d_{model} \\times d_v}$; where $d_k = d_v = d_{model}/h$, and $d_{model}$ is the number of hidden units of our network.", "Finally, all the vectors produced by the parallel heads are concatenated and linearly projected to form a single vector, called the multi-head attention ($M_{att}$) (cf. Equation DISPLAY_FORM5). Here the dimension of the learned weight matrix $W^O$ is $R^{d_{model} \\times d_{model}}$.", "" ], [ "We explore our transference model – a two-encoder based transformer architecture – in the CS-PL similar language translation task." ], [ "For transferenceALL, we initially train on the complete out-of-domain dataset (General). The General data is sorted based on its in-domain similarity as described in Equation DISPLAY_FORM2.", "The transferenceALL models are then fine-tuned towards the 500K (in-domain-like) data. Finally, we perform checkpoint averaging using the 8 best checkpoints. We report the results on the provided development set, which we use as a test set before the submission. Additionally, we also report the official test set result.", "To handle out-of-vocabulary words and to reduce the vocabulary size, instead of considering words, we consider subword units BIBREF13 by using byte-pair encoding (BPE). In the preprocessing step, instead of learning an explicit mapping between BPEs in Czech (CS) and Polish (PL), we define BPE tokens by jointly processing all parallel data. Thus, CS and PL derive a single BPE vocabulary (a possible way of building such a joint vocabulary is sketched below). 
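The paper does not say which tool was used to learn the joint BPE vocabulary; the snippet below is just one possible way to do it with the sentencepiece library, offered as an illustration. The file names and the 28K vocabulary size (taken from the hyper-parameter section that follows) are assumptions.

```python
import sentencepiece as spm

# Concatenate the Czech and Polish sides of the cleaned parallel data into a single
# text file (assumed name: train.cs-pl), then learn one joint BPE model over it.
spm.SentencePieceTrainer.train(
    input="train.cs-pl",        # joint CS+PL training text (hypothetical file name)
    model_prefix="cspl_bpe",
    model_type="bpe",
    vocab_size=28000,           # matches the 28K vocabulary reported in the setup below
)

sp = spm.SentencePieceProcessor(model_file="cspl_bpe.model")
print(sp.encode("Dobrý den", out_type=str))  # CS and PL share the resulting subword units
```

Because the two languages share much of their orthography, a single model learned this way yields a large overlap in subword units, which is the effect described in the surrounding paragraph.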
Since CS and PL are similar languages, they naturally share a good fraction of BPE tokens, which reduces the vocabulary size.", "We pass the word-level information to the first encoder and the BPE information to the second one. On the decoder side of the transference model we pass only BPE text.", "We evaluate our approach on the development data, which is used as a test case before submission. We use BLEU BIBREF3 and TER BIBREF4." ], [ "We follow a similar hyper-parameter setup for all reported systems. All encoders, and the decoder, are composed of a stack of $N_{fw} = N_{fs} = N_{es} = 6$ identical layers followed by layer normalization. Each layer again consists of two sub-layers and a residual connection BIBREF14 around each of the two sub-layers. We apply dropout BIBREF15 to the output of each sub-layer, before it is added to the sub-layer input and normalized. Furthermore, dropout is applied to the sums of the word embeddings and the corresponding positional encodings in both encoders as well as the decoder stacks.", "We set all dropout values in the network to 0.1. During training, we employ label smoothing with value $\\epsilon _{ls}$ = 0.1. The output dimension produced by all sub-layers and embedding layers is $d_{model} = 512$. Each encoder and decoder layer contains a fully connected feed-forward network ($FFN$) having dimensionality of $d_{model} = 512$ for the input and output and dimensionality of $d_{ff} = 2048$ for the inner layers. For the scaled dot-product attention, the input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. As multi-head attention parameters, we employ $h = 8$ parallel attention layers, or heads. For each of these we use a dimensionality of $d_k = d_v = d_{model}/h = 64$. For optimization, we use the Adam optimizer BIBREF16 with $\\beta _1 = 0.9$, $\\beta _2 = 0.98$ and $\\epsilon = 10^{-9}$.", "The learning rate is varied throughout the training process: it increases for the first $warmup_{steps} = 8000$ training steps and decreases afterwards, as described in BIBREF12. All remaining hyper-parameters are set analogously to those of the transformer's base model. At training time, the batch size is set to 25K tokens, with a maximum sentence length of 256 subwords, and a vocabulary size of 28K. After each epoch, the training data is shuffled. After finishing training, we save the 8 best checkpoints, which are written at each epoch. Finally, we use a single model obtained by averaging these 8 checkpoints. During decoding, we perform beam search with a beam size of 4. We use shared embeddings between CS and PL in all our experiments." ], [ "We present the results obtained by our system in Table TABREF8.", "Our system fine-tuned on the development set provides a significant performance improvement over the generic model. We found a +12.9 absolute BLEU point improvement over the generic model. A similar improvement is also observed in terms of TER (-16.9 absolute). It is to be noted that our generic model is trained solely on the clean version of the training data.", "Before submission, we performed punctuation normalization, unicode normalization, and detokenization for the run.", "In Table TABREF9 we present the ranking of the competition provided by the shared task organizers. Ten entries were submitted by five teams and are ordered by BLEU score. TER is reported for all submissions which achieved a BLEU score greater than 5.0. 
The type column specifies the type of system, whether it is a Primary (P) or Contrastive (C) entry.", "Our system was ranked second in the competition only 0.3 BLEU points behind the winning team UPC-TALP. The relative low BLEU and high TER scores obtained by all teams are due to out-of-domain data provided in the competition which made the task equally challenging to all participants." ], [ "This paper presented the UDS-DFKI system submitted to the Similar Language Translation shared task at WMT 2019. We presented the results obtained by our system in translating from Czech to Polish. Our system achieved competitive performance ranking second among ten teams in the competition in terms of BLEU score. The fact that out-of-domain data was provided by the organizers resulted in a challenging but interesting scenario for all participants.", "In future work, we would like to investigate how effective the proposed hypothesis (i.e., word-BPE level information) is in similar language translation. Furthermore, we would like to explore the similarity between these two languages (and the other two language pairs in the competition) in more detail by training models that can best capture morphological differences between them. During such competitions, this is not always possible due to time constraints." ], [ "This research was funded in part by the German Research Foundation (DFG) under grant number GE 2819/2-1 (project MMPE) and the German Federal Ministry of Education and Research (BMBF) under funding code 01IW17001 (project Deeplee). The responsibility for this publication lies with the authors. We would like to thank the anonymous WMT reviewers for their valuable input, and the organizers of the shared task." ] ] }
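Equation DISPLAY_FORM2, referenced repeatedly in the Data and Experiments sections above, did not survive extraction. The data selection described there points to the bilingual cross-entropy difference of Axelrod:2011, whose standard form is reconstructed below; treat this as an assumed reconstruction rather than the paper's exact equation.

```latex
% Bilingual cross-entropy difference for a sentence pair (src_j, trg_j);
% lower scores indicate sentence pairs that look more in-domain.
\mathrm{score}(src_j, trg_j) =
    \left[ H_{lm_i}(src_j) - H_{lm_o}(src_j) \right]
  + \left[ H_{lm_i}(trg_j) - H_{lm_o}(trg_j) \right]
```

Here $H_{lm_i}$ and $H_{lm_o}$ are the cross-entropies assigned by the in-domain and out-of-domain language models, respectively; sorting the General corpus by this score in ascending order puts the most in-domain-like sentence pairs first, which is how the transference500K subset can be carved out.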
{ "question": [ "Which algorithm is used in the UDS-DFKI system?", "Does the use of out-of-domain data improve the performance of the method?" ], "question_id": [ "12391aab31c899bac0ecd7238c111cb73723a6b7", "8b43201e7e648c670c02e16ba189230820879228" ], "nlp_background": [ "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no" ], "search_query": [ "Spanish", "Spanish" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Our transference model extends the original transformer model to multi-encoder based transformer architecture. The transformer architecture BIBREF12 is built solely upon such attention mechanisms completely replacing recurrence and convolutions. " ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our transference model extends the original transformer model to multi-encoder based transformer architecture. The transformer architecture BIBREF12 is built solely upon such attention mechanisms completely replacing recurrence and convolutions. The transformer uses positional encoding to encode the input and output sequences, and computes both self- and cross-attention through so-called multi-head attentions, which are facilitated by parallelization. We use multi-head attention to jointly attend to information at different positions from different representation subspaces." ], "highlighted_evidence": [ "Our transference model extends the original transformer model to multi-encoder based transformer architecture. The transformer architecture BIBREF12 is built solely upon such attention mechanisms completely replacing recurrence and convolutions. The transformer uses positional encoding to encode the input and output sequences, and computes both self- and cross-attention through so-called multi-head attentions, which are facilitated by parallelization. We use multi-head attention to jointly attend to information at different positions from different representation subspaces." ] } ], "annotation_id": [ "528d88233ee6b4e23d10e8d614eaebfe86d3839b" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "Our system was ranked second in the competition only 0.3 BLEU points behind the winning team UPC-TALP. The relative low BLEU and high TER scores obtained by all teams are due to out-of-domain data provided in the competition which made the task equally challenging to all participants.", "This paper presented the UDS-DFKI system submitted to the Similar Language Translation shared task at WMT 2019. We presented the results obtained by our system in translating from Czech to Polish. Our system achieved competitive performance ranking second among ten teams in the competition in terms of BLEU score. The fact that out-of-domain data was provided by the organizers resulted in a challenging but interesting scenario for all participants." ], "highlighted_evidence": [ "Our system was ranked second in the competition only 0.3 BLEU points behind the winning team UPC-TALP. The relative low BLEU and high TER scores obtained by all teams are due to out-of-domain data provided in the competition which made the task equally challenging to all participants.", "The fact that out-of-domain data was provided by the organizers resulted in a challenging but interesting scenario for all participants." 
] } ], "annotation_id": [ "19f0d23f95a5093a30d224badec456011be9934e" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ] }
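The evidence quoted above describes the transference model as a multi-encoder transformer built on scaled dot-product and multi-head attention (Equations DISPLAY_FORM3–DISPLAY_FORM5 in the paper, whose rendered forms are missing here). As a concrete reference, the following NumPy sketch spells out those two operations in the form given by Vaswani et al.; the per-head loop and explicit weight lists are simplifications for illustration, not the authors' implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def multi_head_attention(Q, K, V, W_q, W_k, W_v, W_o, h=8):
    # head_i = Attention(Q W_q[i], K W_k[i], V W_v[i]); the h heads are concatenated
    # and projected with W_o of shape (d_model, d_model).
    heads = [scaled_dot_product_attention(Q @ W_q[i], K @ W_k[i], V @ W_v[i])
             for i in range(h)]
    return np.concatenate(heads, axis=-1) @ W_o
```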
{ "caption": [ "Table 2: Rank table for Czech to Polish Translation", "Table 1: Results for CS–PL Translation; * averaging 8 best checkpoints." ], "file": [ "4-Table2-1.png", "4-Table1-1.png" ] }
1801.09030
Exploration on Generating Traditional Chinese Medicine Prescriptions from Symptoms with an End-to-End Approach
Traditional Chinese Medicine (TCM) is an influential form of medical treatment in China and surrounding areas. In this paper, we propose a TCM prescription generation task that aims to automatically generate a herbal medicine prescription based on textual symptom descriptions. Sequence-tosequence (seq2seq) model has been successful in dealing with sequence generation tasks. We explore a potential end-to-end solution to the TCM prescription generation task using seq2seq models. However, experiments show that directly applying seq2seq model leads to unfruitful results due to the repetition problem. To solve the problem, we propose a novel decoder with coverage mechanism and a novel soft loss function. The experimental results demonstrate the effectiveness of the proposed approach. Judged by professors who excel in TCM, the generated prescriptions are rated 7.3 out of 10. It shows that the model can indeed help with the prescribing procedure in real life.
{ "section_name": [ "Introduction", "Related Work", "Methodology", "Task Definition", "Basic Encoder-Decoder Model", "Coverage Mechanism", "Soft Loss Function", "Dataset Construction", "Experiment Settings", "Proposed Baseline", "Human Evaluation", "Automatic Evaluation Results", "Case Study", "Conclusion" ], "paragraphs": [ [ "Traditional Chinese Medicine (TCM) is one of the most important forms of medical treatment in China and the surrounding areas. TCM has accumulated large quantities of documentation and therapy records in the long history of development. Prescriptions consisting of herbal medication are the most important form of TCM treatment. TCM practitioners prescribe according to a patient's symptoms that are observed and analyzed by the practitioners themselves instead of using medical equipment, e.g., the CT. The patient takes the decoction made out of the herbal medication in the prescription. A complete prescription includes the composition of herbs, the proportion of herbs, the preparation method and the doses of the decoction. In this work, we focus on the composition part of the prescription, which is the most essential part of the prescription.", "During the long history of TCM, there has been a number of therapy records or treatment guidelines in the TCM classics composed by outstanding TCM researchers and practitioners. In real life, TCM practitioners often take these classical records for reference when prescribing for the patient, which inspires us to design a model that can automatically generate prescriptions by learning from these classics. It also needs to be noted that due to the issues in actual practice, the objective of this work is to generate candidate prescriptions to facilitate the prescribing procedure instead of substituting the human practitioners completely. An example of TCM prescription is shown in Table 1 . The herbs in the prescription are organized in a weak order. By “weak order”, we mean that the effect of the herbs are not influenced by the order. However, the order of the herbs reflects the way of thinking when constructing the prescription. Therefore, the herbs are connected to each other, and the most important ones are usually listed first.", "Due to the lack of digitalization and formalization, TCM has not attracted sufficient attention in the artificial intelligence community. To facilitate the studies on automatic TCM prescription generation, we collect and clean a large number of prescriptions as well as their corresponding symptom descriptions from the Internet.", "Inspired by the great success of natural language generation tasks like neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 , abstractive summarization BIBREF3 , generative question answering BIBREF4 , and neural dialogue response generation BIBREF5 , BIBREF6 , we propose to adopt the end-to-end paradigm, mainly the sequence to sequence model, to tackle the task of generating TCM prescriptions based on textual symptom descriptions.", "The sequence to sequence model (seq2seq) consists of an encoder that encodes the input sequence and a decoder that generates the output sequence. The success in the language generation tasks indicates that the seq2seq model can learn the semantic relation between the output sequence and the input sequence quite well. It is also a desirable characteristic for generating prescriptions according to the textual symptom description.", "The prescription generation task is similar to the generative question answering (QA). 
In such task settings, the encoder part of the model takes in the question, and encodes the sequence of tokens into a set of hidden states, which embody the information of the question. The decoder part then iteratively generates tokens based on the information encoded in the hidden states of the encoder. The model would learn how to generate response after training on the corresponding question-answer pairs.", "In the TCM prescription generation task, the textual symptom descriptions can be seen as the question and the aim of the task is to produce a set of TCM herbs that form a prescription as the answer to the question. However, the set of herbs is different from the textual answers to a question in the QA task. A difference that is most evident is that there will not be any duplication of herbs in the prescription. However, the basic seq2seq model sometimes produces the same herb tokens repeatedly when applied to the TCM prescription generation task. This phenomenon can hurt the performance of recall rate even after applying a post-process to eliminate repetitions. Because in a limited length of the prescription , the model would produce the same token over and over again, rather than real and novel ones. Furthermore, the basic seq2seq assumes a strict order between generated tokens, but in reality, we should not severely punish the model when it predicts the correct tokens in the wrong order. In this paper, we explore to automatically generate TCM prescriptions based on textual symptoms. We propose a soft seq2seq model with coverage mechanism and a novel soft loss function. The coverage mechanism is designed to make the model aware of the herbs that have already been generated while the soft loss function is to relieve the side effect of strict order assumption. In the experiment results, our proposed model beats all the baselines in professional evaluations, and we observe a large increase in both the recall rate and the F1 score compared with the basic seq2seq model.", "The main contributions of this paper lie in the following three folds:" ], [ "There has not been much work concerning computational TCM. zhou2010development attempted to build a TCM clinical data warehouse so that the TCM knowledge can be analyzed and used. This is a typical way of collecting data, since the number of prescriptions given by the practitioners in the clinics is very large. However, in reality, most of the TCM doctors do not refer to the constructed digital systems, because the quality of the input data tends to be poor. Therefore, we choose prescriptions in the classics (books or documentation) of TCM. Although the available data can be fewer than the clinical data, it guarantees the quality of the prescriptions.", "wang2004self attempted to construct a self-learning expert system with several simple classifiers to facilitate the TCM diagnosis procedure, Wang2013TCM proposed to use shallow neural networks and CRF based multi-labeling learning methods to model TCM inquiry process, but they only considered the disease of chronic gastritis and its taxonomy is very simple. These methods either utilize traditional data mining methods or are highly involved with expert crafted systems. Zhang2011Topic,Zhu2017TCM proposed to use LDA to model the herbs. li2017distributed proposed to learn the distributed embedding for TCM herbs with recurrent neural networks." 
], [ "Neural sequence to sequence model has proven to be very effective in a wide range of natural language generation tasks, including neural machine translation and abstractive text summarization. In this section, we first describe the definition of the TCM prescription generation task. Then, we introduce how to apply seq2seq model in the prescription composition task. Next, we show how to guide the model to generate more fruitful herbs in the setting of this task by introducing coverage mechanism. Finally, we introduce our novel soft loss function that relieves the strict assumption of order between tokens. An overview of the our final model is shown in Figure 1 ." ], [ "Given a TCM herbal treatment dataset that consists of $N$ data samples, the $i$ -th data sample ( $x^{(i)}, p^{(i)}$ ) contains one piece of source text $x^{(i)}$ that describes the symptoms, and $M_{i}$ TCM herbs $(p_{1}^{i},p_{2}^{i}, ..., p_{M_{i}}^{i})$ that make up the herb prescription $p^{(i)}$ .", "We view the symptoms as a sequence of characters $x^{(i)} = (x^{(i)}_{1}, x^{(i)}_{2}, ..., x^{(i)}_{T})$ . We do not segment the characters into words because they are mostly in traditional Chinese that uses characters as basic semantic units. The herbs $p_{1}^{i},p_{2}^{i}, ..., p_{M_{i}}^{i}$ are all different from each other." ], [ "Sequence-to-sequence model was first proposed to solve the machine translation problem. The model consists of two parts, an encoder and a decoder. The encoder is bound to take in the source sequence and compress the sequence into a series of hidden states. The decoder is used to generate a sequence of target tokens based on the information embodied in the hidden states given by the encoder. Typically, both the encoder and the decoder are implemented with recurrent neural networks (RNN).", "In our TCM prescription generation task, the encoder RNN converts the variable-length symptoms in character sequence $x = (x_{1},x_{2},...,x_{T})$ into a set of hidden representations $h = (h_{1},h_{2},...,h_{T})$ , by iterating the following equations along time $t$ : ", "$$h_{t} = f(x_{t},h_{t-1})$$ (Eq. 8) ", "where $f$ is a RNN family function. In our implementation, we choose gated recurrent unit (GRU BIBREF1 ) as $f$ , as the gating mechanism is expected to model long distance dependency better. Furthermore, we choose the bidirectional version of recurrent neural networks as the encoder to solve the problem that the later words get more emphasis in the unidirectional version. We concatenate both the $h_{t}$ in the forward and backward pass and get $\\widehat{h_{t}}$ as the final representation of the hidden state at time step $t$ .", "We get the context vector $c$ representing the whole source $x$ at the $t$ -th time through a non-linear function $q$ , normally known as the attention mechanism: ", "$$c_{t} = \\sum _{j=1}^{T}\\alpha _{tj}h_{j} \\\\\n\\alpha _{tj} = \\frac{\\text{exp}\\left( a\\left(s_{t-1},h_{j}\\right)\\right)}{\\sum _{k=1}^{T}\\text{exp}\\left( a\\left(s_{t-1},h_{k}\\right)\\right)}$$ (Eq. 9) ", "The context vector $c_{t}$ is calculated as a weighted sum of hidden representation produced by the encoder $\\textbf {h} = (h_{1},...,h_{T})$ . $a(s_{t-1},h_{j})$ is a soft alignment function that measures the relevance between $s_{t-1}$ and $h_{j}$ . It computes how much $h_j$ is needed for the $t$ -th output word based on the previous hidden state of the decoder $s_{t-1}$ . The decoder is another RNN. 
It generates a variable-length sequence $y = (y_{1},y_{2}, ..., y_{T^{\\prime }})$ token by token (herb), through a conditional language model: ", "$$s_{t} = f(s_{t-1},c_{t},Ey_{t-1}) \\\\\np(y_{t}|y_{1,...,t},x) = g(s_{t})$$ (Eq. 10) ", "where $s_{t}$ is the hidden state of the decoder RNN at time step $t$ . $f$ is also a gated recurrent unit. The non-linear function $g$ is a $softmax$ layer, which outputs the probabilities of all the herbs in the herb vocabulary. $E \\in (V\\times d)$ is the embedding matrix of the target tokens, $V$ is the number of herb vocabulary, $d$ is the embedding dimension. $y_{t-1}$ is the last predicted token.", "In the decoder, the context vector $c_{t}$ is calculated based on the hidden state $s_{t-1}$ of the decoder at time step $t-1$ and all the hidden states in the encoder. The procedure is known as the attention mechanism. The attention mechanism is expected to supplement the information from the source sequence that is more connected to the current hidden state of the decoder instead of only depending on a fixed vector produced by the encoder.", "The encoder and decoder networks are trained jointly to maximize the conditional probability of the target sequence. A soft version of cross entropy loss is applied to maximize the conditional probability, which we will describe in detail." ], [ "Different from natural language generation tasks, there is no duplicate herb in the TCM prescription generation task. When directly applying seq2seq model in this task, the decoder tends to generate some frequently observed herbs over and over again. Although we can prune the repeated herbs through post processing by eliminating the repeated ones, it still hurts the recall performance as the maximum length of a prescription is limited. This situation is still true when we use a $<EOS>$ label to indicate where the generation should stop.", "To encourage the decoder to generate more diverse and reasonable herb tokens, we propose to apply coverage mechanism to make the model aware of the already generated herbs. Coverage mechanism BIBREF7 , BIBREF8 , BIBREF9 was first proposed to help the decoder focus on the part that has not been paid much attention by feeding a fertility vector to the attention calculation, indicating how much information of the input is used.", "In our model, we do not use the fertility vector to tune the attention weights. The reason is that the symptoms are related to others and altogether describe the whole disease, which is explained in Section \"Introduction\" . Still, inspired by its motivation, we adapt the coverage mechanism to the decoder where a coverage vector is fed to the GRU cell together with the context vector. Equation 10 is then replaced by the following ones. ", "$$a_{t} = \\tanh (WD_{t}+b) \\\\\ns_{t} = f(s_{t-1}, c_{t}, Ey_{t-1}, a_{t})$$ (Eq. 12) ", "where $a_{t}$ is the coverage vector at the $t$ -th time step in decoding. $D_{t}$ is the one-hot representation of the generated tokens until the $t$ -th time step. $W\\in \\mathbb {R}^{V\\times H}$ is a learnable parameter matrix, where $V$ is the size of the herb vocabulary and $H$ is the size of the hidden state. By feeding the coverage vector, which is also a sketch of the generated herbs, to the GRU as part of the input, our model can softly switch more probability to the herbs that have not been predicted. This way, the model is encouraged to produce novel herbs rather than repeatedly predicting the frequently observed ones, thus increasing the recall rate." 
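To make Equation 12 concrete, the sketch below shows a single decoding step in which the multi-hot record $D_t$ of already generated herbs is projected into the coverage vector $a_t$ and fed to the GRU cell together with the attention context and the previous token embedding. The module layout, tensor shapes, and the simple concatenation of inputs are assumptions made for illustration; this is not the authors' code.

```python
import torch
import torch.nn as nn

class CoverageDecoderStep(nn.Module):
    def __init__(self, herb_vocab, embed_dim, enc_dim, hidden):
        super().__init__()
        self.embed = nn.Embedding(herb_vocab, embed_dim)        # E
        self.cov_proj = nn.Linear(herb_vocab, hidden)           # W, b of Eq. 12
        self.cell = nn.GRUCell(embed_dim + enc_dim + hidden, hidden)
        self.out = nn.Linear(hidden, herb_vocab)

    def forward(self, prev_state, context, prev_token, generated_mask):
        # generated_mask (batch, herb_vocab): multi-hot record D_t of herbs produced so far.
        a_t = torch.tanh(self.cov_proj(generated_mask))          # a_t = tanh(W D_t + b)
        x = torch.cat([self.embed(prev_token), context, a_t], dim=-1)
        state = self.cell(x, prev_state)                         # s_t = f(s_{t-1}, c_t, E y_{t-1}, a_t)
        return state, torch.log_softmax(self.out(state), dim=-1)
```

Feeding $a_t$ this way lets the cell shift probability away from herbs that are already recorded in $D_t$, which is the behaviour the paragraph above motivates.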
], [ "We argue that even though the order of the herbs matters when generating the prescription BIBREF10 , BIBREF11 , we should not strictly restrict the order. However, the traditional cross entropy loss function applied to the basic seq2seq model puts a strict assumption on the order of the labels. To deal with the task of predicting weakly ordered labels (or even unordered labels), we propose a soft loss function instead of the original hard cross entropy loss function: ", "$$loss = -\\sum _{t}\\ q^{\\prime }_{t}\\ log(p_t)$$ (Eq. 14) ", "Instead of using the original hard one-hot target probability $q_t$ , we use a soft target probability distribution $q^{\\prime }_{t}$ , which is calculated according to $q_t$ and the target sequence $\\mathbf {q}$ of this sample. Let $\\mathbf {q_v}$ denote the bag-of-words representation of $\\mathbf {q}$ , where only slots of the target herbs in $\\mathbf {q}$ are filled with $1s$ . We use a function $\\xi $ to project the original target label probability $q_t$ into a new probability distribution $q^{\\prime }_{t}$ . ", "$$q^{\\prime }_t = \\xi (q_t, \\mathbf {q_v})$$ (Eq. 15) ", "This function $\\xi $ is designed so as to decrease the harsh punishment when the model predicts the labels in the wrong order. In this paper, we apply a simple yet effective projection function as Equation 16 . This is an example implementation, and one can design more sophisticated projection functions if needed. ", "$$\\xi (q_t,\\mathbf {q_v}) = ((\\mathbf {q_v}/M) + q_t) / 2 $$ (Eq. 16) ", "where $M$ is the length of $q$ . This function means that at the $t$ -th time of decoding, for each target herb token $p_i$ , we first split a probability density of $1.0$ equally across all the $M$ target herbs, giving $1/M$ to each. Then, we take the average of this probability distribution and the original probability $q_t$ to be the final probability distribution at time $t$ ." ], [ "We crawl the data from the TCM Prescription Knowledge Base (中医方剂知识库). This knowledge base includes comprehensive TCM documentation accumulated throughout history. The database includes 710 TCM historic books or documents as well as some modern ones, consisting of 85,166 prescriptions in total. Each item in the database provides the name, the origin, the composition, the effect, the contraindications, and the preparation method. We clean and formalize the database and get 82,044 usable symptom-prescription pairs.", "In the process of formalization, we temporarily omit the dose information and the preparation method description, as we are mainly concerned with the composition. Because the names of the herbs have evolved a lot, we devise heuristic rules as well as specific projection rules to map some rarely seen herbs to the similar forms that are normally used. There are also prescriptions that refer to the name of other prescriptions. We simply substitute these names with their constituents.", "To make the experimental results more robust, we conduct our experiments on two separate test datasets. The first one is a subset of the data described above. We randomly split the whole data into three parts, the training data (90%), the development data (5%) and the test data (5%). 
The second one is a set of symptom-prescription pairs we manually extracted from the modern textbook of the course Formulaology of TCM (中医方剂学) that is popularly adopted by many TCM colleges in China.", "There are more cases in the first sampled test dataset (4,102 examples), but it suffers from lower quality, as this dataset was parsed with simple rules, which may not cover all exceptions. The second test dataset has been proofread and all of the prescriptions are among the most classical and influential ones in history. So the quality is much better than that of the first one. However, the number of cases is limited. There are 141 symptom-prescription pairs in the second dataset. Thus we use two test sets for evaluation to take advantage of both data magnitude and quality." ], [ "In our experiments, we implement our models with the PyTorch toolkit. We set the embedding size of both Chinese characters in the symptoms and the herb tokens to 100. We set the hidden state size to 300, and the batch size to 20. We set the maximum length of the herb sequence to 20 because the lengths of nearly all the prescriptions are within this range (see Table 2 for the statistics of the length of prescriptions). Unless specifically stated, we use bidirectional gated recurrent neural networks (BiGRNN) to encode the symptoms. We use Adam BIBREF12 for optimization, and use the model parameters that generate the best F1 score on the development set in testing." ], [ "In this sub-section, we present the Multi-label baseline we apply. In this model, we use a BiGRNN as the encoder, which encodes symptoms in the same way as described in Section \"Methodology\" . Because the position of the herbs does not matter in the results, for the generation part, we implement a multi-label classification method to predict the herbs. We use the multi-label max-margin loss (MultiLabelMarginLoss in pytorch) as the optimization objective, because this loss function is less sensitive to the threshold, thus making the model more robust. We set the threshold to be 0.5, that is, if the probability given by the model is above 0.5 and within the top $k$ range (we set $k$ to 20 in our experiments, the same as for the seq2seq model), we take the tokens as answers. The way to calculate the probability is shown below. ", "$$p(i) = \\sigma (W_{o}h_{T})$$ (Eq. 23) ", "where $\\sigma $ indicates the non-linear function $sigmoid$ , $W_{o} \\in \\mathbb {R}^{H \\times V}$ , $H$ is the size of the hidden state produced by the encoder and $V$ is the size of the herb vocabulary. $h_{T}$ is the last hidden state produced by the encoder.", "During evaluation, we choose the herbs satisfying two conditions:", "The predicted probability of the herb is within the top $k$ among all the herbs, where $k$ is a hyper-parameter. We set $k$ to be the same as the maximum length of the seq2seq based models (20).", "The predicted probability is above a threshold of 0.5 (related to the max-margin)." ], [ "Since medical treatment is a very complex task, we invite two professors from Beijing University of Chinese Medicine, which is one of the best Traditional Chinese Medicine academies in China. Both professors have over five years of experience in practicing traditional Chinese medical treatment. The evaluators are asked to evaluate the prescriptions with scores between 0 and 10. Both the textual symptoms and the standard reference are given, which is similar to the form of evaluation in a normal TCM examination. 
Different from the automatic evaluation method, the human evaluators focus on the potential curative effect of the candidate answers, rather than merely the literal similarity. We believe this way of evaluation is much more reasonable and closer to reality.", "Because the evaluation procedure is very time consuming (each item requires more than 1 minute), we only ask the evaluators to judge the results from test set 2.", "As shown in Table 3 , both the basic seq2seq model and our proposed modification are much better than the multi-label baseline. Our proposed model gets a high score of 7.3, which means it can be of real help to TCM practitioners when prescribing in real-life treatment." ], [ "We use micro Precision, Recall, and F1 score as the automatic metrics to evaluate the results, because the internal order between the herbs does not matter when we do not consider the prescribing process.", "In Table 4 , we show the results of our proposed models as well as the baseline models. One thing that should be noted is that since the data in Test set 2 (extracted from the textbook) have much better quality than those in Test set 1, the performance on Test set 2 is much higher than it is on Test set 1, which is consistent with our intuition.", "From the experiment results we can see that the baseline multi-label model has higher micro recall rates (29.72, 40.49) but much lower micro precision (10.83, 13.51). This is because, unlike the seq2seq model that dynamically determines the length of the generated sequence, its output length is rigid and can only be determined by thresholds. We take the tokens within the top 20 as the answer for the multi-label model.", "As to the basic seq2seq model, although it beats the multi-label model overall, the recall rate drops substantially. This problem is partly caused by the repetition problem: the basic seq2seq model sometimes predicts highly frequent tokens instead of more meaningful ones. Apart from this, although the seq2seq based model is better able to model the correlation between target labels, it makes a strong assumption on the order of the target sequence. In the prescription generation task, the order between herb tokens is helpful for generating the sequence. However, since the order between the herbs does not affect the effect of the prescription, we do not consider the order when evaluating the generated sequence. We call this phenomenon the “weak order” of the herbs. The overly strong assumption on order can hurt the performance of the model when the correct tokens are placed in the wrong order.", "In Table 5 we show the effect of applying the coverage mechanism and the soft loss function.", "The coverage mechanism gives a sketch of the prescription. The mechanism not only encourages the model to generate novel herbs but also enables the model to generate tokens based on the already predicted ones. This can be proved by the improvement on Test set 2, where both the precision and the recall are improved over the basic seq2seq model.", "The most significant improvement comes from applying the soft loss function. The soft loss function can relieve the strong assumption of order made by the seq2seq model, because predicting a correct token in the wrong position is not as harmful as predicting a completely wrong token. This simple modification gives a big improvement on both test sets for all three evaluation metrics."
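As a concrete reference for the soft loss whose gains are reported above (Equations 14–16), the snippet below builds the soft target $q^{\prime}_t$ by averaging the hard one-hot target with the bag-of-words distribution $\mathbf{q_v}/M$, and then takes the cross entropy against the predicted distribution. Tensor shapes, batching, and function names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def soft_loss(log_p, target_tokens, target_bow):
    # log_p:         (batch, steps, vocab) log-probabilities from the decoder
    # target_tokens: (batch, steps) gold herb ids in their reference order
    # target_bow:    (batch, vocab) multi-hot bag q_v of the M gold herbs of each sample
    vocab = log_p.size(-1)
    q_hard = F.one_hot(target_tokens, vocab).float()                # hard targets q_t
    M = target_bow.sum(dim=-1, keepdim=True).clamp(min=1.0)
    q_soft = (target_bow / M).unsqueeze(1) + q_hard                 # Eq. 16: (q_v / M + q_t) / 2
    q_soft = q_soft / 2.0
    return -(q_soft * log_p).sum(dim=-1).mean()                     # Eq. 14
```

Because part of the target mass now sits on every gold herb at every step, predicting a correct herb in the "wrong" position is penalized far less than predicting a herb outside the prescription, which is exactly the motivation given for the soft loss.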
], [ "In this subsection, we show an example generated by various models in Table 6 in test set 2 because the quality of test set 2 is much more satisfactory. The multi-label model produces too many herbs that lower the precision, we do not go deep into its results, already we report its results in the table.", "For the basic seq2seq model, the result is better than multi-label baseline in this case. UTF8gbsn“柴胡” (radix bupleuri)、“葛根” (the root of kudzu vine) can be roughly matched with “恶风发热,汗出头疼” (Aversion to wind, fever, sweating, headache), “甘草” (Glycyrrhiza)、“陈皮” (dried tangerine or orange peel)、“桔梗” (Platycodon grandiflorum) can be roughly matched with “鼻鸣咽干,苔白不渴” (nasal obstruction, dry throat, white tongue coating, not thirsty), “川芎” (Ligusticum wallichii) can be used to treat the symptom of “头疼” (headache). In this case, most of the herbs can be matched with certain symptoms in the textual description. However, the problem is that unlike the reference, the composition of herbs lacks the overall design. The symptoms should not be treated independently, as they are connected to other symptoms. For example, the appearance of symptom UTF8gbsn“头疼” (headache) must be treated together with UTF8gbsn“汗出” (sweat). When there is simply headache without sweat, UTF8gbsn“川芎” (Ligusticum wallichii) may be suitable. However, since there is already sweat, this herb is not suitable in this situation. This drawback results from the fact that this model heavily relies on the attention mechanism that tries to match the current hidden state in the decoder to a part of the context in the encoder every time it predicts a token.", "Translation: UTF8gbsn桂枝 - cassia twig, 芍药 - Chinese herbaceous peony 大黄 - Rhubarb, 厚朴 - Magnolia officinalis, 枳实 - Fructus Aurantii Immaturus, 芒硝 - Mirabilite, 栀子 - Cape Jasmine Fruit, 枳壳 - Fructus Aurantii, 当归 - Angelica Sinensis, 甘草 - Glycyrrhiza, 黄芩 - Scutellaria, 生姜 - ginger, 大枣 - Chinese date, 柴胡 - radix bupleuri, 葛根 - the root of kudzu vine, 陈皮 - dried tangerine or orange peel, 桔梗 - Platycodon grandiflorum, 川芎 - Ligusticum wallichii, 麻黄 - Chinese ephedra", "For our proposed model, the results are much more satisfactory. UTF8gbsn“外感风寒” (Exogenous wind-cold exterior deficiency syndrome) is the reason of the disease, the symptoms UTF8gbsn“恶风发热,汗出头疼,鼻鸣咽干,苔白不渴,脉浮缓或浮弱” (Aversion to wind, fever, sweating, headache, nasal obstruction, dry throat, white tongue coating, not thirsty, floating slow pulse or floating weak pulse) are the corresponding results. The prescription generated by our proposed model can also be used to cure UTF8gbsn“外感风寒” (Exogenous wind-cold exterior deficiency syndrome), in fact UTF8gbsn“麻黄” (Chinese ephedra) and “桂枝” (cassia twig) together is a common combination to cure cold. However, UTF8gbsn“麻黄” (Chinese ephedra) is not suitable here because there is already sweat. One of the most common effect of UTF8gbsn“麻黄” (Chinese ephedra) is to make the patient sweat. Since there is already sweat, it should not be used. Compared with the basic seq2seq model, our proposed model have a sense of overall disease, rather than merely discretely focusing on individual symptoms.", "From the above analysis, we can see that compared with the basic seq2seq model, our proposed soft seq2seq model is aware more of the connections between symptoms, and has a better overall view on the disease. 
This advantage corresponds to the principle of prescribing in TCM that the prescription should focus on the “辩证” (the reason behind the symptoms) rather than the superficial “症” (symptoms)." ], [ "In this paper, we propose a TCM prescription generation task that automatically predicts the herbs in a prescription based on the textual symptom descriptions. To our knowledge, this is the first time that this critical and practicable task has been considered. To advance the research in this task, we construct a dataset of 82,044 symptom-prescription pairs based on the TCM Prescription Knowledge Base.", "Besides the automatic evaluation, we also invite professionals to evaluate the prescriptions given by various models, the results of which show that our model reaches a score of 7.3 out of 10, demonstrating its effectiveness. In the experiments, we observe that directly applying the seq2seq model leads to the repetition problem, which lowers the recall rate, and that the strong assumption on the order between herb tokens can hurt the performance. We propose to apply the coverage mechanism and the soft loss function to solve these problems. From the experimental results, we can see that this approach alleviates the repetition problem and results in an improved recall rate." ] ] }
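The Proposed Baseline section above predicts herbs with a sigmoid output layer (Eq. 23) and keeps a herb only if its probability is both above 0.5 and within the top $k$ ($k$ = 20). The snippet below is a minimal sketch of that decision rule; it illustrates the described selection procedure under assumed tensor shapes and is not the authors' code.

```python
import torch

def multilabel_predict(h_T, W_o, k=20, threshold=0.5):
    # h_T: (batch, H) last encoder hidden state; W_o: (H, V) output projection.
    probs = torch.sigmoid(h_T @ W_o)                 # Eq. 23: p(i) = sigmoid(W_o h_T)
    topk_vals, topk_idx = probs.topk(k, dim=-1)
    predictions = []
    for vals, idx in zip(topk_vals, topk_idx):
        # Keep a herb only if it is in the top-k and above the 0.5 threshold.
        predictions.append(idx[vals > threshold].tolist())
    return predictions
```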
{ "question": [ "Do they impose any grammatical constraints over the generated output?", "Why did they think this was a good idea?" ], "question_id": [ "5d5a571ff04a5fdd656ca87f6525a60e917d6558", "3c362bfa11c60bad6c7ea83f8753d427cda77de0" ], "nlp_background": [ "five", "five" ], "topic_background": [ "familiar", "familiar" ], "paper_read": [ "somewhat", "somewhat" ], "search_query": [ "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "In the TCM prescription generation task, the textual symptom descriptions can be seen as the question and the aim of the task is to produce a set of TCM herbs that form a prescription as the answer to the question. However, the set of herbs is different from the textual answers to a question in the QA task. A difference that is most evident is that there will not be any duplication of herbs in the prescription. However, the basic seq2seq model sometimes produces the same herb tokens repeatedly when applied to the TCM prescription generation task. This phenomenon can hurt the performance of recall rate even after applying a post-process to eliminate repetitions. Because in a limited length of the prescription , the model would produce the same token over and over again, rather than real and novel ones. Furthermore, the basic seq2seq assumes a strict order between generated tokens, but in reality, we should not severely punish the model when it predicts the correct tokens in the wrong order. In this paper, we explore to automatically generate TCM prescriptions based on textual symptoms. We propose a soft seq2seq model with coverage mechanism and a novel soft loss function. The coverage mechanism is designed to make the model aware of the herbs that have already been generated while the soft loss function is to relieve the side effect of strict order assumption. In the experiment results, our proposed model beats all the baselines in professional evaluations, and we observe a large increase in both the recall rate and the F1 score compared with the basic seq2seq model." ], "highlighted_evidence": [ "Furthermore, the basic seq2seq assumes a strict order between generated tokens, but in reality, we should not severely punish the model when it predicts the correct tokens in the wrong order." ] } ], "annotation_id": [ "19fe7a6492b6ef59d3db2c54da84da629ce7faf4" ], "worker_id": [ "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "They think it will help human TCM practitioners make prescriptions.", "evidence": [ "During the long history of TCM, there has been a number of therapy records or treatment guidelines in the TCM classics composed by outstanding TCM researchers and practitioners. In real life, TCM practitioners often take these classical records for reference when prescribing for the patient, which inspires us to design a model that can automatically generate prescriptions by learning from these classics. It also needs to be noted that due to the issues in actual practice, the objective of this work is to generate candidate prescriptions to facilitate the prescribing procedure instead of substituting the human practitioners completely. An example of TCM prescription is shown in Table 1 . The herbs in the prescription are organized in a weak order. 
By “weak order”, we mean that the effect of the herbs are not influenced by the order. However, the order of the herbs reflects the way of thinking when constructing the prescription. Therefore, the herbs are connected to each other, and the most important ones are usually listed first." ], "highlighted_evidence": [ "It also needs to be noted that due to the issues in actual practice, the objective of this work is to generate candidate prescriptions to facilitate the prescribing procedure instead of substituting the human practitioners completely." ] } ], "annotation_id": [ "3076e9b3ba1e4630b314a53bedce5b5e6db30a91" ], "worker_id": [ "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] } ] }
{ "caption": [ "Table 1: An example of a TCM symptom-prescription pair. As we are mainly concerned with the composition of the prescription, we only provide the herbs in the prescription.", "Figure 1: An illustration of our model. The model is built on the basis of seq2seq model with attention mechanism. We use a coverage mechanism to reduce repetition problem. The coverage mechanism is realized by adding a coverage vector to the decoder.", "Table 2: The statistic of the length of prescriptions. Crawled data means the overall data crawled from the Internet, including the training set data, the development set data and test set 1. Textbook data is the same to test set 2. Under 20 means the percentage of data that are shorter or equal than length 20.", "Table 3: Professional evaluation on the test set 2. The score range is 0∼10. The Pearson’s correlation coefficient between the two evaluators is 0.72 and the Spearman’s correlation coefficient is 0.72. Both p-values are less than 0.01, indicating strong agreement.", "Table 4: Automatic evaluation results of different models on the two test datasets. Multi-label is introduced in Section 4.3. Test set 1 is the subset of the large dataset collected from the Internet, which is homogeneous to the training set. Test set 2 is the test set extracted from the prescription text book.", "Table 5: Ablation results of applying coverage mechanism and soft loss function. Test set 1 and test set 2 are the same as Table 4", "Table 6: Actual predictions made by various models in test set 2. Multi-label model generates too many herb tokens, so we do not list all of them here. Reference is the standard answer prescription given by the text book.4" ], "file": [ "1-Table1-1.png", "4-Figure1-1.png", "6-Table2-1.png", "6-Table3-1.png", "7-Table4-1.png", "7-Table5-1.png", "8-Table6-1.png" ] }
1805.09055
Grounding the Semantics of Part-of-Day Nouns Worldwide using Twitter
The usage of part-of-day nouns, such as 'night', and their time-specific greetings ('good night'), varies across languages and cultures. We show the possibilities that Twitter offers for studying the semantics of these terms and their variability across countries. We mine a worldwide sample of multilingual tweets with temporal greetings, and study how their frequencies vary in relation to local time. The results provide insights into the semantics of these temporal expressions and the cultural and sociological factors influencing their usage.
{ "section_name": [ "Introduction", "Materials and methods", "Results and validation", "Worldwide average greeting times", "Daily analysis", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Human languages are intertwined with their cultures and societies, having evolved together, reflecting them and in turn shaping them BIBREF0 , BIBREF1 . Part-of-day nouns (e.g. ‘morning’ or ‘night’) are an example of this, as their meaning depends on how each language's speakers organize their daily schedule. For example, while the morning in English-speaking countries is assumed to end at noon, the Spanish term (‘mañana’) is understood to span until lunch time, which normally takes place between 13:00 and 15:00 in Spain. It is fair to relate this difference to cultural (lunch being the main meal of the day in Spain, as opposed to countries like the uk, and therefore being a milestone in the daily timetable) and sociopolitical factors (the late lunch time being influenced by work schedules and the displacement of the Spanish time zones with respect to solar time). Similar differences have been noted for different pairs of languages BIBREF2 and for cultures using the same language BIBREF3 , based on manual study, field research and interviews with natives. Work on automatically extracting the semantics of part-of-day nouns is scarce, as classic corpora are not timestamped. Reiter2003a,Reiter2003b overcome it by analyzing weather forecasts and aligning them to timestamped simulations, giving approximate groundings for time-of-day nouns and showing idiolectal variation on the term ‘evening’, but the work is limited to English.", "The relation between language and sociocultural factors implies that the semantics of part-of-day nouns (e.g. 'end of the morning') cannot be studied in isolation from social habits (e.g. 'typical lunch time'). A relevant study of such habits is done by walch2016global, who develop an app to collect sleep habits from users worldwide. While they do not study the meaning of words, their insights are used for validation.", "We propose a new approach to study the semantics of part-of-day nouns by exploiting Twitter and the time-specific greetings (e.g. ‘good morning’) used in different cultures. By mining tweets with these greetings, we obtain a large, worldwide sample of their usage. Since many tweets come with time and geolocation metadata, we can know the local time and country at which each one was emitted. The main contribution of the paper is to show how it is possible to learn the semantics of these terms in a much more extensive way than previous work, at a global scale, with less effort and allowing statistical testing of differences in usage between terms, countries and languages." ], [ "To ground the semantics of greetings we used 5 terms as seeds: ‘good morning’, ‘good afternoon’, ‘good evening’, ‘good night’ and ‘hello’ (a time-unspecific greeting used for comparison). We translated them to 53 languages and variants using Bing translator. We use italics to refer to greetings irrespective of the language. 172,802,620 tweets were collected from Sept. 2 to Dec. 7 2016.", "For some languages (e.g. Spanish), there is no differentiation between ‘good evening’ and ‘good night’, and they both are translated to the same expression. For some others, some expressions cannot be considered equivalent, e.g. 
‘good morning’ is translated to ‘bonjour’ in French, which is however commonly used as ‘hello’, or simply as ‘good day’.", "Text preprocessing is not necessary: we rely on metadata, not on the tweet itself, and only the seed words are needed to categorize tweets within a part of day. To clean up the data, we removed retweets, as they last for hours, biasing the temporal analysis. Duplicate tweets were kept, as similar messages from different days and users (e.g. ‘good night!’) are needed for the task at hand. Tweets need to be associated with a timestamp and country-level geolocation. Tweets have a creation time, composed of a utc time and a utc offset that varies depending on the time zone. However, most tweets are not geolocated and we must rely on the data provided by the user. This may be fake or incomplete, e.g. specifying only a village. We used fine-grained databases to do the mapping to the country level location and performed a sanity check, comparing the Twitter offset to the valid set of offsets for that country, to reduce the amount of wrongly geolocated tweets. Comparing the solar and standard time could provide more insights, but this requires a fine-grained geolocation of the tweets. We obtained a dataset of 10,523,349 elements, available at https://github.com/aghie/peoples2018grounding: 4,503,077 good morning's, 599,586 good afternoon's, 214,231 good evening's, 880,003 good night's and 4,359,797 hello's." ], [ "Given a country, some of the tweets are written in foreign languages for reasons like tourism or immigration. This paper refers to tweets written in official or de facto languages, unless otherwise specified. Also, analyzing differences according to criteria such as gender or solar time can be relevant. As determining the impact of all those is a challenge on its own, we focus on the primary research question: can we learn semantics of the part-of-day nouns from simple analysis of tweets? To verify data quality, good morning tweets were revised: out of 1 000 random tweets from the usa, 97.9% were legitimate greetings and among the rest, some reflected somehow that the user just started the day (e.g ‘Didn't get any good morning sms’). We did the same for Spain (98,1% legitimate), Brazil (97.8%) and India (99.6%).", "Existing work and dated events are used to ratify the results presented below." ], [ "Table TABREF7 shows the average greeting times for the countries from which we collected more data. Asian, African and American countries tend to begin the day earlier than Europe (with exceptions, e.g. Germany). The table reflects that countries in southern Europe (e.g. Spain, Portugal or Greece) start the day later than the northern ones (the Netherlands or uk). For some countries, e.g. France, this information is known to be biased, as good morning (‘bonjour’) is used all along the day. A validation at a fine-grained scale is unfeasible, but the results at the country level are in line with Figure 3 of walch2016global, e.g., they state that Japan, the usa or Germany have earlier wake up times than Spain, Brazil or Turkey.", "The average greeting times for good afternoon reveal insights that may stem from cultural differences (e.g. lunch break time). Anglo-Saxon and South Asian countries have the earliest afternoon (with averages between 13:00 and 14:00), while in Mediterranean countries the morning lasts longer (average greeting times for good afternoon around 15:00 or 16:00). 
A number of countries under the influence of the United Kingdom, such as the United States, Pakistan or India show earlier afternoon times. The opposite happens in South America, historically influenced by Portuguese and Spanish colonialism during the Early modern period, which exhibits later afternoon times.", "This poses interesting questions for future work, such as whether there is a particular reason that could justify this behavior, like having more similar cuisine practices. In this context, the adoption of food practices in colonialism has been already studied by anthropologists and historians BIBREF4 . trigg2004food points out how in the early period of the Spanish colonialism in the Americas, they `civilized' the Indigenous community by making them adopt manners, dress and customs. She points that the role of food was specially relevant due to its large social component, and was not limited to the way the food was eaten, but also prepared, served and consumed.", "Twitter also reflects differences between countries regarding night life. On the one hand, Anglo-Saxon countries wish good night earlier (from 19:49 in the uk to 21:10 in Canada) than other societies. On the other hand, southern European countries go to bed later, and some of them even wish a good night after midnight (e.g. Spain). Comparing to BIBREF5 , we find similar tendencies. For example, in their study Spain, Turkey or Brazil use the smartphone until later than Canada, the usa or the uk, and therefore they go later to bed. Our Twitter approach also captures the particular case of Japanese mentioned by BIBREF5 : they wake up very early, but use the smartphone until late in the night, suggesting a later bed time.", "A fine-grained analysis shows how Twitter captures other cultural and working differences. Figure FIGREF8 charts the average day time for good morning for the usa, Brazil, Spain and India during part of the polling period. The time peaks in the weekends for many of the countries, showing that Twitter captures how business and work are reduced during holidays, resulting in later wake up times.", "However, this is not visible in some countries where working conditions are sometimes questioned BIBREF6 : for India the weekend peak is less pronounced, which can be considered as an indicator that a significant part of its population does not enjoy work-free weekends.", "The usage of part-of-day expressions can be helpful to understand more complex issues, such as how foreigners integrate into a country and adapt to its daily schedule. We take the usa as example, as it has a large foreign community of Spanish speakers, mainly from Mexico (and in a smaller proportion from other Latin American countries). If we calculate the average day time for the Spanish form of ‘good morning’ (‘buenos días’) in the usa, we obtain that the result is 08:09, while the corresponding English greeting's average time is 08:33. This is reinforced by Figure FIGREF10 , where ‘buenos días’ average day time is consistently lower than ‘good morning’. This would be in line to their presence in low-wage jobs that require to wake up earlier, e.g. waiter, cleaning or construction work BIBREF7 , BIBREF8 .", "It is worth noting that, assuming that these ‘buenos días’ greetings come from latinos, those in the usa wake up even earlier than in their countries of origin (see Table TABREF7 ).", "Figure FIGREF8 also shows how national holidays influence societies. For example, Nov. 2 (Day of the Dead) and Nov. 
15 (Proclamation of the Republic) are holidays in Brazil, producing a peak in that country's graph similar to the behavior in the weekends. Similarly, Nov. 1 (All Saints' Day) and Dec. 6 (Constitution Day) are holidays in Spain and similar peaks are observed too. From Figure FIGREF10 we can see how Thanksgiving (Nov. 24 in 2016) reflects a four-day weekend in the usa: many businesses allow employees to take this holiday from Thursday, resulting into a gradual and increasing peak that spans until Sunday. This is captured by the English good mornings, but not by the Spanish ones. The day after the usa 2016 elections (Nov. 9), a valley occurs on the good morning time for the States (Figure FIGREF8 ). The winner was not known until 03:00, suggesting that the distribution of greetings reflects social behaviors in other special events." ], [ "Twitter can be used to do a time-of-day analysis, e.g., as said in § SECREF6 , ‘bonjour’ is assumed to be used all along the day. To test this, we take Canada, where French and English are official languages. Figure FIGREF12 shows how ‘bonjour’ and ‘salut’ (‘hello’) are used all along the day, while ‘good morning’ is used in the morning hours. English and French hello's share a similar distribution.", "Figure FIGREF13 shows a greeting area chart for the usa, showing how ‘good evening’ and ‘good afternoon’ are well differentiated, with the transition happening over 16:30. This contrasts to countries such as Spain (Figure FIGREF14 ), where the language has a single word (‘tarde’) for ‘evening’ and ‘afternoon’, whose greeting spans from over 14:00, as the morning ends late (see § SECREF1 ), to 21:00.", "Area plots like these give a clear picture of the semantics of part-of-day nouns, as they depict the exact times when they are used. The precise semantics can be grounded more rigorously using statistical testing to know the exact time intervals at which people significantly use a specific greeting.", "For example, to know when to switch from good morning to good afternoon in Spanish, we can: (1) group the number of ‘buenos días’ (‘good morning’) and ‘buenas tardes’ (‘good afternoon’) by intervals of 10 minutes, and (2) apply a binomial test to each interval, to determine if one of the greetings is significantly more likely to occur than the other (assuming equal probability of occurrence). For example, for Spain, we obtain that the morning ends at 14:00 (p-value= INLINEFORM0 at 14:00, 0.09 at 14:10) and the afternoon starts at 14:40 (p-value becomes statistically significant again with INLINEFORM1 , showing a significant majority of good afternoon)." ], [ "We crawled Twitter to study the semantics of part-of-day nouns in different countries and societies, showed examples from the polled period and ratified them against existing research and dated events. For space reasons we cannot show insights for all scenarios, but full results are at https://github.com/aghie/peoples2018grounding." ], [ "DV and CGR receive funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01)." ] ] }
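The two-step procedure described above for grounding the Spanish morning/afternoon boundary (10-minute bins, then a per-interval binomial test under an equal-probability assumption) can be sketched in a few lines of Python. The counts below are made-up placeholders, and scipy's `binomtest` is used as a stand-in, since the text does not specify the implementation.

```python
from scipy.stats import binomtest

# hypothetical per-interval counts of the two Spanish greetings,
# one entry per 10-minute bin around the morning/afternoon boundary
intervals     = ["13:50", "14:00", "14:10", "14:20", "14:30", "14:40"]
buenos_dias   = [120,  95,  80,  60,  45,  20]   # 'good morning'
buenas_tardes = [ 60,  70,  75,  65,  60,  90]   # 'good afternoon'

for t, bd, bt in zip(intervals, buenos_dias, buenas_tardes):
    n = bd + bt
    # two-sided test: is one greeting significantly more likely than the
    # other in this bin, assuming equal probability of occurrence (p = 0.5)?
    p = binomtest(bd, n, p=0.5).pvalue
    winner = "buenos dias" if bd > bt else "buenas tardes"
    verdict = winner if p < 0.05 else "no significant majority"
    print(f"{t}  p={p:.3f}  -> {verdict}")
```

Scanning the verdicts over consecutive intervals yields the kind of boundary reported above: the last interval where 'buenos días' holds a significant majority marks the end of the morning, and the first interval where 'buenas tardes' does marks the start of the afternoon.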
{ "question": [ "How many languages are included in the tweets?", "What languages are explored?", "Which countries did they look at?" ], "question_id": [ "e78a47aec37d9a3bec5a18706b0a462c148c118b", "351510da69ab6879df5ff5c7c5f49a8a7aea4632", "d43e868cae91b3dc393c05c55da0754b0fb3a46a" ], "nlp_background": [ "", "", "" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "1a0206f85d6b3a64b65e2947e4c850a72b0bccc8" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "d3425805b9c385b48472928ad096e3115e1f9641" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "369e27f96969222e8d75325693a7860268afc0a6" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Table 1: Average local time for the greetings coming from the countries with most data, sorted by the average time for the greeting good morning. Hello was used as sanity check.", "Figure 1: Average day time for the greeting good morning in different countries (USA, Brazil, Spain and India) for a period from mid October to early December, 2016. Weekends are shaded in gray.", "Figure 2: Average day time for the greeting ‘good morning’ and its Spanish form in the USA.", "Figure 3: Box & whisker plot for the French and English good morning’s and hello’s in Canada.", "Figure 4: Stacked area chart for the greetings in the USA: % (y axis) vs time (x axis).", "Figure 5: Same as Figure 4, but for Spain." ], "file": [ "2-Table1-1.png", "3-Figure1-1.png", "4-Figure2-1.png", "4-Figure3-1.png", "5-Figure4-1.png", "5-Figure5-1.png" ] }
1804.03396
QA4IE: A Question Answering based Framework for Information Extraction
Information Extraction (IE) refers to automatically extracting structured relation tuples from unstructured texts. Common IE solutions, including Relation Extraction (RE) and open IE systems, can hardly handle cross-sentence tuples, and are severely restricted by limited relation types as well as informal relation specifications (e.g., free-text based relation tuples). In order to overcome these weaknesses, we propose a novel IE framework named QA4IE, which leverages the flexible question answering (QA) approaches to produce high quality relation triples across sentences. Based on the framework, we develop a large IE benchmark with high quality human evaluation. This benchmark contains 293K documents, 2M golden relation triples, and 636 relation types. We compare our system with some IE baselines on our benchmark and the results show that our system achieves great improvements.
{ "section_name": [ "Introduction and Background", "Previous IE Systems", "QA4IE Framework", "Contributions", "QA4IE Benchmark Construction", "Question Answering Model", "Experimental Setup", "Results in QA Settings", "Results in IE Settings", "Case Study", "Human Evaluation on QA4IE Benchmark", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "Information Extraction (IE), which refers to extracting structured information (i.e., relation tuples) from unstructured text, is the key problem in making use of large-scale texts. High quality extracted relation tuples can be used in various downstream applications such as Knowledge Base Population BIBREF0 , Knowledge Graph Acquisition BIBREF1 , and Natural Language Understanding. However, existing IE systems still cannot produce high-quality relation tuples to effectively support downstream applications." ], [ "Most of previous IE systems can be divided into Relation Extraction (RE) based systems BIBREF2 , BIBREF3 and Open IE systems BIBREF4 , BIBREF5 , BIBREF6 .", "Early work on RE decomposes the problem into Named Entity Recognition (NER) and relation classification. With the recent development of neural networks (NN), NN based NER models BIBREF7 , BIBREF8 and relation classification models BIBREF9 show better performance than previous handcrafted feature based methods. The recently proposed RE systems BIBREF10 , BIBREF11 try to jointly perform entity recognition and relation extraction to improve the performance. One limitation of existing RE benchmarks, e.g., NYT BIBREF12 , Wiki-KBP BIBREF13 and BioInfer BIBREF14 , is that they only involve 24, 19 and 94 relation types respectively comparing with thousands of relation types in knowledge bases such as DBpedia BIBREF15 , BIBREF16 . Besides, existing RE systems can only extract relation tuples from a single sentence while the cross-sentence information is ignored. Therefore, existing RE based systems are not powerful enough to support downstream applications in terms of performance or scalability.", "On the other hand, early work on Open IE is mainly based on bootstrapping and pattern learning methods BIBREF17 . Recent work incorporates lexical features and sentence parsing results to automatically build a large number of pattern templates, based on which the systems can extract relation tuples from an input sentence BIBREF4 , BIBREF5 , BIBREF6 . An obvious weakness is that the extracted relations are formed by free texts which means they may be polysemous or synonymous and thus cannot be directly used without disambiguation and aggregation. The extracted free-text relations also bring extra manual evaluation cost, and how to automatically evaluate different Open IE systems fairly is an open problem. Stanovsky and Dagan BIBREF18 try to solve this problem by creating an Open IE benchmark with the help of QA-SRL annotations BIBREF19 . Nevertheless, the benchmark only involves 10K golden relation tuples. Hence, Open IE in its current form cannot provide a satisfactory solution to high-quality IE that supports downstream applications.", "There are some recently proposed IE approaches which try to incorporate Question Answering (QA) techniques into IE. Levy et al. BIBREF20 propose to reduce the RE problem to answering simple reading comprehension questions. They build a question template for each relation type, and by asking questions with a relevant sentence and the first entity given, they can obtain relation triples from the sentence corresponding to the relation type and the first entity. 
Roth et al. BIBREF21 further improve the model performance on a similar problem setting. However, these approaches focus on sentence level relation argument extractions and do not provide a full-stack solution to general IE. In particular, they do not provide a solution to extract the first entity and its corresponding relation types before applying QA. Besides, sentence level relation extraction ignores the information across sentences such as coreference and inference between sentences, which greatly reduces the information extracted from the documents." ], [ "To overcome the above weaknesses of existing IE systems, we propose a novel IE framework named QA4IE to perform document level general IE with the help of state-of-the-art approaches in Question Answering (QA) and Machine Reading Comprehension (MRC) area.", "The input of QA4IE is a document $D$ with an existing knowledge base $K$ and the output is a set of relation triples $R = \\lbrace e_i, r_{ij}, e_j\\rbrace $ in $D$ where $e_i$ and $e_j$ are two individual entities and $r_{ij}$ is their relation. We ignore the adverbials and only consider the entity pairs and their relations as in standard RE settings. Note that we process the entire document as a whole instead of processing individual sentences separately as in previous systems. As shown in Figure 1 , our QA4IE framework consists of four key steps:", "Recognize all the candidate entities in the input document $D$ according to the knowledge base $K$ . These entities serve as the first entity $e_i$ in the relation triples $R$ .", "For each candidate entity $e_i$ , discover the potential relations/properties as $r_{ij}$ from the knowledge base $K$ .", "Given a candidate entity-relation or entity-property pair $\\lbrace e_i, r_{ij}\\rbrace $ as a query, find the corresponding entity or value $e_j$ in the input document $D$ using a QA system. The query here can be directly formed by the word sequence of $\\lbrace e_i, r_{ij}\\rbrace $ , or built from templates as in BIBREF20 .", "Since the results of step 3 are formed by free texts in the input document $D$ , we need to link the results to the knowledge base $K$ .", "This framework determines each of the three elements in relation triples step by step. Step 1 is equivalent to named entity recognition (NER), and state-of-the-art NER systems BIBREF22 , BIBREF8 can achieve over 0.91 F1-score on CoNLL'03 BIBREF23 , a well-known NER benchmark. For attribution discovery in step 2, we can take advantage of existing knowledge base ontologies such as Wikipedia Ontology to obtain a candidate relation/property list according to NER results in step 1. Besides, there is also some existing work on attribution discovery BIBREF24 , BIBREF25 and ontology construction BIBREF26 that can be used to solve the problem in step 2. The most difficult part in our framework is step 3 in which we need to find the entity (or value) $e_j$ in document $D$ according to the previous entity-relation (or entity-property) pair $\\lbrace e_i, r_{ij}\\rbrace $ . Inspired by recent success in QA and MRC BIBREF27 , BIBREF28 , BIBREF29 , we propose to solve step 3 in the setting of SQuAD BIBREF30 which is a very popular QA task. The problem setting of SQuAD is that given a document $\\tilde{D}$ and a question $q$ , output a segment of text $a$ in $\\tilde{D}$ as the answer to the question. 
In our framework, we assign the input document $D$ as $\tilde{D}$ and the entity-relation (or entity-property) pair $\lbrace e_i, r_{ij}\rbrace $ as $q$, and then we can get the answer $a$ with a QA model. Finally, in step 4, since the QA model can only produce answers formed by input free texts, we need to link the answer $a$ to an entity $e_j$ in the knowledge base $K$, and the entity $e_j$ will form the target relation triple as $\lbrace e_i, r_{ij}, e_j\rbrace $. Existing Entity Linking (EL) systems BIBREF31 , BIBREF32 directly solve this problem, especially when we have high quality QA results from step 3.", "As mentioned above, steps 1, 2 and 4 in the QA4IE framework can be solved by existing work. Therefore, in this paper, we mainly focus on step 3. According to the recent progress in QA and MRC, deep neural networks are very good at solving this kind of problem with a large-scale dataset to train the network. However, all previous IE benchmarks BIBREF18 are too small to train neural network models typically used in QA, and thus we need to build a large benchmark. Inspired by WikiReading BIBREF33 , a recent large-scale QA benchmark over Wikipedia, we find that the articles in Wikipedia together with the high quality triples in knowledge bases such as Wikidata BIBREF34 and DBpedia can form the supervision we need. Therefore, we build a large scale benchmark named QA4IE benchmark which consists of 293K Wikipedia articles and 2M golden relation triples with 636 different relation types.", "Recent success on QA and MRC is mainly attributed to advanced deep learning architectures such as attention-based and memory-augmented neural networks BIBREF35 , BIBREF36 and the availability of large-scale datasets BIBREF37 , BIBREF38 especially SQuAD. The differences between step 3 and SQuAD can be summarized as follows. First, the answer to the question in SQuAD is restricted to a continuous segment of the input text, but in QA4IE, we remove this constraint, which would otherwise reduce the number of target relation triples. Second, in existing QA and MRC benchmarks, the input documents are not very long and the questions may be complex and difficult for the model to understand, while in QA4IE, the input documents may be longer but the questions formed by entity-relation (or entity-property) pairs are much simpler. Therefore, in our model, we incorporate Pointer Networks BIBREF39 to adapt to answers formed by any words within the document in any order, as well as Self-Matching Networks BIBREF29 to enhance the ability to model longer input documents." ], [ "The contributions of this paper are as follows:", "We propose a novel IE framework named QA4IE to overcome the weaknesses of existing IE systems. As we discussed above, the problems of steps 1, 2 and 4 can be solved by existing work, and we propose to solve the problem of step 3 with QA models.", "To train a high quality neural network QA model, we build a large IE benchmark in QA style named QA4IE benchmark which consists of 293K Wikipedia articles and 2 million golden relation triples with 636 different relation types.", "To adapt QA models to the IE problem, we propose an approach that enhances existing QA models with Pointer Networks and Self-Matching Networks.", "We compare our model with IE baselines on our QA4IE benchmark and achieve a great improvement over previous baselines.", "We open source our code and benchmark for repeatable experiments and further study of IE."
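To make the four-step decomposition above concrete, the following Python sketch shows how the steps could be composed. Every component passed in (recognize_entities, candidate_relations, qa_model, link_entity) is a hypothetical placeholder for the NER, ontology lookup, QA and entity-linking modules that the text delegates to existing work, so this illustrates only the control flow, not the authors' implementation.

```python
def qa4ie_pipeline(document, kb,
                   recognize_entities,   # step 1: NER over the document
                   candidate_relations,  # step 2: relations/properties from the KB ontology
                   qa_model,             # step 3: answer extraction given (e_i, r_ij) as the query
                   link_entity):         # step 4: link the free-text answer into the KB
    """Sketch of the four-step QA4IE control flow (hypothetical components)."""
    triples = []
    for e_i in recognize_entities(document, kb):          # step 1
        for r_ij in candidate_relations(e_i, kb):         # step 2
            # the query can simply be the word sequence of (e_i, r_ij)
            query = f"{e_i} {r_ij}"
            answer = qa_model(document, query)             # step 3
            if answer is None:
                continue
            e_j = link_entity(answer, kb)                  # step 4
            if e_j is not None:
                triples.append((e_i, r_ij, e_j))
    return triples
```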
], [ "This section briefly presents the construction pipeline of QA4IE benchmark to solve the problem of step 3 as in our framework (Figure 1 ). Existing largest IE benchmark BIBREF18 is created with the help of QA-SRL annotations BIBREF19 which consists of 3.2K sentences and 10K golden extractions. Following this idea, we study recent large-scale QA and MRC datasets and find that WikiReading BIBREF33 creates a large-scale QA dataset based on Wikipedia articles and WikiData relation triples BIBREF34 . However, we observe about 11% of QA pairs with errors such as wrong answer locations or mismatch between answer string and answer words. Besides, there are over 50% of QA pairs with the answer involving words out of the input text or containing multiple answers. We consider these cases out of the problem scope of this paper and only focus on the information within the input text.", "Therefore, we choose to build the benchmark referring the implementation of WikiReading based on Wikipedia articles and golden triples from Wikidata and DBpedia BIBREF15 , BIBREF16 . Specifically, we build our QA4IE benchmark in the following steps.", "Dump and Preprocessing. We dump the English Wikipedia articles with Wikidata knowledge base and match each article with its corresponding relation triples according to its title. After cleaning data by removing low frequency tokens and special characters, we obtain over 4M articles and 18M triples with over 800 relation types.", "Clipping. We discard the triples with multiple entities (or values) for $e_j$ (account for about 6%, e.g., a book may have multiple authors). Besides, we discard the triples with any word in $e_j$ out of the corresponding article (account for about 50%). After this step, we obtain about 3.5M articles and 9M triples with 636 relation types.", "Incorporating DBpedia. Unlike WikiData, DBpedia is constructed automatically without human verification. Relations and properties in DBpedia are coarse and noisy. Thus we fix the existing 636 relation types in WikiData and build a projection from DBpedia relations to these 636 relation types. We manually find 148 relations which can be projected to a WikiData relation out of 2064 DBpedia relations. Then we gather all the DBpedia triples with the first entity is corresponding to one of the above 3.5M articles and the relation is one of the projected 148 relations. After the same clipping process as above and removing the repetitive triples, we obtain 394K additional triples in 302K existing Wikipedia articles.", "Distillation. Since our benchmark is for IE, we prefer the articles with more golden triples involved by assuming that Wikipedia articles with more annotated triples are more informative and better annotated. Therefore, we figure out the distribution of the number of golden triples in articles and decide to discard the articles with less than 6 golden triples (account for about 80%). After this step, we obtain about 200K articles and 1.4M triples with 636 relation types.", "Query and Answer Assignment. For each golden triple $\\lbrace e_i, r_{ij}, e_j\\rbrace $ , we assign the relation/property $r_{ij}$ as the query and the entity $e_j$ as the answer because the Wikipedia article and its corresponding golden triples are all about the same entity $e_i$ which is unnecessary in the queries. Besides, we find the location of each $e_j$ in the corresponding article as the answer location. 
As we discussed in Section 1, we do not restrict $e_j$ to a continuous segment in the article as required in SQuAD. Thus we first try to detect a matched span for each $e_j$ and assign this span as the answer location. Then for each of the rest $e_j$ which has no matched span, we search a matched sub-sequence in the article and assign the index sequence as the answer location. We name them span-triples and seq-triples respectively. Note that each triple will have an answer location because we have discarded the triples with unseen words in $e_j$ and if we can find multiple answer locations, all of them will be assigned as ground truths.", "Dataset Splitting. For comparing the performance on span-triples and seq-triples, we set up two different datasets named QA4IE-SPAN and QA4IE-SEQ. In QA4IE-SPAN, only articles with all span-triples are involved, while in QA4IE-SEQ, articles with seq-triples are also involved. For studying the influence of the article length as longer articles are normally more difficult to model by LSTMs, we split the articles according to the article length. We name the set of articles with lengths shorter than 400 as S, lengths between 400 and 700 as M, lengths greater than 700 as L. Therefore we obtain 6 different datasets named QA4IE-SPAN-S/M/L and QA4IE-SEQ-S/M/L. A 5/1/5 splitting of train/dev/test sets is performed. The detailed statistics of QA4IE benchmark are provided in Table 1 .", "We further compare our QA4IE benchmark with some existing IE and QA benchmarks in Table 2 . One can observe that QA4IE benchmark is much larger than previous IE and QA benchmarks except for WikiReading and Zero-Shot Benchmark. However, as we mentioned at the beginning of Section 2, WikiReading is problematic for IE settings. Besides, Zero-Shot Benchmark is a sentence-level dataset and we have described the disadvantage of ignoring information across sentences at Section 1.1. Thus to our best knowledge, QA4IE benchmark is the largest document level IE benchmark and it can be easily extended if we change our distillation strategy." ], [ "In this section, we describe our Question Answering model for IE. The model overview is illustrated in Figure 2 .", "The input of our model are the words in the input text $x[1], ... , x[n]$ and query $q[1], ... , q[n]$ . We concatenate pre-trained word embeddings from GloVe BIBREF40 and character embeddings trained by CharCNN BIBREF41 to represent input words. The $2d$ -dimension embedding vectors of input text $x_1, ... , x_n$ and query $q_1, ... , q_n$ are then fed into a Highway Layer BIBREF42 to improve the capability of word embeddings and character embeddings as", "$$\\begin{split}\ng_t &= {\\rm sigmoid}(W_gx_t+b_g) \\\\\ns_t &= {\\rm relu } (W_xx_t+b_x) \\\\\nu_t &= g_t \\odot s_t + (1 - g_t) \\odot x_t~.\n\\end{split}$$ (Eq. 18) ", "Here $W_g, W_x \\in \\mathbb {R}^{d \\times 2d}$ and $b_g, b_x \\in \\mathbb {R}^d$ are trainable weights, $u_t$ is a $d$ -dimension vector. The function relu is the rectified linear units BIBREF43 and $\\odot $ is element-wise multiply over two vectors. The same Highway Layer is applied to $q_t$ and produces $v_t$ .", "Next, $u_t$ and $v_t$ are fed into a Bi-Directional Long Short-Term Memory Network (BiLSTM) BIBREF44 respectively in order to model the temporal interactions between sequence words:", "$$\\begin{split}\nu_t^{^{\\prime }} &= {\\rm BiLSTM}(u^{^{\\prime }}_{t-1},u_t) \\\\\nv_t^{^{\\prime }} &= {\\rm BiLSTM}(v^{^{\\prime }}_{t-1},v_t)~.\n\\end{split}$$ (Eq. 
19) ", "Here we obtain $\\mathbf {U} = [u_1^{^{\\prime }}, ... , u_n^{^{\\prime }}] \\in \\mathbb {R}^{2d \\times n}$ and $\\mathbf {V} = [v_1^{^{\\prime }}, ... , v_m^{^{\\prime }}] \\in \\mathbb {R}^{2d \\times m}$ . Then we feed $\\mathbf {U}$ and $\\mathbf {V}$ into the attention flow layer BIBREF27 to model the interactions between the input text and query. We obtain the $8d$ -dimension query-aware context embedding vectors $h_1, ... , h_n$ as the result.", "After modeling interactions between the input text and queries, we need to enhance the interactions within the input text words themselves especially for the longer text in IE settings. Therefore, we introduce Self-Matching Layer BIBREF29 in our model as", "$$\\begin{split}\no_t &= {\\rm BiLSTM}(o_{t-1}, [h_t, c_t]) \\\\\ns_j^t &= w^T {\\rm tanh}(W_hh_j+\\tilde{W_h}h_t)\\\\\n\\alpha _i^t &= {\\rm exp}(s_i^t)/\\Sigma _{j=1}^n{\\rm exp}(s_j^t)\\\\\nc_t &= \\Sigma _{i=1}^n\\alpha _i^th_i ~.\n\\end{split}$$ (Eq. 20) ", "Here $W_h, \\tilde{W_h} \\in \\mathbb {R}^{d \\times 8d}$ and $w \\in \\mathbb {R}^d$ are trainable weights, $[h, c]$ is vector concatenation across row. Besides, $\\alpha _i^t$ is the attention weight from the $t^{th}$ word to the $i^{th}$ word and $c_t$ is the enhanced contextual embeddings over the $t^{th}$ word in the input text. We obtain the $2d$ -dimension query-aware and self-enhanced embeddings of input text after this step. Finally we feed the embeddings $\\mathbf {O} = [o_1, ... , o_n]$ into a Pointer Network BIBREF39 to decode the answer sequence as", "$$\\begin{split}\np_t &= {\\rm LSTM}(p_{t-1}, c_t) \\\\\ns_j^t &= w^T {\\rm tanh}(W_oo_j+W_pp_{t-1})\\\\\n\\beta _i^t &= {\\rm exp}(s_i^t)/\\Sigma _{j=1}^n{\\rm exp}(s_j^t)\\\\\nc_t &= \\Sigma _{i=1}^n\\beta _i^to_i~.\n\\end{split}$$ (Eq. 21) ", "The initial state of LSTM $p_0$ is $o_n$ . We can then model the probability of the $t^{th}$ token $a^t$ by", "$$& {\\rm P}(a^t | a^1, ... , a^{t-1}, \\mathbf {O}) = (\\beta _1^t, \\beta _2^t, ... , \\beta _n^t, \\beta _{n+1}^t) \\nonumber \\\\\n& {\\rm P}(a^t_i) \\triangleq {\\rm P}(a^t = i|a^1, ... , a^{t-1}, \\mathbf {O}) = \\beta _i^t ~.$$ (Eq. 22) ", "Here $\\beta _{n+1}^t$ denotes the probability of generating the “ ${\\rm eos}$ ” symbol since the decoder also needs to determine when to stop. Therefore, the probability of generating the answer sequence $\\textbf {a}$ is as follows", "$${\\rm P}(\\textbf {a}|\\mathbf {O}) = \\prod _t {\\rm P}(a^t | a^1, ... , a^{t-1}, \\mathbf {O})~.$$ (Eq. 23) ", "Given the supervision of answer sequence $\\mathbf {y} = (y_1, ... , y_L)$ , we can write down the loss function of our model as", "$${\\rm L(\\theta )} = -\\sum _{t=1}^L \\log {\\rm P} (a^t_{y_t})~.$$ (Eq. 24) ", "To train our model, we minimize the loss function ${\\rm L(\\theta )}$ based on training examples." ], [ "We build our QA4IE benchmark following the steps described in Section 2. In experiments, we train and evaluate our QA models on the corresponding train and test sets while the hyper-parameters are tuned on dev sets. In order to make our experiments more informative, we also evaluate our model on SQuAD dataset BIBREF30 .", "The preprocessing of our QA4IE benchmark and SQuAD dataset are all performed with the open source code from BIBREF27 . We use 100 1D filters with width 5 to construct the CharCNN in our char embedding layer. We set the hidden size $d=100$ for all the hidden states in our model. The optimizer we use is the AdaDelta optimizer BIBREF45 with an initial learning rate of 2. 
A dropout BIBREF46 rate of 0.2 is applied in all the CNN, LSTM and linear transformation layers in our model during training. For SQuAD dataset and our small sized QA4IE-SPAN/SEQ-S datasets, we set the max length of input texts as 400 and a mini-batch size of 20. For middle sized (and large sized) QA4IE datasets, we set the max length as 700 (800) and batch size as 7 (5). We introduce an early stopping in training process after 10 epochs. Our model is trained on a GTX 1080 Ti GPU and it takes about 14 hours on small sized QA4IE datasets. We implement our model with TensorFlow BIBREF47 and optimize the computational expensive LSTM layers with LSTMBlockFusedCell." ], [ "We first perform experiments in QA settings to evaluate our QA model on both SQuAD dataset and QA4IE benchmark. Since our goal is to solve IE, not QA, the motivation of this part of experiments is to evaluate the performance of our model and make a comparison between QA4IE benchmark and existing datasets. Two metrics are introduced in the SQuAD dataset: Exact Match (EM) and F1-score. EM measures the percentage that the model prediction matches one of the ground truth answers exactly while F1-score measures the overlap between the prediction and ground truth answers. Our QA4IE benchmark also adopts these two metrics.", "Table 3 presents the results of our QA model on SQuAD dataset. Our model outperforms the previous sequence model but is not competitive with span models because it is designed to produce sequence answers in IE settings while baseline span models are designed to produce span answers for SQuAD dataset.", "The comparison between our QA model and two baseline QA models on our QA4IE benchmark is shown in Table 4 . For training of both baseline QA models, we use the same configuration of max input length as our model and tune the rest of hyper-parameters on dev sets. Our model outperforms these two baselines on all 6 datasets. The performance is good on S and M datasets but worse for longer documents. As we mentioned in Section 4.1, we set the max input length as 800 and ignore the rest words on L datasets. Actually, there are 11% of queries with no answers in the first 800 words in our benchmark. Processing longer documents is a tough problem BIBREF51 and we leave this to our future work.", "To study the improvement of each component in our model, we present model ablation study results in Table 5 . We do not involve Attention Flow Layer and Pointer Network Decoder as they cannot be replaced by other architectures with the model still working. We can observe that the first three components can effectively improve the performance but Self Matching Layer makes the training more computationally expensive by 40%. Besides, the LSTMBlockFusedCell works effectively and accelerates the training process by 6 times without influencing the performance." ], [ "In this subsection, we put our QA model in the entire pipeline of our QA4IE framework (Figure 1 ) and evaluate the framework in IE settings. Existing IE systems are all free-text based Open IE systems, so we need to manually evaluate the free-text based results in order to compare our model with the baselines. Therefore, we conduct experiments on a small dataset, the dev set of QA4IE-SPAN-S which consists of 4393 documents and 28501 ground truth queries.", "Our QA4IE benchmark is based on Wikipedia articles and all the ground truth triples of each article have the same first entity (i.e. the title of the article). 
Thus, we can directly use the title of the article as the first entity of each triple without performing step 1 (entity recognition) in our framework. Besides, all the ground truth triples in our benchmark are from the knowledge base, where they are disambiguated and aggregated in the first place, and therefore step 4 (entity linking) is very simple and we do not evaluate it in our experiments.", "A major difference between QA settings and IE settings is that in QA settings, each query corresponds to an answer, while in the QA4IE framework, the QA model takes a candidate entity-relation (or entity-property) pair as the query and needs to tell whether an answer to the query can be found in the input text. We can consider the IE settings here as performing step 2 and then step 3 in the QA4IE framework.", "In step 2, we need to build a candidate query list for each article in the dataset. Instead of incorporating an existing ontology or knowledge base, we use a simple but effective way to build the candidate query list of an article. Since we have a ground truth query list with labeled answers for each article, we can add all the neighboring queries of each ground truth query into the query list. Neighboring queries are defined as two queries that co-occur in the same ground truth query list of any article in the dataset. We transform the dev set of QA4IE-SPAN-S above by adding neighboring queries into the query list. After this step, the number of queries grows to 426336, and only 28501 of them are ground truth queries labeled with an answer.", "In step 3, we require our QA model to output a confidence score along with the answer to each candidate query. Our QA model produces no answer to a query when the confidence score is less than a threshold $\delta $ or the output is an “ ${\rm eos}$ ” symbol. For the answers with a confidence score $\ge \delta $ , we evaluate them by the EM measurement with ground truth answers and count the true positive samples in order to calculate the precision and recall under the threshold $\delta $ . Specifically, we try two confidence scores calculated as follows:", "$$\begin{split}\n{\rm Score_{mul}} = \prod _{t=1}^L{\rm P}(a^t_{i_t}),~~~{\rm Score_{avg}} = \sum _{t=1}^L{\rm P}(a^t_{i_t}) / L ~,\n\end{split}$$ (Eq. 34) ", "where $(a^1_{i_1}, ... , a^L_{i_L})$ is the answer sequence and ${\rm P}(a^t_i)$ is defined in Eq. ( 22 ). ${\rm Score_{mul}}$ is equivalent to the training loss in Eq. ( 24 ) and ${\rm Score_{avg}}$ takes the answer length into account.", "The precision-recall curves of our framework based on the two confidence scores are plotted in Figure 3 . We can observe that the EM rate we achieve in QA settings is actually the best recall (91.87) in this curve (by setting $\delta = 0$ ). The best F1-scores of the two curves are 29.97 (precision $= 21.61$ , recall $= 48.85$ , $\delta = 0.91$ ) for ${\rm Score_{mul}}$ and 31.05 (precision $= 23.93$ , recall $= 44.21$ , $\delta = 0.97$ ) for ${\rm Score_{avg}}$ . ${\rm Score_{avg}}$ is better than ${\rm Score_{mul}}$ , which suggests that the answer length should be taken into account.", "We then evaluate existing IE systems on the dev set of QA4IE-SPAN-S and empirically compare them with our framework. Note that while BIBREF20 is closely related to our work, we cannot fairly compare our framework with BIBREF20 because their systems operate at the sentence level and require additional negative samples for training.
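Before turning to the baseline comparison, here is a small Python sketch of the scoring and thresholding just described (Eq. 34): computing ${\rm Score_{mul}}$ and ${\rm Score_{avg}}$ from the per-step probabilities of a decoded answer and applying a threshold $\delta$ to obtain one precision/recall point. The inputs are hypothetical stand-ins; the exact-match check and gold counts would come from the evaluation code.

```python
import math

def confidence_scores(step_probs):
    """step_probs: per-step probabilities P(a^t) of the decoded answer tokens."""
    score_mul = math.prod(step_probs)              # product over steps (Eq. 34, left)
    score_avg = sum(step_probs) / len(step_probs)  # length-normalized average (Eq. 34, right)
    return score_mul, score_avg

def precision_recall_at(predictions, num_gold, delta, use_avg=True):
    """predictions: list of (step_probs, is_exact_match) for every candidate query.
    Answers whose confidence falls below delta are treated as 'no answer'."""
    tp = fp = 0
    for step_probs, is_exact_match in predictions:
        score_mul, score_avg = confidence_scores(step_probs)
        score = score_avg if use_avg else score_mul
        if score < delta:
            continue                               # model outputs no answer
        if is_exact_match:
            tp += 1
        else:
            fp += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / num_gold if num_gold else 0.0
    return precision, recall

# toy usage: two candidate queries, one gold triple in total
preds = [([0.9, 0.8, 0.95], True), ([0.4, 0.3], False)]
print(precision_recall_at(preds, num_gold=1, delta=0.5))   # -> (1.0, 1.0)
```

Sweeping `delta` over a grid and recording the resulting (precision, recall) pairs reproduces the kind of precision-recall curve discussed above.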
BIBREF21 is also related to our work, but their dataset and code have not been published yet. Therefore, we choose to evaluate three popular Open IE systems, Open IE 4 BIBREF6 , Stanford IE BIBREF4 and ClauseIE BIBREF5 .", "Since Open IE systems take a single sentence as input and output a set of free-text based triples, we need to find the sentences involving ground truth answers and feed the sentences into the Open IE systems. In the dev set of QA4IE-SPAN-S, there are 28501 queries with 44449 answer locations labeled in the 4393 documents. By feeding the 44449 sentences into the Open IE systems, we obtain a set of extracted triples from each sentence. We calculate the number of true positive samples by first filtering out triples with less than 20% words overlapping with ground truth answers and then asking two human annotators to verify the remaining triples independently. Since in the experiments, our framework is given the ground-truth first entity of each triple (the title of the corresponding Wikipedia article) while the baseline systems do not have this information, we ask our human annotators to ignore the mistakes on the first entities when evaluating triples produced by the baseline systems to offset this disadvantage. For example, the 3rd case of ClauseIE and the 4th case of Open IE 4 in Table 7 are all labeled as correct by our annotators even though the first entities are pronouns. The two human annotators reached an agreement on 191 out of 195 randomly selected cases.", "The evaluation results of the three Open IE baselines are shown in Table 6 . We can observe that most of the extracted triples are not related to ground truths and the precision and recall are all very low (around 1%) although we have already helped the baseline systems locate the sentences containing ground truth answers." ], [ "In this subsection, we perform case studies of IE settings in Table 7 to better understand the models and benchmarks. The baseline Open IE systems produce triples by analyzing the subjects, predicates and objects in input sentences, and thus our annotators lower the bar of accepting triples. However, the analysis on semantic roles and parsing trees cannot work very well on complicated input sentences like the 2nd and the 3rd cases. Besides, the baseline systems can hardly solve the last two cases which require inference on input sentences.", "Our framework works very well on this dataset with the QA measurements EM $= 91.87$ and F1 $= 93.53$ and the IE measurements can be found in Figure 3 . Most of the error cases are the fourth case which is acceptable by human annotators. Note that our framework takes the whole document as the input while the baseline systems take the individual sentence as the input, which means the experiment setting is much more difficult for our framework." ], [ "Finally, we perform a human evaluation on our QA4IE benchmark to verify the reliability of former experiments. 
The evaluation metrics are as follows:", "Triple Accuracy is to check whether each ground truth triple is accurate (one cannot find conflicts between the ground truth triple and the corresponding article) because the ground truth triples from WikiData and DBpedia may be incorrect or incomplete.", "Contextual Consistency is to check whether the context of each answer location is consistent with the corresponding ground truth triple (one can infer from the context to obtain the ground truth triple) because we keep all matched answer locations as ground truths but some of them may be irrelevant with the corresponding triple.", "Triple Consistency is to check whether there is at least one answer location that is contextually consistent for each ground truth triple. It can be calculated by counting the results of Contextual Consistency.", "We randomly sample 25 articles respectively from the 6 datasets (in total of 1002 ground truth triples with 2691 labeled answer locations) and let two human annotators label the Triple Accuracy for each ground truth triple and the Contextual Consistency for each answer location. The two human annotators reached an agreement on 131 of 132 randomly selected Triple Accuracy cases and on 229 of 234 randomly selected Contextual Consistency cases. The human evaluation results are shown in Table 8 . We can find that the Triple Accuracy and the Triple Consistency is acceptable while the Contextual Consistency still needs to be improved. The Contextual Consistency problem is a weakness of distant supervision, and we leave this to our future work." ], [ "In this paper, we propose a novel QA based IE framework named QA4IE to address the weaknesses of previous IE solutions. In our framework (Figure 1 ), we divide the complicated IE problem into four steps and show that the step 1, 2 and 4 can be solved well enough by existing work. For the most difficult step 3, we transform it to a QA problem and solve it with our QA model. To train this QA model, we construct a large IE benchmark named QA4IE benchmark that consists of 293K documents and 2 million golden relation triples with 636 different relation types. To our best knowledge, our QA4IE benchmark is the largest document level IE benchmark. We compare our system with existing best IE baseline systems on our QA4IE benchmark and the results show that our system achieves a great improvement over baseline systems.", "For the future work, we plan to solve the triples with multiple entities as the second entity, which is excluded from problem scope in this paper. Besides, processing longer documents and improving the quality of our benchmark are all challenging problems as we mentioned previously. We hope this work can provide new thoughts for the area of information extraction." ], [ "W. Zhang is the corresponding author of this paper. The work done by SJTU is sponsored by National Natural Science Foundation of China (61632017, 61702327, 61772333) and Shanghai Sailing Program (17YF1428200)." ] ] }
{ "question": [ "What QA models were used?", "Can this approach model n-ary relations?", "Was this benchmark automatically created from an existing dataset?" ], "question_id": [ "fd8b6723ad5f52770bec9009e45f860f4a8c4321", "4ce3a6632e7d86d29a42bd1fcf325114b3c11d46", "e7c0cdc05b48889905cc03215d1993ab94fb6eaa" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "somewhat", "somewhat", "somewhat" ], "search_query": [ "information extraction", "information extraction", "information extraction" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "A pointer network decodes the answer from a bidirectional LSTM with attention flow layer and self-matching layer, whose inputs come from word and character embeddings of the query and input text fed through a highway layer.", "evidence": [ "The input of our model are the words in the input text $x[1], ... , x[n]$ and query $q[1], ... , q[n]$ . We concatenate pre-trained word embeddings from GloVe BIBREF40 and character embeddings trained by CharCNN BIBREF41 to represent input words. The $2d$ -dimension embedding vectors of input text $x_1, ... , x_n$ and query $q_1, ... , q_n$ are then fed into a Highway Layer BIBREF42 to improve the capability of word embeddings and character embeddings as", "$$\\begin{split} g_t &= {\\rm sigmoid}(W_gx_t+b_g) \\\\ s_t &= {\\rm relu } (W_xx_t+b_x) \\\\ u_t &= g_t \\odot s_t + (1 - g_t) \\odot x_t~. \\end{split}$$ (Eq. 18)", "Here $W_g, W_x \\in \\mathbb {R}^{d \\times 2d}$ and $b_g, b_x \\in \\mathbb {R}^d$ are trainable weights, $u_t$ is a $d$ -dimension vector. The function relu is the rectified linear units BIBREF43 and $\\odot $ is element-wise multiply over two vectors. The same Highway Layer is applied to $q_t$ and produces $v_t$ .", "Next, $u_t$ and $v_t$ are fed into a Bi-Directional Long Short-Term Memory Network (BiLSTM) BIBREF44 respectively in order to model the temporal interactions between sequence words:", "Here we obtain $\\mathbf {U} = [u_1^{^{\\prime }}, ... , u_n^{^{\\prime }}] \\in \\mathbb {R}^{2d \\times n}$ and $\\mathbf {V} = [v_1^{^{\\prime }}, ... , v_m^{^{\\prime }}] \\in \\mathbb {R}^{2d \\times m}$ . Then we feed $\\mathbf {U}$ and $\\mathbf {V}$ into the attention flow layer BIBREF27 to model the interactions between the input text and query. We obtain the $8d$ -dimension query-aware context embedding vectors $h_1, ... , h_n$ as the result.", "After modeling interactions between the input text and queries, we need to enhance the interactions within the input text words themselves especially for the longer text in IE settings. Therefore, we introduce Self-Matching Layer BIBREF29 in our model as", "$$\\begin{split} o_t &= {\\rm BiLSTM}(o_{t-1}, [h_t, c_t]) \\\\ s_j^t &= w^T {\\rm tanh}(W_hh_j+\\tilde{W_h}h_t)\\\\ \\alpha _i^t &= {\\rm exp}(s_i^t)/\\Sigma _{j=1}^n{\\rm exp}(s_j^t)\\\\ c_t &= \\Sigma _{i=1}^n\\alpha _i^th_i ~. \\end{split}$$ (Eq. 20)", "Here $W_h, \\tilde{W_h} \\in \\mathbb {R}^{d \\times 8d}$ and $w \\in \\mathbb {R}^d$ are trainable weights, $[h, c]$ is vector concatenation across row. Besides, $\\alpha _i^t$ is the attention weight from the $t^{th}$ word to the $i^{th}$ word and $c_t$ is the enhanced contextual embeddings over the $t^{th}$ word in the input text. 
We obtain the $2d$ -dimension query-aware and self-enhanced embeddings of input text after this step. Finally we feed the embeddings $\\mathbf {O} = [o_1, ... , o_n]$ into a Pointer Network BIBREF39 to decode the answer sequence as", "$$\\begin{split} p_t &= {\\rm LSTM}(p_{t-1}, c_t) \\\\ s_j^t &= w^T {\\rm tanh}(W_oo_j+W_pp_{t-1})\\\\ \\beta _i^t &= {\\rm exp}(s_i^t)/\\Sigma _{j=1}^n{\\rm exp}(s_j^t)\\\\ c_t &= \\Sigma _{i=1}^n\\beta _i^to_i~. \\end{split}$$ (Eq. 21)", "Here $\\beta _{n+1}^t$ denotes the probability of generating the “ ${\\rm eos}$ ” symbol since the decoder also needs to determine when to stop. Therefore, the probability of generating the answer sequence $\\textbf {a}$ is as follows", "$${\\rm P}(\\textbf {a}|\\mathbf {O}) = \\prod _t {\\rm P}(a^t | a^1, ... , a^{t-1}, \\mathbf {O})~.$$ (Eq. 23)" ], "highlighted_evidence": [ "The input of our model are the words in the input text $x[1], ... , x[n]$ and query $q[1], ... , q[n]$ . We concatenate pre-trained word embeddings from GloVe BIBREF40 and character embeddings trained by CharCNN BIBREF41 to represent input words. The $2d$ -dimension embedding vectors of input text $x_1, ... , x_n$ and query $q_1, ... , q_n$ are then fed into a Highway Layer BIBREF42 to improve the capability of word embeddings and character embeddings as\n\n$$\\begin{split} g_t &= {\\rm sigmoid}(W_gx_t+b_g) \\\\ s_t &= {\\rm relu } (W_xx_t+b_x) \\\\ u_t &= g_t \\odot s_t + (1 - g_t) \\odot x_t~. \\end{split}$$ (Eq. 18)", "The same Highway Layer is applied to $q_t$ and produces $v_t$ .", "Next, $u_t$ and $v_t$ are fed into a Bi-Directional Long Short-Term Memory Network (BiLSTM) BIBREF44 respectively in order to model the temporal interactions between sequence words:", "Then we feed $\\mathbf {U}$ and $\\mathbf {V}$ into the attention flow layer BIBREF27 to model the interactions between the input text and query.", "Therefore, we introduce Self-Matching Layer BIBREF29 in our model as\n\n$$\\begin{split} o_t &= {\\rm BiLSTM}(o_{t-1}, [h_t, c_t]) \\\\ s_j^t &= w^T {\\rm tanh}(W_hh_j+\\tilde{W_h}h_t)\\\\ \\alpha _i^t &= {\\rm exp}(s_i^t)/\\Sigma _{j=1}^n{\\rm exp}(s_j^t)\\\\ c_t &= \\Sigma _{i=1}^n\\alpha _i^th_i ~. \\end{split}$$ (Eq. 20)", "Finally we feed the embeddings $\\mathbf {O} = [o_1, ... , o_n]$ into a Pointer Network BIBREF39 to decode the answer sequence as\n\n$$\\begin{split} p_t &= {\\rm LSTM}(p_{t-1}, c_t) \\\\ s_j^t &= w^T {\\rm tanh}(W_oo_j+W_pp_{t-1})\\\\ \\beta _i^t &= {\\rm exp}(s_i^t)/\\Sigma _{j=1}^n{\\rm exp}(s_j^t)\\\\ c_t &= \\Sigma _{i=1}^n\\beta _i^to_i~. \\end{split}$$ (Eq. 21)", "Therefore, the probability of generating the answer sequence $\\textbf {a}$ is as follows\n\n$${\\rm P}(\\textbf {a}|\\mathbf {O}) = \\prod _t {\\rm P}(a^t | a^1, ... , a^{t-1}, \\mathbf {O})~.$$ (Eq. 23)" ] } ], "annotation_id": [ "5370c482a9e9c424d28b8ecadac5f0bad4cc0b9e" ], "worker_id": [ "043654eefd60242ac8da08ddc1d4b8d73f86f653" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "For the future work, we plan to solve the triples with multiple entities as the second entity, which is excluded from problem scope in this paper. Besides, processing longer documents and improving the quality of our benchmark are all challenging problems as we mentioned previously. 
We hope this work can provide new thoughts for the area of information extraction.", "The input of QA4IE is a document $D$ with an existing knowledge base $K$ and the output is a set of relation triples $R = \\lbrace e_i, r_{ij}, e_j\\rbrace $ in $D$ where $e_i$ and $e_j$ are two individual entities and $r_{ij}$ is their relation. We ignore the adverbials and only consider the entity pairs and their relations as in standard RE settings. Note that we process the entire document as a whole instead of processing individual sentences separately as in previous systems. As shown in Figure 1 , our QA4IE framework consists of four key steps:" ], "highlighted_evidence": [ "For the future work, we plan to solve the triples with multiple entities as the second entity, which is excluded from problem scope in this paper.", "The input of QA4IE is a document $D$ with an existing knowledge base $K$ and the output is a set of relation triples $R = \\lbrace e_i, r_{ij}, e_j\\rbrace $ in $D$ where $e_i$ and $e_j$ are two individual entities and $r_{ij}$ is their relation." ] } ], "annotation_id": [ "b38449c5de925046121e3e09d3e32348e23e9a99" ], "worker_id": [ "043654eefd60242ac8da08ddc1d4b8d73f86f653" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "As mentioned above, step 1, 2 and 4 in the QA4IE framework can be solved by existing work. Therefore, in this paper, we mainly focus on step 3. According to the recent progress in QA and MRC, deep neural networks are very good at solving this kind of problem with a large-scale dataset to train the network. However, all previous IE benchmarks BIBREF18 are too small to train neural network models typically used in QA, and thus we need to build a large benchmark. Inspired by WikiReading BIBREF33 , a recent large-scale QA benchmark over Wikipedia, we find that the articles in Wikipedia together with the high quality triples in knowledge bases such as Wikidata BIBREF34 and DBpedia can form the supervision we need. Therefore, we build a large scale benchmark named QA4IE benchmark which consists of 293K Wikipedia articles and 2M golden relation triples with 636 different relation types.", "Incorporating DBpedia. Unlike WikiData, DBpedia is constructed automatically without human verification. Relations and properties in DBpedia are coarse and noisy. Thus we fix the existing 636 relation types in WikiData and build a projection from DBpedia relations to these 636 relation types. We manually find 148 relations which can be projected to a WikiData relation out of 2064 DBpedia relations. Then we gather all the DBpedia triples with the first entity is corresponding to one of the above 3.5M articles and the relation is one of the projected 148 relations. After the same clipping process as above and removing the repetitive triples, we obtain 394K additional triples in 302K existing Wikipedia articles." ], "highlighted_evidence": [ "However, all previous IE benchmarks BIBREF18 are too small to train neural network models typically used in QA, and thus we need to build a large benchmark.", "Therefore, we build a large scale benchmark named QA4IE benchmark which consists of 293K Wikipedia articles and 2M golden relation triples with 636 different relation types.", "We manually find 148 relations which can be projected to a WikiData relation out of 2064 DBpedia relations." ] } ], "annotation_id": [ "1a703b3a71caca8e01e48af84574b49a0a704560" ], "worker_id": [ "043654eefd60242ac8da08ddc1d4b8d73f86f653" ] } ] }
{ "caption": [ "Fig. 1. An overview of our QA4IE Framework.", "Table 1. Detailed Statistics of QA4IE Benchmark.", "Table 2. Comparison between existing IE benchmarks and QA benchmarks. The first two are IE benchmarks and the rest four are QA benchmarks.", "Fig. 2. An overview of our QA model.", "Table 3. Comparison of QA models on SQuAD datasets. We only include the single model results on the dev set from published papers.", "Table 4. Comparison of QA models on 6 datasets of our QA4IE benchmark. The BiDAF model cannot work on our SEQ datasets thus the results are N/A.", "Fig. 3. Precision-recall curves with two confidence scores on the dev set of QA4IE-SPAN-S.", "Table 6. Results of three Open IE baselines on the dev set of QA4IE-SPAN-S.", "Table 7. Case study of three Open IE baselines and our framework on dev set of QA4IE-SPAN-S, the results of baselines are judged by two human annotators while the results of our framework are measured by Exact Match with ground truth. The triples in red indicate the wrong cases.", "Table 8. Human evaluation on QA4IE benchmark." ], "file": [ "3-Figure1-1.png", "6-Table1-1.png", "6-Table2-1.png", "7-Figure2-1.png", "9-Table3-1.png", "10-Table4-1.png", "11-Figure3-1.png", "12-Table6-1.png", "13-Table7-1.png", "14-Table8-1.png" ] }
2004.02083
A Resource for Studying Chatino Verbal Morphology
We present the first resource focusing on the verbal inflectional morphology of San Juan Quiahije Chatino, a tonal Mesoamerican language spoken in Mexico. We provide a collection of complete inflection tables for 198 lemmata, with morphological tags based on the UniMorph schema. We also provide baseline results on three core NLP tasks: morphological analysis, lemmatization, and morphological inflection.
{ "section_name": [ "Introduction", "The Chatino Language", "The Chatino Language ::: Typology and Writing System", "The Chatino Language ::: Verb Morphology", "The Resource", "Baseline Results ::: Inflectional realization", "Baseline Results ::: Morphological Analysis", "Baseline Results ::: Lemmatization", "Related Work", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "The recent years have seen unprecedented forward steps for Natural Language Processing (NLP) over almost every NLP subtask, relying on the advent of large data collections that can be leveraged to train deep neural networks. However, this progress has solely been observed in languages with significant data resources, while low-resource languages are left behind.", "The situation for endangered languages is usually even worse, as the focus of the scientific community mostly relies in language documentation. The typical endangered language documentation process typically includes the creation of language resources in the form of word lists, audio and video recordings, notes, or grammar fragments, with the created resources then stored into large online linguistics archives. This process is often hindered by the so-called Transcription Bottleneck, but recent advances BIBREF0, BIBREF1 provide promising directions towards a solution for this issue.", "However, language documentation and linguistic description, although extremely important itself, does not meaningfully contribute to language conservation, which aims to ensure that the language stays in use. We believe that a major avenue towards continual language use is by further creating language technologies for endangered languages, essentially elevating them to the same level as high-resource, economically or politically stronger languages.", "The majority of the world's languages are categorized as synthetic, meaning that they have rich morphology, be it fusional, agglutinative, polysynthetic, or a mixture thereof. As Natural Language Processing (NLP) keeps expanding its frontiers to encompass more and more languages, modeling of the grammatical functions that guide language generation is of utmost importance. It follows, then, that the next crucial step for expanding NLP research on endangered languages is creating benchmarks for standard NLP tasks in such languages.", "With this work we take a small first step towards this direction. We present a resource that allows for benchmarking two NLP tasks in San Juan Quiahije Chatino, an endangered language spoken in southern Mexico: morphological analysis and morphological inflection, with a focus on the verb morphology of the language.", "We first briefly discuss the Chatino language and the intricacies of its verb morphology (§SECREF2), then describe the resource (§SECREF3), and finally present baseline results on both the morphological analysis and the inflection tasks using state-of-the-art neural models (§SECREF4). We make our resource publicly available online." ], [ "Chatino is a group of languages spoken in Oaxaca, Mexico. Together with the Zapotec language group, the Chatino languages form the Zapotecan branch of the Otomanguean language family. There are three main Chatino languages: Zenzontepec Chatino (ZEN, ISO 639-2 code czn), Tataltepec Chatino (TAT, cta), and Eastern Chatino (ISO 639-2 ctp, cya, ctz, and cly) (E.Cruz 2011 and Campbell 2011). San Juan Quiahije Chatino (SJQ), the language of the focus of this study, belongs to Eastern Chatino, and is used by about 3000 speakers." 
], [ "Eastern Chatino languages , including SJQ Chatino, are intensively tonal BIBREF2, BIBREF3. Tones mark both lexical and grammatical distinctions in Eastern Chatino languages.", "In SJQ Chatino, there are eleven tones. Three different systems for representing tone distinctions are employed in the literature: the S-H-M-L system of BIBREF2; the numeral system of BIBREF4; and the alphabetic system of BIBREF3. The correspondences among these three systems are given in Table . For present purposes, we will use numeral representations of the second sort. The number 1 represents a high pitch, 4 represents a low pitch, and double digits represent contour tones." ], [ "SJQ Chatino verb inflection distinguishes four aspect/mood categories: completive (`I did'), progressive (`I am doing'), habitual (`I habitually do') and potential (`I might do'). In each of these categories, verbs inflect for three persons (first, second, third) and two numbers (singular, plural) and distinguish inclusive and exclusive categories of the first person plural (`we including you' vs `we excluding you'). Verbs can be classified into dozens of different conjugation classes. Each conjugation class involves its own tone pattern; each tone pattern is based on a series of three person/number (PN) triplets. A PN triplet [X, Y, Z] consists of three tones: tone X is employed in the third person singular as well as in all plural forms; tone Y is employed in the second person singular, and tone Z, in the third person singular. Thus, a verb's membership in a particular conjugation class entails the assignment of one tone triplet to completive forms, another to progressive forms, and a third to habitual and potential forms. The paradigm of the verb lyu1 `fall' in Table illustrates: the conjugation class to which this verb belongs entails the assignment of the triplet [1, 42, 20] to the completive, [1, 42, 32] to the progressive, and [20, 42, 32] to the habitual and potential. Verbs in other conjugation classes exhibit other triplet series." ], [ "We provide a hand-curated collection of complete inflection tables for 198 lemmata. The morphological tags follow the guidelines of the UniMorph schema BIBREF6, BIBREF7, in order to allow for the potential of cross-lingual transfer learning, and they are tagged with respect to:", "Person: first (1), second (2), and third (3)", "Number: singular (SG) ad plural (PL)", "Inclusivity (only applicable to first person plural verbs: inclusive (INCL) and exclusive (EXCL)", "Aspect/mood: completive (CPL), progressive (PROG), potential (POT), and habitual (HAB).", "Two examples of complete inflection tables for the verbs ndyu2 `fell from above' and lyu1 `fall' are shown in Table . Note how the first verb has the same PN triplet for all four aspect/mood categories, while the second paradigm is more representative in that it involves three different triplets (one for the completive, another for the progressive, and another for the habitual/potential). This variety is at the core of why the SJQ verb morphology is particularly interesting, and a challenging testcase for modern NLP systems.", "In total, we end up with 4716 groupings (triplets) of a lemma, a tag-set, and a form; we split these groupings randomly into a training set (3774 groupings), a development set (471 groupings), and test set (471 groupings). Basic statistics of the corpus are outlined in Table 1 . 
Compared to all the other languages from the UniMorph project, this puts SJQ Chatino in the low- to mid-resource category, but nonetheless it is more than enough for benchmarking purposes." ], [ "Inflectional realization defines the inflected forms of a lexeme/lemma. As a computational task, often referred to as simply “morphological inflection,\" inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form. For example, the inflectional realization of SJQ Chatino verb forms entails a mapping of the pairing of the lemma lyu1 `fall' with the tag-set 1;SG;PROG to the word form nlyon32.", "Morphological inflection has been thoroughly studied in monolingual high resource settings, especially through the recent SIGMORPHON challenges BIBREF8, BIBREF9, BIBREF10, with the latest iteration focusing more on low-resource settings, utilizing cross-lingual transfer BIBREF11. We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.", "Inflection results are outlined in Table . In the `standard' setting we simply train on the pre-defined training set, achieving an exact-match accuracy of 60% over the test set. Interestingly, the data augmentation approach of BIBREF12 that hallucinates new training paradigms based on character level alignments does not yield significant improvements in accuracy (only a 2 percentage point increase, compared with increases of more than 15 percentage points in other languages). These results indicate that automatic morphological inflection for low-resource tonal languages like SJQ Chatino poses a particularly challenging setting, which perhaps requires explicit handling of tone information by the model." ], [ "Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose." ], [ "Lemmatization is the task of retrieving the underlying lemma from which an inflected form was derived. Although in some languages the lemma is distinct from all forms, in SJQ Chatino the lemma is defined as the completive third-person singular form. As a computational task, lemmatization entails producing the lemma given an inflected form (and possibly, given a set of morphological tags describing the input form). Popular approaches tackle it as a character-level edit sequence generation task BIBREF15, or as a character-level sequence-to-sequence task BIBREF16. For our baseline lemmatization systems we follow the latter approach. We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet.", "The baseline results, with and without providing gold morphological tags along with the inflected form as input, are outlined in Table .
We find that automatic lemmatization in SJQ Chatino achieves fairly high accuracy even with our simple baseline models (89% accuracy, $0.27$ average Levenshtein distance) and that providing the gold morphological tags provides a performance boost, indicated by small improvements on both metrics. It is worth noting, though, that these results are also well below the $94--95\\%$ average accuracy and $0.13$ average Levenshtein distance that lemmatization models achieved over 107 treebanks in 66 languages for the SIGMORPHON 2019 shared task BIBREF11." ], [ "Our work builds and expands upon previous work on Indigenous languages of the Americas, specifically focusing on the complexity of their morphology. Among other works similar to ours, BIBREF17 focused on the morphology of Dene verbs, BIBREF18 on Arapaho verbs, BIBREF19 on Shipibo-Konibo, and BIBREF20 on Saint Lawrence Island and Central Siberian Yupik. BIBREF21 describe an approach for eliciting complete inflection paradigms, with experiments in languages like Nahuatl. Our resource is the first one for SJQ Chatino, but it also provides an exciting new data point in the computational study of morphological analysis, lemmatization, and inflection, as it is the first one in a tonal language with explicit tonal markings in the writing system. In a similar vein, the Oto-Manguean Inflectional Class Database BIBREF22 provides a valuable resource for studying the verbal morphology of Oto-Manguean languages (including a couple of other Chatino variants: Yaitepec and Zenzotepec Chatino) but not in a format suitable for computational experiments." ], [ "We presented a resource of 198 complete inflectional paradigms in San Juan Quiahije Chatino, which will facilitate research in computational morphological analysis and inflection for low-resource tonal languages and languages of Mesoamerica. We also provide strong baseline results on computational morphological analysis, lemmatization, and inflection realization, using character-level neural encoder-decoder systems.", "For future work, while we will keep expanding our resource to include more paradigms, we will also follow the community guidelines in extending our resource to include morphological analysis and inflection examples in context." ], [ "Part of this work was done during the Workshop on Language Technology for Language Documentation and Revitalization. This material is based upon work generously supported by the National Science Foundation under grant 1761548." ] ] }
{ "question": [ "How does morphological analysis differ from morphological inflection?", "What was the criterion used for selecting the lemmata?", "What are the architectures used for the three tasks?", "Which language family does Chatino belong to?", "What system is used as baseline?", "How was annotation done?", "How was the data collected?" ], "question_id": [ "99760276cfd699e55b827ceeb653b31b043b9ceb", "247e1fe052230458ce11b98e3637acf0b86795cd", "79cfd1b82c72d18e2279792c66a042c0e9dfa6b7", "9e1bf306658ef2972159643fdaf149c569db524b", "25b24ab1248f14a621686a57555189acc1afd49c", "8486e06c03f82ebd48c7cfbaffaa76e8b899eea5", "27f575e90487ef68298cfb6452683bb977e39e43" ], "nlp_background": [ "two", "two", "two", "two", "two", "two", "two" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no" ], "search_query": [ "morphology", "morphology", "morphology", "morphology", "", "", "" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Morphological analysis is the task of creating a morphosyntactic description for a given word", " inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Inflectional realization defines the inflected forms of a lexeme/lemma. As a computational task, often referred to as simply “morphological inflection,\" inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form. For example, the inflectional realization of SJQ Chatino verb forms entails a mapping of the pairing of the lemma lyu1 `fall' with the tag-set 1;SG;PROG to the word form nlyon32.", "Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose." ], "highlighted_evidence": [ "Inflectional realization defines the inflected forms of a lexeme/lemma. As a computational task, often referred to as simply “morphological inflection,\" inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form. For example, the inflectional realization of SJQ Chatino verb forms entails a mapping of the pairing of the lemma lyu1 `fall' with the tag-set 1;SG;PROG to the word form nlyon32.", "Morphological analysis is the task of creating a morphosyntactic description for a given word. 
It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11." ] } ], "annotation_id": [ "22d7d1a85bad7abf7618c6e132f4613e3b3e5725" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "308f551ab8f5dbfb1a2668a1bf910f7d1c8037d1" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "DyNet" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Morphological inflection has been thoroughly studied in monolingual high resource settings, especially through the recent SIGMORPHON challenges BIBREF8, BIBREF9, BIBREF10, with the latest iteration focusing more on low-resource settings, utilizing cross-lingual transfer BIBREF11. We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.", "Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose.", "Lemmatization is the task of retrieving the underlying lemma from which an inflected form was derived. Although in some languages the lemma is distinct from all forms, in SJQ Chatino the lemma is defined as the completive third-person singular form. As a computational task, lemmatization entails producing the lemma given an inflected form (and possibly, given a set of morphological tags describing the input form). Popular approaches tackle it as a character-level edit sequence generation task BIBREF15, or as a character-level sequence-to-sequence task BIBREF16. For our baseline lemmatization systems we follow the latter approach. We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet." ], "highlighted_evidence": [ "We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.", "We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet.", "We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet." 
] } ], "annotation_id": [ "4d88a72e4cfb4a6f1b643127dadc9d86e1c212ac" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "the Otomanguean language family" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Chatino is a group of languages spoken in Oaxaca, Mexico. Together with the Zapotec language group, the Chatino languages form the Zapotecan branch of the Otomanguean language family. There are three main Chatino languages: Zenzontepec Chatino (ZEN, ISO 639-2 code czn), Tataltepec Chatino (TAT, cta), and Eastern Chatino (ISO 639-2 ctp, cya, ctz, and cly) (E.Cruz 2011 and Campbell 2011). San Juan Quiahije Chatino (SJQ), the language of the focus of this study, belongs to Eastern Chatino, and is used by about 3000 speakers." ], "highlighted_evidence": [ "Chatino is a group of languages spoken in Oaxaca, Mexico. Together with the Zapotec language group, the Chatino languages form the Zapotecan branch of the Otomanguean language family. " ] } ], "annotation_id": [ "cc096b36121cde866038dfc869d1e428684dd341" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "DyNet" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Morphological inflection has been thoroughly studied in monolingual high resource settings, especially through the recent SIGMORPHON challenges BIBREF8, BIBREF9, BIBREF10, with the latest iteration focusing more on low-resource settings, utilizing cross-lingual transfer BIBREF11. We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.", "Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose.", "Lemmatization is the task of retrieving the underlying lemma from which an inflected form was derived. Although in some languages the lemma is distinct from all forms, in SJQ Chatino the lemma is defined as the completive third-person singular form. As a computational task, lemmatization entails producing the lemma given an inflected form (and possibly, given a set of morphological tags describing the input form). Popular approaches tackle it as a character-level edit sequence generation task BIBREF15, or as a character-level sequence-to-sequence task BIBREF16. For our baseline lemmatization systems we follow the latter approach. We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet." ], "highlighted_evidence": [ "We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. 
Our models are implemented in DyNet BIBREF13.", "We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet.", "We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet." ] } ], "annotation_id": [ "8915168041897c713aa479d6cf8600df8e163c56" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " hand-curated collection of complete inflection tables for 198 lemmata" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We provide a hand-curated collection of complete inflection tables for 198 lemmata. The morphological tags follow the guidelines of the UniMorph schema BIBREF6, BIBREF7, in order to allow for the potential of cross-lingual transfer learning, and they are tagged with respect to:", "Person: first (1), second (2), and third (3)", "Number: singular (SG) ad plural (PL)", "Inclusivity (only applicable to first person plural verbs: inclusive (INCL) and exclusive (EXCL)", "Aspect/mood: completive (CPL), progressive (PROG), potential (POT), and habitual (HAB)." ], "highlighted_evidence": [ "We provide a hand-curated collection of complete inflection tables for 198 lemmata. The morphological tags follow the guidelines of the UniMorph schema BIBREF6, BIBREF7, in order to allow for the potential of cross-lingual transfer learning, and they are tagged with respect to:\n\nPerson: first (1), second (2), and third (3)\n\nNumber: singular (SG) ad plural (PL)\n\nInclusivity (only applicable to first person plural verbs: inclusive (INCL) and exclusive (EXCL)\n\nAspect/mood: completive (CPL), progressive (PROG), potential (POT), and habitual (HAB)." ] } ], "annotation_id": [ "1aa3e1eadbd324766d0859d5dc47fed107d1940d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "5df0aa468473aeb500aeb2a01e6bce8ffa0c3d66" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Table 1: Basic Statistics of our resource.", "Table 2: Three alternative systems for representing the SJQ Chatino tones.", "Table 3: Complete inflection paradigms for two lemmata: one with a single PN triple across all aspects (top), and one with three different PN triples (bottom).", "Table 4: Morphological Inflection Results", "Table 6: Lemmatization Results.", "Table 5: Morphological Analysis Results" ], "file": [ "1-Table1-1.png", "2-Table2-1.png", "3-Table3-1.png", "3-Table4-1.png", "3-Table6-1.png", "3-Table5-1.png" ] }
1707.03764
N-GrAM: New Groningen Author-profiling Model
We describe our participation in the PAN 2017 shared task on Author Profiling, identifying authors' gender and language variety for English, Spanish, Arabic and Portuguese. We describe both the final submitted system and a series of negative results. Our aim was to create a single model for both gender and language, and for all language varieties. Our best-performing system (on cross-validated results) is a linear support vector machine (SVM) with word unigrams and character 3- to 5-grams as features. A set of additional features, including POS tags, additional datasets, geographic entities, and Twitter handles, hurts, rather than improves, performance. Results from cross-validation indicated high performance overall, and results on the test set confirmed them, at 0.86 averaged accuracy, with performance on sub-tasks ranging from 0.68 to 0.98.
{ "section_name": [ "Introduction", "Final System", "Data Analysis", "Alternative Features and Methods: An Analysis of Negative Results", "Supplementary Data and Features", "Modelling", "Results on Test Data", "Conclusion" ], "paragraphs": [ [ "With the rise of social media, more and more people acquire some kind of on-line presence or persona, mostly made up of images and text. This means that these people can be considered authors, and thus that we can profile them as such. Profiling authors, that is, inferring personal characteristics from text, can reveal many things, such as their age, gender, personality traits, location, even though writers might not consciously choose to put indicators of those characteristics in the text. The uses for this are obvious, for cases like targeted advertising and other use cases, such as security, but it is also interesting from a linguistic standpoint.", "In the shared task on author profiling BIBREF0 , organised within the PAN framework BIBREF1 , the aim is to infer Twitter users' gender and language variety from their tweets in four different languages: English, Spanish, Arabic, and Portuguese. Gender consists of a binary classification (male/female), whereas language variety differs per language, from 2 varieties for Portuguese (Brazilian and Portugal) to 7 varieties for Spanish (Argentina, Chile, Colombia, Mexico, Peru, Spain, Venezuela). The challenge is thus to classify users along two very different axes, and in four highly different languages – forcing participants to either build models that can capture these traits very generally (language-independent) or tailor-make models for each language or subtask.", "Even when looking at the two tasks separately, it looks like the very same features could be reliable clues for classification. Indeed, for both profiling authors on Twitter as well as for discriminating between similar languages, word and character n-grams have proved to be the strongest predictors of gender as well as language varieties. For language varieties discrimination, the systems that performed best at the DSL shared tasks in 2016 (on test set B, i.e. social media) used word/character n-grams, independently of the algorithm BIBREF2 . The crucial contribution of these features was also observed by BIBREF3 , BIBREF4 , who participated in the 2017 DSL shared task with the two best performing systems. For author profiling, it has been shown that tf-idf weighted n-gram features, both in terms of characters and words, are very successful in capturing especially gender distinctions BIBREF5 . If different aspects such as language variety and gender of a speaker on Twitter might be captured by the same features, can we build a single model that will characterise both aspects at once?", "In the context of the PAN 2017 competition on user profiling we therefore experimented with enriching a basic character and word n-gram model by including a variety of features that we believed should work. We also tried to view the task jointly and model the two problems as one single label, but single modelling worked best.", "In this paper we report how our final submitted system works, and provide some general data analysis, but we also devote substantial space to describing what we tried (under which motivations), as we believe this is very informative towards future developments of author profiling systems." 
], [ "After an extensive grid-search we submitted as our final run, a simple SVM system (using the scikit-learn LinearSVM implementation) that uses character 3- to 5-grams and word 1- to 2-grams with tf-idf weighting with sublinear term frequency scaling, where instead of the standard term frequency the following is used:", " INLINEFORM0 ", "We ran the grid search over both tasks and all languages on a 64-core machine with 1 TB RAM (see Table TABREF2 for the list of values over which the grid search was performed). The full search took about a day to complete. In particular, using min_df=2 (i.e. excluding all terms that are used by only one author) seems to have a strong positive effect and greatly reduces the feature size as there are many words that appear only once. The different optimal parameters for different languages provided only a slight performance boost for each language. We decided that this increase was too small to be significant, so we decided to use a single parameter set for all languages and both tasks." ], [ "The training dataset provided consist of 11400 sets of tweets, each set representing a single author. The target labels are evenly distributed across variety and gender. The labels for the gender classification task are `male' and `female'. Table TABREF4 shows the labels for the language variation task and also shows the data distribution across languages.", "We produced two visualisations, one per label (i.e. variety and gender), in order to gain some insights that could help the feature engineering process. For the variety label we trained a decision tree classifier using word unigrams: although the performance is poor (accuracy score of 0.63) this setup has the benefit of being easy to interpret: Figure FIGREF3 shows which features are used for the first splits of the tree.", "We also created a visualisation of the English dataset using the tool described in BIBREF6 , and comparing the most frequent words used by males to those used by females. The visualisation shown in Figure SECREF6 indicates several interesting things about the gendered use of language. The words used often by males and very seldom by females are often sport-related, and include words such as “league”, and “chelsea”. There are several emojis that are used frequently by females and infrequently by males, e.g. “”, “”, as well as words like “kitten”, “mom”, “sister” and “chocolate”. In the top right of the visualisation we see words like “trump” and “sleep”, which indicates that these words are used very frequently, but equally so by both genders. This also shows that distinguishing words include both time-specific ones, like “gilmore” and “imacelebrityau”, and general words from everyday life, which are less likely to be subject to time-specific trends, like “player”, and “chocolate”." ], [ "This section is meant to highlight all of the potential contributions to the systems which turned out to be detrimental to performance, when compared to the simpler system that we have described in Section SECREF2 . We divide our attempts according to the different ways we attempted to enhance performance: manipulating the data itself (adding more, and changing preprocessing), using a large variety of features, and changing strategies in modelling the problem by using different algorithms and paradigms. All reported results are on the PAN 2017 training data using five-fold cross-validation, unless otherwise specified." 
], [ "We extended the training dataset by adding data and gender labels from the PAN 16 Author Profiling shared task BIBREF5 . However, the additional data consistently resulted in lower cross-validation scores than when using only the training data provided with the PAN 17 task. One possible explanation for this is that our unigram model captures aspects that are tied specifically to the PAN 17 dataset, because it contains topics that may not be present in datasets that were collected in a different time period. To confirm this, we attempted to train on English data from PAN 17 and predict gender labels for the English data from PAN 16, as well as vice versa. Training on the PAN 16 data resulted in an accuracy score of 0.754 for the PAN 17 task, and training on PAN 17 gave an accuracy score of 0.70 for PAN 16, both scores significantly lower than cross-validated results on data from a single year.", "We attempted to classify the English tweets by Gender using only the data collected by BIBREF7 . This dataset consists of aggregated word counts by gender for about 14,000 Twitter users and 9 million Tweets. We used this data to calculate whether each word in our dataset was a `male' word (used more by males), or a `female' word, and classified users as male or female based on a majority count of the words they used. Using this method we achieved 71.2 percent accuracy for the English gender data, showing that this simple method can provide a reasonable baseline to the gender task.", "We experimented with different tokenization techniques for different languages, but our average results did not improve, so we decided to use the default scikit-learn tokenizer.", "We tried adding POS-tags to the English tweets using the spaCy tagger: compared to the model using unigrams only the performances dropped slightly for gender and a bit more for variety:", "It is not clear whether the missed increase in performance is due to the fact that the data are not normal (i.e. the tokenizer is not Twitter specific) or to the fact that POS tags confuse the classifier. Considering the results we decided not to include a POS-tagger in the final system.", "()", "In April 2015, SwiftKey did an extensive report on emoji use by country. They discovered that emoji use varies across languages and across language varieties. For example, they found that Australians use double the average amount of alcohol-themed emoji and use more junk food and holiday emoji than anywhere else in the world.", "We tried to leverage these findings but the results were disappointing. We used a list of emojis as a vocabulary for the td/idf vectorizer. Encouraged by the results of the SwiftKey report, we tried first to use emojis as the only vocabulary and although the results are above the baseline and also quite high considering the type of features, they were still below the simple unigram model. Adding emojis as extra features to the unigram model also did not provide any improvement.", "Since emojis are used across languages we built a single model for the four languages. We trained the model for the gender label on English, Portuguese and Arabic and tested it on Spanish: the system scored 0.67 in accuracy.", "We looked at accuracy scores for the English gender and variety data more closely. We tried different representations of the tweet texts, to see what kind of words were most predictive of variety and gender. 
Specifically, we look at using only words that start with an uppercase letter, only words that start with a lowercase letter, only Twitter handles (words that start with an \"@\") and all the text excluding the handles.", "It is interesting that the accuracies are so high although we are using only a basic unigram model, without looking at the character n-grams that we include in our final model. Representing each text only by the Twitter handles used in that text results in 0.77 accuracy for variety, probably because users tend to interact with other users who are in the same geographic area. However, excluding handles from the texts barely decreases performance for the variety task, showing that while the handles can be discriminative, they are not necessary for this task. It is also interesting to note that for this dataset, looking only at words beginning with an uppercase character results in nearly the same score for the gender task as we get when using all of the available text, while using only lowercase words decreases performance. The opposite is true for the variety task, where using lowercase-only words results in as good performance as using all the text, but using only uppercase words decreases accuracy by over 10 percent.", "We tried using the counts of geographical names related to the language varieties as a feature. We also treated this list of locations as the vocabulary for our model. Neither of these approaches improved our model.", "We then tried enriching the data to improve the unigram model. For each of the language varieties, we obtained 100 geographical location names, representing the cities with the most inhabitants. When such a location was mentioned in a tweet, the language variety the location was part of was added to the tweet.", "We attempted to use Twitter handles in a similar manner. The 100 most-followed Twitter users per language variety were found, and the language variety was added to the text when one of its popular Twitter users was mentioned.", "Unfortunately, this method did not improve our model. We suspect that the information is already being captured by the n-gram model, which could explain why this did not improve performance.", "We tried the partial setup of last year's winning system, GronUP BIBREF8, with the distinction that we had to classify language variety instead of age groups. We excluded the features that are language-dependent (i.e. POS tagging and misspellings/typos), and experimented with various feature combinations of the rest while keeping word and character n-grams the same. We achieved average accuracies from 0.810 to 0.830, which is clearly lower than that of our simple final model." ], [ "We tried to build a single model that predicts at the same time both the language variety and the gender of each user: as expected (since the task is harder) the performance goes down when compared to a model trained independently on each label. However, as highlighted in Table TABREF21, the results are still surprisingly high. To train the system we simply merged the two labels.", "We experimented with Facebook's FastText system, which is an out-of-the-box supervised learning classifier BIBREF9. We used only the data for the English gender task, trying both tweet-level and author-level classification. We pre-processed all text with the NLTK Tweet Tokenizer and used the classification-example script provided with the FastText code base. Training on 3,000 authors and testing on 600 authors gave an accuracy score of 0.64.
Changing the FastText parameters, such as the number of epochs, word n-grams, and learning rate, showed no improvement. We achieved an accuracy of 0.79 when we attempted to classify on a per-tweet basis (300,000 tweets for training and 85,071 for test), but this is an easier task as some authors are split over the training and test sets. There are various ways to summarise per-tweet predictions into author predictions, but we did not experiment further as it seemed that the SVM system worked better for the amount of data we have.", "In the final system we used the SVM classifier because it outperformed all the others that we tried. Table TABREF23 highlights the results." ], [ "For the final evaluation we submitted our system, N-GrAM, as described in Section 2. Overall, N-GrAM came first in the shared task, with a score of 0.8253 for gender and 0.9184 for variety, a joint score of 0.8361 and an average score of 0.8599 (final rankings were taken from this average score BIBREF0). For the global scores, all languages are combined. We present finer-grained scores showing the breakdown per language in Table TABREF24. We compare our gender and variety accuracies against the LDR-baseline BIBREF10, a low dimensionality representation especially tailored to language variety identification, provided by the organisers. The final column, + 2nd, shows the difference between N-GrAM and the score achieved by the second-highest ranked system (excluding the baseline).", "Results are broken down per language, and are summarised as both joint and average scores. The joint score is the percentage of texts for which both gender and variety were predicted correctly at the same time. The average is calculated as the mean over all languages.", "N-GrAM ranked first in all cases except for the language variety task. In this case, the baseline was the top-ranked system, and ours was second by a small margin. Our system significantly out-performed the baseline on the joint task, as the baseline scored significantly lower for the gender task than for the variety task." ], [ "We conclude that, for the current author profiling task, a seemingly simple system using word and character n-grams and an SVM classifier proves very hard to beat. Indeed, N-GrAM turned out to be the best-performing out of the 22 systems submitted in this shared task. Using additional training data, `smart' features, and hand-crafted resources hurts rather than helps performance. A possible lesson to take from this would be that manually crafting features serves only to hinder a machine learning algorithm's ability to find patterns in a dataset, and perhaps it is better to focus one's efforts on parameter optimisation instead of feature engineering.", "However, we believe that this is too strong a conclusion to draw from this limited study, since several factors specific to this setting need to be taken into account. For one, a support vector machine clearly outperforms the other classifiers, but this does not mean that it is an inherently more powerful classifier. Rather, we expect that an SVM is the best choice for the given amount of training data, but with more training data, a neural network-based approach would achieve better results.", "Regarding the frustrating lack of benefit from more advanced features than n-grams, a possible explanation comes from a closer inspection of the data. Both the decision tree model (see Figure FIGREF3) and the data visualisation (see Figure SECREF6) give us an insight into the most discriminating features in the dataset.
In the case of language variety, we see that place names can be informative features, and could therefore be used as a proxy for geographical location, which in turn serves as a proxy for language variety. Adding place names explicitly to our model did not yield performance improvements, which we take to indicate that this information is already captured by n-gram features. Whether and how geographical information in the text can be useful in identifying language variety, is a matter for future research.", "In the case of gender, many useful features are ones that are highly specific to the Twitter platform (#iconnecthearts), time (cruz), and topics (pbsnewshour) in this dataset, which we suspect would not carry over well to other datasets, but provide high accuracy in this case. Conversely, features designed to capture gender in a more general sense do not yield any benefit over the more specific features, although they would likely be useful for a robust, cross-dataset system. These hypotheses could be assessed in the future by testing author profiling systems in a cross-platform, cross-time setting.", " Scatter plot of terms commonly used by male and female English speakers." ] ] }
{ "question": [ "How do their results compare against other competitors in the PAN 2017 shared task on Author Profiling?", "On which task does do model do worst?", "On which task does do model do best?" ], "question_id": [ "157b9f6f8fb5d370fa23df31de24ae7efb75d6f3", "9bcc1df7ad103c7a21d69761c452ad3cd2951bda", "8427988488b5ecdbe4b57b3813b3f981b07f53a5" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "They achieved best result in the PAN 2017 shared task with accuracy for Variety prediction task 0.0013 more than the 2nd best baseline, accuracy for Gender prediction task 0.0029 more than 2nd best baseline and accuracy for Joint prediction task 0.0101 more than the 2nd best baseline", "evidence": [ "FLOAT SELECTED: Table 8. Results (accuracy) on the test set for variety, gender and their joint prediction.", "For the final evaluation we submitted our system, N-GrAM, as described in Section 2. Overall, N-GrAM came first in the shared task, with a score of 0.8253 for gender 0.9184 for variety, a joint score of 0.8361 and an average score of 0.8599 (final rankings were taken from this average score BIBREF0 ). For the global scores, all languages are combined. We present finer-grained scores showing the breakdown per language in Table TABREF24 . We compare our gender and variety accuracies against the LDR-baseline BIBREF10 , a low dimensionality representation especially tailored to language variety identification, provided by the organisers. The final column, + 2nd shows the difference between N-GrAM and that achieved by the second-highest ranked system (excluding the baseline)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 8. Results (accuracy) on the test set for variety, gender and their joint prediction.", "For the final evaluation we submitted our system, N-GrAM, as described in Section 2. Overall, N-GrAM came first in the shared task, with a score of 0.8253 for gender 0.9184 for variety, a joint score of 0.8361 and an average score of 0.8599 (final rankings were taken from this average score BIBREF0 ). ", "We present finer-grained scores showing the breakdown per language in Table TABREF24 .", "The final column, + 2nd shows the difference between N-GrAM and that achieved by the second-highest ranked system (excluding the baseline).\n\n" ] } ], "annotation_id": [ "ca1cbe32990697dc4b2c440c07fa82bfeee4c346" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Gender prediction task", "evidence": [ "N-GrAM ranked first in all cases except for the language variety task. In this case, the baseline was the top-ranked system, and ours was second by a small margin. Our system significantly out-performed the baseline on the joint task, as the baseline scored significantly lower for the gender task than for the variety task." ], "highlighted_evidence": [ "N-GrAM ranked first in all cases except for the language variety task. In this case, the baseline was the top-ranked system, and ours was second by a small margin. 
Our system significantly out-performed the baseline on the joint task, as the baseline scored significantly lower for the gender task than for the variety task.\n\n" ] } ], "annotation_id": [ "33c0a0971c00615c05d4259aaa489ee926bd3fb8" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Variety prediction task", "evidence": [ "N-GrAM ranked first in all cases except for the language variety task. In this case, the baseline was the top-ranked system, and ours was second by a small margin. Our system significantly out-performed the baseline on the joint task, as the baseline scored significantly lower for the gender task than for the variety task." ], "highlighted_evidence": [ "Our system significantly out-performed the baseline on the joint task, as the baseline scored significantly lower for the gender task than for the variety task." ] } ], "annotation_id": [ "5c112a7545f5de3816ae328c728a1109da194b90" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Table 1. Results (accuracy) for the 5-fold cross-validation", "Table 2. A list of values over which we performed the grid search.", "Figure 1. Decision Tree output", "Table 4. Results (accuracy) on the English data for Gender and Variety with and without part of speech tags.", "Table 5. Results (accuracy) on the English data for Gender and Variety when excluding certain words. We preprocessed the text to exclude the specified word-patterns and then vectorized the resulting text with tf-idf. Classification was done using an SVM with a linear kernel over five-fold cross-validation.", "Table 7. Performances per classifier: DT: Decision Tree; MLP: Multi-Layer Perceptron, NB: Naive Bayes.", "Table 8. Results (accuracy) on the test set for variety, gender and their joint prediction." ], "file": [ "3-Table1-1.png", "3-Table2-1.png", "4-Figure1-1.png", "5-Table4-1.png", "6-Table5-1.png", "8-Table7-1.png", "8-Table8-1.png" ] }
2001.10179
Multi-modal Sentiment Analysis using Super Characters Method on Low-power CNN Accelerator Device
In recent years, NLP research has witnessed record-breaking accuracy improvements from DNN models. However, power consumption is one of the practical concerns for deploying NLP systems. Most of the current state-of-the-art algorithms are implemented on GPUs, which is not power-efficient, and the deployment cost is also very high. On the other hand, the CNN Domain Specific Accelerator (CNN-DSA) has been in mass production, providing low-power and low-cost computation. In this paper, we implement the Super Characters method on the CNN-DSA. In addition, we modify the Super Characters method to utilize multi-modal data, i.e. text plus tabular data, in the CL-Aff shared task.
{ "section_name": [ "Introduction", "Super Characters for Multi-modal Sentiment Analysis and Low-Power Hardware Solution", "Experiments ::: Data Exploration", "Experiments ::: Design SuperCharacters Image", "Experiments ::: Design SuperCharacters Image ::: Design Option One", "Experiments ::: Design SuperCharacters Image ::: Design Option Two", "Experiments ::: Design SuperCharacters Image ::: Design Option Three", "Experiments ::: Design SuperCharacters Image ::: Design Option Four", "Experiments ::: Experimental Results", "Conclusion" ], "paragraphs": [ [ "The need to classify sentiment based on the multi-modal input arises in many different problems in customer related marketing fields. Super Characters BIBREF0 is a two-step method for sentiment analysis. It first converts text into images; then feeds the images into CNN models to classify the sentiment. Sentiment classification performance on large text contents from customer online comments shows that the Super Character method is superior to other existing methods. The Super Characters method also shows that the pretrained models on a larger dataset help improve accuracy by finetuning the CNN model on a smaller dataset. Compared with from-scratch trained Super Characters model, the finetuned one improves the accuracy from 95.7% to 97.8% on the well-known Chinese dataset of Fudan Corpus. Squared English Word (SEW) BIBREF1 is an extension of the Super Characters method into Latin Languages. With the wide availability of low-power CNN accelerator chips BIBREF2 BIBREF3, Super Characters method has the great potential to be deployed in large scale by saving power and fast inference speed. In addition, it is easy to deploy as well. The recent work also extend its applications to chatbot BIBREF4, image captioning BIBREF5, and also tabular data machine learning BIBREF6.", "The CL-AFF Shared TaskBIBREF7 is part of the Affective Content Analysis workshop at AAAI 2020. It builds upon the OffMyChest datasetBIBREF8, which contains 12,860 samples of training data and 5,000 samples of testing data. Each sample is a multi-modal input containing both text and tabular data. The text input is an English sentence from Reddit. The tabular data is the corresponding log information for each sentence, like wordcount, created utc time and etc. And each sample has six sets of binary classification labels, EmotionDisclosure?(Yes$|$No), InformationDisclosure?(Yes$|$No), Support?(Yes$|$No), EmmotionSupport?(Yes$|$No), InformationSupport?(Yes$|$No), GeneralSupport?(Yes$|$No). In this paper, we will apply Super Characters on this data set to classify the muti-modal input." ], [ "For multi-modal sentiment analysis, we can simply split the image into two parts. One for the text input, and the other for the tabular data. Such that both can be embedded into the Super Characters image. The CNN accelerator chip comes together with a Model Development Kit (MDK) for CNN model training, which feeds the two-dimensional Super Characters images into MDK and then obtain the fixed point model. Then, using the Software Development Kit (SDK) to load the model into the chip and send command to the CNN accelerator chip, such as to read an image, or to forward pass the image through the network to get the inference result. The advantage of using the CNN accelerator is low-power, it consumes only 300mw for an input of size 3x224x224 RGB image at the speed of 140fps. 
Compared with other models using GPU or FPGA, this solution implement the heavy-lifting DNN computations in the CNN accelerator chip, and the host computer is only responsible for memory read/write to generate the designed Super Character image. This has shown good result on system implementations for NLP applications BIBREF9." ], [ "The training data set has 12,860 samples with 16 columns. The first ten columns are attributes, including sentenceid, author, nchar, created_utc, score, subreddit, label, full_text, wordcount, and id. And the other six columns are labels for each of the tasks of Emotion_disclosure, Information_disclosure, Support, Emmotion_support, Information_support, and General_support. Each task is a binary classification problem based on the ten attributes. So there will be 60 models to be trained for a 10-fold validation. The test data set has 5000 samples with only the ten columns of attributes. The system run will give labels on these test samples based on the 10-fold training.", "For the training data, unique ids are 3634 compared to the whole training 12,860. While for the testing data, this number is only 2443 compared to the whole testing dataset 5000, meaning some of the records may come from the same discussion thread. And the unique authors are 7556 for training, and 3769 for testing, which means some of the authors are active that they may published more than one comments.", "Based on this, we have considered to include author names in the multi-modal model as well, since a comment may be biased by the personality of its author. The maximum length of an author's name is 20 charactors, if SEW BIBREF1 is to be used to project the names onto a two-dimensional embedding. On the other hand, the nchar which indicates the number of characters for the full_text has a maximum value of 9993, and the maximum wordcount is 481. The column “label\" has 37 unique values, which are different combinations of strings like “husband\", “wife\", “boyfriend\", “girlfriend\", and their abbreviations like “bf\",“gf\". The column “subreddit\" is a categorical attribute with values in (“offmychest\", “CasualConversation\"). After converting the Unix time in the column of “created_utc\", we found that the records are generated from 2017 to 2018. The column score has integers ranging from -44 to 1838 with 251 unique values." ], [ "The sentence length distribution is given in Figure FIGREF3. The layout design for the full_text will be based on this. Since we present the English words using SEW BIBREF1 method, the size of each English word on the SuperCharacters image should better be calculated by (224/N)*(224/N) if the whole image is set to 224x224. Here N is an integer. The dimension is set to 224x224 because of the chip specification." ], [ "In this design setting, we only include the full_text information and ignore the other attributes. If N=7, it means each row has 7 words, and each word has (224/7)*(224/7)=32*32 pixels. In this setting we can hold up to 49 words in full_text. For the records with words more than 49, the full_text will ignore the words from the 49th. In this case, only 0.86% of the training data and 1.98% of the testing data will have to cut the sentence at 49 words. An example of this design setting is in Figure FIGREF4." ], [ "If N=8, it means each row has 8 words, and each word has (224/8)*(224/8)=28*28 pixels. 
And if we set the cutlength=40, it means that we will have 5 rows for the full_text, and the other 3 rows will not be used for text, but all the space of the 224*(3*28) square pixels will be used for the tabular data given in the attributes other than full_text\". For the records with words more than 40, the full_text will ignore the words from the 40th. In this case, only 2.03% of the training data and 4.14% of the testing data will have to cut the sentence at 40 words. We have the option to use the bottom part of the image to embed the other attributes. The id and sentenceid should be unrelated to the prediction, so these two attributes are not included. One example having the full_text, author, wordcount, created_utc, subreddit, score, nchar, and label is given in Figure FIGREF4.", "However, the 10-fold training accuracy on this design is not good. This is partially because some of the attributes do not contribute to prediction but adds more noise instead. For example, the created time may not be very related to the prediction of the tasks but occupies a good portion of the embedding area of the image. In addition, since most of the wordcounts are centered around less than twenty, the two-dimensional embeddings of the full_text should have better resolution if the cutlength is smaller than 40. So the font size will be larger and easier for CNN to learn." ], [ "This design setting cuts the cut length of the full_text sentence to 42, and leave the space of the last row for some important attributes, including subreddit, wordcount, score, and label. An example of this design setting is in Figure FIGREF4." ], [ "This is data augmentation for Design Option Three. For a small data set, we need more data with the same semantic meaning generated from the raw labeled data without adding any noise. For Super Characters, the text are projected into the image. Adding some spaces at the front should not change the semantic meaning, and at the same time increased the number of generated Super Characters images. For each sentence, if the sentence length is less than 42, we will add one space at the front and then generate the Super Characters image. This process iterates until the length of the sentence with the added space reaches 42. An example of this design setting is in Figure FIGREF4." ], [ "After comparison, only Design Option One and Design Option Four are kept for the entire 10-fold training and validation.", "For the system runs, it is limited to submit a maximum of 10 system runs. So, only the first five 10-folds models on both Design Option One and Design Option Four are tested against the 5000 testing data and submitted. The details of these 10 system runs are given in Table TABREF10$-$TABREF15.", "In general, Design Option Four are a little better than Design Option One, but these results are still not good. The results are a little better than constantly predict one class. We can see that the results on this OffMyChest data is not as good as on AffCon19 CLAFF shared task. And compared with Super Characters on Wikipedia data set, the accuracy on this data is not as accurate as well.", "Several methods could be used to further improve the accuracy. First, pretrained model may help improve. For this shared task, the size of training examples are relatively small to understand the complex definition of these 6 tasks. Second, other data augmentation method could be introduced in order to further boost the accuracy. For example, replacing word with its synonyms. 
Third, the data set is skewed data set. We can balance the data set by upsampling." ], [ "In this paper, we proposed modified version of Super Characters, in order to make it work on multi-modal data. In the case of this AffCon CLAFF shared task, the multi-modal data includes text data and tabular data. In addition, we deploy the models on low-power CNN chips, which proves the feasibility of applying DNN models with consideration of real-world practical concerns such as power and speed. The Super Characters method is relatively new and starts to attrack attentions for application scenarios. Pretrained models on large corpus would be very helpful for the Super Characters method, as success of pretrained model is observed for NLP models like ELMO and BERT. For fine-tuning on small datasets, data augmentation should further boost the generalization capability." ] ] }
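The grid layout of Design Option One and the space-padding augmentation of Design Option Four described above can be sketched as below. This is only an illustration under stated assumptions: Pillow's default font and per-cell text drawing stand in for the SEW word-squaring, and the helper names are hypothetical, not the authors' MDK pipeline.

```python
# Sketch of the Design Option One layout: a 224x224 image, a 7x7 grid of
# 32x32 cells, one word per cell, sentences cut at 49 words.
from PIL import Image, ImageDraw, ImageFont

IMAGE_SIZE, GRID = 224, 7
CELL = IMAGE_SIZE // GRID  # 32 pixels per word cell

def super_characters_image(words):
    words = words[:GRID * GRID]  # cut the sentence at 49 words
    img = Image.new("RGB", (IMAGE_SIZE, IMAGE_SIZE), "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()  # ad-hoc font choice, not SEW squaring
    for i, word in enumerate(words):
        row, col = divmod(i, GRID)
        # Crude fit: draw (a prefix of) the word inside its 32x32 cell.
        draw.text((col * CELL + 2, row * CELL + 10), word[:6], fill="black", font=font)
    return img

def leading_blank_augmentation(words, cut_length=42):
    # One reading of Design Option Four: prepend blank cells one at a time
    # until the padded sentence reaches cut_length cells.
    return [[""] * k + words for k in range(max(0, cut_length - len(words)) + 1)]

super_characters_image("i am so proud of my husband today".split()).save("sample.png")
```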
{ "question": [ "Is their implementation on CNN-DSA compared to GPU implementation in terms of power consumption, accuracy and speed?", "Does this implementation on CNN-DSA lead to diminishing of performance?", "How is Super Character method modified to handle tabular data also?" ], "question_id": [ "3604c4fba0a82d7139efd5ced47612c90bd10601", "38e2f07ba965b676a99be06e8872dade7c04722a", "931a2a13a1f6a8d9107d26811089bdccc39b0800" ], "nlp_background": [ "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "1ac6e8ba59da86052a00e3e2b30d64dd2d6d48dc" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "338784f5b988cda04cf6283c917146d453d758ba" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "simply split the image into two parts. One for the text input, and the other for the tabular data" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For multi-modal sentiment analysis, we can simply split the image into two parts. One for the text input, and the other for the tabular data. Such that both can be embedded into the Super Characters image. The CNN accelerator chip comes together with a Model Development Kit (MDK) for CNN model training, which feeds the two-dimensional Super Characters images into MDK and then obtain the fixed point model. Then, using the Software Development Kit (SDK) to load the model into the chip and send command to the CNN accelerator chip, such as to read an image, or to forward pass the image through the network to get the inference result. The advantage of using the CNN accelerator is low-power, it consumes only 300mw for an input of size 3x224x224 RGB image at the speed of 140fps. Compared with other models using GPU or FPGA, this solution implement the heavy-lifting DNN computations in the CNN accelerator chip, and the host computer is only responsible for memory read/write to generate the designed Super Character image. This has shown good result on system implementations for NLP applications BIBREF9." ], "highlighted_evidence": [ "For multi-modal sentiment analysis, we can simply split the image into two parts. One for the text input, and the other for the tabular data." ] } ], "annotation_id": [ "4862d9715079affc3ed36c2fa624b840d8342f77" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Fig. 1. Histogram of sentence length counted by number of words.", "Fig. 2. Demonstrations of design options.", "Table 1. System Run details on 10-folds validation for the task of Emotion disclosure.", "Table 2. System Run details on 10-folds validation for the task of Information disclosure.", "Table 3. System Run details on 10-folds validation for the task of Support.", "Table 4. System Run details on 10-folds validation for the task of Emotion support.", "Table 5. System Run details on 10-folds validation for the task of Information support.", "Table 6. System Run details on 10-folds validation for the task of General support." ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "6-Table1-1.png", "6-Table2-1.png", "7-Table3-1.png", "7-Table4-1.png", "7-Table5-1.png", "8-Table6-1.png" ] }
1907.11907
Nefnir: A high accuracy lemmatizer for Icelandic
Lemmatization, finding the basic morphological form of a word in a corpus, is an important step in many natural language processing tasks when working with morphologically rich languages. We describe and evaluate Nefnir, a new open source lemmatizer for Icelandic. Nefnir uses suffix substitution rules, derived from a large morphological database, to lemmatize tagged text. Evaluation shows that for correctly tagged text, Nefnir obtains an accuracy of 99.55%, and for text tagged with a PoS tagger, the accuracy obtained is 96.88%.
{ "section_name": [ "Introduction", "Related work", "CST Lemmatizer", "Lemmald", "System Description", "Evaluation", "Conclusion" ], "paragraphs": [ [ "In text mining and Natural Language Processing (NLP), a lemmatizer is a tool used to determine the basic form of a word (lemma). Lemmatization differs from stemming in the way this base form is determined. While stemmers chop off word endings to reach the common stem of words, lemmatizers take into account the morphology of the words in order to produce the common morphological base form, i.e., the form of the word found in a dictionary. This type of text normalization is an important step in pre-processing morphologically complex languages, like Icelandic, before conducting various tasks, such as machine translation, text mining and information retrieval.", "To give an example from the Icelandic language, lemmatization helps find all instances of the personal pronoun ég “I” in a text corpus, taking into account all inflectional forms (ég, mig, mér, mín, við, okkur, and okkar). These variations of each word can be up to 16 for nouns and over a hundred for adjectives and verbs. The value of being able to reduce the number of different surface forms that appear for each word is therefore evident, as otherwise it is hard or even impossible to correctly determine word frequency in a corpus, or to look up all instances of a particular term.", "In this paper, we describe and evaluate Nefnir BIBREF0 , a new open source lemmatizer for Icelandic. Nefnir uses suffix substitution rules derived (learned) from the Database of Modern Icelandic Inflection (DMII) BIBREF1 , which contains over 5.8 million inflectional forms.", "This new lemmatizer was used for large-scale lemmatization of the Icelandic Gigaword Corpus BIBREF2 with promising results, but a formal evaluation had not been carried out. Our evaluation of Nefnir indicates that, compared to previously published results, it obtains the highest lemmatization accuracy of Icelandic, with 99.55% accuracy given correct part-of-speech (PoS) tags, and 96.88% accuracy given text tagged with a PoS tagger." ], [ "The most basic approach to lemmatization is a simple look-up in a lexicon. This method has the obvious drawback that words that are not in the lexicon cannot be processed. To solve this, word transformation rules have been used to analyze the surface form of the word (the token) in order to produce the base form. These rules can either be hand-crafted or learned automatically using machine learning. When hand-crafting the rules that are used to determine the lemmas, a thorough knowledge of the morphological features of the language is needed. This is a time-consuming task, further complicated in Icelandic by the extensive inflectional system BIBREF1 . An example of a hand-crafted lemmatizer is the morphological analyzer that is part of the Czech Dependency Treebank BIBREF3 .", "Machine learning methods emerged to make the rule-learning process more effective, and various algorithms have been developed. These methods rely on training data, which can be a corpus of words and their lemmas or a large morphological lexicon BIBREF4 . By analyzing the training data, transformation rules are formed, which can subsequently be used to find lemmas in new texts, given the word forms.", "In addition, maching learning lemmatizers based on deep neural networks (DNNs) have recently emerged (see for example finnlem BIBREF5 for Finnish and LemmaTag BIBREF6 for German, Czech and Arabic). 
Along with the best rule-derived machine learning methods, these are now the state-of-the-art approaches to lemmatizers for morphologically complex languages. The biggest problem in lemmatization is the issue of unknown words, i.e. words not found in the training corpus or the underlying lexicon of the lemmatizer. This has been handled in various ways, such as by only looking at the suffix of a word to determine the lemma, thereby lemmatizing unseen words that (hopefully) share the same morphological rules as a known word BIBREF7 . DNN-based lemmatizers may prove useful in solving this issue, as they have their own inherent ways of handling these out-of-vocabulary (OOV) words, such as by using character-level context BIBREF8 .", "Previous to Nefnir, two lemmatization tools had been developed for Icelandic. We will now briefly mention these lemmatizers, before describing Nefnir further." ], [ "The CST Lemmatizer BIBREF4 is a rule-based lemmatizer that has been trained for Icelandic on the Icelandic Frequency Dictionary (IFD) corpus, consisting of about 590,000 tokens BIBREF9 . This is a language-independent lemmatizer that only looks at the suffix of the word as a way of lemmatizing OOV words, and can be used on both tagged and untagged input.", "The authors of Lemmald (see Section SECREF2 ) trained and evaluated the CST Lemmatizer on the IFD and observed a 98.99% accuracy on correctly tagged text and 93.15% accuracy on untagged text, in a 10-fold cross-validation, where each test set contained about 60,000 tokens. Another evaluation of this lemmatizer for Icelandic BIBREF10 reports around 90% accuracy on a random sample of 600 words from the IFD, when the input has been PoS tagged automatically (with a tagging accuracy of 91.5%). The PoS tagger used was IceTagger BIBREF11 , which is part of the IceNLP natural language processing toolkit BIBREF12 . These results indicate that the accuracy of this lemmatizer is very dependent upon the tags it is given. To our knowledge, the Icelandic CST Lemmatizer model is not openly available." ], [ "The second tool is Lemmald BIBREF13 , which is part of the IceNLP toolkit. It uses a mixed method of data-driven machine learning (using the IFD as a training corpus) and linguistic rules, as well as providing the option of looking up word forms in the DMII. Given correct PoS tagging of the input, Lemmald's accuracy measures at 98.54%, in a 10-fold cross-validation. The authors note that the CST Lemmatizer performs better than Lemmald when trained on the same data, without the added DMII lookup. The DMII lookup for Lemmald delivers a statistically significant improvement on the accuracy (99.55%), but it is not provided with the IceNLP distribution, so this enhancement is not available for public use. When used for lemmatization of the Icelandic Tagged Corpus (MÍM) BIBREF14 , the lemmatization accuracy of Lemmald was roughly estimated at around 90%." ], [ "The main difference between Nefnir and the two previously described lemmatizers for Icelandic, CST Lemmatizer and Lemmald, is that Nefnir derives its rules from a morphological database, the DMII, whereas the other two are trained on a corpus, the IFD. Note that the IFD only consists of about 590,000 tokens, while the DMII contains over 5.8 million inflectional forms.", "Nefnir uses suffix substitution rules, derived from the DMII to lemmatize tagged text. 
An example of such a rule is (ngar, nkfn, ar → ur), which can be applied to any word form with the suffix ngar that has the PoS tag nkfn (a masculine plural noun in the nominative case), transforming the suffix from ar to ur. This rule could, for example, be applied to the word form kettlingar “kittens” to obtain the corresponding lemma, kettlingur. Words are lemmatized using the rule with the longest shared suffix and the same tag.", "Each inflectional form in the DMII is annotated with a grammatical tag and lemma. As the DMII is limited to inflected words, the training data is supplemented with a hand-curated list of approximately 4,500 uninflected words (such as adverbs, conjunctions and prepositions) and abbreviations.", "To account for subtle differences between the tagsets used in the DMII and by the Icelandic PoS taggers, Nefnir translates all tags to an intermediate tagset which is a subset of both.", "Rules are successively generated and applied to the training set, with each new rule minimizing the number of remaining errors. Rules continue to be generated until the number of errors cannot be reduced. The process is as follows:", "Rules are only generated if they can correctly lemmatize at least two examples in the training set. A dictionary is created for words which are incorrectly lemmatized by the rules, for example because they require a unique transformation, such as from við “we” to ég “I”. Once trained, Nefnir lemmatizes words using the dictionary if they are present, or else with the most specific applicable rule.", "A rule is generated for every suffix in a word form, with some restrictions. For base words, Nefnir considers all suffixes, from the empty string to the full word. For skó “shoes”, an inflected form of the word skór “shoe”, rules are generated for the suffixes ∅ (the empty string), ó, kó and skó. However, Nefnir does not create rules for suffixes that are shorter than the transformation required to lemmatize the word. For example, for bækur “books”, which requires the transformation ækur → ók (the lemma for bækur is bók), only the suffixes ækur and bækur are considered.", "Compounding is highly productive in Icelandic and compound words comprise a very large portion of the vocabulary. This is reflected in the DMII, where over 88% of all words are compounds BIBREF15. Any of the open word classes can be combined to form a compound, and there is no theoretical limit to how many words they can consist of. Due to the abundance of compounds in the training data, and the freedom with which they can be formed, Nefnir places additional restrictions on which suffixes to consider when generating rules for them. Suffixes for the final part of a compound are generated in the same manner as for base words, growing part by part thereafter. For example, the compound word fjall+göngu+skó “hiking boots” would yield rules for the suffixes ∅ (the empty string), ó, kó, skó, gönguskó and fjallgönguskó. Allowing suffixes to grow freely past the final part of the compound may result in overfitting as the rules adapt to incidental patterns in the training data."
], [ "We have evaluated the output of Nefnir against a reference corpus of 21,093 tokens and their correct lemmas.", "Samples for the reference corpus were extracted from two larger corpora, in order to obtain a diverse vocabulary:", "Samples were extracted at random from these two corpora, roughly 10,000 tokens from each, and the lemmas manually reviewed, following the criteria laid out in the preface of the IFD BIBREF9 .", "The incentive when performing the evaluation was to create a diverse corpus of text samples containing foreign words, misspellings and other OOV words. Such words are likely to appear in real-world NLP tasks, and pose special problems for lemmatizers. In the proofread and literature-heavy IFD corpus, which was used for training and evaluating the previous two lemmatizers, these OOV words are less prevalent. Consequently, the test corpus used here is not directly comparable with the corpus used to evaluate Lemmald and the CST Lemmatizer for Icelandic. On the other hand, it is more diverse and offers more challenging problems for the lemmatizer.", "One of the motivations of this work was to determine how well Nefnir performs when lemmatizing text which has been PoS tagged automatically, without any manual review, as such manual labour is usually not feasible in large-scale NLP tasks. For this purpose, we created two versions of the test corpus, one with the correct PoS tags, and another tagged using IceTagger BIBREF11 . The accuracy of IceTagger is further enhanced using data from the DMII. Measured against the correct PoS tags, the accuracy of the PoS tags in the reference corpus is 95.47%.", "Accuracy of the lemmatizaton was measured by comparing the reference corpus lemmas with the obtained lemmas from Nefnir. This was done for both the correctly tagged corpus (gold tags) and the automatically tagged one (IceTagger tags). As seen in Table TABREF10 , the accuracy for the test file with the correct PoS tags is 99.55%, with 94 errors in 21,093 tokens. For the text tagged automatically with IceTagger, the accuracy is 96.88%, with 658 errors.", "These results indicate that given correct PoS tags, Nefnir obtains high accuracy, with under a hundred errors in the whole corpus sample. This is comparable to the score reported for Lemmald, when DMII lookup has been added (99.55%). In fact, it can be argued that a higher score is hard to come by, as natural language always contains some unforeseen issues that are hard to accommodate for, such as OOV words, misspellings, colloquialisms, etc. When Nefnir bases its lemmas on the automatically PoS tagged text, the accuracy decreases, from 99.55% to 96.88%, resulting in six times as many errors.", "We can classify the errors made by Nefnir into the following main categories:", "The most prevalent error categories when the PoS tags are correct are foreign words and proper names, such as foreign names of people, products and companies. A special issue that often came up is the cliticized definite article in Icelandic proper names. This is quite common in organization names (Síminn, Samfylkingin), titles of works of art (Svanurinn), names of ships (Vonin), buildings (Kringlan), etc. Ultimately, it depends on the aim of the lemmatization how these should be handled, but in this evaluation we assume as a general rule that they should be lemmatized with the definite article (Síminn, and not sími or Sími). 
The same applies to the plural, in names such as Hjálmar “helmets” (band) and Katlar (place name).", "In the automatically tagged data, tagging errors are the most common source of lemmatization errors, such as when læknum (referring to the plural dative of the masculine noun læknir “doctor”) is tagged as being in the singular, which leads to it being incorrectly lemmatized as lækur “brook”. This was to be expected, as the rules learned from the DMII rely on the correct tagging of the input. However, as the authors of Lemmald comment, as long as the word class is correct, the lemmatizer can usually still find the correct lemma BIBREF13 .", "The main reason for the high accuracy in our view lies in the richness of the DMII data. No lexicon can ever include all words of a particular language, as new words appear every day, but most often, new words in Icelandic are compounds, created from words already present in the DMII. This explains how rare or unknown words such as the adjective fuglglaður “bird-happy”, which appears in the corpus data, can be correctly lemmatized using the suffix rule for glaður “happy”.", "As mentioned above, Nefnir, the CST Lemmatizer for Icelandic, and Lemmald have not been evaluated using the same reference corpus. The accuracy of the three lemmatizers are, therefore, not directly comparable, but our results indicate that Nefnir obtains the highest accuracy." ], [ "We described and evaluated Nefnir, a new open source lemmatizer for Icelandic. It uses suffix substitution rules, derived from a large morphological database, to lemmatize tagged text. Evaluation shows that Nefnir obtains high accuracy for both correctly and automatically PoS-tagged input.", "As taggers for Icelandic gradually get better, we can expect to see the lemmatization accuracy go up as well. Expanding the morphological database with more proper names may also help to achieve even higher accuracy." ] ] }
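The longest-matching-suffix rule lookup described in the Nefnir record above can be made concrete with a toy sketch; the rules, tags, and the exception entry below are illustrative stand-ins, not Nefnir's learned rules or the DMII tagset.

```python
# Toy suffix-substitution lemmatizer in the spirit of Nefnir's rule lookup.
# A rule maps (suffix, PoS tag) to an (old_ending, new_ending) transformation.
RULES = {
    ("ngar", "nkfn"): ("ar", "ur"),    # kettlingar -> kettlingur
    ("ækur", "nvfn"): ("ækur", "ók"),  # bækur -> bók (tag is illustrative)
}
# Unique transformations that no suffix rule covers go into a dictionary.
EXCEPTIONS = {("við", "fp1fn"): "ég"}  # tag is illustrative

def lemmatize(word, tag):
    if (word, tag) in EXCEPTIONS:
        return EXCEPTIONS[(word, tag)]
    # Try suffixes from longest (the full word form) down to the empty string;
    # apply the first rule that shares both the suffix and the tag.
    for i in range(len(word) + 1):
        rule = RULES.get((word[i:], tag))
        if rule is not None:
            old, new = rule
            return word[:len(word) - len(old)] + new
    return word  # no applicable rule: keep the word form unchanged

print(lemmatize("kettlingar", "nkfn"))  # kettlingur
print(lemmatize("bækur", "nvfn"))       # bók
```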
{ "question": [ "How are the substitution rules built?", "Which dataset do they use?" ], "question_id": [ "8c981f8b992cb583e598f71741c322f522c6d2ad", "16f33de90b76975a99572e0684632d5aedbd957c" ], "nlp_background": [ "infinity", "infinity" ], "topic_background": [ "familiar", "familiar" ], "paper_read": [ "no", "no" ], "search_query": [ "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "from the Database of Modern Icelandic Inflection (DMII) BIBREF1" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this paper, we describe and evaluate Nefnir BIBREF0 , a new open source lemmatizer for Icelandic. Nefnir uses suffix substitution rules derived (learned) from the Database of Modern Icelandic Inflection (DMII) BIBREF1 , which contains over 5.8 million inflectional forms." ], "highlighted_evidence": [ "In this paper, we describe and evaluate Nefnir BIBREF0 , a new open source lemmatizer for Icelandic. Nefnir uses suffix substitution rules derived (learned) from the Database of Modern Icelandic Inflection (DMII) BIBREF1 , which contains over 5.8 million inflectional forms." ] } ], "annotation_id": [ "fccbe7ca289ecda5ab2d089f221ebf8a77bad8fb" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "a reference corpus of 21,093 tokens and their correct lemmas" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We have evaluated the output of Nefnir against a reference corpus of 21,093 tokens and their correct lemmas.", "Samples for the reference corpus were extracted from two larger corpora, in order to obtain a diverse vocabulary:" ], "highlighted_evidence": [ "We have evaluated the output of Nefnir against a reference corpus of 21,093 tokens and their correct lemmas.\n\nSamples for the reference corpus were extracted from two larger corpora, in order to obtain a diverse vocabulary:" ] } ], "annotation_id": [ "1adb91b672a501864835f5f754005983252f0080" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Table 1: Results of the evaluation, with the accuracy and the total number of errors found." ], "file": [ "4-Table1-1.png" ] }
1911.03842
Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation
Models often easily learn biases present in the training data, and their predictions directly reflect this bias. We analyze the presence of gender bias in dialogue and examine the subsequent effect on generative chitchat dialogue models. Based on this analysis, we propose a combination of three techniques to mitigate bias: counterfactual data augmentation, targeted data collection, and conditional training. We focus on the multi-player text-based fantasy adventure dataset LIGHT as a testbed for our work. LIGHT contains gender imbalance between male and female characters with around 1.6 times as many male characters, likely because it is entirely collected by crowdworkers and reflects common biases that exist in fantasy or medieval settings. We show that (i) our proposed techniques mitigate gender bias by balancing the genderedness of generated dialogue utterances; and (ii) they work particularly well in combination. Further, we show through various metrics---such as quantity of gendered words, a dialogue safety classifier, and human evaluation---that our models generate less gendered, but still engaging chitchat responses.
{ "section_name": [ "Introduction", "Sources of Bias in Dialogue Datasets ::: Bias in Character Personas", "Sources of Bias in Dialogue Datasets ::: Bias in Character Personas ::: Qualitative Examination.", "Sources of Bias in Dialogue Datasets ::: Bias in Character Personas ::: Quantitative Examination.", "Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances", "Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances ::: Qualitative Examination.", "Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances ::: Measuring Bias.", "Methodology: Mitigating Bias in Generative Dialogue", "Methodology: Mitigating Bias in Generative Dialogue ::: Models", "Methodology: Mitigating Bias in Generative Dialogue ::: Counterfactual Data Augmentation", "Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection", "Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: Gender-swapping Existing Personas", "Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: New and Diverse characters", "Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: New dialogues", "Methodology: Mitigating Bias in Generative Dialogue ::: Conditional Training", "Results", "Results ::: Bias is Amplified in Generation", "Results ::: Genderedness of Generated Text", "Results ::: Conditional Training Controls Gendered Words", "Results ::: Safety of Generated Text", "Results ::: Human Evaluation", "Conclusion" ], "paragraphs": [ [ "Since machine learning algorithms learn to model patterns present in training datasets, what they learn is affected by data quality. Analysis has found that model predictions directly reflect the biases found in training datasets, such as image classifiers learning to associate ethnicity with specific activities BIBREF1. Recent work in natural language processing has found similar biases, such as in word embeddings BIBREF2, BIBREF3, BIBREF4, object classification BIBREF5, natural language inference BIBREF6, and coreference resolution BIBREF7. Less work has focused on the biases present in dialogue utterances BIBREF8, BIBREF9, despite bias being clearly present in human interactions, and the rapid development of dialogue agents for real-world use-cases, such as interactive assistants. In this work we aim to address this by focusing on mitigating gender bias.", "We use the dialogue dataset from the LIGHT text adventure world BIBREF0 as a testbed for our investigation into de-biasing dialogues. The dataset consists of a set of crowd-sourced locations, characters, and objects, which form the backdrop for the dialogues between characters. In the dialogue creation phase, crowdworkers are presented with personas for characters—which themselves were written by other crowdworkers—that they should enact; the dialogues the crowdworkers generate from these personas form the dialogue dataset. Dialogue datasets are susceptible to reflecting the biases of the crowdworkers as they are often collected solely via crowdsourcing. Further, the game's medieval setting may encourage crowdworkers to generate text which accentuates the historical biases and inequalities of that time period BIBREF10, BIBREF11. 
However, despite the fact that the dialogues take place in a fantasy adventure world, LIGHT is a game and thus we are under no obligation to recreate historical biases in this environment, and can instead use creative license to shape it into a fun world with gender parity.", "We use the dialogues in LIGHT because we find that it is highly imbalanced with respect to gender: there are over 60% more male-gendered characters than female. We primarily address the discrepancy in the representation of male and female genders, although there are many characters that are gender neutral (like “trees\") or for which the gender could not be determined. We did not find any explicitly identified non-binary characters. We note that this is a bias in and of itself, and should be addressed in future work. We show that training on gender biased data leads existing generative dialogue models to amplify gender bias further. To offset this, we collect additional in-domain personas and dialogues to balance gender and increase the diversity of personas in the dataset. Next, we combine this approach with Counterfactual Data Augmentation and methods for controllable text generation to mitigate the bias in dialogue generation. Our proposed techniques create models that produce engaging responses with less gender bias." ], [ "Recent work in dialogue incorporates personas, or personality descriptions that ground speaker's chat, such as I love fishing BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. Personas have been shown to increase engagingness and improve consistency. However, they can be a starting point for bias BIBREF17, BIBREF18, BIBREF9, as bias in the personas propagates to subsequent conversations." ], [ "Analyzing the personas in LIGHT qualitatively, we find many examples of bias. For example, the character girl contains the line I regularly clean and cook dinner. Further examples are given in Table TABREF1." ], [ "We quantitatively analyze bias by first examining whether the existing personas are offensive, and second, evaluating their gender balance. To assess the pervasiveness of unsafe content present in personas, we asked three independent annotators to examine each character's persona for potentially offensive content. If annotators selected that the content was offensive or maybe offensive, they were asked to place it in one of four categories – racist, sexist, classist, other – and to provide a reason for their response. Just over 2% of personas were flagged by at least one annotator, and these personas are removed from the dataset.", "We further examined gender bias in personas. Annotators were asked to label the gender of each character based on their persona description (choosing “neutral\" if it was not explicit in the persona). This annotation is possible because some personas include lines such as I am a young woman, although the majority of personas do not mention an explicit gender. Annotators found nearly 50% more male-gendered characters than female-gendered characters (Table TABREF5).", "While annotators labeled personas as explicitly male, female, or gender-neutral, gender bias may still exist in personas beyond explicit sentences such as I am a young man. For example, personas can contain gendered references such as I want to follow in my father's footsteps rather than mother's footsteps. These relational nouns BIBREF19, BIBREF20 such as father encode a specific relationship that can be gender biased. 
In this example, that relationship would be between the character and a man, rather than a woman. We analyzed the frequency of references to other gendered characters in the personas by counting the appearance of gendered words using the list compiled by BIBREF21 (for example he vs. she), and find that men are disproportionately referred to in the personas: there are nearly 3x as many mentions of men than women." ], [ "After analyzing the bias in LIGHT personas, we go on to analyze the bias in dialogues created from those personas and how to quantify it." ], [ "In our analysis, we found many examples of biased utterances in the data used to train dialogue agents. For example, the character with a queen persona utters the line I spend my days embroidery and having a talk with the ladies. Another character in a dialogue admires a sultry wench with fire in her eyes. An example of persona bias propagating to the dialogue can be found in Table TABREF2." ], [ "Sexism is clearly present in many datasets BIBREF9, but finding a good way to measure sexism, especially at scale, can be challenging. A simple answer would be to rely on crowdworkers operating under their own notions of “sexism” to annotate the dialogues. However, in our experience, crowdworkers hold a range of views, often different from ours, as to what counts as sexism, making mere human evaluation far from sufficient. Note that the original LIGHT personas and dialogues were generated by crowdworkers, leaving little reason to believe that crowdworkers will be proficient at spotting the sexism that they themselves embued the dataset with in the first place. Therefore, we supplement our crowdworker-collected human annotations of gender bias with additional quantitative measurements: we measure the ratio of gendered words (taken from the union of several existing gendered word lists that were each created through either automatic means, or by experts BIBREF21, BIBREF22, BIBREF23), and we run an existing dialogue safety classifier BIBREF24 to measure offensiveness of the dialogues." ], [ "We explore both data augmentation and algorithmic methods to mitigate bias in generative Transformer dialogue models. We describe first our modeling setting and then the three proposed techniques for mitigating bias. Using (i) counterfactual data augmentation BIBREF25 to swap gendered words and (ii) additional data collection with crowdworkers, we create a gender-balanced dataset. Further, (iii) we describe a controllable generation method which moderates the male and female gendered words it produces." ], [ "Following BIBREF0, in all of our experiments we fine-tune a large, pre-trained Transformer encoder-decoder neural network on the dialogues in the LIGHT dataset. The model was pre-trained on Reddit conversations, using a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io. During pre-training, models were trained to generate a comment conditioned on the full thread leading up to the comment. Comments containing URLs or that were under 5 characters in length were removed from the corpus, as were all child comments, resulting in approximately $2,200$ million training examples. The model is a 8 layer encoder, 8 layer decoder with 512 dimensional embeddings and 16 attention heads, and is based on the ParlAI implementation of BIBREF26. For generation, we decode sequences with beam search with beam size 5." 
], [ "One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21. For example, all instances of grandmother are swapped with grandfather." ], [ "To create a more gender-balanced dataset, we collect additional data using a Positive-Bias Data Collection (Pos. Data) strategy." ], [ "There are a larger number of male-gendered character personas than female-gendered character personas (see Section SECREF2), so we balance existing personas using gender-swapping. For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns. Additionally, we ask annotators to swap the gender of any characters that are referred to in the persona text for a given character." ], [ "As discussed in Section SECREF2, it is insufficient to simply balance references to men and women in the dataset, as there may be bias in the form of sexism. While it is challenging to detect sexism, we attempt to offset this type of bias by collecting a set of interesting and independent characters. We do this by seeding workers with examples like adventurer with the persona I am an woman passionate about exploring a world I have not yet seen. I embark on ambitious adventures. We give the additional instruction to attempt to create diverse characters. Even with this instruction, crowdworkers still created roughly 3x as many male-gendered characters as female-gendered characters. We exclude male-gendered characters created in this fashion.", "In combination with the gender swapped personas above, this yields a new set of 2,676 character personas (compared to 1,877 from the original dataset), for which the number of men and women and the number of references to male or female gendered words is roughly balanced: see Table TABREF5." ], [ "Finally, we collect additional dialogues with these newly created gender balanced character personas, favoring conversations that feature female gendered characters to offset the imbalance in the original data. We added further instructions for annotators to be mindful of gender bias during their conversations, and in particular to assume equality between genders – social, economic, political, or otherwise – in this fantasy setting. In total, we collect 507 new dialogues containing 6,658 new dialogue utterances in total (about 6% of the size of the full LIGHT dataset)." ], [ "Bias in dialogue can manifest itself in various forms, but one form is the imbalanced use of gendered words. For example, LIGHT contains far more male-gendered words than female-gendered words rather than an even split between words of both genders. To create models that can generate a gender-balanced number of gendered words, we propose Conditional Training (CT) for controlling generative model output BIBREF27, BIBREF28, BIBREF29, BIBREF30. 
Previous work proposed a mechanism to train models with specific control tokens so models learn to associate the control token with the desired text properties BIBREF28, and then to modify the control tokens during inference to produce the desired result.", "Prior to training, each dialogue response is binned into one of four bins – $\text{F}^{0/+}\text{M}^{0/+}$ – where $\text{F}^{0}$ indicates that there are zero female gendered words in the response and $\text{F}^{+}$ indicates the presence of at least one female gendered word. The gendered words are determined via an aggregation of existing lists of gendered nouns and adjectives from BIBREF21, BIBREF22, BIBREF23. The bins are used to train a conditional model by appending a special token (indicating the bin for the target response) to the end of the input which is given to the encoder. At inference time, the bins can be manipulated to produce dialogue outputs with various quantities of gendered words." ], [ "We train generative Transformer models using each of these methods – Counterfactual Data Augmentation that augments with swaps of gendered words (CDA, §SECREF19), adding new dialogues (Positive-Bias Data Collection, §SECREF20), and controllable generation to control the quantity of gendered words (CT, §SECREF24) – and finally combine all of these methods together (ALL)." ], [ "Existing Transformer generative dialogue models BIBREF31, BIBREF32, BIBREF0 are trained to take as input the dialogue context and generate the next utterance. Previous work has shown that machine learning models reflect the biases present in data BIBREF4, BIBREF3, and that these biases can be easy to learn compared to more challenging reasoning BIBREF2, BIBREF33. Generative models often use beam search or top-k sampling BIBREF34 to decode, and these methods are well-known to produce generic text BIBREF35, which makes them susceptible to statistical biases present in datasets.", "As shown in Table TABREF11, we find that existing models actually amplify bias. When the trained model generates gendered words (i.e., words from our gendered word list), it generates male-gendered words the vast majority of the time – even on utterances for which it is supposed to generate only female-gendered words (i.e., the gold label only contains female-gendered words), it generates male-gendered words nearly $78\%$ of the time.", "Additionally, following BIBREF8, we run an offensive language classifier on the gold responses and the model-generated utterances (Table TABREF16) and find that the model produces more offensive utterances than exist in the dataset." ], [ "We analyze the performance of the various techniques by dividing the test set using the four genderedness bins – $\text{F}^{0}\text{M}^{0}$, $\text{F}^{0}\text{M}^{+}$, $\text{F}^{+}\text{M}^{0}$, and $\text{F}^{+}\text{M}^{+}$ – and calculate the F1 word overlap with the gold response, the percentage of gendered words generated (% gend. words), and the percentage of male-gendered words generated (relative to the sum total of gendered words generated by the model). We compare to the gold labels from the test set and a baseline model that does not use any of the bias mitigation techniques. Results for all methods are displayed in Table TABREF11.", "Each of the methods we explore improves in % gendered words, % male bias, and F1 over the baseline Transformer generation model, but we find that combining all methods in one – the ALL model – is the most advantageous.
While ALL has more data than CDA and CT, more data alone is not enough — the Positive-Bias Data Collection model does not achieve as good results. Both the CT and ALL models benefit from knowing the data split ($\\text{F}^{0}\\text{M}^{0}$, for example), and both models yield a genderedness ratio closest to ground truth." ], [ "Our proposed CT method can be used to control the use of gendered words in generated dialogues. We examine the effect of such training by generating responses on the test set by conditioning the ALL model on a singular bin for all examples. Results are shown in Figure FIGREF12. Changing the bin radically changes the genderedness of generated text without significant changes to F1.", "Examples of generated text from both the baseline and the ALL model are shown in Table TABREF31. The baseline model generates male-gendered words even when the gold response contains no gendered words or only female-gendered words, even generating unlikely sequences such as “my name is abigail. i am the king of this kingdom.\"." ], [ "Using a dialogue safety classifier BIBREF24, we find that our proposed de-biased models are rated as less offensive compared to the baseline generative Transformer and the LIGHT data (see Table TABREF16)." ], [ "Finally, we use human evaluation to compare the quality of our de-biasing methods. We use the dialogue evaluation system Acute-Eval BIBREF36 to ask human evaluators to compare two conversations from different models and decide which model is more biased and which model is more engaging. Following Acute-Eval, we collect 100 human and model paired chats. Conversations from a human and baseline model are compared to conversations from a human and the ALL model with all generations set to the $\\text{F}^{0}\\text{M}^{0}$ gender-neutral control bin. Evaluators are asked which model is more engaging and for which model they find it more difficult to predict the gender of the speaker. We found that asking about difficulty of predicting a speaker's gender was much more effective than asking evaluators to evaluate sexism or gender bias. Figure FIGREF17 shows that evaluators rate the ALL model harder to predict the gender of (statistically significant at $p < 0.01$) while engagingness does not change. Our proposed methods are able to mitigate gender bias without degrading dialogue quality." ], [ "We analyze gender bias in dialogue and propose a general purpose method for understanding and mitigating bias in character personas and their associated dialogues. We present techniques using data augmentation and controllable generation to reduce gender bias in neural language generation for dialogue. We use the dataset LIGHT as a testbed for this work. By integrating these methods together, our models provide control over how gendered dialogue is and decrease the offensiveness of the generated utterances. Overall, our proposed methodology reduces the effect of bias while maintaining dialogue engagingness." ] ] }
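The two data-side techniques above, counterfactual swapping of gendered words and the conditional-training bin token, can be sketched as follows; the word lists and the bin-token format are small illustrative stand-ins for the aggregated gendered word lists and control tokens used in the paper.

```python
# Toy sketch of counterfactual data augmentation (CDA) and the conditional
# training (CT) bin token. Real CDA also handles case, morphology, and a
# much larger gendered word pair list.
GENDER_PAIRS = {"he": "she", "she": "he", "king": "queen", "queen": "king",
                "grandfather": "grandmother", "grandmother": "grandfather"}
FEMALE_WORDS = {"she", "queen", "grandmother"}
MALE_WORDS = {"he", "king", "grandfather"}

def counterfactual_swap(utterance):
    return " ".join(GENDER_PAIRS.get(w, w) for w in utterance.lower().split())

def genderedness_bin(response):
    words = set(response.lower().split())
    f = "F+" if words & FEMALE_WORDS else "F0"
    m = "M+" if words & MALE_WORDS else "M0"
    return f + m  # one of F0M0, F0M+, F+M0, F+M+

def add_control_token(dialogue_context, target_response):
    # The bin of the *target* response is appended to the encoder input.
    return dialogue_context + " " + genderedness_bin(target_response)

print(counterfactual_swap("the queen spoke to her grandmother"))  # king ... grandfather
print(add_control_token("Who rules this land ?", "The queen does ."))  # ... F+M0
```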
{ "question": [ "What baseline is used to compare the experimental results against?", "How does counterfactual data augmentation aim to tackle bias?", "In the targeted data collection approach, what type of data is targetted?" ], "question_id": [ "d0b005cb7ed6d4c307745096b2ed8762612480d2", "9d9b11f86a96c6d3dd862453bf240d6e018e75af", "415f35adb0ef746883fb9c33aa53b79cc4e723c3" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "bias", "bias", "bias" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Transformer generation model" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Each of the methods we explore improve in % gendered words, % male bias, and F1 over the baseline Transformer generation model, but we find combining all methods in one – the ALL model is the most advantageous. While ALL has more data than CDA and CT, more data alone is not enough — the Positive-Bias Data Collection model does not achieve as good results. Both the CT and ALL models benefit from knowing the data split ($\\text{F}^{0}\\text{M}^{0}$, for example), and both models yield a genderedness ratio closest to ground truth." ], "highlighted_evidence": [ "Each of the methods we explore improve in % gendered words, % male bias, and F1 over the baseline Transformer generation model, but we find combining all methods in one – the ALL model is the most advantageous." ] } ], "annotation_id": [ "1adf5025419a86a5a9d6dfa3c94f2b10887ba8dc" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "The training dataset is augmented by swapping all gendered words by their other gender counterparts", "evidence": [ "One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21. For example, all instances of grandmother are swapped with grandfather." ], "highlighted_evidence": [ "One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21." ] } ], "annotation_id": [ "a4f3aaa96d4e166fbe45d5ff951d622f4f963863" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Gendered characters in the dataset", "evidence": [ "There are a larger number of male-gendered character personas than female-gendered character personas (see Section SECREF2), so we balance existing personas using gender-swapping. For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns. Additionally, we ask annotators to swap the gender of any characters that are referred to in the persona text for a given character." 
], "highlighted_evidence": [ "For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns." ] } ], "annotation_id": [ "1bd3662ed99b0f0baec07e009286a85a87364f37" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ] }
{ "caption": [ "Table 1: Character persona examples from the LIGHT dataset. While there are relatively few examples of femalegendered personas, many of the existing ones exhibit bias. None of these personas were flagged by annotators during a review for offensive content.", "Table 2: An example dialogue from the LIGHT dataset, with the persona for the wife character provided. Bias from the persona informs and effects the dialogue task.", "Table 3: Analysis of gender in LIGHT Characters: the original dataset contains 1.6× as many male-gendered characters as female-gendered characters. New characters are collected to offset this imbalance.", "Table 4: We compare the performance of various bias mitigation methods – Counterfactual Data Augmentation (CDA), Positive-Bias Data Collection (Pos. Data), Conditional Training (CT), and combining these methods (ALL) – on the LIGHT test set, splitting the test set across the four genderedness bins: F0/+M0/+. X0 indicates there are no X-gendered words in the gold response, while, X+ indicates that there is at least one. We measure the percent of gendered words in the generated utterances (% gend. words) and the percent of male bias (% male bias), i.e. the percent of male-gendered words among all gendered words generated. While each of these methods yield some improvement, combining all of these methods in one yields the best control over the genderedness of the utterances while still maintaining a good F1-score.", "Figure 1: Comparing the performance of the ALL de-bias model when we fix the conditioning to a specific bin for all examples at test time. We report results for each possible conditioning bin choice. Across bins, the model maintains performance whilst radically changing the genderedness of the language generated.", "Table 5: Offensive language classification of model responses on the LIGHT dialogue test set.", "Figure 2: Human Evaluation of ALL model compared to baseline Transformer generative model. The control bins in ALL are set to F0M0 to reduce gendered words. Evaluators find it harder to predict the speaker gender when using our proposed techniques, while model engagingness is not affected by the method.", "Table 6: Example generations from the baseline model and the proposed de-biased models. In these examples, the gold truth either contains no gendered words or only female-gendered words, but the baseline model generates male-gendered words." ], "file": [ "2-Table1-1.png", "2-Table2-1.png", "3-Table3-1.png", "4-Table4-1.png", "4-Figure1-1.png", "4-Table5-1.png", "4-Figure2-1.png", "7-Table6-1.png" ] }
1707.02377
Efficient Vector Representation for Documents through Corruption
We present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones to be close to zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or out-performs the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification as well as semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient in generating representations of unseen documents at test time.
{ "section_name": [ "Introduction", "Related Works and Notations", "Method", "Corruption as data-dependent regularization", "Experiments", "Baselines", "Sentiment analysis", "Word analogy", "Document Classification", "Semantic relatedness", "Conclusion" ], "paragraphs": [ [ "Text understanding starts with the challenge of finding machine-understandable representation that captures the semantics of texts. Bag-of-words (BoW) and its N-gram extensions are arguably the most commonly used document representations. Despite its simplicity, BoW works surprisingly well for many tasks BIBREF0 . However, by treating words and phrases as unique and discrete symbols, BoW often fails to capture the similarity between words or phrases and also suffers from sparsity and high dimensionality.", "Recent works on using neural networks to learn distributed vector representations of words have gained great popularity. The well celebrated Word2Vec BIBREF1 , by learning to predict the target word using its neighboring words, maps words of similar meanings to nearby points in the continuous vector space. The surprisingly simple model has succeeded in generating high-quality word embeddings for tasks such as language modeling, text understanding and machine translation. Word2Vec naturally scales to large datasets thanks to its simple model architecture. It can be trained on billions of words per hour on a single machine.", "Paragraph Vectors BIBREF2 generalize the idea to learn vector representation for documents. A target word is predicted by the word embeddings of its neighbors in together with a unique document vector learned for each document. It outperforms established document representations, such as BoW and Latent Dirichlet Allocation BIBREF3 , on various text understanding tasks BIBREF4 . However, two caveats come with this approach: 1) the number of parameters grows with the size of the training corpus, which can easily go to billions; and 2) it is expensive to generate vector representations for unseen documents at test time.", "We propose an efficient model architecture, referred to as Document Vector through Corruption (Doc2VecC), to learn vector representations for documents. It is motivated by the observation that linear operations on the word embeddings learned by Word2Vec can sustain substantial amount of syntactic and semantic meanings of a phrase or a sentence BIBREF5 . For example, vec(“Russia”) + vec(“river”) is close to vec(“Volga River”) BIBREF6 , and vec(“king”) - vec(“man”) + vec(“women”) is close to vec(“queen”) BIBREF5 . In Doc2VecC, we represent each document as a simple average of the word embeddings of all the words in the document. In contrast to existing approaches which post-process learned word embeddings to form document representation BIBREF7 , BIBREF8 , Doc2VecC enforces a meaningful document representation can be formed by averaging the word embeddings during learning. Furthermore, we include a corruption model that randomly remove words from a document during learning, a mechanism that is critical to the performance and learning speed of our algorithm.", "Doc2VecC has several desirable properties: 1. The model complexity of Doc2VecC is decoupled from the size of the training corpus, depending only on the size of the vocabulary; 2. The model architecture of Doc2VecC resembles that of Word2Vec, and can be trained very efficiently; 3. 
The new framework implicitly introduces a data-dependent regularization, which favors rare or informative words and suppresses words that are common but not discriminative; 4. Vector representation of a document can be generated by simply averaging the learned word embeddings of all the words in the document, which significantly boosts test efficiency; 5. The vector representation generated by Doc2VecC matches or beats the state-of-the-art for sentiment analysis, document classification as well as semantic relatedness tasks." ], [ "Text representation learning has been extensively studied. Popular representations range from the simplest BoW and its term-frequency based variants BIBREF9 , language model based methods BIBREF10 , BIBREF11 , BIBREF12 , topic models BIBREF13 , BIBREF3 , Denoising Autoencoders and their variants BIBREF14 , BIBREF15 , and distributed vector representations BIBREF8 , BIBREF2 , BIBREF16 . Another prominent line of work includes learning task-specific document representation with deep neural networks, such as CNN BIBREF17 or LSTM based approaches BIBREF18 , BIBREF19 .", "In this section, we briefly introduce Word2Vec and Paragraph Vectors, the two approaches that are most similar to ours. There are two well-known model architectures used for both methods, referred to as Continuous Bag-of-Words (CBoW) and Skipgram models BIBREF1 . In this work, we focus on CBoW. Extending to Skipgram is straightforward. Here are the notations we are going to use throughout the paper:" ], [ "Several works BIBREF6 , BIBREF5 showcased that syntactic and semantic regularities of phrases and sentences are reasonably well preserved by adding or subtracting word embeddings learned through Word2Vec. This prompts us to explore the option of simply representing a document as an average of word embeddings. Figure FIGREF9 illustrates the new model architecture.", "Similar to Word2Vec or Paragraph Vectors, Doc2VecC consists of an input layer, a projection layer as well as an output layer to predict the target word, “ceremony” in this example. The embeddings of neighboring words (“opening”, “for”, “the”) provide local context while the vector representation of the entire document (shown in grey) serves as the global context. In contrast to Paragraph Vectors, which directly learns a unique vector for each document, Doc2VecC represents each document as an average of the embeddings of words randomly sampled from the document (“performance” at position INLINEFORM0 , “praised” at position INLINEFORM1 , and “brazil” at position INLINEFORM2 ). BIBREF25 also proposed the idea of using an average of word embeddings to represent the global context of a document. Different from their work, we choose to corrupt the original document by randomly removing a significant portion of words, and represent the document using only the embeddings of the remaining words. This corruption mechanism offers us a great speedup during training as it significantly reduces the number of parameters to update in back propagation. At the same time, as we are going to detail in the next section, it introduces a special form of regularization, which brings great performance improvement.", "Here we describe the stochastic process we used to generate a global context at each update. The global context, which we denote as INLINEFORM0 , is generated through an unbiased mask-out/drop-out corruption, in which we randomly overwrite each dimension of the original document INLINEFORM1 with probability INLINEFORM2 .
To make the corruption unbiased, we set the uncorrupted dimensions to INLINEFORM3 times their original values. Formally, DISPLAYFORM0 ", "Doc2VecC then defines the probability of observing a target word INLINEFORM0 given its local context INLINEFORM1 as well as the global context INLINEFORM2 as DISPLAYFORM0 ", "Here INLINEFORM0 is the length of the document. Exactly computing the probability is impractical; instead, we approximate it with negative sampling BIBREF1 . DISPLAYFORM0 ", "Here INLINEFORM0 stands for a uniform distribution over the terms in the vocabulary. The two projection matrices INLINEFORM1 and INLINEFORM2 are then learned to minimize the loss: DISPLAYFORM0 ", "Given the learned projection matrix INLINEFORM0 , we then represent each document simply as an average of the embeddings of the words in the document, DISPLAYFORM0 ", "We are going to elaborate next why we choose to corrupt the original document with the corruption model in eq.( EQREF10 ) during learning, and how it enables us to simply use the average word embeddings as the vector representation for documents at test time." ], [ "We approximate the log likelihood for each instance INLINEFORM0 in eq.( EQREF13 ) with its Taylor expansion with respect to INLINEFORM1 up to the second-order BIBREF26 , BIBREF27 , BIBREF28 . Concretely, we choose to expand at the mean of the corruption INLINEFORM2 : INLINEFORM3 ", "where INLINEFORM0 and INLINEFORM1 are the first-order (i.e., gradient) and second-order (i.e., Hessian) derivatives of the log likelihood with respect to INLINEFORM2 . Expansion at the mean INLINEFORM3 is crucial as shown in the following steps. Let us assume that for each instance, we are going to sample the global context INLINEFORM4 infinitely many times, and thus compute the expected log likelihood with respect to the corrupted INLINEFORM5 . INLINEFORM6 ", "The linear term disappears as INLINEFORM0 . We substitute in INLINEFORM1 for the mean INLINEFORM2 of the corrupting distribution (unbiased corruption) and the matrix INLINEFORM3 for the variance, and obtain DISPLAYFORM0 ", "As each word in a document is corrupted independently of others, the variance matrix INLINEFORM0 is simplified to a diagonal matrix with INLINEFORM1 element equal to INLINEFORM2 . As a result, we only need to compute the diagonal terms of the Hessian matrix INLINEFORM3 .", "The INLINEFORM0 dimension of the Hessian's diagonal evaluated at the mean INLINEFORM1 is given by INLINEFORM2 ", "Plugging the Hessian matrix and the variance matrix back into eq.( EQREF16 ), and then back into the loss defined in eq.( EQREF13 ), we can see that Doc2VecC intrinsically minimizes DISPLAYFORM0 ", "Each INLINEFORM0 in the first term measures the log likelihood of observing the target word INLINEFORM1 given its local context INLINEFORM2 and the document vector INLINEFORM3 . As such, Doc2VecC enforces that a document vector generated by averaging word embeddings can capture the global semantics of the document, and fill in information missed in the local context. The second term here is a data-dependent regularization. The regularization on the embedding INLINEFORM4 of each word INLINEFORM5 takes the following form, INLINEFORM6 ", "where INLINEFORM0 prescribes the confidence of predicting the target word INLINEFORM1 given its neighboring context INLINEFORM2 as well as the document vector INLINEFORM3 .", "Closely examining INLINEFORM0 leads to several interesting findings: 1. the regularizer penalizes the embeddings of common words more heavily.
A word INLINEFORM1 that frequently appears across the training corpus, i.e., INLINEFORM2 often, will have a bigger regularization than a rare word; 2. on the other hand, the regularization is modulated by INLINEFORM3 , which is small if INLINEFORM4 . In other words, if INLINEFORM5 is critical to a confident prediction INLINEFORM6 when it is active, then the regularization is diminished. A similar effect was observed for dropout training for logistic regression models BIBREF27 and denoising autoencoders BIBREF28 ." ], [ "We evaluate Doc2VecC on a sentiment analysis task, a document classification task and a semantic relatedness task, along with several document representation learning algorithms. All experiments can be reproduced using the code available at https://github.com/mchen24/iclr2017" ], [ "We compare against the following document representation baselines: bag-of-words (BoW); Denoising Autoencoders (DEA) BIBREF14 , a representation learned from reconstructing the original document INLINEFORM0 using the corrupted one INLINEFORM1 . SDAs have been shown to be the state-of-the-art for sentiment analysis tasks BIBREF29 . We used Kullback-Leibler divergence as the reconstruction error and an affine encoder. To scale up the algorithm to a large vocabulary, we only take into account the non-zero elements of INLINEFORM2 in the reconstruction error and employed negative sampling for the remaining ones; Word2Vec BIBREF1 +IDF, a representation generated through a weighted average of word vectors learned using Word2Vec; Doc2Vec BIBREF2 ; Skip-thought Vectors BIBREF16 , a generic, distributed sentence encoder that extends the Word2Vec skip-gram model to the sentence level. It has been shown to produce highly generic sentence representations that apply to various natural language processing tasks. We also include RNNLM BIBREF11 , a recurrent neural network based language model, in the comparison. In the semantic relatedness task, we further compare to LSTM-based methods BIBREF18 that have been reported on this dataset." ], [ "For sentiment analysis, we use the IMDB movie review dataset. It contains 100,000 movie reviews categorized as either positive or negative. It comes with a predefined train/test split BIBREF30 : 25,000 reviews are used for training, 25,000 for testing, and the rest as unlabeled data. The two classes are balanced in the training and testing sets. We remove words that appear less than 10 times in the training set, resulting in a vocabulary of 43,375 distinct words and symbols.", "Setup. We test the various representation learning algorithms under two settings: one follows the same protocol proposed in BIBREF8 , where the representation is learned using all the available data, including the test set; another one where the representation is learned using the training and unlabeled sets only. For both settings, a linear support vector machine (SVM) BIBREF31 is trained afterwards on the learned representation for classification. For Skip-thought Vectors, we used the generic model trained on a much bigger book corpus to encode the documents. A vector of 4800 dimensions, the first 2400 from the uni-skip model and the last 2400 from the bi-skip model, is generated for each document. In comparison, all the other algorithms produce a vector representation of size 100. The supervised RNN-LM is learned on the training set only. The hyper-parameters are tuned on a validation set subsampled from the training set.", "Accuracy.
Comparing the two columns in Table TABREF20 , we can see that all the representation learning algorithms benefit from including the testing data during the representation learning phase. Doc2VecC achieved similar or even better performance than Paragraph Vectors. Both methods outperform the other baselines, beating the BOW representation by 15%. In comparison with Word2Vec+IDF, which applies post-processing on learned word embeddings to form document representation, Doc2VecC naturally enforces document semantics to be captured by averaged word embeddings during training. This leads to better performance. Doc2VecC reduces to Denoising Autoencoders (DEA) if the local context words are removed from the paradigm shown in Figure FIGREF9 . By including the context words, Doc2VecC allows the document vector to focus more on capturing the global context. Skip-thought vectors perform surprisingly poorly on this dataset compared to other methods. We hypothesized that it is due to the length of paragraphs in this dataset. The average length of paragraphs in the IMDB movie review dataset is INLINEFORM0 , much longer than the ones used for training and testing in the original paper, which are on the order of 10. As noted in BIBREF18 , the performance of LSTM based methods (similarly, the gated RNN used in Skip-thought vectors) drops significantly with increasing paragraph length, as it is hard to preserve state over long sequences of words.", "Time. Table TABREF22 summarizes the time required by these algorithms to learn and generate the document representation. Word2Vec is the fastest one to train. Denoising Autoencoders and Doc2VecC are the next fastest. The number of parameters that need to be back-propagated in each update is increased by the number of surviving words in INLINEFORM0 . We found that both models are not sensitive to the corruption rate INLINEFORM1 in the noise model. Since the learning time decreases with a higher corruption rate, we used INLINEFORM2 throughout the experiments. Paragraph Vectors takes a longer time to train as there are more parameters (linear in the number of documents in the learning set) to learn. At test time, Word2Vec+IDF, DEA and Doc2VecC all use (weighted) averaging of word embeddings as the document representation. Paragraph Vectors, on the other hand, requires another round of inference to produce the vector representation of unseen test documents. It takes Paragraph Vectors 4 minutes and 17 seconds to infer the vector representations for the 25,000 test documents, in comparison to 7 seconds for the other methods. As we did not re-train the Skip-thought vector models on this dataset, the training time reported in the table is the time it takes to generate the embeddings for the 25,000 training documents. Due to repeated high-dimensional matrix operations required for encoding long paragraphs, it takes a fairly long time to generate the representations for these documents. Similarly for testing. The experiments were conducted on a desktop with an Intel i7 2.2GHz CPU.", "Data dependent regularization. As explained in Section SECREF15 , the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. Here we conduct an experiment to examine the effect. We used a cutoff of 100 in this experiment. Table TABREF24 lists the words having the smallest INLINEFORM0 norm of embeddings found by different algorithms.
The number inside the parentheses after each word is the number of times this word appears in the learning set. In Word2Vec or Paragraph Vectors, the least frequent words have embeddings that are close to zero, despite some of them being indicative of sentiment such as debacle, bliss and shabby. In contrast, Doc2VecC manages to clamp down the representations of words that frequently appear in the training set but are uninformative, such as symbols and stop words.", "Subsampling frequent words. Note that for all the numbers reported, we applied the trick of subsampling frequent words introduced in BIBREF6 to counter the imbalance between frequent and rare words. It is critical to the performance of simple Word2Vec+AVG as the sole remedy to diminish the contribution of common words in the final document representation. If we were to remove this step, the error rate of Word2Vec+AVG would increase from INLINEFORM0 to INLINEFORM1 . Doc2VecC on the other hand naturally exerts a stronger regularization toward embeddings of words that are frequent but uninformative, and therefore does not rely on this trick." ], [ "In Table TABREF24 , we demonstrated that the corruption model introduced in Doc2VecC dampens the embeddings of words which are common and non-discriminative (stop words). In this experiment, we are going to quantitatively compare the word embeddings generated by Doc2VecC to the ones generated by Word2Vec or Paragraph Vectors on the word analogy task introduced by BIBREF1 . The dataset contains five types of semantic questions and nine types of syntactic questions, with a total of 8,869 semantic and 10,675 syntactic questions. The questions are answered through simple linear algebraic operations on the word embeddings generated by different methods. Please refer to the original paper for more details on the evaluation protocol.", "We trained the word embeddings of different methods using the English news dataset released under the ACL workshop on statistical machine translation. The training set includes close to 15M paragraphs with 355M tokens. We compare the performance of word embeddings trained by different methods with increasing embedding dimensionality as well as increasing training data.", "We observe similar trends as in BIBREF1 . Increasing embedding dimensionality as well as training data size improves the performance of the word embeddings on this task. However, the improvement is diminishing. Doc2VecC produces word embeddings which perform significantly better than the ones generated by Word2Vec. We observe an uplift of close to INLINEFORM0 when we train on the full training corpus. Paragraph Vectors, on the other hand, performs surprisingly poorly on this dataset. Our hypothesis is that due to the large capacity of the model architecture, Paragraph Vectors relies mostly on the unique document vectors to capture the information in a text document instead of learning the word semantic or syntactic similarities. This also explains why the PV-DBOW BIBREF2 model architecture proposed in the original work, which completely removes word embedding layers, performs comparably to the distributed memory version.", "In Table 5, we list a detailed comparison of the performance of word embeddings generated by Word2Vec and Doc2VecC on the 14 subtasks, when trained on the full dataset with embeddings of size 100. We can see that Doc2VecC significantly outperforms the word embeddings produced by Word2Vec across almost all the subtasks.
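The analogy evaluation above answers each question with simple linear algebra over the learned embeddings. A minimal numpy sketch of that protocol follows; the toy vocabulary and random vectors are placeholders standing in for trained embeddings, and question words are excluded from the candidate set, as is standard.

```python
import numpy as np

# Sketch of the analogy protocol: answer "a is to b as c is to ?" with the word whose
# embedding is closest (by cosine similarity) to  vec(b) - vec(a) + vec(c).

rng = np.random.default_rng(0)
vocab = ["king", "queen", "man", "woman", "paris", "france"]
E = rng.normal(size=(len(vocab), 100))          # one 100-d placeholder vector per word
E /= np.linalg.norm(E, axis=1, keepdims=True)   # unit-normalize for cosine similarity
idx = {w: i for i, w in enumerate(vocab)}

def analogy(a, b, c):
    query = E[idx[b]] - E[idx[a]] + E[idx[c]]
    query /= np.linalg.norm(query)
    scores = E @ query
    for w in (a, b, c):                         # exclude the question words themselves
        scores[idx[w]] = -np.inf
    return vocab[int(np.argmax(scores))]

if __name__ == "__main__":
    print(analogy("man", "king", "woman"))      # ideally "queen" with real embeddings
```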
], [ "For the document classification task, we use a subset of the wikipedia dump, which contains over 300,000 wikipedia pages in 100 categories. The 100 categories includes categories under sports, entertainment, literature, and politics etc. Examples of categories include American drama films, Directorial debut films, Major League Baseball pitchers and Sydney Swans players. Body texts (the second paragraph) were extracted for each page as a document. For each category, we select 1,000 documents with unique category label, and 100 documents were used for training and 900 documents for testing. The remaining documents are used as unlabeled data. The 100 classes are balanced in the training and testing sets. For this data set, we learn the word embedding and document representation for all the algorithms using all the available data. We apply a cutoff of 10, resulting in a vocabulary of size INLINEFORM0 .", "Table TABREF29 summarizes the classification error of a linear SVM trained on representations of different sizes. We can see that most of the algorithms are not sensitive to the size of the vector representation. Doc2Vec benefits most from increasing representation size. Across all sizes of representations, Doc2VecC outperform the existing algorithms by a significant margin. In fact, Doc2VecC can achieve same or better performance with a much smaller representation vector.", "Figure FIGREF30 visualizes the document representations learned by Doc2Vec (left) and Doc2VecC (right) using t-SNE BIBREF32 . We can see that documents from the same category are nicely clustered using the representation generated by Doc2VecC. Doc2Vec, on the other hand, does not produce a clear separation between different categories, which explains its worse performance reported in Table TABREF29 .", "Figure FIGREF31 visualizes the vector representation generated by Doc2VecC w.r.t. coarser categorization. we manually grouped the 100 categories into 7 coarse categories, television, albums, writers, musicians, athletes, species and actors. Categories that do no belong to any of these 7 groups are not included in the figure. We can see that documents belonging to a coarser category are grouped together. This subset includes is a wide range of sports descriptions, ranging from football, crickets, baseball, and cycling etc., which explains why the athletes category are less concentrated. In the projection, we can see documents belonging to the musician category are closer to those belonging to albums category than those of athletes or species." ], [ "We test Doc2VecC on the SemEval 2014 Task 1: semantic relatedness SICK dataset BIBREF33 . Given two sentences, the task is to determine how closely they are semantically related. The set contains 9,927 pairs of sentences with human annotated relatedness score, ranging from 1 to 5. A score of 1 indicates that the two sentences are not related, while 5 indicates high relatedness. The set is splitted into a training set of 4,500 instances, a validation set of 500, and a test set of 4,927.", "We compare Doc2VecC with several winning solutions of the competition as well as several more recent techniques reported on this dataset, including bi-directional LSTM and Tree-LSTM trained from scratch on this dataset, Skip-thought vectors learned a large book corpus BIBREF34 and produced sentence embeddings of 4,800 dimensions on this dataset. We follow the same protocol as in skip-thought vectors, and train Doc2VecC on the larger book corpus dataset. 
Contrary to the vocabulary expansion technique used in BIBREF16 to handle out-of-vocabulary words, we extend the vocabulary of the learned model directly on the target dataset in the following way: we use the pre-trained word embedding as an initialization, and fine-tune the word and sentence representation on the SICK dataset. Notice that the fine-tuning is done for sentence representation learning only, and we did not use the relatedness score in the learning. This step brings a small improvement to the performance of our algorithm. Given the sentence embeddings, we used the exact same training and testing protocol as in BIBREF16 to score each pair of sentences: with two sentence embeddings INLINEFORM0 and INLINEFORM1 , we concatenate their component-wise product, INLINEFORM2 , and their absolute difference, INLINEFORM3 , as the feature representation.", "Table TABREF35 summarizes the performance of various algorithms on this dataset. Despite its simplicity, Doc2VecC significantly out-performs the winning solutions of the competition, which are heavily feature-engineered toward this dataset, as well as several baseline methods, notably the dependency-tree RNNs introduced in BIBREF35 , which rely on expensive dependency parsers to compose sentence vectors from word embeddings. The performance of Doc2VecC is slightly worse than the LSTM based methods or skip-thought vectors on this dataset, while it significantly outperforms skip-thought vectors on the IMDB movie review dataset ( INLINEFORM0 error rate vs INLINEFORM1 ). As we hypothesized in the previous section, while Doc2VecC is better at handling longer paragraphs, LSTM-based methods are superior for relatively short sentences (of length in the order of 10s). We would like to point out that Doc2VecC is much faster to train and test compared to skip-thought vectors. It takes less than 2 hours to learn the embeddings on the large book corpus for Doc2VecC on a desktop with an Intel i7 2.2GHz CPU, in comparison to the 2 weeks on a GPU required by skip-thought vectors." ], [ "We introduce a new model architecture, Doc2VecC, for document representation learning. It is very efficient to train and test thanks to its simple model architecture. Doc2VecC intrinsically makes sure that document representations generated by averaging word embeddings capture the semantics of the document during learning. It also introduces a data-dependent regularization which favors informative or rare words while dampening the embeddings of common and non-discriminative words. As such, each document can be efficiently represented as a simple average of the learned word embeddings. In comparison to several existing document representation learning algorithms, Doc2VecC is superior not only in testing efficiency, but also in the expressiveness of the generated representations." ] ] }
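To make the corruption and averaging described in the Doc2VecC text above concrete, here is a minimal numpy sketch of the unbiased mask-out corruption of a document's bag-of-words vector and of the test-time representation as an average of word embeddings. The corruption rate, dimensions, and variable names are illustrative assumptions rather than the released implementation.

```python
import numpy as np

# Sketch of Doc2VecC's two key operations, with assumed shapes and names:
#   (1) unbiased mask-out corruption: each dimension of the document's bag-of-words
#       vector x is zeroed with probability q, and survivors are rescaled by 1/(1-q)
#       so that the corrupted vector is unbiased in expectation;
#   (2) at test time, a document is represented as the average of its word embeddings.

rng = np.random.default_rng(0)
V, d = 1000, 100                          # vocabulary size and embedding dim (assumed)
U = rng.normal(scale=0.1, size=(V, d))    # learned word-embedding (projection) matrix

def corrupt(x, q=0.9):
    """Unbiased mask-out/drop-out corruption of a bag-of-words vector x."""
    keep = rng.random(x.shape) >= q       # keep each dimension with probability 1 - q
    return np.where(keep, x / (1.0 - q), 0.0)

def doc_vector(word_ids):
    """Test-time representation: average of the embeddings of the document's words."""
    return U[word_ids].mean(axis=0)

if __name__ == "__main__":
    x = np.zeros(V)
    doc_words = rng.integers(0, V, size=50)   # a toy 50-word document
    np.add.at(x, doc_words, 1.0)              # bag-of-words counts
    x_tilde = corrupt(x)                      # global context used during training
    print(x_tilde.sum(), doc_vector(doc_words).shape)
```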
{ "question": [ "Which language models do they compare against?", "Is their approach similar to making an averaged weighted sum of word vectors, where weights reflect word frequencies?", "How do they determine which words are informative?" ], "question_id": [ "52f1a91f546b8a25a5d72325c503ec8f9c72de23", "bb5697cf352dd608edf119ca9b82a6b7e51c8d21", "98785bf06e60fcf0a6fe8921edab6190d0c2cec1" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "RNNLM BIBREF11" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We compare against the following document representation baselines: bag-of-words (BoW); Denoising Autoencoders (DEA) BIBREF14 , a representation learned from reconstructing original document INLINEFORM0 using corrupted one INLINEFORM1 . SDAs have been shown to be the state-of-the-art for sentiment analysis tasks BIBREF29 . We used Kullback-Liebler divergence as the reconstruction error and an affine encoder. To scale up the algorithm to large vocabulary, we only take into account the non-zero elements of INLINEFORM2 in the reconstruction error and employed negative sampling for the remainings; Word2Vec BIBREF1 +IDF, a representation generated through weighted average of word vectors learned using Word2Vec; Doc2Vec BIBREF2 ; Skip-thought Vectors BIBREF16 , a generic, distributed sentence encoder that extends the Word2Vec skip-gram model to sentence level. It has been shown to produce highly generic sentence representations that apply to various natural language processing tasks. We also include RNNLM BIBREF11 , a recurrent neural network based language model in the comparison. In the semantic relatedness task, we further compare to LSTM-based methods BIBREF18 that have been reported on this dataset." ], "highlighted_evidence": [ "We also include RNNLM BIBREF11 , a recurrent neural network based language model in the comparison." ] } ], "annotation_id": [ "6db29a269f42efdb89beabbd9c34bc64102f33af" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Different from their work, we choose to corrupt the original document by randomly removing significant portion of words, and represent the document using only the embeddings of the words remained." ], "yes_no": null, "free_form_answer": "", "evidence": [ "Similar to Word2Vec or Paragraph Vectors, Doc2VecC consists of an input layer, a projection layer as well as an output layer to predict the target word, “ceremony” in this example. The embeddings of neighboring words (“opening”, “for”, “the”) provide local context while the vector representation of the entire document (shown in grey) serves as the global context. In contrast to Paragraph Vectors, which directly learns a unique vector for each document, Doc2VecC represents each document as an average of the embeddings of words randomly sampled from the document (“performance” at position INLINEFORM0 , “praised” at position INLINEFORM1 , and “brazil” at position INLINEFORM2 ). BIBREF25 also proposed the idea of using average of word embeddings to represent the global context of a document. 
Different from their work, we choose to corrupt the original document by randomly removing significant portion of words, and represent the document using only the embeddings of the words remained. This corruption mechanism offers us great speedup during training as it significantly reduces the number of parameters to update in back propagation. At the same time, as we are going to detail in the next section, it introduces a special form of regularization, which brings great performance improvement." ], "highlighted_evidence": [ "BIBREF25 also proposed the idea of using average of word embeddings to represent the global context of a document. Different from their work, we choose to corrupt the original document by randomly removing significant portion of words, and represent the document using only the embeddings of the words remained." ] } ], "annotation_id": [ "1ae7eca7804e1547227cce6d43ad9b403f8832ad" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Informative are those that will not be suppressed by regularization performed.", "evidence": [ "Data dependent regularization. As explained in Section SECREF15 , the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. Here we conduct an experiment to exam the effect. We used a cutoff of 100 in this experiment. Table TABREF24 lists the words having the smallest INLINEFORM0 norm of embeddings found by different algorithms. The number inside the parenthesis after each word is the number of times this word appears in the learning set. In word2Vec or Paragraph Vectors, the least frequent words have embeddings that are close to zero, despite some of them being indicative of sentiment such as debacle, bliss and shabby. In contrast, Doc2VecC manages to clamp down the representation of words frequently appear in the training set, but are uninformative, such as symbols and stop words." ], "highlighted_evidence": [ "As explained in Section SECREF15 , the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words.", "In contrast, Doc2VecC manages to clamp down the representation of words frequently appear in the training set, but are uninformative, such as symbols and stop words." ] } ], "annotation_id": [ "9335468572f5556bcdc53f49d72dc01c47d6814b" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: A new framework for learning document vectors.", "Table 1: Classification error of a linear classifier trained on various document representations on the Imdb dataset.", "Table 2: Learning time and representation generation time required by different representation learning algorithms.", "Table 3: Words with embeddings closest to 0 learned by different algorithms.", "Figure 2: Accuracy on subset of the Semantic-Syntactic Word Relationship test set. Only questions containing words from the most frequent 30k words are included in the test.", "Table 4: Top 1 accuracy on the 5 type of semantics and 9 types of syntactic questions.", "Table 5: Classification error (%) of a linear classifier trained on various document representations on the Wikipedia dataset.", "Figure 3: Visualization of document vectors on Wikipedia dataset using t-SNE.", "Figure 4: Visualization of Wikipedia Doc2VecC vectors using t-SNE.", "Table 6: Test set results on the SICK semantic relatedness task. The first group of results are from the submission to the 2014 SemEval competition; the second group includes several baseline methods reported in (Tai et al., 2015); the third group are methods based on LSTM reported in (Tai et al., 2015) as well as the skip-thought vectors (Kiros et al., 2015)." ], "file": [ "3-Figure1-1.png", "6-Table1-1.png", "7-Table2-1.png", "7-Table3-1.png", "8-Figure2-1.png", "8-Table4-1.png", "9-Table5-1.png", "9-Figure3-1.png", "9-Figure4-1.png", "11-Table6-1.png" ] }
1911.06191
Microsoft Research Asia's Systems for WMT19
We, Microsoft Research Asia, made submissions to 11 language directions in the WMT19 news translation tasks. We won first place for 8 of the 11 directions and second place for the other three. Our basic systems are built on Transformer, back translation and knowledge distillation. We integrate several of our recent techniques to enhance the baseline systems: multi-agent dual learning (MADL), masked sequence-to-sequence pre-training (MASS), neural architecture optimization (NAO), and soft contextual data augmentation (SCA).
{ "section_name": [ "Introduction", "Introduction ::: Multi-agent dual learning (MADL)", "Introduction ::: Masked sequence-to-sequence pretraining (MASS)", "Introduction ::: Neural architecture optimization (NAO)", "Introduction ::: Soft contextual data augmentation (SCA)", "Our Techniques ::: Multi-agent dual learning (MADL)", "Our Techniques ::: Masked sequence-to-sequence pre-training (MASS)", "Our Techniques ::: Neural architecture optimization (NAO)", "Our Techniques ::: Soft contextual data augmentation (SCA)", "Submitted Systems ::: English@!START@$\\leftrightarrow $@!END@German", "Submitted Systems ::: English@!START@$\\leftrightarrow $@!END@German ::: Dataset", "Submitted Systems ::: English@!START@$\\leftrightarrow $@!END@German ::: Model Configuration", "Submitted Systems ::: English@!START@$\\leftrightarrow $@!END@German ::: Training Pipeline", "Submitted Systems ::: English@!START@$\\leftrightarrow $@!END@German ::: Results", "Submitted Systems ::: German@!START@$\\leftrightarrow $@!END@French", "Submitted Systems ::: Chinese@!START@$\\rightarrow $@!END@English ::: Dataset", "Submitted Systems ::: Chinese@!START@$\\rightarrow $@!END@English ::: MASS Pre-training", "Submitted Systems ::: Chinese@!START@$\\rightarrow $@!END@English ::: Back Translation and Knowledge Distillation", "Submitted Systems ::: Chinese@!START@$\\rightarrow $@!END@English ::: Results", "Submitted Systems ::: Chinese@!START@$\\rightarrow $@!END@English ::: WMT19 Submission", "Submitted Systems ::: English@!START@$\\leftrightarrow $@!END@Lithuanian", "Submitted Systems ::: English@!START@$\\leftrightarrow $@!END@Finnish ::: Preprocess", "Submitted Systems ::: English@!START@$\\leftrightarrow $@!END@Finnish ::: Architecture search", "Submitted Systems ::: English@!START@$\\leftrightarrow $@!END@Finnish ::: Train single models", "Submitted Systems ::: English@!START@$\\leftrightarrow $@!END@Finnish ::: Re-ranking", "Submitted Systems ::: Russian@!START@$\\rightarrow $@!END@English ::: Dataset", "Submitted Systems ::: Russian@!START@$\\rightarrow $@!END@English ::: Our system", "Submitted Systems ::: Russian@!START@$\\rightarrow $@!END@English ::: Results", "Submitted Systems ::: English@!START@$\\rightarrow $@!END@Kazakh ::: Dataset", "Submitted Systems ::: English@!START@$\\rightarrow $@!END@Kazakh ::: Our system", "Submitted Systems ::: English@!START@$\\rightarrow $@!END@Kazakh ::: Result", "Conclusions", "Acknowledgments" ], "paragraphs": [ [ "We participated in the WMT19 shared news translation task in 11 translation directions. We achieved first place for 8 directions: German$\\leftrightarrow $English, German$\\leftrightarrow $French, Chinese$\\leftrightarrow $English, English$\\rightarrow $Lithuanian, English$\\rightarrow $Finnish, and Russian$\\rightarrow $English, and three other directions were placed second (ranked by teams), which included Lithuanian$\\rightarrow $English, Finnish$\\rightarrow $English, and English$\\rightarrow $Kazakh.", "Our basic systems are based on Transformer, back translation and knowledge distillation. We experimented with several techniques we proposed recently. In brief, the innovations we introduced are:" ], [ "The core idea of dual learning is to leverage the duality between the primal task (mapping from domain $\\mathcal {X}$ to domain $\\mathcal {Y}$) and dual task (mapping from domain $\\mathcal {Y}$ to $\\mathcal {X}$ ) to boost the performances of both tasks. 
MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models. It was integrated into our submitted systems for German$\leftrightarrow $English and German$\leftrightarrow $French translations." ], [ "Pre-training and fine-tuning have achieved great success in language understanding. MASS BIBREF3, a pre-training method designed for language generation, adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence: its encoder takes a sentence with a randomly masked fragment (several consecutive tokens) as input, and its decoder tries to predict this masked fragment. It was integrated into our submitted systems for Chinese$\rightarrow $English and English$\rightarrow $Lithuanian translations." ], [ "As is well known, the evolution of neural network architecture plays a key role in advancing neural machine translation. Neural architecture optimization (NAO), our newly proposed method BIBREF4, leverages the power of a gradient-based method to conduct optimization and guide the creation of better neural architectures in a continuous and more compact space given the historically observed architectures and their performances. It was applied in English$\leftrightarrow $Finnish translations in our submitted systems." ], [ "While data augmentation is an important trick to boost the accuracy of deep learning methods in computer vision tasks, its study in natural language tasks is relatively limited. SCA BIBREF5 softly augments a randomly chosen word in a sentence by its contextual mixture of multiple related words, i.e., replacing the one-hot representation of a word by a distribution provided by a language model over the vocabulary. It was applied in Russian$\rightarrow $English translation in our submitted systems." ], [ "MADL is an enhanced version of dual learning BIBREF1, BIBREF6. It leverages $N$ primal translation models $f_i$ and $N$ dual translation models $g_j$ for training, and eventually outputs one $f_0$ and one $g_0$ for inference, where $f_i:\mathcal {X}\mapsto \mathcal {Y},g_j:\mathcal {Y}\mapsto \mathcal {X}$, $i,j\in \lbrace 0,1,\cdots ,N-1\rbrace $. All these models are pre-trained on bilingual data. The $i$-th primal model $f_i$ has a non-negative weight $\alpha _i$ and the $j$-th dual model $g_j$ has a non-negative weight $\beta _j$. All the $\alpha _\cdot $'s and $\beta _\cdot $'s are hyper-parameters. Let $F_\alpha $ denote a combined translation model from $\mathcal {X}$ to $\mathcal {Y}$, and $G_\beta $ a combined translation model from $\mathcal {Y}$ to $\mathcal {X}$,", "$F_\alpha $ and $G_\beta $ work as follows: for any $x\in \mathcal {X}$ and $y\in \mathcal {Y}$,", "Let $\mathcal {B}$ denote the bilingual dataset. Let $\mathcal {M}_x$ and $\mathcal {M}_y$ denote the monolingual data of $\mathcal {X}$ and $\mathcal {Y}$. The training objective function of MADL can be written as follows:", "Note that $f_{>0}$ and $g_{>0}$ will not be optimized during training and we eventually output $f_0$ and $g_0$ for translation. More details can be found in BIBREF0." ], [ "MASS is a pre-training method for language generation. For machine translation, it can leverage monolingual data in two languages to pre-train a translation model. Given a sentence $x \in \mathcal {X}$, we denote $x^{\setminus u:v}$ as a modified version of $x$ where its fragment from position $u$ to $v$ is masked, $0<u<v<m$ and $m$ is the number of tokens of sentence $x$.
We denote $k=v-u+1$ as the number of tokens being masked from position $u$ to $v$. We replace each masked token by a special symbol $[\\mathbb {M}]$, and the length of the masked sentence is not changed. $x^{u:v}$ denotes the sentence fragment of $x$ from $u$ to $v$.", "MASS pre-trains a sequence to sequence model by predicting the sentence fragment $x^{u:v}$ taking the masked sequence $x^{\\setminus u:v}$ as input. We use the log likelihood as the objective function:", "where $\\mathcal {X}$, $\\mathcal {Y}$ denote the source and target domain. In addition to zero/low-resource setting BIBREF7, we also extend MASS to supervised setting where bilingual sentence pair $(x, y) \\in (\\mathcal {X}, \\mathcal {Y})$ can be leveraged for pre-training. The log likelihood in the supervised setting is as follows:", "where $[\\cdot ;\\cdot ]$ represents the concatenation operation. $P(y|x^{\\setminus u:v};\\theta )$ and $P(x|y^{\\setminus u:v};\\theta )$ denote the probability of translating a masked sequence to another language, which encourage the encoder to extract meaningful representations of unmasked input tokens in order to predict the masked output sequence. $P(x^{u:v}|[x^{\\setminus u:v}; y^{\\setminus u:v}];\\theta )$ and $P(y^{u:v}|[x^{\\setminus u:v}; y^{\\setminus u:v}];\\theta )$ denote the probability of generating the masked source/target segment given both the masked source and target sequences, which encourage the model to extract cross-lingual information. $P(y^{u:v}|x^{\\setminus u:v};\\theta )$ and $P(x^{u:v}|y^{\\setminus u:v};\\theta )$ denote the probability of generating the masked fragment given only the masked sequence in another language. More details about MASS can be found in BIBREF3." ], [ "NAO BIBREF4 is a gradient based neural architecture search (NAS) method. It contains three key components: an encoder, an accuracy predictor, and a decoder, and optimizes a network architecture as follows. (1) The encoder maps a network architecture $x$ to an embedding vector $e_x$ in a continuous space $\\mathcal {E}$. (2) The predictor, a function $f$, takes $e_x\\in \\mathcal {E}$ as input and predicts the dev set accuracy of the architecture $x$. We perform a gradient ascent step, i.e., moving $e_x$ along the direction specified via the gradient $\\frac{\\partial f}{\\partial e_x}$, and get a new embedding vector $e_{x^{\\prime }}$:", "where $\\eta $ is the step size. (3) The decoder is used to map $e_{x^{\\prime }}$ back to the corresponding architecture $x^{\\prime }$. The new architecture $x^{\\prime }$ is assumed to have better performance compared with the original one $x$ due to the property of gradient ascent. NAO repeats the above three steps, and sequentially generates better and better architectures.", "To learn high-quality encoder, decoder and performance prediction function, it is essential to have a large quantity of paired training data in the form of $(x,y)$, where $y$ is the dev set accuracy of the architecture $x$. To reduce computational cost, we share weights among different architectures BIBREF8 to aid the generation of such paired training data.", "We use NAO to search powerful neural sequence-to-sequence architectures. The search space is illustrated in Fig. FIGREF13. Specifically, each network is composed of $N$ encoder layers and $N$ decoder layers. We set $N=6$ in our experiments. Each encoder layer further contains 2 nodes and each decoder layer contains 3 nodes. 
Each node has two branches, each taking the output of another node as input, and applies a particular operator (OP), for example, identity, self-attention or convolution, to generate its output. The outputs of the two branches are added together as the output of the node. Each encoder layer contains two nodes while each decoder layer has three. For each layer, we search for: 1) the operator at each branch of every node (for a comprehensive list of different OPs, please refer to the Appendix of this paper); and 2) the topology of connections between nodes within each layer. In the middle part of Fig. FIGREF13, we plot possible connections within the nodes of a layer specified by all candidate architectures, with a particular highlight of the Transformer BIBREF9.", "To construct the final network, we do not adopt the typically used way of stacking the same layer multiple times. Instead, we assume that layers in the encoder/decoder could have different architectures and directly search for such a personalized architecture for each layer. We found that such a design significantly improves the performance due to the increased flexibility." ], [ "SCA is a data augmentation technology for NMT BIBREF5, which replaces a randomly chosen word in a sentence with its soft version. For any word $w \in V$, its soft version is a distribution over the vocabulary of $|V|$ words: $P(w) = (p_1(w), p_2(w), ..., p_{|V|}(w))$, where $p_j(w) \ge 0$ and $\sum _{j=1}^{|V|}p_j(w) = 1$.", "Given the distribution $P(w)$, one may simply sample a word from this distribution to replace the original word $w$. Different from this method, we directly use this distribution vector to replace the randomly chosen word $w$ from the original sentence. Suppose $E$ is the embedding matrix of all the $|V|$ words. The embedding of the soft version of $w$ is", "which is the expectation of word embeddings over the distribution.", "In our systems, we leverage a pre-trained language model to compute $P(w)$, conditioned on all the words preceding $w$. That is, for the $t$-th word $x_t$ in a sentence, we have", "where $LM(v_j|x_{<t})$ denotes the probability of the $j$-th word $v_j$ in the vocabulary appearing after the sequence $x_1, x_2, \cdots , x_{t-1}$. The language model is pre-trained using the monolingual data." ], [ "We submit constrained systems to both English to German and German to English translations, with the same techniques." ], [ "We concatenate “Europarl v9”, “News Commentary v14”, “Common Crawl corpus” and “Document-split Rapid corpus” as the basic bilingual dataset (denoted as $\mathcal {B}_0$). Since “Paracrawl” data is noisy, we select 20M bilingual sentence pairs from this corpus using the script filter_interactive.py. The two parts of bilingual data are concatenated together (denoted as $\mathcal {B}_1$). We clean $\mathcal {B}_1$ by normalizing the sentences, removing non-printable characters, and tokenizing. We share a vocabulary for the two languages and apply BPE for word segmentation with 35000 merge operations. (We tried different BPE merge operations but found no significant differences.) For monolingual data, we use $120M$ English sentences (denoted as $\mathcal {M}_{\text{en}}$) and $120M$ German sentences (denoted as $\mathcal {M}_{\text{de}}$) from Newscrawl, and preprocess them in the same way as bilingual data. We use newstest 2016 as the validation set and newstest 2018 as the test set." ], [ "We use the PyTorch implementation of Transformer.
We choose the Transformer_big setting, in which both the encoder and decoder are of six layers. The dropout rate is fixed as $0.2$. We set the batchsize as 4096 and the parameter –update-freq as 16. We apply Adam BIBREF10 optimizer with learning rate $5\\times 10^{-4}$." ], [ "The pipeline consists of three steps:", "1. Pre-train two English$\\rightarrow $German translation models (denoted as $\\bar{f}_1$ and $\\bar{f}_2$) and two German$\\rightarrow $English translation models (denoted as $\\bar{g}_1$ and $\\bar{g}_2$) on $\\mathcal {B}_1$; pre-train another English$\\rightarrow $German (denoted as $\\bar{f}_3$) and German$\\rightarrow $English (denoted as $\\bar{g}_3$) on $\\mathcal {B}_0$.", "2. Apply back translation following BIBREF11, BIBREF12. We back-translate $\\mathcal {M}_{\\text{en}}$ and $\\mathcal {M}_{\\text{de}}$ using $\\bar{f}_3$ and $\\bar{g}_3$ with beam search, add noise to the translated sentences BIBREF12, merge the synthetic data with $\\mathcal {B}_1$, and train one English$\\rightarrow $German model $f_0$ and one German$\\rightarrow $English model $g_0$ for seven days on eight V100 GPUs.", "3. Apply MADL to $f_0$ and $g_0$. That is, the $F_\\alpha $ in Eqn.(DISPLAY_FORM8) is specified as the combination of $f_0,\\bar{f}_1,\\bar{f}_2$ with equal weights; and $G_\\beta $ consists of $g_0,\\bar{g}_1,\\bar{g}_2$. During training, we will only update $f_0$ and $g_0$. To speed up training, we randomly select $20M$ monolingual English and German sentences from $\\mathcal {M}_{\\text{en}}$ and $\\mathcal {M}_{\\text{de}}$ respectively instead of using all monolingual sentences. The eventual output models are denoted as $f_1$ and $g_1$ respectively. This step takes 3 days on four P40 GPUs." ], [ "The results are summarized in Table TABREF24, which are evaluated by sacreBLEU. The baseline is the average accuracy of models using only bitext, i.e., $\\bar{f}_1$ and $\\bar{f}_2$ for English$\\rightarrow $German translation and $\\bar{g}_1$ and $\\bar{g}_2$ for German$\\rightarrow $English, and BT is the accuracy of the model after back-translation training. As can be seen, back translation improves accuracy. For example, back-translation boosts the BLEU score from $45.6$ to $47.4$ on news18 English$\\rightarrow $German translation, which is $1.8$ point improvement. MADL further boosts BLEU to $50.4$, obtaining another 3-point improvement, demonstrating the effectiveness of our method.", "For the final submission, we accumulate many translation models (trained using bitext, back translation, and MADL, with different random seeds) and do knowledge distillation on the source sentences from WMT14 to WMT19 test sets. Take English$\\rightarrow $German translation as an example. Denote the English inputs as $\\mathcal {T}=\\lbrace s_i\\rbrace _{i=1}^{N_T}$, where $N_T$ is the size of the test set. For each $s$ in $\\mathcal {T}$, we translate $s$ to $d^\\prime $ using $M$ English$\\rightarrow $German models and eventually obtain", "where $f^{(j)}$ is the $j$-th translation model we accumulated, $\\mathcal {T}$ is the combination of inputs from WMT14 to WMT19. After obtaining $\\mathcal {E}$, we randomly select $N_TM$ bitext pairs (denoted as $\\mathcal {B}_2$) from $\\mathcal {B}_1$ and finetune model $f_1$ on $\\mathcal {B}_2\\cup \\mathcal {E}$. 
We stop tuning when the BLEU scores of WMT16 (i.e., the validation set) drops.", "We eventually obtain $44.9$ BLEU score for English$\\rightarrow $German and $42.8$ for German$\\rightarrow $English on WMT19 test sets and are ranked in the first place in these two translation tasks." ], [ "For German$\\leftrightarrow $French translation, we follow a similar process as the one used to English$\\leftrightarrow $German tasks introduced in Section SECREF17. We merge the “commoncrawl”, “europarl-v7” and part of “de-fr.bicleaner07” selected by filter_interactive.py as the bilingual data. We collect $20M$ monolingual sentences for French and $20M$ for German from newscrawl. The data pre-processing rule and training procedure are the same as that used in Section SECREF17. We split $9k$ sentences from the “dev08_14” as the validation set and use the remaining ones as the test set.", "The results of German$\\leftrightarrow $French translation on the test set are summarized in Table TABREF27.", "Again, our method achieves significant improvement over the baselines. Specifically, MADL boosts the baseline of German$\\rightarrow $French and French$\\rightarrow $German by 2 and $1.5$ points respectively.", "Our submitted German$\\rightarrow $French is a single system trained by MADL, achieving $37.3$ BLEU on WMT19. The French$\\rightarrow $German is an ensemble of three independently trained models, achieving $35.0$ BLEU score. Our systems are ranked in the first place for both German$\\rightarrow $French and French$\\rightarrow $German in the leaderboard." ], [ "For Chinese$\\rightarrow $English translation, we use all the bilingual and monolingual data provided by the WMT official website, and also extra bilingual and monolingual data crawled from the web. We filter the total 24M bilingual pairs from WMT using the script filter_interactive.py as described in Section SECREF17 and get 18M sentence pairs. We use the Chinese monolingual data from XMU monolingual corpus and English monolingual data from News Crawl as well as the English sentences from all English-XX language pairs in WMT. We use 100M additional parallel sentences drawn from UN data, Open Subtitles and Web crawled data, which is filtered using the same filter rule described above, as well as fast align and in/out-domain filter. Finally we get 38M bilingual pairs. We also crawled 80M additional Chinese monolingual sentences from Sougou, China News, Xinhua News, Sina News, Ifeng News, and 2M English monolingual sentences from China News and Reuters. We use newstest2017 and newstest2018 on Chinese-English as development datasets.", "We normalize the Chinese sentence from SBC case to DBC case, remove non-printable characters and tokenize with both Jieba and PKUSeg to increase diversity. For English sentences, we remove non-printable characters and tokenize with Moses tokenizer. We follow previous practice BIBREF13 and apply Byte-Pair Encoding (BPE) BIBREF14 separately for Chinese and English, each with 40K vocabulary." ], [ "We pre-train MASS (Transfomer_big) with both monolingual and bilingual data. We use 100M Chinese and 300M English monolingual sentences for the unsupervised setting (Equation DISPLAY_FORM10), and with a total of 18M and 56M bilingual sentence pairs for the supervised settings (Equation DISPLAY_FORM11). We share the encoder and decoder for all the losses in Equation DISPLAY_FORM10 and DISPLAY_FORM11. 
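As a small aside on the preprocessing described earlier in this section, the SBC-to-DBC normalization and removal of non-printable characters can be sketched as below; the exact character ranges handled in the real pipeline may differ.

    def sbc_to_dbc(text: str) -> str:
        # Convert full-width (SBC) characters to their half-width (DBC) equivalents.
        chars = []
        for ch in text:
            code = ord(ch)
            if code == 0x3000:                # ideographic space -> ASCII space
                code = 0x20
            elif 0xFF01 <= code <= 0xFF5E:    # full-width ASCII variants -> ASCII
                code -= 0xFEE0
            chars.append(chr(code))
        return "".join(chars)

    def remove_nonprintable(text: str) -> str:
        return "".join(ch for ch in text if ch.isprintable() or ch.isspace())

    print(sbc_to_dbc("ＡＢＣ，１２３"))  # -> "ABC,123"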
We then fine-tune the MASS pre-trained model on both 18M and 56M bilingual sentence pairs to get the baseline translation model for both Chinese$\\rightarrow $English and English$\\rightarrow $Chinese." ], [ "We randomly choose 40M monolingual sentences for Chinese and English respectively for back translation BIBREF11, BIBREF1 and knowledge distillation BIBREF15, BIBREF16. We iterate back translation and knowledge distillation multiple times, to gradually boost the performance of the model." ], [ "The results on newstest2017 and newstest2018 are shown in Table TABREF37. We list two baseline Transformer_big systems which use 18M bilingual data (constraint) and 56M bilingual data (unconstraint) respectively. The pre-trained model achieves about 1 BLEU point improvement after fine-tuning on both 18M and 56M bilingual data. After iterative back translation (BT) and knowledge distillation (KD), as well as re-ranking, our system achieves 30.8 and 30.9 BLEU points on newstest2017 and newstest2018 respectively." ], [ "For the WMT19 submission, we conduct fine-tuning and speculation to further boost the accuracy by using the source sentences in the WMT19 test set. We first filter the bilingual as well as pseudo-generated data according to the relevance to the source sentences. We use the filter method in BIBREF17 and continue to train the model on the filtered data. Second, we conduct speculation on the test source sentences following the practice in BIBREF17. The final BLEU score of our submission is 39.3, ranked in the first place in the leaderboard." ], [ "For English$\\leftrightarrow $Lithuanian translation, we follow the similar process as that for Chinese$\\rightarrow $English task introduced in Section SECREF28. We use all the WMT bilingual data, which is 2.24M after filtration. We use the same English monolingual data as used in Chinese-English. We select 100M Lithuanian monolingual data from official commoncrawl and use all the wiki and news Lithuanian monolingual data provided by WMT. In addition, we crawl 5M Lithuanian news data from LRT website. We share the BPE vocabulary between English and Lithuanian, and the vocabulary size is 65K.", "All the bilingual and monolingual data are used for MASS pre-training, and all the bilingual data are used for fine-tuning. For iterative back translation and knowledge distillation, we split 24M English monolingual data as well as 12M Lithuanian monolingual data into 5 parts through sampling with replacement, to get different models independently so as to increase diversity in re-ranking/ensemble. Each model uses 8M English monolingual data and 6M Lithuanian monolingual data. For our WMT19 submission, different from zh-en, speculation technology is not used.", "The BLEU scores on newsdev19 are shown in Table TABREF41. Our final submissions for WMT19 achieves 20.1 BLEU points for English$\\rightarrow $Lithuanian translation (ranked in the first place) and 35.6 for Lithuanian$\\rightarrow $English translation (ranked in the second place)." ], [ "We use the official English-Finnish data from WMT19, including both bilingual data and monolingual data. After de-duplicating, the bilingual data contains $8.8M$ aligned sentence pairs. We share the vocabulary for English and Finnish with $46k$ BPE units. We use the WMT17 and WMT18 English-Finnish test sets as two development datasets, and tune hyper-parameters based on the concatenation of them." 
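Several of the systems above iterate back translation and knowledge distillation to grow the training corpus. The data flow of a single round can be sketched as follows; the translate callables are placeholders for beam-search decoding with the current models, and sampling, noising, and up-sampling details are omitted.

    from typing import Callable, List, Tuple

    def bt_kd_round(
        src_mono: List[str],
        tgt_mono: List[str],
        bitext: List[Tuple[str, str]],
        forward_translate: Callable[[List[str]], List[str]],   # src -> tgt (teacher, for KD)
        backward_translate: Callable[[List[str]], List[str]],  # tgt -> src (back translation)
    ) -> List[Tuple[str, str]]:
        # Back translation: pair real target sentences with synthetic sources.
        bt_pairs = list(zip(backward_translate(tgt_mono), tgt_mono))
        # Knowledge distillation: pair real source sentences with teacher translations.
        kd_pairs = list(zip(src_mono, forward_translate(src_mono)))
        # Mix synthetic data with (possibly up-sampled) genuine bitext and retrain on it.
        return bitext + bt_pairs + kd_pairs

The output of one round is used to train the next-round models, which in turn generate the synthetic data for the following round.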
], [ "We use NAO to search sequence-to-sequence architectures for English-Finnish translation tasks, as introduced in subsection SECREF12. We use PyTorch for our implementations. Due to time limitations, we are not targeting at finding better neural architectures than Transformer; instead we target at models with comparable performance to Transformer, while providing diversity in the reranking process. The whole search process takes $2.5$ days on 16 P40 GPU cards and the discovered neural architecture, named as NAONet, is visualized in the Appendix." ], [ "The final system for English-Finnish is obtained through reranking of three strong model checkpoints, respectively from the Transformer model decoding from left to right (L2R Transformer), the Transformer model decoding from right to left (R2L Transformer) and NAONet decoding from left to right. All the models have 6-6 layers in encoder/decoder, and are obtained using the same process which is detailed as below.", "Step 1: Base models. Train two models $P_1(x|y)$ and $P_1(y|x)$ based on all the bilingual dataset ($8.8$M), respectively for English$\\rightarrow $Finnish and Finnish$\\rightarrow $English translations.", "Step 2: Back translation. Do the normal back translation BIBREF11, BIBREF1 using $P_1$ and $P_2$. Specifically we choose $10M$ monolingual English corpus, use $P_1(y|x)$ to generate the $10M$ pseudo bitext with beam search (beam size is set to 5), and mix it with the bilingual data to continue the training of $P_1(x|y)$. The ratio of mixing is set as $1:1$ through up-sampling. The model obtained through such a process is denoted as $P_2(x|y)$. The same process is applied to the opposite direction and the new model $P_2(y|x)$ is attained.", "Step 3: Back translation + knowledge distillation. In this step we generate more pseudo bitext by sequence level knowledge distillation BIBREF15 apart from using back translation. To be more concrete, as the first step, similar to Step 2, we choose $15M$ monolingual English and Finnish corpus, and generate the translations using $P_2(y|x)$ and $P_2(x|y)$, respectively. The resulting pseudo bitext is respectively denoted as $D_{x\\rightarrow y}$ and $D_{y\\rightarrow x}$. Then we concatenate all the bilingual data, $D_{x\\rightarrow y}$ and $D_{y\\rightarrow x}$, and use the whole corpus to train a new English-Finnish model from scratch. The attained model is denoted as $P_3(y|x)$.", "Step 4: Finetune. In this step we try a very simple data selection method to handle the domain mismatch problem in WMT. We remove all the bilingual corpus from Paracrawl which is generally assumed to be quite noisy BIBREF18 and use the remaining bilingual corpus ($4.5M$) to finetune $P_3(y|x)$ for one epoch. The resulting model is denoted as $P_4(y|x)$ which is set as the final model checkpoint.", "To investigate the effects of the four steps, we record the resulting BLEU scores on WMT17 and WMT18 test sets in Table TABREF46, taking the L2R Transformer model as an example. Furthermore, we report the final BLEU scores of the three models after the four steps in Table TABREF47. All the results are obtained via beam size 5 and length penalty $1.0$. The similar results for Finnish-English translation are shown in Table TABREF48." ], [ "We use n-best re-ranking to deliver the final translation results using the three model checkpoints introduced in the last subsection. The beam size is set as 12. The weights of the three models, as well as the length penalty in generation, are tuned on the WMT-18 test sets. 
The results are shown in the second row of Table TABREF50.", "We also investigate the influence of NAONet on the re-ranking results. To do so, in re-ranking we replace NAONet with another L2R Transformer model, trained with the same process as in subsection SECREF45 but with a different random seed, while keeping the other two models unchanged. The results are illustrated in the last row of Table TABREF50. From the comparison of the two rows in Table TABREF50, we can see that NAONet, the new architecture discovered via NAO, brings more diversity into the ranking and thus leads to better results. We report similar results for the Finnish-English task in Table TABREF51.", "Our systems achieve $27.4$ BLEU for English$\\rightarrow $Finnish and $31.9$ BLEU for Finnish$\\rightarrow $English, ranked first and second (by teams), respectively." ], [ "We use bitext data from several corpora: ParaCrawl, Common Crawl, News Commentary, Yandex Corpus, and UN Parallel Corpus. We also use the News Crawl corpora as monolingual data. The data is filtered by rules such as sentence length and language identification, resulting in a training dataset with 16M bilingual pairs and 40M monolingual sentences (20M for English and 20M for Russian). We use the WMT17 and WMT18 test sets as development data. The two languages use separate vocabularies, each with 50K BPE merge operations." ], [ "Our final system for Russian$\\rightarrow $English translation is a combination of the Transformer network BIBREF9, back translation BIBREF11, knowledge distillation BIBREF15, soft contextual data augmentation BIBREF5, and model ensembling. We use Transformer_big as the network architecture. We first train two models, English$\\rightarrow $Russian and Russian$\\rightarrow $English, on bilingual pairs as baseline models. Based on these two models, we perform back translation and knowledge distillation on monolingual data, generating 40M synthetic sentence pairs. Combining the bilingual and synthetic data, we get a large training corpus with 56M pairs in total. We upsample the bilingual pairs and shuffle the combined corpus to ensure the balance between bilingual and synthetic data. Finally, we train the Russian$\\rightarrow $English model from scratch. During training, we also use soft contextual data augmentation to further enhance training. Following the above procedure, 5 different models are trained and ensembled for the final submission." ], [ "Our final submission achieves a 40.1 BLEU score, ranked first on the leaderboard. Table TABREF56 reports the results of our system on the development set." ], [ "We notice that most of the parallel data are out of domain. Therefore, we crawl some external data:", "(1) We crawl all news articles from inform.kz, a Kazakh-English news website. We then match an English news article to a Kazakh one by matching their images with image hashing. In this way, we find 10K pairs of bilingual news articles. We use their titles as additional parallel data. These data are in-domain and useful for training.", "(2) We crawl 140K parallel sentence pairs from glosbe.com. Although most of these sentences are out of domain, they significantly extend the size of our parallel dataset and lead to better results.", "Because most of our parallel training data are noisy, we filter these data with some rules: (1) For the KazakhTV dataset, we remove any sentence pair with an alignment score less than 0.05. 
(2) For the Wiki Titles dataset, we remove any sentence pair that starts with User or NGC. (3) For all datasets, we remove any sentence pair in which the English sentence contains no lowercase letters. (4) For all datasets, we remove any sentence pair where the length ratio is greater than 2.5:1.", "We tokenize all our data using the Moses Decoder. We learn a shared BPE BIBREF14 from all our data (including all WMT19 parallel data, WMT19 monolingual data, glosbe, inform.kz news titles, and inform.kz news contents) and get a shared vocabulary of 49,152 tokens. Finally, our dataset consists of 300K bilingual sentence pairs, 700K Kazakh monolingual sentences, and many English monolingual sentences." ], [ "Our model is based on the Transformer BIBREF9. We vary the hyper-parameters to increase the diversity of our models. Our models usually have 6 encoder layers, 6 or 7 decoder layers, a ReLU or GELU BIBREF19 activation function, and an embedding dimension of 640.", "We train 4 English-Kazakh models and 4 Kazakh-English models with different random seeds and hyper-parameters. Then we apply back-translation BIBREF12 and knowledge distillation BIBREF15 for 6 rounds. In each round, we:", "1. Sample 4M sentences from English monolingual data and back-translate them to Kazakh with the best EN-KK model (on the dev set) from the previous round.", "2. Back-translate all Kazakh monolingual data to English with the best KK-EN model from the previous round.", "3. Sample 200K sentences from English monolingual data and translate them to Kazakh using the ensemble of all EN-KK models from the previous round.", "4. Train 4 English-Kazakh models with BT data from step 2 and KD data from step 3. We up-sample bilingual sentence pairs by 2x.", "5. Train 4 Kazakh-English models with BT data from step 1. We up-sample bilingual sentence pairs by 3x." ], [ "Our final submission achieves a 10.6 BLEU score, ranked second by teams on the leaderboard." ], [ "This paper describes Microsoft Research Asia's neural machine translation systems for the WMT19 shared news translation tasks. Our systems are built on Transformer, back translation and knowledge distillation, enhanced with our recently proposed techniques: multi-agent dual learning (MADL), masked sequence-to-sequence pre-training (MASS), neural architecture optimization (NAO), and soft contextual data augmentation (SCA). Due to time and GPU limitations, we only apply each technique to a subset of translation tasks. We believe that combining them will further improve translation accuracy, and we will conduct such experiments in the future. Furthermore, some other techniques, such as deliberation learning BIBREF20, adversarial learning BIBREF21, and reinforcement learning BIBREF22, BIBREF23, could also help and are worth exploring." ], [ "This work is supported by the Microsoft Machine Translation team." ] ] }
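As a footnote to the Kazakh$\rightarrow $English data preparation above, the rule-based bitext filtering can be sketched as a single predicate. The alignment scores are assumed to be precomputed, lengths are measured in characters for simplicity, and the prefix rule is applied to both sides here even though the rule above targets the Wiki Titles set.

    from typing import Optional

    def keep_pair(src: str, en: str, align_score: Optional[float] = None) -> bool:
        # Return True if a (Kazakh, English) sentence pair passes the filtering rules.
        if align_score is not None and align_score < 0.05:        # rule (1)
            return False
        if src.startswith(("User", "NGC")) or en.startswith(("User", "NGC")):  # rule (2)
            return False
        if not any(ch.islower() for ch in en):                    # rule (3): no lowercase letters
            return False
        ratio = max(len(src), len(en)) / max(min(len(src), len(en)), 1)
        return ratio <= 2.5                                       # rule (4): length-ratio cap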
{ "question": [ "What is their best performance on the largest language direction dataset?", "How does soft contextual data augmentation work?", "How does muli-agent dual learning work?", "Which language directions are machine translation systems of WMT evaluated on?" ], "question_id": [ "9846f84747b89f5c692665c4ea7111671ad9839a", "eecf62e18a790bcfdd8a56f0c4f498927ff2fb47", "acda028a21a465c984036dcbb124b7f03c490b41", "42af0472e6895eaf7b9392674b0d956e64e86b03" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ " ", " ", " ", " " ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "1aea0d1b6dfca764711dae6781f02be4b7599a0d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "softly augments a randomly chosen word in a sentence by its contextual mixture of multiple related words", "replacing the one-hot representation of a word by a distribution provided by a language model over the vocabulary" ], "yes_no": null, "free_form_answer": "", "evidence": [ "While data augmentation is an important trick to boost the accuracy of deep learning methods in computer vision tasks, its study in natural language tasks is relatively limited. SCA BIBREF5 softly augments a randomly chosen word in a sentence by its contextual mixture of multiple related words, i.e., replacing the one-hot representation of a word by a distribution provided by a language model over the vocabulary. It was applied in Russian$\\rightarrow $English translation in our submitted systems." ], "highlighted_evidence": [ "SCA BIBREF5 softly augments a randomly chosen word in a sentence by its contextual mixture of multiple related words, i.e., replacing the one-hot representation of a word by a distribution provided by a language model over the vocabulary." ] } ], "annotation_id": [ "889e5eef86b4b23437c46dbbdfd7996c04922382" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models." ], "yes_no": null, "free_form_answer": "", "evidence": [ "The core idea of dual learning is to leverage the duality between the primal task (mapping from domain $\\mathcal {X}$ to domain $\\mathcal {Y}$) and dual task (mapping from domain $\\mathcal {Y}$ to $\\mathcal {X}$ ) to boost the performances of both tasks. MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models. It was integrated into our submitted systems for German$\\leftrightarrow $English and German$\\leftrightarrow $French translations." ], "highlighted_evidence": [ "The core idea of dual learning is to leverage the duality between the primal task (mapping from domain $\\mathcal {X}$ to domain $\\mathcal {Y}$) and dual task (mapping from domain $\\mathcal {Y}$ to $\\mathcal {X}$ ) to boost the performances of both tasks. 
MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models." ] } ], "annotation_id": [ "d6240058c47bb96cd7954375683b83f335e9863f" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "German$\\leftrightarrow $English, German$\\leftrightarrow $French, Chinese$\\leftrightarrow $English, English$\\rightarrow $Lithuanian, English$\\rightarrow $Finnish, and Russian$\\rightarrow $English", "Lithuanian$\\rightarrow $English, Finnish$\\rightarrow $English, and English$\\rightarrow $Kazakh" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We participated in the WMT19 shared news translation task in 11 translation directions. We achieved first place for 8 directions: German$\\leftrightarrow $English, German$\\leftrightarrow $French, Chinese$\\leftrightarrow $English, English$\\rightarrow $Lithuanian, English$\\rightarrow $Finnish, and Russian$\\rightarrow $English, and three other directions were placed second (ranked by teams), which included Lithuanian$\\rightarrow $English, Finnish$\\rightarrow $English, and English$\\rightarrow $Kazakh." ], "highlighted_evidence": [ "We achieved first place for 8 directions: German$\\leftrightarrow $English, German$\\leftrightarrow $French, Chinese$\\leftrightarrow $English, English$\\rightarrow $Lithuanian, English$\\rightarrow $Finnish, and Russian$\\rightarrow $English, and three other directions were placed second (ranked by teams), which included Lithuanian$\\rightarrow $English, Finnish$\\rightarrow $English, and English$\\rightarrow $Kazakh." ] } ], "annotation_id": [ "2f4b1eed7834e6bd0d45abc95138c521d3d05863" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: Visualization of different levels of the search space, from the network, to the layer, to the node. For each of the different layers, we search its unique layer space. The lines in the middle part denote all possible connections between the three nodes (constituting the layer space) as specified via each architecture, while among them the deep black lines indicate the particular connection in Transformer. The right part similarly contains the two branches used in Node2 of Transformer.", "Table 1: Results of English↔German by sacreBLEU.", "Table 2: Results of German↔French by sacreBLEU.", "Table 3: BLEU scores on Chinese→English test sets.", "Table 4: BLEU scores for English↔Lithuanian on the newsdev19 set.", "Table 5: BLEU scores of L2R Transformer on English→Finnish test sets.", "Table 6: The final BLEU scores on English→Finnish test sets, for the three models: L2R Transformer, R2L Transformer and NAONet, after the four steps of training.", "Table 9: Finnish→English BLEU scores of re-ranking using the three models.", "Table 7: The final BLEU scores on Finnish→English test sets, for the three models: L2R Transformer, R2L Transformer and NAONet, after the four steps of training.", "Table 8: English→Finnish BLEU scores of re-ranking using the three models. “news” is short for “newstest”.", "Table 10: Russian→English BLEU scores." ], "file": [ "4-Figure1-1.png", "4-Table1-1.png", "5-Table2-1.png", "6-Table3-1.png", "7-Table4-1.png", "7-Table5-1.png", "7-Table6-1.png", "8-Table9-1.png", "8-Table7-1.png", "8-Table8-1.png", "9-Table10-1.png" ] }
1701.06538
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.
{ "section_name": [ "Conditional Computation", "Our Approach: The Sparsely-Gated Mixture-of-Experts Layer", "Related work on Mixtures of Experts", "The Structure of the Mixture-of-Experts layer", "Gating Network", "The Shrinking Batch Problem", "Network Bandwidth", "Balancing Expert Utilization", "1 Billion Word Language Modeling Benchmark", "100 Billion Word Google News Corpus", "Machine Translation (Single Language Pair)", "Multilingual Machine Translation", "Conclusion", "Appendices", "Load-Balancing Loss", "Hierachical Mixture of Experts", "1 Billion Word Language Modeling Benchmark - Experimental Details", "100 Billion Word Google News Corpus - Experimental Details", "Machine Translation - Experimental Details", "Strictly Balanced Gating", "Attention Function" ], "paragraphs": [ [ "Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , images BIBREF4 , BIBREF5 , and audio BIBREF6 , BIBREF7 . For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.", "Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation are proposed for trarining the gating decisions.", "While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges:", "Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision.", "Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network.", "Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity.", "Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. BIBREF13 use three such terms. These issues can affect both model quality and load-balancing.", "Model capacity is most critical for very large data sets. 
The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions, of parameters.", "In this work, we address all of the above challenges for the first time and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets." ], [ "Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure FIGREF8 ). All parts of the network are trained jointly by back-propagation.", "While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers BIBREF15 , as in Figure FIGREF8 . The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix SECREF84 Table TABREF92 ). On both language modeling and machine translation benchmarks, we improve on the best published results at a fraction of the computational cost." ], [ "Since its introduction more than two decades ago BIBREF16 , BIBREF17 , the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed, such as SVMs BIBREF18 , Gaussian Processes BIBREF19 , BIBREF20 , BIBREF21 , Dirichlet Processes BIBREF22 , and deep networks. Other work has focused on different expert configurations such as a hierarchical structure BIBREF23 , infinite numbers of experts BIBREF24 , and adding experts sequentially BIBREF25 . BIBREF26 suggest an ensemble model in the form of a mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model.", "The works above concern top-level mixtures of experts, where the mixture of experts is the whole model. BIBREF10 introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation.", "Our work builds on this use of MoEs as a general purpose neural network component. While BIBREF10 use two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity." ], [ "The Mixture-of-Experts (MoE) layer consists of a set of INLINEFORM0 “expert networks\" INLINEFORM1 , and a “gating network\" INLINEFORM2 whose output is a sparse INLINEFORM3 -dimensional vector. 
Figure FIGREF8 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.", "Let us denote by INLINEFORM0 and INLINEFORM1 the output of the gating network and the output of the INLINEFORM2 -th expert network for a given input INLINEFORM3 . The output INLINEFORM4 of the MoE module can be written as follows: DISPLAYFORM0 ", "We save computation based on the sparsity of the output of INLINEFORM0 . Wherever INLINEFORM1 , we need not compute INLINEFORM2 . In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts\", each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix SECREF60 .", "Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in BIBREF12 . A MoE whose experts have one hidden layer is similar to the block-wise dropout described in BIBREF13 , where the dropped-out layer is sandwiched between fully-activated layers." ], [ "A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0 ", "We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1 ", "We train the gating network by simple back-propagation, along with the rest of the model. If we choose INLINEFORM0 , the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in BIBREF9 with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from BIBREF13 who use boolean gates and a REINFORCE-style approach to train the gating network." ], [ "On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses INLINEFORM0 out of INLINEFORM1 experts for each example, then for a batch of INLINEFORM2 examples, each expert receives a much smaller batch of approximately INLINEFORM3 examples. 
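Before returning to the batching considerations, the noisy top-k gating just defined is compact enough to sketch directly. The snippet below is a PyTorch-style illustration (the published models were trained in TensorFlow, so this is not the original code); the dimensions, number of experts, and value of k are placeholders.

    import torch
    import torch.nn.functional as F

    def noisy_top_k_gating(x, w_gate, w_noise, k, train=True):
        # x: [batch, d]; w_gate, w_noise: [d, n_experts]
        clean_logits = x @ w_gate
        noise_stddev = F.softplus(x @ w_noise)
        if train:
            noisy_logits = clean_logits + torch.randn_like(clean_logits) * noise_stddev
        else:
            noisy_logits = clean_logits
        # Keep only the top k logits per example; the rest become -inf (gate value 0).
        top_vals, top_idx = noisy_logits.topk(k, dim=1)
        sparse_logits = torch.full_like(noisy_logits, float("-inf")).scatter(1, top_idx, top_vals)
        gates = torch.softmax(sparse_logits, dim=1)   # nonzero only for the k chosen experts
        return gates, clean_logits, noisy_logits, noise_stddev

    # Zero initialization of the gating weights, as suggested for balanced start-up.
    d_model, n_experts = 512, 8
    w_g = torch.zeros(d_model, n_experts, requires_grad=True)
    w_n = torch.zeros(d_model, n_experts, requires_grad=True)
    gates, clean, noisy, stddev = noisy_top_k_gating(torch.randn(16, d_model), w_g, w_n, k=4)

The clean and noisy logits and the noise standard deviations are returned because the load-balancing losses discussed later are computed from them.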
This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. We propose the following techniques for increasing the batch size:", "In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over INLINEFORM0 devices, and each device processes a batch of size INLINEFORM1 , each expert receives a batch of approximately INLINEFORM2 examples. Thus, we achieve a factor of INLINEFORM3 improvement in expert batch size.", "In the case of a hierarchical MoE (Section SECREF60 ), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.", "This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.", "In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps.", "We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. BIBREF27 describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size." ], [ "Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. 
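Setting the distributed batching aside, the per-expert dispatch/combine pattern — evaluate each expert only on the examples routed to it, scale its outputs by the gate values, and sum — can be written as a single-process sketch. The expert shape (one ReLU hidden layer of size 1024 with a 512-dimensional output) follows the experimental appendix; everything else here is illustrative.

    import torch
    import torch.nn as nn

    def moe_forward(x, gates, experts, d_out=512):
        # x: [batch, d_model]; gates: [batch, n_experts], sparse; experts: list of sub-networks.
        y = x.new_zeros(x.size(0), d_out)
        for i, expert in enumerate(experts):
            idx = (gates[:, i] > 0).nonzero(as_tuple=True)[0]   # examples routed to expert i
            if idx.numel() == 0:
                continue                                        # inactive experts cost nothing
            y = y.index_add(0, idx, gates[idx, i].unsqueeze(1) * expert(x[idx]))
        return y

    experts = nn.ModuleList(
        [nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 512)) for _ in range(8)]
    )
    # y = moe_forward(x, gates, experts), with `gates` from the noisy top-k sketch above.

In the real system each expert lives on its own device and receives its sub-batch, gathered from all data-parallel replicas, over the network.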
To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of RELU-activated units. Since the weight matrices in the expert have sizes INLINEFORM0 _ INLINEFORM1 _ INLINEFORM2 and INLINEFORM3 _ INLINEFORM4 _ INLINEFORM5 , the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers." ], [ "We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. BIBREF10 describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. BIBREF13 include a soft constraint on the batch-wise average of each gate.", "We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss INLINEFORM0 , which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor INLINEFORM1 . This additional loss encourages all experts to have equal importance. DISPLAYFORM0 DISPLAYFORM1 ", "While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, INLINEFORM0 , which ensures balanced loads. Appendix SECREF51 contains the definition of this function, along with experimental results." ], [ "This dataset, introduced by BIBREF28 consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.", "The best previously published results BIBREF2 use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers BIBREF15 , BIBREF29 . The number of parameters in the LSTM layers of these models vary from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure FIGREF32 -right.", "Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure FIGREF8 ). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix SECREF65 .", "To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. 
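Returning for a moment to the balancing loss defined in the previous section, the importance-based term (the squared coefficient of variation of the batchwise gate sums, scaled by a hand-tuned factor) amounts to only a few lines; the scale factor below is a placeholder, not the value used in the experiments.

    import torch

    def importance_loss(gates: torch.Tensor, w_importance: float = 0.1) -> torch.Tensor:
        # Importance_i = sum over the batch of G(x)_i; loss = w * CV(Importance)^2.
        importance = gates.sum(dim=0)                                    # [n_experts]
        cv_squared = importance.var(unbiased=False) / (importance.mean() ** 2 + 1e-10)
        return w_importance * cv_squared

Adding this term to the task loss pushes the gating network to spread probability mass across experts instead of collapsing onto a favored few.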
For all the MoE layers, 4 experts were active per input.", "The results of these models are shown in Figure FIGREF32 -left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.", "In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix UID77 . Results of these three models form the bottom line of Figure FIGREF32 -right. Table TABREF33 compares the results of these models to the best previously-published result on this dataset. Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.", "We trained our models using TensorFlow BIBREF30 on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.", "For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computational efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix SECREF65 , Table TABREF76 .", "On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure FIGREF32 -left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements.", "We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totaling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix SECREF78 .", "Figure FIGREF37 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). 
When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets.", "Even at 65536 experts (99.994% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU." ], [ "Our model was a modified version of the GNMT model described in BIBREF3 . To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix SECREF84 .", "We benchmarked our method on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in BIBREF3 : newstest2014 was used as the test set to compare against previous work BIBREF31 , BIBREF32 , BIBREF3 , while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on a Google's Production English to French data.", "Tables TABREF42 , TABREF43 , and TABREF44 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time." ], [ " BIBREF35 train a single GNMT BIBREF3 model on a very large combined dataset of twelve language pairs. Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix SECREF84 for details on model architecture. We train our model on the same dataset as BIBREF35 and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.", "Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table TABREF50 . The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. 
The poor performance on English INLINEFORM0 Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.", "" ], [ "This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come." ], [ "tocsectionAppendices" ], [ "As discussed in section SECREF4 , for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it can not be used in back-propagation. Instead, we define a smooth estimator INLINEFORM0 of the number of examples assigned to each expert for a batch INLINEFORM1 of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define INLINEFORM2 as the probability that INLINEFORM3 is nonzero, given a new random choice of noise on element INLINEFORM4 , but keeping the already-sampled choices of noise on the other elements. To compute INLINEFORM5 , we note that the INLINEFORM6 is nonzero if and only if INLINEFORM7 is greater than the INLINEFORM8 -greatest element of INLINEFORM9 excluding itself. The probability works out to be: DISPLAYFORM0 ", "Where INLINEFORM0 means the kth highest component of INLINEFORM1 , excluding component INLINEFORM2 . Simplifying, we get: DISPLAYFORM0 ", "Where INLINEFORM0 is the CDF of the standard normal distribution. DISPLAYFORM0 ", "We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor INLINEFORM0 . DISPLAYFORM0 ", "To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices INLINEFORM0 and INLINEFORM1 to all zeros, which yields no signal and some noise.", "We trained a set of models with identical architecture (the MoE-256 model described in Appendix SECREF65 ), using different values of INLINEFORM0 and INLINEFORM1 . We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in INLINEFORM2 and INLINEFORM3 , as well as ratio of the load on the most overloaded expert to the average load. This last value is significant for load balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.", "Results are reported in Table TABREF58 . All the combinations containing at least one the two losses led to very similar model quality, where having no loss was much worse. Models with higher values of INLINEFORM0 had lower loads on the most overloaded expert." ], [ "If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. 
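Before moving to the hierarchical formulation below, the smooth load estimator just derived can be made concrete. The sketch follows the formulas above — the probability that an expert would remain in the top k under a fresh draw of its noise, accumulated over the batch — but the tensor shapes, tie handling, and scale factor are illustrative assumptions, and the original implementation is in TensorFlow.

    import torch

    def load_loss(clean_logits, noisy_logits, noise_stddev, k, w_load=0.1):
        # clean_logits = x @ W_g; noise_stddev = softplus(x @ W_noise);
        # noisy_logits = clean_logits + noise * noise_stddev; all [batch, n_experts].
        top_vals, _ = noisy_logits.topk(k + 1, dim=1)
        threshold_if_in = top_vals[:, k:k + 1]    # (k+1)-th largest: bar to stay in the top k
        threshold_if_out = top_vals[:, k - 1:k]   # k-th largest: bar to break into the top k
        is_in = noisy_logits > threshold_if_in
        normal = torch.distributions.Normal(0.0, 1.0)
        prob_if_in = normal.cdf((clean_logits - threshold_if_in) / noise_stddev)
        prob_if_out = normal.cdf((clean_logits - threshold_if_out) / noise_stddev)
        p = torch.where(is_in, prob_if_in, prob_if_out)   # P(x, i)
        load = p.sum(dim=0)                               # smooth per-expert load estimate
        cv_squared = load.var(unbiased=False) / (load.mean() ** 2 + 1e-10)
        return w_load * cv_squared

Because the estimator is smooth in the gating parameters, gradients of this loss flow back into both weight matrices of the gate.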
In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts\", each of which is itself a secondary mixture-of-experts with its own gating network. If the hierarchical MoE consists of INLINEFORM0 groups of INLINEFORM1 experts each, we denote the primary gating network by INLINEFORM2 , the secondary gating networks by INLINEFORM3 , and the expert networks by INLINEFORM4 . The output of the MoE is given by: DISPLAYFORM0 ", "Our metrics of expert utilization change to the following: DISPLAYFORM0 DISPLAYFORM1 ", " INLINEFORM0 and INLINEFORM1 denote the INLINEFORM2 functions for the primary gating network and INLINEFORM3 secondary gating network respectively. INLINEFORM4 denotes the subset of INLINEFORM5 for which INLINEFORM6 .", "It would seem simpler to let INLINEFORM0 , but this would not have a gradient with respect to the primary gating network, so we use the formulation above." ], [ "Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer BIBREF15 , BIBREF29 , a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout BIBREF43 to the layer output, dropping each activation with probability INLINEFORM0 , otherwise dividing by INLINEFORM1 . After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow BIBREF37 .", "Each expert in the MoE layer is a feed-forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains INLINEFORM0 parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section UID14 ) with INLINEFORM1 for the ordinary MoE layers and INLINEFORM2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M.", "The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:", "MoE-1-Wide: The MoE layer consists of a single \"expert\" containing one ReLU-activated hidden layer of size 4096.", "MoE-1-Deep: The MoE layer consists of a single \"expert\" containing four ReLU-activated hidden layers, each with size 1024.", "4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.", "LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions BIBREF41 . The next timestep of the LSTM receives the projected output. This is identical to one of the models published in BIBREF2 . 
We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.", "The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section SECREF3 . Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs, (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer BIBREF39 . The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in BIBREF2 . For each model, we performed a hyper-parmeter search to find the best dropout probability, in increments of 0.1.", "To ensure balanced expert utilization we set INLINEFORM0 and INLINEFORM1 , as described in Section SECREF4 and Appendix SECREF51 .", "We evaluate our model using perplexity on the holdout dataset, used by BIBREF28 , BIBREF2 . We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table TABREF76 . For each model, we report the test perplexity, the computational budget, the parameter counts, the value of INLINEFORM0 , and the computational efficiency.", "We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 BIBREF41 . MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best INLINEFORM0 for each model, and trained each model for 10 epochs.", "The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 ." ], [ "The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively.", "Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. 
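The learning-rate schedule mentioned above (linear increase over the first 1000 steps, then decay proportional to the inverse square root of the step number) can be written compactly as below; the base rate value is illustrative and not reported in the text.

```python
def learning_rate(step: int, base_lr: float = 1e-3, warmup_steps: int = 1000) -> float:
    # Linear warm-up, then inverse-square-root decay in the step number.
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr * (warmup_steps / (step + 1)) ** 0.5
```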
Models are trained once-through over about 100 billion words.", "We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage:", "The Adam optimizer BIBREF39 keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set INLINEFORM0 . To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad BIBREF36 .", "We evaluate our model using perplexity on a holdout dataset. Results are reported in Table TABREF81 . Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing BIBREF40 ." ], [ "Our model is a modified version of the GNMT model described in BIBREF3 . To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention . All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow BIBREF37 . Similar to GNMT, to effectively deal with rare words, we used sub-word units (also known as “wordpieces\") BIBREF42 for inputs and outputs in our system.", "We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in BIBREF3 .", "We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. The flat MoE layers use INLINEFORM0 and the hierarchical MoE models use INLINEFORM1 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains INLINEFORM2 parameters. The output of the MoE layer is passed through a sigmoid function. 
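The factored second-moment approximation described above (in the memory-optimization discussion for the 100-billion-word runs) can be sketched as follows, purely as an illustration: row-wise and column-wise running averages of the squared gradients are kept for each weight matrix, and the full matrix of second-moment estimators is reconstructed as their outer product divided by the mean of either one.

```python
import torch

def factored_second_moment(row_avg: torch.Tensor, col_avg: torch.Tensor) -> torch.Tensor:
    # row_avg: [n_rows], col_avg: [n_cols] -- running means of squared gradients
    # along the rows and columns of one weight matrix.
    # Since both vectors share the same global mean, dividing by row_avg.mean()
    # matches "divided by the mean of either one".
    return torch.outer(row_avg, col_avg) / row_avg.mean()
```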
We use the strictly-balanced gating function described in Appendix SECREF93 .", "We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section UID14 , not the scheme from Appendix SECREF93 . The MoE layers in the encoder and decoder are non-hierarchical MoEs with INLINEFORM0 experts, and INLINEFORM1 . Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep.", "We trained our networks using the Adam optimizer BIBREF39 . The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to BIBREF3 , we applied dropout BIBREF43 to the output of all embedding, LSTM and MoE layers, using INLINEFORM0 . Training was done synchronously on a cluster of up to 64 GPUs as described in section SECREF3 . Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU.", "To ensure balanced expert utilization we set INLINEFORM0 and INLINEFORM1 , as described in Section SECREF4 and Appendix SECREF51 .", "We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in BIBREF31 .", "Tables TABREF42 , TABREF43 and TABREF44 in Section SECREF39 show comparisons of our results to other published methods. Figure FIGREF91 shows test perplexity as a function of number of words in the (training data's) source sentences processed for models with different numbers of experts. As can be seen from the Figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve.", "We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table TABREF92 . For example, one expert is used when the indefinite article “a\" introduces the direct object in a verb phrase indicating importance or leadership." ], [ "Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function which we describe below.", "Recall that we define the softmax gating function to be: DISPLAYFORM0 ", "To obtain a sparse gating vector, we multiply INLINEFORM0 component-wise with a sparse mask INLINEFORM1 and normalize the output. The mask itself is a function of INLINEFORM2 and specifies which experts are assigned to each input example: DISPLAYFORM0 ", "To implement top-k gating in this formulation, we would let INLINEFORM0 , where: DISPLAYFORM0 ", "To force each expert to receive the exact same number of examples, we introduce an alternative mask function, INLINEFORM0 , which operates over batches of input vectors. Instead of keeping the top INLINEFORM1 values per example, we keep the top INLINEFORM2 values per expert across the training batch, where INLINEFORM3 , so that each example is sent to an average of INLINEFORM4 experts. 
DISPLAYFORM0 ", "As our experiments suggest and also observed in BIBREF38 , using a batchwise function during training (such as INLINEFORM0 ) requires modifications to the inference when we may not have a large batch of examples. Our solution to this is to train a vector INLINEFORM1 of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time: DISPLAYFORM0 ", "To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical. DISPLAYFORM0 " ], [ "The attention mechanism described in GNMT BIBREF3 involves a learned “Attention Function\" INLINEFORM0 which takes a “source vector\" INLINEFORM1 and a “target vector\" INLINEFORM2 , and must be computed for every source time step INLINEFORM3 and target time step INLINEFORM4 . In GNMT, the attention function is implemented as a feed forward neural network with a hidden layer of size INLINEFORM5 . It can be expressed as: DISPLAYFORM0 ", "Where INLINEFORM0 and INLINEFORM1 are trainable weight matrices and INLINEFORM2 is a trainable weight vector.", "For performance reasons, in our models, we used a slightly different attention function: DISPLAYFORM0 ", "With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions." ] ] }
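Returning to the strictly balanced gating described in the appendix above, here is a rough PyTorch sketch of the two masks it introduces: a batchwise mask that keeps the top m gate values per expert across the training batch, and a per-expert threshold mask used at inference time to approximate it. Shapes, names, and the exact value of m are assumptions inferred from the statement that each example is sent to an average of k experts.

```python
import torch

def batchwise_mask(scores: torch.Tensor, k: int) -> torch.Tensor:
    # scores: [batch, num_experts] noisy gate values for one training batch.
    batch, num_experts = scores.shape
    m = max(1, (k * batch) // num_experts)      # each expert receives exactly m examples
    top_idx = scores.topk(m, dim=0).indices     # [m, num_experts] example indices per expert
    mask = torch.zeros_like(scores)
    return mask.scatter_(0, top_idx, 1.0)       # 1.0 where an example is routed to an expert

def threshold_mask(scores: torch.Tensor, thresholds: torch.Tensor) -> torch.Tensor:
    # thresholds: [num_experts] learned per-expert values approximating the batchwise mask.
    return (scores > thresholds).float()
```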
{ "question": [ "Approximately how much computational cost is saved by using this model?", "What improvement does the MOE model make over the SOTA on machine translation?", "What improvement does the MOE model make over the SOTA on language modelling?", "How is the correct number of experts to use decided?", "What equations are used for the trainable gating network?" ], "question_id": [ "a85698f19a91ecd3cd3a90a93a453d2acebae1b7", "af073d84b8a7c968e5822c79bef34a28655886de", "e8fcfb1412c3b30da6cbc0766152b6e11e17196c", "0cd90e5b79ea426ada0203177c28812a7fc86be5", "f01a88e15ef518a68d8ca2bec992f27e7a3a6add" ], "nlp_background": [ "five", "five", "five", "five", "five" ], "topic_background": [ "", "", "", "", "" ], "paper_read": [ "", "", "", "", "" ], "search_query": [ "", "", "", "", "" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "c24cfd0839faf733f7671147bea2e508dc3f0869" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3", "perplexity scores are also better", "On the Google Production dataset, our model achieved 1.01 higher test BLEU score" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Tables TABREF42 , TABREF43 , and TABREF44 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time." ], "highlighted_evidence": [ "As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time." ] } ], "annotation_id": [ "1aeb9c43d0169356e7c33c2abe1301084252deea" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Perpexity is improved from 34.7 to 28.0.", "evidence": [ "The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 .", "In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger and experts. 
Details can be found in Appendix UID77 . Results of these three models form the bottom line of Figure FIGREF32 -right. Table TABREF33 compares the results of these models to the best previously-published result on this dataset . Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.", "FLOAT SELECTED: Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C." ], "highlighted_evidence": [ "The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 .", " Table TABREF33 compares the results of these models to the best previously-published result on this dataset . Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.", "FLOAT SELECTED: Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C." ] } ], "annotation_id": [ "63a2c138011f68edde041195331abe0c5176e64e" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "varied the number of experts between models" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains INLINEFORM0 parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section UID14 ) with INLINEFORM1 for the ordinary MoE layers and INLINEFORM2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M." ], "highlighted_evidence": [ "We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts." ] } ], "annotation_id": [ "79c6e303c769cf6b075f42fc27820b4a2f8ee791" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "DISPLAYFORM0", "DISPLAYFORM0 DISPLAYFORM1" ], "yes_no": null, "free_form_answer": "", "evidence": [ "A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. 
DISPLAYFORM0", "We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1" ], "highlighted_evidence": [ "A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0\n\nWe add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1", "A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0\n\nWe add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1" ] } ], "annotation_id": [ "c43b627b74d1c4b68aa374fa022b32080faf292f" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: A Mixture of Experts (MoE) layer embedded within a recurrent language model. In this case, the sparse gating function selects two experts to perform computations. Their outputs are modulated by the outputs of the gating network.", "Figure 2: Model comparison on 1-Billion-Word Language-Modeling Benchmark. On the left, we plot test perplexity as a function of model capacity for models with similar computational budgets of approximately 8-million-ops-per-timestep. On the right, we plot test perplexity as a function of computational budget. The top line represents the LSTM models from (Jozefowicz et al., 2016). The bottom line represents 4-billion parameter MoE models with different computational budgets.", "Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C.", "Figure 3: Language modeling on a 100 billion word corpus. Models have similar computational budgets (8 million ops/timestep).", "Table 2: Results on WMT’14 En→ Fr newstest2014 (bold values represent best results).", "Table 4: Results on the Google Production En→ Fr dataset (bold values represent best results).", "Table 5: Multilingual Machine Translation (bold values represent best results).", "Table 7: Model comparison on 1 Billion Word Language Modeling Benchmark. Models marked with * are from (Jozefowicz et al., 2016).", "Table 8: Model comparison on 100 Billion Word Google News Dataset", "Figure 4: Perplexity on WMT’14 En→ Fr (left) and Google Production En→ Fr (right) datasets as a function of number of words processed. The large differences between models at the beginning of training are due to different batch sizes. All models incur the same computational budget (85M ops/timestep) except the one with no experts.", "Table 9: Contexts corresponding to a few of the 2048 experts in the MoE layer in the encoder portion of the WMT’14 En→ Fr translation model. For each expert i, we sort the inputs in a training batch in decreasing order of G(x)i, and show the words surrounding the corresponding positions in the input sentences." ], "file": [ "2-Figure1-1.png", "6-Figure2-1.png", "7-Table1-1.png", "7-Figure3-1.png", "8-Table2-1.png", "8-Table4-1.png", "9-Table5-1.png", "15-Table7-1.png", "16-Table8-1.png", "18-Figure4-1.png", "18-Table9-1.png" ] }
1905.10810
Evaluation of basic modules for isolated spelling error correction in Polish texts
Spelling error correction is an important problem in natural language processing, both as a prerequisite for good performance in downstream tasks and as an important feature in user-facing applications. For texts in Polish, there exist works on specific error correction solutions, often developed for specialized corpora, but no evaluations of many different approaches on large resources of errors. We begin to address this problem by testing some basic and promising methods on PlEWi, a corpus of annotated spelling errors extracted from Polish Wikipedia. These modules may be further combined with appropriate solutions for error detection and context awareness. Based on our results, combining edit distance with cosine distance of semantic vectors can be recommended for interpretable systems, while an LSTM, particularly when enhanced with ELMo embeddings, offers the best raw performance.
{ "section_name": [ "Introduction", "Problems of spelling correction for Polish", "Baseline methods", "Vector distance", "Recurrent neural networks", "Experimental setup", "Results", "Conclusion" ], "paragraphs": [ [ "Spelling error correction is a fundamental NLP task. Most language processing applications benefit greatly from being provided clean texts for their best performance. Human users of computers also often expect competent help in making spelling of their texts correct.", "Because of the lack of tests of many common spelling correction methods for Polish, it is useful to establish how they perform in a simple scenario. We constrain ourselves to the pure task of isolated correction of non-word errors. They are traditionally separated in error correction literature BIBREF0 . Non-word errors are here incorrect word forms that not only differ from what was intended, but also do not constitute another, existing word themselves. Much of the initial research on error correction focused on this simple task, tackled without means of taking the context of the nearest words into account.", "It is true that, especially in the case of neural networks, it is often possible and desirable to combine problems of error detection, correction and context awareness into one task trained with a supervised training procedure. In language correction research for English language also grammatical and regular spelling errors have been treated uniformly with much success BIBREF1 .", "However, when more traditional methods are used, because of their predictability and interpretability for example, one can mix and match various approaches to dealing with the subproblems of detection, correction and context handling (often equivalent to employing some kind of a language model). We call it a modular approach to building spelling error correction systems. There is recent research where this paradigm was applied, interestingly, to convolutional networks trained separately for various subtasks BIBREF2 . In similar setups it is more useful to assess abilities of various solutions in isolation. The exact architecture of a spelling correction system should depend on characteristics of texts it will work on.", "Similar considerations eliminated from our focus handcrafted solutions for the whole spelling correction pipeline, primarily the LanguageTool BIBREF3 . Its performance in fixing spelling of Polish tweets was already tested BIBREF4 . For our purposes it would be given an unfair advantage, since it is a rule-based system making heavy use of words in context of the error." ], [ "Published work on language correction for Polish dates back at least to 1970s, when simplest Levenshtein distance solutions were used for cleaning mainframe inputs BIBREF5 , BIBREF6 . Spelling correction tests described in literature have tended to focus on one approach applied to a specific corpus. Limited examples include works on spellchecking mammography reports and tweets BIBREF7 , BIBREF4 . These works emphasized the importance of tailoring correction systems to specific problems of corpora they are applied to. For example, mammography reports suffer from poor typing, which in this case is a repetitive work done in relative hurry. Tweets, on the other hand, tend to contain emoticons and neologisms that can trick solutions based on rules and dictionaries, such as LanguageTool. 
The latter is, by itself, fairly well suited for Polish texts, since a number of extensions to the structure of this application was inspired by problems with morphology of Polish language BIBREF3 .", "These existing works pointed out more general, potentially useful qualities specific to spelling errors in Polish language texts. It is, primarily, the problem of leaving out diacritical signs, or, more rarely, adding them in wrong places. This phenomenon stems from using a variant of the US keyboard layout, where combinations of AltGr with some alphabetic keys produces characters unique to Polish. When the user forgets or neglects to press the AltGr key, typos such as writing *olowek instead of ołówek appear. In fact, BIBREF4 managed to get substantial performance on Twitter corpus by using this ”diacritical swapping” alone." ], [ "The methods that we evaluated are baselines are the ones we consider to be basic and with moderate potential of yielding particularly good results. Probably the most straightforward approach to error correction is selecting known words from a dictionary that are within the smallest edit distance from the error. We used the Levenshtein distance metric BIBREF8 implemented in Apache Lucene library BIBREF9 . It is a version of edit distance that treats deletions, insertions and replacements as adding one unit distance, without giving a special treatment to character swaps. The SGJP – Grammatical Dictionary of Polish BIBREF10 was used as the reference vocabulary.", "Another simple approach is the aforementioned diacritical swapping, which is a term that we introduce here for referring to a solution inspired by the work of BIBREF4 . Namely, from the incorrect form we try to produce all strings obtainable by either adding or removing diacritical marks from characters. We then exclude options that are not present in SGJP, and select as the correction the one within the smallest edit distance from the error. It is possible for the number of such diacritically-swapped options to become very big. For example, the token Modlin-Zegrze-Pultusk-Różan-Ostrołęka-Łomża-Osowiec (taken from PlEWi corpus of spelling errors, see below) can yield over INLINEFORM0 states with this method, such as Módłiń-Żęgrzę-Pułtuśk-Roźąń-Óśtróleką-Lómzą-Óśówięć. The actual correction here is just fixing the ł in Pułtusk. Hence we only try to correct in this way tokens that are shorter than 17 characters." ], [ "A promising method, adapted from work on correcting texts by English language learners BIBREF11 , expands on the concept of selecting a correction nearest to the spelling error according to some notion of distance. Here, the Levenshtein distance is used in a weighted sum to cosine distance between word vectors. This is based on the observation that trained vectors models of distributional semantics contain also representations of spelling errors, if they were not pruned. Their representations tend to be similar to those of their correct counterparts. For example, the token enginir will appear in similar contexts as engineer, and therefore will be assigned a similar vector embedding.", "The distance between two tokens INLINEFORM0 and INLINEFORM1 is thus defined as INLINEFORM2 ", "Here INLINEFORM0 is just Levenshtein distance between strings, and INLINEFORM1 – cosine distance between vectors. INLINEFORM2 denotes the word vector for INLINEFORM3 . Both distance metrics are in our case roughly in the range [0,1] thanks to the scaling of edit distance performed automatically by Apache Lucene. 
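As an illustration of the combined metric just described, the sketch below scores a candidate correction by a weighted sum of the (already scaled) edit distance and the cosine distance between word vectors. The weighting `alpha` and the helper names are assumptions, since the paper gives its exact formula only symbolically.

```python
import numpy as np

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    return 1.0 - float(u @ v) / (float(np.linalg.norm(u) * np.linalg.norm(v)) + 1e-12)

def combined_distance(edit_distance: float,
                      vec_error: np.ndarray,
                      vec_candidate: np.ndarray,
                      alpha: float = 0.5) -> float:
    # edit_distance is assumed to be scaled to roughly [0, 1], as noted in the text;
    # candidate corrections with the smallest combined distance are preferred.
    return alpha * edit_distance + (1.0 - alpha) * cosine_distance(vec_error, vec_candidate)
```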
We used a pretrained set of word embeddings of Polish BIBREF12 , obtained with the flavor word2vec procedure using skipgrams and negative sampling BIBREF13 ." ], [ "Another powerful approach, if conceptually simple in linguistic terms, is using a character-based recurrent neural network. Here, we test uni- and bidirectional Long Short-Term Memory networks BIBREF14 that are fed characters of the error as their input and are expected to output its correct form, character after character. This is similar to traditional solutions conceptualizing the spelling error as a chain of characters, which are used as evidence to predict the most likely chain of replacements (original characters). This was done with n-gram methods, Markov chains and other probabilistic models BIBREF15 . Since nowadays neural networks enjoy a large awareness as an element of software infrastructure, with actively maintained packages readily available, their evaluation seems to be the most practically useful. We used the PyTorch BIBREF16 implementation of LSTM in particular.", "The bidirectional version BIBREF17 of LSTM reads the character chains forward and backwards at the same time. Predictions from networks running in both directions are averaged.", "In order to provide the network an additional, broad picture peek at the whole error form we also evaluated a setup where the internal state of LSTM cells, instead of being initialized randomly, is computed from an ELMo embedding BIBREF18 of the token. The ELMo embedder is capable of integrating linguistic information carried by the whole form (probably often not much in case of errors), as well as the string as a character chain. The latter is processed with a convolutional neural network. How this representation is constructed is informed by the whole corpus on which the embedder was trained. The pretrained ELMo model that we used BIBREF19 was trained on Wikipedia and Common Crawl corpora of Polish.", "The ELMo embedding network outputs three layers as matrices, which are supposed to reflect subsequent compositional layers of language, from phonetic phenomena at the bottom to lexical ones at the top. A weighted sum of these layers is computed, with weights trained along with the LSTM error-correcting network. Then we apply a trained linear transformation, followed by INLINEFORM0 non-linearity: INLINEFORM1 ", "(applied cellwise) in order to obtain the initial setting of parameters for the main LSTM. Our ELMo-augmented LSTM is bidirectional." ], [ "PlEWi BIBREF20 is an early version of WikEd BIBREF21 error corpus, containing error type annotations allowing us to select only non-word errors for evaluation. Specifically, PlEWi supplied 550,755 [error, correction] pairs, from which 298,715 were unique. The corpus contains data extracted from histories of page versions of Polish Wikipedia. An algorithm designed by the corpus author determined where the changes were correcting spelling errors, as opposed to expanding content and disagreements among Wikipedia editors.", "The corpus features texts that are descriptive rather than conversational, contain relatively many proper names and are more likely to have been at least skimmed by the authors before submitting for online publication. Error cases provided by PlEWi are, therefore, not a balanced representation of spelling errors in written Polish language. 
PlEWi does have the advantage of scale in comparison to existing literature, such as BIBREF4 operating on a set of only 740 annotated errors in tweets.", "All methods were tested on a test subset of 25% of cases, with 75% left for training (where needed) and 5% for development.", "The methods that required training – namely recurrent neural networks – had their loss measured as cross-entropy loss measure between correct character labels and predictions. This value was minimized with Adam algorithm BIBREF22 . The networks were trained for 35 epochs." ], [ "The experimental results are presented in Table TABREF4 . Diacritic swapping showed a remarkably poor performance, despite promising mentions in existing literature. This might be explained by the already mentioned feature of Wikipedia edits, which can be expected to be to some degree self-reviewed before submission. This can very well limit the number of most trivial mistakes.", "On the other hand, the vector distance method was able to bring a discernible improvement over pure Levenshtein distance, comparable even with the most basic LSTM. It is possible that assigning more fine-tuned weights to edit distance and semantic distance would make the quality of predictions even higher. The idea of using vector space measurements explicitly can be also expanded if we were to consider the problem of contextualizing corrections. For example, the semantic distance of proposed corrections to the nearest words is likely to carry much information about their appropriateness. Looking from another angle, searching for words that seem semantically off in context may be a good heuristic for detecting errors that are not nonword (that is, they lead to wrong forms appearing in text which are nevertheless in-vocabulary).", "The good performance of recurrent network methods is hardly a surprise, given observed effectiveness of neural networks in many NLP tasks in the recent decade. It seems that bidirectional LSTM augmented with ELMo may already hit the limit for correcting Polish spelling errors without contextual information. While it improves accuracy in comparison to LSTM initialized withrandom noise, it makes the test cross-entropy slightly worse, which hints at overfitting. The perplexity measures actually increase sharply for more sophisticated architectures. Perplexity should show how little probability is assigned by the model to true answers. We measure it as INLINEFORM0 ", "where INLINEFORM0 is a sequence of INLINEFORM1 characters, forming the correct version of the word, and INLINEFORM2 is the estimated probability of the INLINEFORM3 th character, given previous predicted characters and the incorrect form. The observed increase of perplexity for increasingly accurate models is most likely due to more refined predicted probability distributions, which go beyond just assigning the bulk of probability to the best answer.", "Interesting insights can be gained from weights assigned by optimization to layers of ELMo network, which are taken as the word form embedding (Table TABREF5 ). The first layer, and the one that is nearest to input of the network, is given relatively the least importance, while the middle one dominates both others taken together. This suggests that in error correction, at least for Polish, the middle level of morphemes and other characteristic character chunks is more important than phenomena that are low-level or tied to some specific words. 
This observation should be taken into account in further research on practical solutions for spelling correction." ], [ "Among the methods tested the bidirectional LSTM, especially initialized by ELMo embeddings, offers the best accuracy and raw performance. Adding ELMo to a straightforward PyTorch implementation of LSTM may be easier now than at the time of performing our tests, as since then the authors of ELMoForManyLangs package BIBREF19 improved their programmatic interface. However, if a more interpretable and explainable output is required, some version of vector distance combined with edit distance may be the best direction. It should be noted that this method produces multiple candidate corrections with their similarity scores, as opposed to only one “best guess“ correction that can be obtained from a character-based LSTM. This is important in applications where it is up to humans to the make the final decision, and they are only to be aided by a machine.", "It is desirable for further reasearch to expand the corpus material into a wider and more representative set of texts. Nevertheless, the solution for any practical case has to be tailored to its characteristic error patterns. Works on language correction for English show that available corpora can be ”boosted” BIBREF1 , i.e. expanded by generating new errors consistent with a generative model inferred from the data. This may greatly aid in developing models that are dependent on learning from error corpora.", "A deliberate omission in this paper are the elements accompanying most real-word error correction solutions. Some fairly obvious approaches to integrating evidence from context include n-grams and Markov chains, although the possibility of using measurements in spaces of semantic vectors was already mentioned in this article. Similarly, non-word errors can be easily detected with comparing tokens against reference vocabulary, but in practice one should have ways of detecting mistakes masquerading as real words and fixing bad segmentation (tokens that are glued together or improperly separated). Testing how performant are various methods for dealing with these problems in Polish language is left for future research." ] ] }
{ "question": [ "What is the difference in performance between the interpretable system (e.g. vectors and cosine distance) and LSTM with ELMo system?", "What solutions are proposed for error detection and context awareness?", "How is PIEWi annotated?", "What methods are tested in PIEWi?", "Which specific error correction solutions have been proposed for specialized corpora in the past?" ], "question_id": [ "44104668796a6ca10e2ea3ecf706541da1cec2cf", "bbcd77aac74989f820e84488c52f3767d0405d51", "6a31bd676054222faf46229fc1d283322478a020", "e4d16050f0b457c93e590261732a20401def9cde", "b25e7137f49f77e7e67ee2f40ca585d3a377f8b5" ], "nlp_background": [ "five", "five", "five", "five", "five" ], "topic_background": [ "", "", "", "", "" ], "paper_read": [ "", "", "", "", "" ], "search_query": [ "", "", "", "", "" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Accuracy of best interpretible system was 0.3945 while accuracy of LSTM-ELMo net was 0.6818.", "evidence": [ "The experimental results are presented in Table TABREF4 . Diacritic swapping showed a remarkably poor performance, despite promising mentions in existing literature. This might be explained by the already mentioned feature of Wikipedia edits, which can be expected to be to some degree self-reviewed before submission. This can very well limit the number of most trivial mistakes.", "FLOAT SELECTED: Table 1: Test results for all the methods used. The loss measure is cross-entropy." ], "highlighted_evidence": [ "The experimental results are presented in Table TABREF4 .", "FLOAT SELECTED: Table 1: Test results for all the methods used. The loss measure is cross-entropy." ] } ], "annotation_id": [ "91f989a06bf11f012960b7cdad07de1c33d7d969" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "645cb2f15db2bd0a712c0159a71fd64f152c98d3" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "[error, correction] pairs" ], "yes_no": null, "free_form_answer": "", "evidence": [ "PlEWi BIBREF20 is an early version of WikEd BIBREF21 error corpus, containing error type annotations allowing us to select only non-word errors for evaluation. Specifically, PlEWi supplied 550,755 [error, correction] pairs, from which 298,715 were unique. The corpus contains data extracted from histories of page versions of Polish Wikipedia. An algorithm designed by the corpus author determined where the changes were correcting spelling errors, as opposed to expanding content and disagreements among Wikipedia editors." ], "highlighted_evidence": [ "Specifically, PlEWi supplied 550,755 [error, correction] pairs, from which 298,715 were unique." 
] } ], "annotation_id": [ "1afa01b50f65043288ee2dc5ca7f521c49bf4694" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Levenshtein distance metric BIBREF8", "diacritical swapping", "Levenshtein distance is used in a weighted sum to cosine distance between word vectors", "ELMo-augmented LSTM" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The methods that we evaluated are baselines are the ones we consider to be basic and with moderate potential of yielding particularly good results. Probably the most straightforward approach to error correction is selecting known words from a dictionary that are within the smallest edit distance from the error. We used the Levenshtein distance metric BIBREF8 implemented in Apache Lucene library BIBREF9 . It is a version of edit distance that treats deletions, insertions and replacements as adding one unit distance, without giving a special treatment to character swaps. The SGJP – Grammatical Dictionary of Polish BIBREF10 was used as the reference vocabulary.", "Another simple approach is the aforementioned diacritical swapping, which is a term that we introduce here for referring to a solution inspired by the work of BIBREF4 . Namely, from the incorrect form we try to produce all strings obtainable by either adding or removing diacritical marks from characters. We then exclude options that are not present in SGJP, and select as the correction the one within the smallest edit distance from the error. It is possible for the number of such diacritically-swapped options to become very big. For example, the token Modlin-Zegrze-Pultusk-Różan-Ostrołęka-Łomża-Osowiec (taken from PlEWi corpus of spelling errors, see below) can yield over INLINEFORM0 states with this method, such as Módłiń-Żęgrzę-Pułtuśk-Roźąń-Óśtróleką-Lómzą-Óśówięć. The actual correction here is just fixing the ł in Pułtusk. Hence we only try to correct in this way tokens that are shorter than 17 characters.", "A promising method, adapted from work on correcting texts by English language learners BIBREF11 , expands on the concept of selecting a correction nearest to the spelling error according to some notion of distance. Here, the Levenshtein distance is used in a weighted sum to cosine distance between word vectors. This is based on the observation that trained vectors models of distributional semantics contain also representations of spelling errors, if they were not pruned. Their representations tend to be similar to those of their correct counterparts. For example, the token enginir will appear in similar contexts as engineer, and therefore will be assigned a similar vector embedding.", "(applied cellwise) in order to obtain the initial setting of parameters for the main LSTM. Our ELMo-augmented LSTM is bidirectional." ], "highlighted_evidence": [ "We used the Levenshtein distance metric BIBREF8 implemented in Apache Lucene library BIBREF9 .", "Another simple approach is the aforementioned diacritical swapping, which is a term that we introduce here for referring to a solution inspired by the work of BIBREF4 .", "A promising method, adapted from work on correcting texts by English language learners BIBREF11 , expands on the concept of selecting a correction nearest to the spelling error according to some notion of distance. Here, the Levenshtein distance is used in a weighted sum to cosine distance between word vectors.", "Our ELMo-augmented LSTM is bidirectional." 
] } ], "annotation_id": [ "abc39352a914939a293c4c3a9ea06fc6ee432add" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "spellchecking mammography reports and tweets BIBREF7 , BIBREF4" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Published work on language correction for Polish dates back at least to 1970s, when simplest Levenshtein distance solutions were used for cleaning mainframe inputs BIBREF5 , BIBREF6 . Spelling correction tests described in literature have tended to focus on one approach applied to a specific corpus. Limited examples include works on spellchecking mammography reports and tweets BIBREF7 , BIBREF4 . These works emphasized the importance of tailoring correction systems to specific problems of corpora they are applied to. For example, mammography reports suffer from poor typing, which in this case is a repetitive work done in relative hurry. Tweets, on the other hand, tend to contain emoticons and neologisms that can trick solutions based on rules and dictionaries, such as LanguageTool. The latter is, by itself, fairly well suited for Polish texts, since a number of extensions to the structure of this application was inspired by problems with morphology of Polish language BIBREF3 ." ], "highlighted_evidence": [ "Spelling correction tests described in literature have tended to focus on one approach applied to a specific corpus. Limited examples include works on spellchecking mammography reports and tweets BIBREF7 , BIBREF4 . These works emphasized the importance of tailoring correction systems to specific problems of corpora they are applied to. For example, mammography reports suffer from poor typing, which in this case is a repetitive work done in relative hurry. Tweets, on the other hand, tend to contain emoticons and neologisms that can trick solutions based on rules and dictionaries, such as LanguageTool. The latter is, by itself, fairly well suited for Polish texts, since a number of extensions to the structure of this application was inspired by problems with morphology of Polish language BIBREF3 ." ] } ], "annotation_id": [ "b632e06c7bb1119cf80527670e985d1f07f6e97d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Table 1: Test results for all the methods used. The loss measure is cross-entropy.", "Table 2: Discovered optimal weights for summing layers of ELMo embedding for initializing an error-correcting LSTM. The layers are numbered from the one that directly processes character and word input to the most abstract one." ], "file": [ "3-Table1-1.png", "3-Table2-1.png" ] }
2002.12328
Few-shot Natural Language Generation for Task-Oriented Dialog
As a crucial component in task-oriented dialog systems, the Natural Language Generation (NLG) module converts a dialog act represented in a semantic form into a response in natural language. The success of traditional template-based or statistical models typically relies on heavily annotated data, which is infeasible for new domains. Therefore, it is pivotal for an NLG system to generalize well with limited labelled data in real applications. To this end, we present FewShotWOZ, the first NLG benchmark to simulate the few-shot learning setting in task-oriented dialog systems. Further, we develop the SC-GPT model. It is pre-trained on a large set of annotated NLG corpora to acquire the ability of controllable generation, and fine-tuned with only a few domain-specific labels to adapt to new domains. Experiments on FewShotWOZ and the large MultiWOZ dataset show that the proposed SC-GPT significantly outperforms existing methods, as measured by various automatic metrics and human evaluations.
{ "section_name": [ "Introduction", "Background", "Semantically Conditioned GPT", "Semantically Conditioned GPT ::: Massive Plain Language Pre-training.", "Semantically Conditioned GPT ::: Dialog-Act Controlled Pre-training.", "Semantically Conditioned GPT ::: Fine-tuning.", "Dataset: FewShotWOZ ::: Revisiting NLG Benchmarks.", "Dataset: FewShotWOZ ::: FewShotWOZ.", "Dataset: FewShotWOZ ::: Collection Protocols.", "Related Work ::: Pre-trained Models.", "Related Work ::: Dialog.", "Experiments", "Experiments ::: Experimental Setup ::: Implementation details.", "Experiments ::: Experimental Setup ::: Automatic metrics.", "Experiments ::: Experimental Setup ::: Human evaluation.", "Experiments ::: Experimental Setup ::: Baselines.", "Experiments ::: FewShotWOZ", "Experiments ::: MultiWOZ", "Experiments ::: Analysis", "Conclusion and Future Work" ], "paragraphs": [ [ "Task-oriented dialog systems are becoming increasingly popular, as they can assist users in various daily activities such as ticket booking and restaurant reservations. In a typical task-oriented dialog system, the Natural Language Generation (NLG) module plays a crucial role: it converts a system action (often specified in a semantic form selected by a dialog policy) into a final response in natural language. Hence, the response should be adequate to represent semantic dialog actions, and fluent to engage users' attention. As the ultimate interface to interacts with users, NLG plays a significant impact on the users' experience.", "Existing methods for NLG can be broadly summarized into two major categories. $({1})$ Template-based methods require domain experts to handcraft templates for each domain, and the system fills in slot-values afterward BIBREF0, BIBREF1. Thus, the produced responses are often adequate to contain the required semantic information, but not always fluent and nature, hurting users' experiences. $({2})$ Statistical language models such as neural networks BIBREF2 learn to generate fluent responses via training from labelled corpus. One canonical model is semantically conditioned LSTM (SC-LSTM) BIBREF3, which encodes dialog acts with one-hot representations and uses it as an extra feature to inform the sentence generation process. Despite its good performance on simple domains, it requires large amounts of domain-specific annotated data which is not available for many domains in real-world applications. Even worse, this renders severe scalability issues when the number of possible combinations of dialog acts grows exponentially with the number of slots in more complex domains.", "We revisit the current research benchmarks for NLG, and notice that each dialog domain is extensively labelled to favor model training. However, this is in contrast to the real-world application scenarios, where only very limited amounts of labelled data are available for new domains. To simulate such a few-shot learning setting, we have developed a new benchmark dataset, called FewShotWOZ, based on the MultiWOZ BIBREF4 and Cambridge NLG datasets BIBREF5. FewShotWOZ consists of dialog utterances from 7 domains. For each domain, we provide less than 50 labeled utterances for fine-tuning. We believe that FewShotWOZ can better inspire research to address the challenge of learning data-hungry statistical models with very limited amounts of labelled data in real-world scenarios.", "To deal with the challenge of few-shot learning, we develop the SC-GPT model. 
SC-GPT is a multi-layer Transformer neural language model, trained in three steps: $({1})$ Pre-trained on plain text, similar to GPT-2 BIBREF6; $({2})$ Continuously pre-trained on large amounts of dialog-act labeled utterances corpora to acquire the ability of controllable generation; $({3})$ Fine-tuned for a target domain using very limited amounts of domain labels. Unlike GPT-2, SC-GPT generates semantically controlled responses that are conditioned on the given semantic form, similar to SC-LSTM but requiring much less domain labels to generalize to new domains.", "In summary, our key contributions are three-fold:", "A new benchmark FewShotWOZ is introduced to simulate the few-shot adaptation setting where only a handful of training data from each domain is available.", "We propose a new model SC-GPT. To our best knowledge, this work is the first study of exploiting state-of-the-art pre-trained language models for NLG in task-oriented dialog systems.", "On the MultiWOZ dataset, SC-GPT creates a new SOTA, outperforming previous models by 4 points in BLEU. On FewShotWOZ, SC-GPT outperforms several strong baselines such as SC-LSTM and HDSA BIBREF7, showing that SC-GPT adapts to new domain much more effectively, requiring much smaller amounts of in-domain labels. We release our code and dataset for reproducible research." ], [ "A typical task-oriented spoken dialog system uses a pipeline architecture, as shown in Figure FIGREF2 (a), where each dialog turn is processed using a four-step procedure. $({1})$ Transcriptions of user’s input are first passed to the natural language understanding (NLU) module, where the user’s intention and other key information are extracted. $({2})$ This information is then formatted as the input to dialog state tracking (DST), which maintains the current state of the dialog. $({3})$ Outputs of DST are passed to the dialog policy module, which produces a dialog act based on the facts or entities retrieved from external resources (such as a database or a knowledge base). $({4})$ The dialog act emitted by the dialog policy module serves as the input to the NLG, through which a system response in natural language is generated. In this paper, we focus on the NLG component of task-oriented dialog systems, how to produce natural language responses conditioned on dialog acts.", "Specifically, dialog act $$ is defined as the combination of intent $$ and slot-value pairs $\\lbrace (s_i, v_i)\\rbrace ^P_{i=1}$:", "where $P$ is the number of pairs, which varies in different dialog acts.", "Intents are usually used to distinguish different types of system actions. Typical examples include inform, request, confirm, select", "Slot-value pairs indicate the category and content of the information to express in the utterance, respectively.", "The goal of NLG is to translate $$ into a natural language response $= [x_1, \\cdots , x_T]$, where $T$ is the sequence length. In Figure FIGREF2 (b), we show an example of the dialog act: $\\textit {\\texttt {confirm}~(name=Hilton, area=center)}$, and the corresponding natural language response is “Let me confirm that you are searching for Hilton in the center area”." ], [ "We tackle this generation problem using conditional neural language models. Given training data of $N$ samples $=\\lbrace (_n, _n)\\rbrace _{n=1}^{N}$, our goal is to build a statistical model parameterized by $$ to characterize $p_{}(| )$. 
To leverage the sequential structure of the response, one may further decompose the joint probability of $\mathbf {x}$ using the chain rule, casting an auto-regressive generation process as follows: $p_{\theta }(\mathbf {x} \mid \mathcal {A}) = \prod _{t=1}^{T} p_{\theta }(x_t \mid x_{<t}, \mathcal {A})$,", "where $x_{<t}$ indicates all tokens before $t$.", "Learning $\theta $ is performed via maximizing the log-likelihood (MLE) of the conditional probabilities above over the entire training dataset: $\mathcal {L}_{\theta }(\mathcal {D}) = \sum _{n=1}^{N} \sum _{t=1}^{T_n} \log p_{\theta }(x_{n,t} \mid x_{n,<t}, \mathcal {A}_n)$.", "In this paper, we employ Transformers BIBREF8 to parameterize the conditionals in this factorization. To endow the learned model with strong generalization and controllability, we propose the following three-stage procedure as the training recipe." ], [ "Large models trained on massive training corpora usually generalize better to new domains. Inspired by this, we inherit the GPT-2 architecture BIBREF6 as the backbone language model. GPT-2 is an auto-regressive language model that leverages 12-24 layers of masked, multi-head self-attention Transformers. GPT-2 is pre-trained on the extremely large OpenWebText corpus BIBREF6. It has demonstrated superior performance on characterizing the human language data distribution and on knowledge transfer. Given text prompts, GPT-2 can often generate realistic sentences." ], [ "To enable the guidance of the dialog act in response generation, we propose to continuously pre-train the GPT-2 model on large amounts of annotated (dialog act, response) pairs. The pre-training dataset includes annotated training pairs from the Schema-Guided Dialog corpus, MultiWOZ corpus, Frame corpus, and Facebook Multilingual Dialog Corpus. The total size of the pre-training corpus is around 400k examples.", "We firstly pre-process the dialog act $\mathcal {A}$ into a sequence of control codes using the following format: the intent followed by its slot-value pairs, i.e., $\mathcal {A}^{\prime } = \mathcal {I}\,(\, s_1 = v_1, \cdots , s_P = v_P \,)$, e.g., confirm ( name = Hilton , area = center ).", "Meanwhile, the output sequence $\mathbf {x}^{\prime }$ is pre-processed via augmenting $\mathbf {x}$ with a special start token [BOS] and an end token [EOS]. Finally, the sequentialized dialog act $\mathcal {A}^{\prime }$ is concatenated with its augmented response $\mathbf {x}^{\prime }$, and then fed into GPT-2. During training, the prediction loss is only computed for $\mathbf {x}^{\prime }$, and $\mathcal {A}^{\prime }$ provides the attended conditions. Since the dialog act represents the semantics of the generated sentences, we follow the naming convention of SC-LSTM, and term our model Semantically Conditioned Generative Pre-training (SC-GPT). The overall architecture of SC-GPT is illustrated in Figure FIGREF12." ], [ "For a new domain, a dialog act usually contains novel intents or slot-value pairs, and annotated training samples are often limited. We fine-tune SC-GPT on limited amounts of domain-specific labels for adaptation. The fine-tuning follows the same procedure as dialog-act controlled pre-training, as described above, but uses only a few dozen domain labels.", "It is worth noting that the above recipe has several favorable properties:", "Flexibility. SC-GPT operates on a sequence of tokens without delexicalization, which means that SC-GPT does not assume fixed one-hot or tree-structured dialog act representation vectors. Hence, it has great flexibility in extending to novel dialog acts.", "Controllability. In contrast to GPT-2, which generates natural sentences without high-level semantic guidance, SC-GPT can generate sentences with adequate intent and slot-value information while maintaining fluency.", "Generalizability. SC-GPT is able to generalize significantly better than SC-LSTM, due to the pre-training on massive plain text corpora and annotated dialog datasets."
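To make the training recipe above concrete, here is a minimal sketch of the dialog-act linearization and the response-only loss masking. It assumes a HuggingFace-style GPT-2 tokenizer and causal language model (the paper reports building on the Huggingface PyTorch Transformer library); the DialogAct container, the special-token handling, and the helper names are illustrative rather than the authors' released code.

```python
import torch
from dataclasses import dataclass
from typing import List, Tuple
from transformers import GPT2LMHeadModel, GPT2Tokenizer

@dataclass
class DialogAct:
    intent: str                      # e.g. "confirm"
    slots: List[Tuple[str, str]]     # e.g. [("name", "Hilton"), ("area", "center")]

def linearize(da: DialogAct) -> str:
    # Dialog act -> control-code string: intent ( slot = value , ... )
    pairs = " , ".join(f"{s} = {v}" for s, v in da.slots)
    return f"{da.intent} ( {pairs} )"

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

def make_example(da: DialogAct, response: str):
    # A' = sequentialized dialog act, x' = response wrapped in start/end markers.
    # GPT-2 ships no dedicated [BOS]/[EOS], so its end-of-text token is reused here as a stand-in.
    marker = tokenizer.eos_token
    cond_ids = tokenizer.encode(linearize(da) + " ")
    resp_ids = tokenizer.encode(marker + response + marker)
    input_ids = torch.tensor([cond_ids + resp_ids])
    # Loss is computed only on x': label -100 masks out the conditioning prefix A'.
    labels = torch.tensor([[-100] * len(cond_ids) + resp_ids])
    return input_ids, labels

input_ids, labels = make_example(
    DialogAct("confirm", [("name", "Hilton"), ("area", "center")]),
    "Let me confirm that you are searching for Hilton in the center area .",
)
loss = model(input_ids=input_ids, labels=labels).loss   # MLE over the response tokens only
loss.backward()
```

The -100 labels exclude the dialog-act prefix from the cross-entropy loss, so only the response tokens contribute to the objective while the prefix still conditions every generation step.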
], [ "The three commonly used NLG datasets in developing and evaluating task-oriented dialog systems are E2E NLG BIBREF9 BAGEL BIBREF10 and RNNLG BIBREF5, as summarized in Table TABREF23. We observe two issues from their shared statistics: $({1})$ All the datasets contain a large number of labelled training samples for each domain, ranging from hundreds to tens of thousands. However, the cost of labeling is high in practice, labeling 50 utterances is 5 hours per domain. Creating such an extensively annotated dataset for each new domain is prohibitively expensive. $({2})$ The percentage of distinct delexicalised dialog acts between training and testing data is quite small. For example, the delexicalised dialog acts in testing is 100% covered by the training set for the E2E NLG dataset. It renders difficulties in evaluating the model's generalization ability for new domains." ], [ "To build a setting for more pragmatic NLG scenarios, we introduce a new dataset FewShotWOZ to better reflect real application complexity, and encourage the community to develop algorithms that are capable of generalizing with only a few domain-specific labels for each (new) domain. The dataset statistics are shown in the last column of Table TABREF23. We see that FewShotWOZ is different from the other datasets in three aspects: $({1})$ More domains. FewShotWOZ contains seven domains in total, which is larger than any existing NLG datasets. $({2})$ Less training instances. Importantly, FewShotWOZ has a much smaller number of training instances per domain, aiming to evaluate the few-shot learning ability. $({3})$ Lower training/testing overlap. FewShotWOZ has only 8.82% overlap, significantly smaller than the other datasets, which amount to more than 90% overlap. The average number of intents per instance in $\\mathtt {Attraction}$/ $\\mathtt {Taxi}$/ $\\mathtt {Train}$ domain is 2, 1.33, and 2.05, respectively. In contrast, there is only one intent for each example in the other datasets. The NLG task defined on FewShotWOZ requires the models to learn to generalize over new compositions of intents. The details of FewShotWOZ is shown in Table TABREF26." ], [ "We construct FewShotWOZ via re-organizing data samples from RNNLG and MultiWOZ datasets BIBREF4. For each domain in RNNLG, we first group utterances according to their delexicalised dialog acts, and keep only one utterance as the target sentence. To ensure diversity, we consider three domains from MultiWOZ: $\\mathtt {Attraction}$, $\\mathtt {Taxi}$, and $\\mathtt {Train}$. Since MultiWOZ is a cross-domain dataset, the dialog act of an utterance may exist in multiple domains. We choose to keep utterances whose dialog act appears only in one domain. Similar delexicalising processing is applied to ensure that each dialog act has only one target utterance. Finally, to simulate the few-shot learning in practice, we randomly sample 50 training examples for each domain, except the $\\mathtt {Taxi}$ domain, which has 40 examples." ], [ "Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. PLMs are often trained to predict words based on their context on massive text data, and the learned models can be fine-tuned to adapt to various downstream tasks. The closest line of research to ours are GPT-2 BIBREF6, CTRL BIBREF15 and Grover BIBREF17. 
GPT-2 first investigated missive Transformer-based auto-regressive language models with large-scale text data for pre-training. After fine-tuning, GPT-2 achieves drastic improvements on several generation tasks. One drawback of GPT-2 is the lack of high-level semantic controlling ability in language generation. To alleviate this issue, CTRL BIBREF15 was introduced to train the model based on pre-defined codes such as text style, content description, and task-specific behavior, meanwhile Grover BIBREF17 was proposed to generate news articles conditioned on authors, dates Although conceptually similar to our SC-GPT, CTRL and Grover cannot be readily applied to NLG in task-oriented dialog systems, as the conditioning codes are quite different. Another controllable generation work for GPT-2 is PPLM BIBREF18, which provides a decoding scheme to guide the generation process using key-words or classifiers, without re-training the model. In this paper, we focus on pre-training an NLG model conditioned on finer-grained semantic dialog acts, which are more desirable for dialog systems." ], [ "Various dialog systems have been developed BIBREF2, including task-oriented dialog systems such as Rasa, Microsoft Bot Framework, and Conversational Learner, and chit-chat systems such as XiaoIce BIBREF19, DialoGPT BIBREF20, Meena BIBREF21. In this paper, we focus on task-oriented systems, particularly the NLG module. With the blooming of deep learning, neural sequential models have shown powerful capability and flexibility in NLG. Extensive efforts have been made, including new architecture choices such as RNNs BIBREF22, attention RNNs BIBREF23, SC-LSTM BIBREF3 and its variants BIBREF24, BIBREF25, as well as learning objectives BIBREF26. However, they all require large amounts of annotated data to reach satisfactory performance. A more realistic scenario is to require much less labeling and improve the sample efficiency of models, This is especially important when deploying the models to new domains, where dialog acts need to be labelled from scratch. Our paper aims to formally set up such a research scenario by proposing a new dataset FewShotWOZ, and a new model SC-GPT." ], [ "In this section, we evaluate the proposed SC-GPT on the FewShotWOZ and MultiWOZ datasets to answer two research questions: $({1})$ Is SC-GPT an effective model for strong generalization and controllability in dialog response generation? $({2})$ Does FewShotWOZ meet the goal of effectively evaluating the generalization of NLG models in the few-shot learning setting?" ], [ "The model was built upon Huggingface Pytorch Transformer BIBREF27. We use GPT2-Medium with 345M parameters as the initial checkpoint, and byte pair encodings BIBREF28 for the tokenization. Linear rate scheduler with start rate as 5e-5 was used for both pre-training and fine-tuning. Adam BIBREF29 with weight decay was used to optimize the parameters. For pre-training, the model was trained with a mini-batch of 8 on an 8 Nvidia V100 machine until observing no significant progress on validation loss or up to 20 epochs, whichever is earlier. For fine-tuning on FewShotWOZ, models were trained on each domain separately with five epochs." ], [ "Following BIBREF3, BLEU scores and the slot error rate (ERR) are reported. BLEU score evaluates how natural the generated utterance is compared with human readers. ERR measures the exact matching of the slot tokens in the candidate utterances. 
$\\text{ERR}=(p+q)/M$, where $M$ is the total number of slots in the dialog act, and $p$, $q$ is the number of missing and redundant slots in the given realisation. For each dialog act, we generate five utterances and select the top one with the lowest ERR as the final output." ], [ "We conducted the human evaluation using Amazon Mechanical Turk to assess subjective quality. We recruit master level workers (who have good prior approval rates) to perform a human comparison between generated responses from two systems (which are randomly sampled from comparison systems). The workers are required to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness. Informativeness indicates the extent to which generated utterance contains all the information specified in the dialog act. Naturalness denotes whether the utterance is as natural as a human does. To reduce judgement bias, we distribute each question to three different workers. Finally, we collected in total of 5800 judges." ], [ "We compare with three baseline methods. $({1})$ SC-LSTM BIBREF3 is a canonical model and a strong baseline that uses an additional dialog act vector and a reading gate to guide the utterance generation. $({2})$ GPT-2 BIBREF6 is used to directly fine-tune on the domain-specific labels, without pre-training on the large-scale corpus of (dialog act, response) pairs. $({3})$ HDSA BIBREF7 is a state-of-the-art model on MultiWOZ. It leverages dialog act structures to enable transfer in the multi-domain setting, showing superior performance than SC-LSTM." ], [ "Table TABREF33 reports the automatic evaluation performance of different methods on FewShotWOZ. SC-LSTM fails to learn the generation effectively in this few-shot learning setting. The generated utterances are poor in quality and suffer from inaccurate slot rendering. In addition, GPT-2 performs consistently better than SC-LSTM in all the domains. It reveals the feasibility of using a pre-trained language model for NLG, though only limited annotations are available for fine-tuning. Importantly, SC-GPT performs significantly better than GPT and SC-LSTM in terms of both BLEU and ERR. In all the domains, SC-GPT reduces the ERR to a significantly lower level, revealing its strong controllability power. This verifies the importance of pre-training on large annotated dialog data, as SC-GPT learns how to generate utterances specified by the dialog acts accurately.", "Table TABREF34 shows the human assessment on FewShotWOZ. The results exhibit the same trend with automatic evaluation. SC-GPT outperforms GPT-2 and SC-LSTM significantly in both metrics, SC-GPT can better control the generation to convey information in the dialog act while maintaining good fluency. Note that the gap between SC-GPT and human annotation is still large, indicating that the proposed FewShotWOZ exhibits an under-explored research area, and provides a large space to encourage future research for improvement." ], [ "The results on MultiWOZ are shown in Table TABREF42. Following BIBREF7, Entity F1 BIBREF30 is used to evaluate the entity coverage accuracy (including all slot values, days, numbers, and reference, ). Again, SC-GPT achieves the best performance on BLEU score. Note that GPT-2 performs similarly with SC-GPT on the full MultiWOZ dataset, this is because MultiWOZ contains 57k utterances, which is large enough for GPT-2 to achieve good performance. 
The results also confirm that with enough annotated data, the conditional language model formulation performs significantly better than HDSA, a strong competitor that leverages graph/tree-structure information to encode dialog acts.", "To study how SC-GPT performs with different training data sizes, we further conduct experiments with varying percentages of training data on MultiWOZ, ranging from 0.1% (50 examples) to 50%. As shown in Table TABREF43, the observations are consistent with FewShotWOZ. SC-GPT performs consistently better than GPT-2, HDSA, and SC-LSTM for a wide range of dataset sizes, and the improvement is more substantial when fewer in-domain labels are used for fine-tuning.", "Table TABREF44 shows the human assessment results on MultiWOZ. The results are consistent with the automatic evaluation. It is interesting to see that $({1})$ the gap between the new state-of-the-art method (SC-GPT) and human performance on FewShotWOZ (as shown in Table TABREF34) is much larger than that on MultiWOZ; $({2})$ the human rating on the naturalness of SC-GPT is even higher than that of humans on MultiWOZ, while there is a visible gap on FewShotWOZ. These results demonstrate that FewShotWOZ presents a challenging few-shot learning setting, SC-GPT serves as a simple and strong baseline in this setting, and together they provide a platform for researchers to develop NLG models that are able to generalize to new domains and generate semantically conditioned and fluent responses." ], [ "We perform a detailed analysis to investigate SC-GPT's flexibility, controllability and generalizability. The test set is split into two subsets: seen and unseen. If the dialog act of an example appears in the training set, the example is marked as seen; otherwise, it is marked as unseen. Table TABREF48 compares different models on the seen and unseen subsets in the $\mathtt {restaurant}$ domain. SC-GPT yields higher BLEU and lower ERR, and the improvement is more significant on the unseen set. For example, SC-GPT reduces ERR to 4.96, an order of magnitude lower than SC-LSTM and only 1/3 of GPT-2. This demonstrates that SC-GPT generalizes well to novel dialog acts, and is able to ground its generations precisely in them to compose fluent responses. This is further confirmed by the qualitative comparison in Table TABREF45, where we compare the generated utterance examples of different models. While the baseline methods tend to over-generate or miss important slots, SC-GPT can successfully generate fluent natural language utterances that share precise semantic conditions with the ground-truth references.", "We further simulate the process of deploying SC-GPT for a new domain, using the examples provided in the RASA dialog toolkit. We first fine-tune SC-GPT using a few training examples (only 16 instances in this new domain), and then generate utterances based on novel dialog acts that are unseen in the training data, as shown in Table TABREF49. In practice, it is desirable for an NLG system to deal with an extending domain whose dialog acts change dynamically. We simulate this setting by editing the original input dialog acts, such as inserting or deleting a slot, or substituting a slot value.", "Since SC-LSTM is infeasible in the setting of an extending domain, we compare SC-GPT with GPT-2. Results show that SC-GPT produces better utterances than GPT-2.
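As a small illustration of the dialog-act editing used to simulate an extending domain in the preceding paragraph, the snippet below applies the three edit types to a slot-value mapping and re-linearizes each result for generation; the representation and helper names are illustrative, not tooling released with the paper.

```python
from typing import Dict

def linearize(intent: str, slots: Dict[str, str]) -> str:
    # Same control-code format as in training: intent ( slot = value , ... )
    return f"{intent} ( " + " , ".join(f"{s} = {v}" for s, v in slots.items()) + " )"

base = {"name": "Sale e Pepe", "food": "Italian"}

# Three edit operations on the original dialog act:
inserted    = {**base, "area": "center"}                        # insert a slot
deleted     = {k: v for k, v in base.items() if k != "food"}    # delete a slot
substituted = {**base, "food": "Japanese"}                      # substitute a slot value

for slots in (base, inserted, deleted, substituted):
    print(linearize("inform", slots))   # each string is fed to the fine-tuned model for generation
```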
SC-GPT can generate reasonably good natural language responses with different combinations of editing operations, showing its high flexibility to generalize to new dialog acts with very limited training data, and produce controllable responses.", "" ], [ "In this paper, we have made two major contributions towards developing a more pragmatic NLG module for task-oriented dialog systems: $({1})$ A new benchmark FewShotWOZ is introduced to simulate the few-shot learning scenarios with scarce labelled data in real-world applications. $({2})$ A new model SC-GPT is proposed to endow the NLG module with strong semantically controlling and generalization ability. Empirical results on both FewShotWOZ and MultiWOZ show that SC-GPT achieves the best overall performance in both automatic and human evaluations.", "There are two interesting directions for future work. The first is to design mechanisms to generate more interpersonal responses which are proven to help improve user experiences BIBREF31, BIBREF19. The other is to generalize the generative pre-training idea to all four modules in the dialog system pipeline for end-to-end training. Since these four modules process information in order, one may organize their input/output as segments, and pre-train a segment-level auto-regressive model." ] ] }
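To make the automatic evaluation protocol from the experimental setup concrete (ERR $=(p+q)/M$, keeping the best of five sampled utterances per dialog act), here is a minimal sketch. Matching slot values by case-insensitive substring and counting repeated mentions as redundancy are simplifying assumptions, and the function names are illustrative.

```python
from typing import Dict, List

def slot_error_rate(utterance: str, slots: Dict[str, str]) -> float:
    # ERR = (p + q) / M, with M total slots in the dialog act,
    # p missing slots and q redundant slots in the realisation.
    M = len(slots)
    if M == 0:
        return 0.0
    utt = utterance.lower()
    p = sum(1 for v in slots.values() if v.lower() not in utt)             # missing
    q = sum(max(utt.count(v.lower()) - 1, 0) for v in slots.values())      # redundant (repeated)
    return (p + q) / M

def pick_best(candidates: List[str], slots: Dict[str, str]) -> str:
    # Five utterances are generated per dialog act; keep the one with the lowest ERR.
    return min(candidates, key=lambda utt: slot_error_rate(utt, slots))

slots = {"name": "Hilton", "area": "center"}
candidates = [
    "let me confirm you want a hotel in the center area",       # misses "Hilton"
    "let me confirm you want the Hilton in the center area",
]
print(pick_best(candidates, slots))
```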
{ "question": [ "What was the criteria for human evaluation?", "What automatic metrics are used to measure performance of the system?", "What existing methods is SC-GPT compared to?" ], "question_id": [ "d803b782023553bbf9b36551fbc888ad189b1f29", "fc5f9604c74c9bb804064f315676520937131e17", "b37fd665dfa5fad43977069d5623f4490a979305" ], "nlp_background": [ "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We conducted the human evaluation using Amazon Mechanical Turk to assess subjective quality. We recruit master level workers (who have good prior approval rates) to perform a human comparison between generated responses from two systems (which are randomly sampled from comparison systems). The workers are required to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness. Informativeness indicates the extent to which generated utterance contains all the information specified in the dialog act. Naturalness denotes whether the utterance is as natural as a human does. To reduce judgement bias, we distribute each question to three different workers. Finally, we collected in total of 5800 judges." ], "highlighted_evidence": [ "We conducted the human evaluation using Amazon Mechanical Turk to assess subjective quality. We recruit master level workers (who have good prior approval rates) to perform a human comparison between generated responses from two systems (which are randomly sampled from comparison systems). The workers are required to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness. Informativeness indicates the extent to which generated utterance contains all the information specified in the dialog act. Naturalness denotes whether the utterance is as natural as a human does. To reduce judgement bias, we distribute each question to three different workers. Finally, we collected in total of 5800 judges." ] } ], "annotation_id": [ "1b3a97b47cf79da59ee7307695c3bb14380b41ae" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "BLEU scores and the slot error rate (ERR)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Following BIBREF3, BLEU scores and the slot error rate (ERR) are reported. BLEU score evaluates how natural the generated utterance is compared with human readers. ERR measures the exact matching of the slot tokens in the candidate utterances. $\\text{ERR}=(p+q)/M$, where $M$ is the total number of slots in the dialog act, and $p$, $q$ is the number of missing and redundant slots in the given realisation. For each dialog act, we generate five utterances and select the top one with the lowest ERR as the final output." ], "highlighted_evidence": [ "Following BIBREF3, BLEU scores and the slot error rate (ERR) are reported. BLEU score evaluates how natural the generated utterance is compared with human readers. ERR measures the exact matching of the slot tokens in the candidate utterances. 
$\\text{ERR}=(p+q)/M$, where $M$ is the total number of slots in the dialog act, and $p$, $q$ is the number of missing and redundant slots in the given realisation. For each dialog act, we generate five utterances and select the top one with the lowest ERR as the final output." ] } ], "annotation_id": [ "406ab4d0ccb31fb86ceb3b92d00112cb85fd62ce" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "$({1})$ SC-LSTM BIBREF3", "$({2})$ GPT-2 BIBREF6 ", "$({3})$ HDSA BIBREF7" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We compare with three baseline methods. $({1})$ SC-LSTM BIBREF3 is a canonical model and a strong baseline that uses an additional dialog act vector and a reading gate to guide the utterance generation. $({2})$ GPT-2 BIBREF6 is used to directly fine-tune on the domain-specific labels, without pre-training on the large-scale corpus of (dialog act, response) pairs. $({3})$ HDSA BIBREF7 is a state-of-the-art model on MultiWOZ. It leverages dialog act structures to enable transfer in the multi-domain setting, showing superior performance than SC-LSTM." ], "highlighted_evidence": [ "We compare with three baseline methods. $({1})$ SC-LSTM BIBREF3 is a canonical model and a strong baseline that uses an additional dialog act vector and a reading gate to guide the utterance generation. $({2})$ GPT-2 BIBREF6 is used to directly fine-tune on the domain-specific labels, without pre-training on the large-scale corpus of (dialog act, response) pairs. $({3})$ HDSA BIBREF7 is a state-of-the-art model on MultiWOZ. It leverages dialog act structures to enable transfer in the multi-domain setting, showing superior performance than SC-LSTM." ] } ], "annotation_id": [ "fdd4a991d6f5087b656d774850b46b7ee1c7c91e" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Figure 1: Illustration of the NLG module in the overall task-oriented dialog system. (a) The NLG module is highlighted with glowing black bounding boxes. (b) One example of dialog act (including intent and slot-value pairs) and its corresponding natural language response.", "Figure 2: Illustration of SC-GPT. In this example, SC-GPT generates a new word token (e.g., “confirm” or “center”) by attending the entire dialog act and word tokens on the left within the response.", "Table 1: Comparison of existing NLG datasets, including E2E NLG (Novikova et al., 2017), BAGEL(Mairesse et al., 2010), Cambridge NLG(Wen et al., 2016a) and the proposed FEWSHOTWOZ.", "Table 2: FEWSHOTWOZ statistics over 7 different domains.", "Table 4: Performance of different methods on FEWSHOTWOZ", "Table 4: Human evaluation on FEWSHOTWOZ. Statistical significance is computed with a twotailed t-t st.", "Table 5: Performance on MultiWOZ", "Table 4: Performance on MultiWoz", "Table 6: BLEU score of different models on MultiWOZ using training data f different sizes.", "Table 8: Examples of generated utterances from different models, along with its corresponding dialog acts (DAs) and references. The first two examples are sampled from FEWSHOTWOZ and the last one is from MultiWOZ. Each generated utterance is followed by a brief description explaining the errors", "Table 9: Performance of different methods on seen DAs and unseen DAs in restaurant domain.", "Table 10: Examples of generated utterances with novel dialog acts. SC-GPT produces better utterances than GPT-2 for with edited dialog acts. Since both models produce similar responses to references" ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png", "5-Table2-1.png", "6-Table4-1.png", "6-Table4-2.png", "7-Table5-1.png", "7-Table4-1.png", "7-Table6-1.png", "8-Table8-1.png", "8-Table9-1.png", "9-Table10-1.png" ] }
1910.07481
Using Whole Document Context in Neural Machine Translation
In Machine Translation, considering the document as a whole can help to resolve ambiguities and inconsistencies. In this paper, we propose a simple yet promising approach to add contextual information in Neural Machine Translation. We present a method to add source context that captures the whole document with accurate boundaries, taking every word into account. We provide this additional information to a Transformer model and study the impact of our method on three language pairs. The proposed approach obtains promising results in the English-German, English-French and French-English document-level translation tasks. We observe interesting cross-sentential behaviors where the model learns to use document-level information to improve translation coherence.
{ "section_name": [ "Introduction", "Related Work", "Approach", "Experiments", "Experiments ::: Training and test sets", "Experiments ::: Training details", "Experiments ::: Results", "Experiments ::: Manual Analysis", "Conclusion" ], "paragraphs": [ [ "Neural machine translation (NMT) has grown rapidly in the past years BIBREF0, BIBREF1. It usually takes the form of an encoder-decoder neural network architecture in which source sentences are summarized into a vector representation by the encoder and are then decoded into target sentences by the decoder. NMT has outperformed conventional statistical machine translation (SMT) by a significant margin over the past years, benefiting from gating and attention techniques. Various models have been proposed based on different architectures such as RNN BIBREF0, CNN BIBREF2 and Transformer BIBREF1, the latter having achieved state-of-the-art performances while significantly reducing training time.", "However, by considering sentence pairs separately and ignoring broader context, these models suffer from the lack of valuable contextual information, sometimes leading to inconsistency in a translated document. Adding document-level context helps to improve translation of context-dependent parts. Previous study BIBREF3 showed that such context gives substantial improvement in the handling of discourse phenomena like lexical disambiguation or co-reference resolution.", "Most document-level NMT approaches focus on adding contextual information by taking into account a set of sentences surrounding the current pair BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. While giving significant improvement over the context-agnostic versions, none of these studies consider the whole document with well delimited boundaries. The majority of these approaches also rely on structural modification of the NMT model BIBREF6, BIBREF7, BIBREF8, BIBREF9. To the best of our knowledge, there is no existing work considering whole documents without structural modifications.", "Contribution: We propose a preliminary study of a generic approach allowing any model to benefit from document-level information while translating sentence pairs. The core idea is to augment source data by adding document information to each sentence of a source corpus. This document information corresponds to the belonging document of a sentence and is computed prior to training, it takes every document word into account. Our approach focuses on pre-processing and consider whole documents as long as they have defined boundaries. We conduct experiments using the Transformer base model BIBREF1. For the English-German language pair we use the full WMT 2019 parallel dataset. For the English-French language pair we use a restricted dataset containing the full TED corpus from MUST-C BIBREF10 and sampled sentences from WMT 2019 dataset. We obtain important improvements over the baseline and present evidences that this approach helps to resolve cross-sentence ambiguities." ], [ "Interest in considering the whole document instead of a set of sentences preceding the current pair lies in the necessity for a human translator to account for broader context in order to keep a coherent translation. 
The idea of representing and using documents for a model is interesting, since the model could benefit from information located before or after the current processed sentence.", "Previous work on document-level SMT started with cache based approaches, BIBREF11 suggest a conjunction of dynamic, static and topic-centered cache. More recent work tend to focus on strategies to capture context at the encoder level. Authors of BIBREF5 propose an auxiliary context source with a RNN dedicated to encode contextual information in addition to a warm-start of encoder and decoder states. They obtain significant gains over the baseline.", "A first extension to attention-based neural architectures is proposed by BIBREF6, they add an encoder devoted to capture the preceding source sentence. Authors of BIBREF7 introduce a hierarchical attention network to model contextual information from previous sentences. Here the attention allows dynamic access to the context by focusing on different sentences and words. They show significant improvements over a strong NMT baseline. More recently, BIBREF9 extend Transformer architecture with an additional encoder to capture context and selectively merge sentence and context representations. They focus on co-reference resolution and obtain improvements in overall performances.", "The closest approach to ours is presented by BIBREF4, they simply concatenate the previous source sentence to the one being translated. While they do not make any structural modification to the model, their method still does not take the whole document into account." ], [ "We propose to use the simplest method to estimate document embeddings. The approach is called SWEM-aver (Simple Word Embedding Model – average) BIBREF12. The embedding of a document $k$ is computed by taking the average of all its $N$ word vectors (see Eq. DISPLAY_FORM2) and therefore has the same dimension. Out of vocabulary words are ignored.", "Despite being straightforward, our approach raises the need of already computed word vectors to keep consistency between word and document embeddings. Otherwise, fine-tuning embeddings as the model is training would shift them in a way that totally wipes off the connection between document and word vectors.", "To address this problem, we adopt the following approach: First, we train a baseline Transformer model (noted Baseline model) from which we extract word embeddings. Then, we estimate document embeddings using the SWEM-aver method and train an enhanced model (noted Document model) benefiting from these document embeddings and the extracted word embeddings. During training, the Document model does not fine-tune its embeddings to preserve the relation between words and document vectors. It should be noted that we could directly use word embeddings extracted from another model such as Word2Vec BIBREF13, in practice we obtain better results when we get these vectors from a Transformer model. In our case, we simply extract them from the Baseline after it has been trained.", "Using domain adaptation ideas BIBREF14, BIBREF15, BIBREF16, we associate a tag to each sentence of the source corpus, which represents the document information. This tag takes the form of an additional token placed at the first position in the sentence and corresponds to the belonging document of the sentence (see Table TABREF1). The model considers the tag as an additional word and replace it with the corresponding document embedding. 
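A minimal sketch of the mechanism just described follows: the document embedding is the plain average of the document's in-vocabulary word vectors (SWEM-aver), and every source sentence is prefixed with a document tag that is later resolved to that vector instead of a regular word embedding. Word vectors are assumed to have already been extracted from the trained Baseline Transformer; the tag format and helper names are illustrative, not the authors' code.

```python
import numpy as np
from typing import Dict, List, Tuple

def swem_aver(doc_tokens: List[str], word_vectors: Dict[str, np.ndarray], dim: int = 1024) -> np.ndarray:
    # Document embedding = average of all in-vocabulary word vectors of the document;
    # out-of-vocabulary words are ignored, following the averaging equation of the Approach section.
    vecs = [word_vectors[w] for w in doc_tokens if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def tag_document(doc_id: str,
                 sentences: List[List[str]],
                 word_vectors: Dict[str, np.ndarray]) -> Tuple[List[str], np.ndarray]:
    # Each source sentence gets the document tag as an extra first token; at lookup time the
    # model replaces that tag with the precomputed document vector instead of a word vector.
    tag = f"<doc:{doc_id}>"
    doc_embedding = swem_aver([w for s in sentences for w in s], word_vectors)
    tagged_lines = [" ".join([tag] + s) for s in sentences]
    return tagged_lines, doc_embedding

# Toy usage with 4-dimensional vectors.
wv = {"the": np.ones(4), "cat": 2 * np.ones(4), "sleeps": 3 * np.ones(4)}
lines, emb = tag_document("doc42", [["the", "cat"], ["the", "cat", "sleeps"]], wv)
print(lines[0])   # "<doc:doc42> the cat"
```

Keeping both word and document vectors frozen during training, as the paper does, preserves this averaging relation between them.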
The Baseline model is trained on a standard corpus that does not contain document tags, while the Document model is trained on corpus that contains document tags.", "The proposed approach requires strong hypotheses about train and test data. The first downfall is the need for well defined document boundaries that allow to mark each sentence with its document tag. The second major downfall is the need to compute an embedding vector for each new document fed in the model, adding a preprocessing step before inference time." ], [ "We consider two different models for each language pair: the Baseline and the Document model. We evaluate them on 3 test sets and report BLEU and TER scores. All experiments are run 8 times with different seeds, we report averaged results and p-values for each experiment.", "Translation tasks are English to German, proposed in the first document-level translation task at WMT 2019 BIBREF17, English to French and French to English, following the IWSLT translation task BIBREF18." ], [ "Table TABREF4 describes the data used for the English-German language pair. These corpora correspond to the WMT 2019 document-level translation task. Table TABREF5 describes corpora for the English-French language pair, the same data is used for both translation directions.", "For the English-German pair, only 10.4% (3.638M lines) of training data contains document boundaries. For English-French pair, we restricted the total amount of training data in order to keep 16.1% (602K lines) of document delimited corpora. To achieve this we randomly sampled 10% of the ParaCrawl V3. It means that only a fraction of the source training data contains document context. The enhanced model learns to use document information only when it is available.", "All test sets contain well delimited documents, Baseline models are evaluated on standard corpora while Document models are evaluated on the same standard corpora that have been augmented with document context. We evaluate the English-German systems on newstest2017, newstest2018 and newstest2019 where documents consist of newspaper articles to keep consistency with the training data. English to French and French to English systems are evaluated over IWSLT TED tst2013, tst2014 and tst2015 where documents are transcriptions of TED conferences (see Table TABREF5).", "Prior to experiments, corpora are tokenized using Moses tokenizer BIBREF19. To limit vocabulary size, we adopt the BPE subword unit approach BIBREF20, through the SentencePiece toolkit BIBREF21, with 32K rules." ], [ "We use the OpenNMT framework BIBREF22 in its TensorFlow version to create and train our models. All experiments are run on a single NVIDIA V100 GPU. Since the proposed approach relies on a preprocessing step and not on structural enhancement of the model, we keep the same Transformer architecture in all experiments. Our Transformer configuration is similar to the baseline of BIBREF1 except for the size of word and document vectors that we set to $d_{model} = 1024$, these vectors are fixed during training. We use $N = 6$ as the number of encoder layers, $d_{ff} = 2048$ as the inner-layer dimensionality, $h = 8$ attention heads, $d_k = 64$ as queries and keys dimension and $Pdrop = 0.1$ as dropout probability. All experiments, including baselines, are run over 600k training steps with a batch size of approximately 3000 tokens.", "For all language pairs we trained a Baseline and a Document model. 
The Baseline is trained on a standard parallel corpus and is not aware of document embeddings, it is blind to the context and cannot link the sentences of a document. The Document model uses extracted word embeddings from the Baseline as initialization for its word vectors and also benefits from document embeddings that are computed from the extracted word embeddings. It is trained on the same corpus as the Baseline one, but the training corpus is augmented with (see Table TABREF1) and learns to make use of the document context.", "The Document model does not consider its embeddings as tunable parameters, we hypothesize that fine-tuning word and document vectors breaks the relation between them, leading to poorer results. We provide evidence of this phenomena with an additional system for the French-English language pair, noted Document+tuning (see Table TABREF7) that is identical to the Document model except that it adjusts its embeddings during training.", "The evaluated models are obtained by taking the average of their last 6 checkpoints, which were written at 5000 steps intervals. All experiments are run 8 times with different seeds to ensure the statistical robustness of our results. We provide p-values that indicate the probability of observing similar or more extreme results if the Document model is actually not superior to the Baseline." ], [ "Table TABREF6 presents results associated to the experiments for the English to German translation task, models are evaluated on the newstest2017, neswtest2018 and newstest2019 test sets. Table TABREF7 contains results for both English to French and French to English translation tasks, models are evaluated on the tst2013, tst2014 and tst2015 test sets.", "En$\\rightarrow $De: The Baseline model obtained State-of-The-Art BLEU and TER results according to BIBREF23, BIBREF24. The Document system shows best results, up to 0.85 BLEU points over the Baseline on the newstest2019 corpus. It also surpassed the Baselinee by 0.18 points on the newstest2017 with strong statistical significance, and by 0.15 BLEU points on the newstest2018 but this time with no statistical evidence. These encouraging results prompted us to extend experiments to another language pair: English-French.", "En$\\rightarrow $Fr: The Document system obtained the best results considering all metrics on all test sets with strong statistical evidence. It surpassed the Baseline by 1.09 BLEU points and 0.85 TER points on tst2015, 0.75 BLEU points and 0.76 TER points on tst2014, and 0.48 BLEU points and 0.68 TER points on tst2013.", "Fr$\\rightarrow $En: Of all experiments, this language pair shows the most important improvements over the Baseline. The Document model obtained substantial gains with very strong statistical evidence on all test sets. It surpassed the Baseline model by 1.81 BLEU points and 1.02 TER points on tst2015, 1.50 BLEU points and 0.96 TER points on tst2014, and 1.29 BLEU points and 0.83 TER points on tst2013.", "The Document+tuning system, which only differs from the fact that it tunes its embeddings, shows little or no improvement over the Baseline, leading us to the conclusion that the relation between word and document embeddings described by Eq. DISPLAY_FORM2 must be preserved for the model to fully benefit from document context." ], [ "In this analysis we present some of the many cases that suggest the Document model can handle ambiguous situations. 
These examples are often isolated sentences where even a human translator could not predict the good translation without looking at the document, making it almost impossible for the Baseline model which is blind to the context. Table TABREF10 contains an extract of these interesting cases for the French-English language pair.", "Translation from French to English is challenging and often requires to take the context into account. The personal pronoun \"lui\" can refer to a person of feminine gender, masculine gender or even an object and can therefore be translated into \"her\", \"him\" or \"it\". The first example in Table TABREF10 perfectly illustrate this ambiguity: the context clearly indicates that \"lui\" in the source sentence refers to \"ma fille\", which is located three sentences above, and should be translated into \"her\". In this case, the Baseline model predict the personal pronoun \"him\" while the Document model correctly predicts \"her\". It seems that the Baseline model does not benefit from any valuable information in the source sentence. Some might argue that the source sentence actually contains clues about the correct translation, considering that \"robe à paillettes\" (\"sparkly dress\") and \"baguette magique\" (\"magic wand\") probably refer to a little girl, but we will see that the model makes similar choices in more restricted contexts. This example is relevant mainly because the actual reference to the subject \"ma fille\" is made long before the source sentence.", "The second example in Table TABREF10 is interesting because none of our models correctly translate the source sentence. However, we observe that the Baseline model opts for a literal translation of \"je peux faire le poirier\" (\"I can stand on my head\") into \"I can do the pear\" while the Document model predicts \"I can wring\". Even though these translations are both incorrect, we observe that the Document model makes a prediction that somehow relates to the context: a woman talking about her past disability, who has become more flexible thanks to yoga and can now twist her body.", "The third case in table TABREF10 is a perfect example of isolated sentence that cannot be translated correctly with no contextual information. This example is tricky because the word \"Elle\" would be translated into \"She\" in most cases if no additional information were provided, but here it refers to \"la conscience\" (\"consciousness\") from the previous sentence and must be translated into \"It\". As expected the Baseline model does not make the correct guess and predicts the personal pronoun \"She\" while the Document model correctly predicts \"It\". This example present a second difficult part, the word \"son\" from the source sentence is ambiguous and does not, in itself, inform the translator if it must be translated into \"her\", \"his\" or \"its\". With contextual information we know that it refers to \"[le] monde physique\" (\"[the] physical world\") and that the correct choice is the word \"its\". Here the Baseline incorrectly predicts \"her\", possibly because of its earlier choice for \"She\" as the subject. The Document model makes again the correct translation.", "According to our results (see Table TABREF7), the English-French language pair also benefits from document-level information but to a lesser extent. For this language pair, ambiguities about personal pronouns are less frequent. Other ambiguous phenomena like the formal mode (use of \"vous\" instead of \"tu\") appear. 
TableTABREF11 presents an example of this kind of situation where the word \"You\" from the source sentence does not indicate if the correct translation is \"Vous\" or \"Tu\". However it refers to the narrator of the story who is an old police officer. In this case, it is very likely that the use of formal mode is the correct translation. The Baseline model incorrectly predicts \"Tu\" and the Document model predicts \"Vous\".", "" ], [ "In this work, we presented a preliminary study of a simple approach for document-level translation. The method allows to benefit from the whole document context at the sentence level, leading to encouraging results. In our experimental setup, we observed improvement of translation outcomes up to 0.85 BLEU points in the English to German translation task and exceeding 1 BLEU point in the English to French and French to English translation tasks. Looking at the translation outputs, we provided evidence that the approach allows NMT models to disambiguate complex situations where the context is absolutely necessary, even for a human translator.", "The next step is to go further by investigating more elaborate document embedding approaches and to bring these experiments to other languages (e.g.: Asian, Arabic, Italian, Spanish, etc.). To consider a training corpus with a majority of document delimited data is also very promising." ] ] }
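The experiments above report scores averaged over 8 runs with different seeds, together with p-values for the hypothesis that the Document model outperforms the Baseline. Below is a minimal sketch of such an aggregation; the paper does not state which statistical test it uses, so the one-sided Welch t-test and the BLEU values here are illustrative placeholders only, not the paper's scores.

```python
import numpy as np
from scipy import stats

# BLEU over 8 runs (different seeds) of each system on one test set; placeholder values only.
baseline_bleu = np.array([36.1, 36.4, 36.0, 36.3, 36.2, 36.5, 36.1, 36.3])
document_bleu = np.array([37.5, 37.2, 37.8, 37.4, 37.6, 37.3, 37.7, 37.5])

print("Baseline mean BLEU:", baseline_bleu.mean())
print("Document mean BLEU:", document_bleu.mean())

# One-sided Welch t-test for H1: the Document model scores higher than the Baseline.
t_stat, p_two_sided = stats.ttest_ind(document_bleu, baseline_bleu, equal_var=False)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
print("p-value:", p_one_sided)
```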
{ "question": [ "Which language-pair had the better performance?", "Which datasets were used in the experiment?", "What evaluation metrics did they use?" ], "question_id": [ "c1f4d632da78714308dc502fe4e7b16ea6f76f81", "749a307c3736c5b06d7b605dc228d80de36cbabe", "102de97c123bb1e247efec0f1d958f8a3a86e2f6" ], "nlp_background": [ "", "", "" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "French-English", "evidence": [ "FLOAT SELECTED: Table 5: Results obtained for the English-French and French-English translation tasks, scored on three test sets using BLEU and TER metrics. p-values are denoted by * and correspond to the following values: ∗< .05, ∗∗< .01, ∗∗∗< .001." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Results obtained for the English-French and French-English translation tasks, scored on three test sets using BLEU and TER metrics. p-values are denoted by * and correspond to the following values: ∗< .05, ∗∗< .01, ∗∗∗< .001." ] } ], "annotation_id": [ "408e8c7aa8047ab454e61244dddecc43adcd7511" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "WMT 2019 parallel dataset", "a restricted dataset containing the full TED corpus from MUST-C BIBREF10", "sampled sentences from WMT 2019 dataset" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Contribution: We propose a preliminary study of a generic approach allowing any model to benefit from document-level information while translating sentence pairs. The core idea is to augment source data by adding document information to each sentence of a source corpus. This document information corresponds to the belonging document of a sentence and is computed prior to training, it takes every document word into account. Our approach focuses on pre-processing and consider whole documents as long as they have defined boundaries. We conduct experiments using the Transformer base model BIBREF1. For the English-German language pair we use the full WMT 2019 parallel dataset. For the English-French language pair we use a restricted dataset containing the full TED corpus from MUST-C BIBREF10 and sampled sentences from WMT 2019 dataset. We obtain important improvements over the baseline and present evidences that this approach helps to resolve cross-sentence ambiguities." ], "highlighted_evidence": [ "For the English-German language pair we use the full WMT 2019 parallel dataset. For the English-French language pair we use a restricted dataset containing the full TED corpus from MUST-C BIBREF10 and sampled sentences from WMT 2019 dataset. " ] } ], "annotation_id": [ "1bacdb1587b2b671bbe431b57f4662320224f95a" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "BLEU and TER scores" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We consider two different models for each language pair: the Baseline and the Document model. We evaluate them on 3 test sets and report BLEU and TER scores. All experiments are run 8 times with different seeds, we report averaged results and p-values for each experiment." 
], "highlighted_evidence": [ "We consider two different models for each language pair: the Baseline and the Document model. We evaluate them on 3 test sets and report BLEU and TER scores. All experiments are run 8 times with different seeds, we report averaged results and p-values for each experiment." ] } ], "annotation_id": [ "3f35eaf73310dbf6df624b004fe5e620d4ed1432" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Table 1: Example of augmented parallel data used to train theDocumentmodel. The source corpus contains document tags while the target corpus remains unchanged.", "Table 2: Detail of training and evaluation sets for the English-German pair, showing the number of lines, words in English (EN) and words in German (DE). Corpora with document boundaries are denoted by †.", "Table 3: Detail of training and evaluation sets for the English-French pair in both directions, showing the number of lines, words in English (EN) and words in French (FR). Corpora with document boundaries are denoted by †.", "Table 4: Results obtained for the English-German translation task, scored on three test sets using BLEU and TER metrics. p-values are denoted by * and correspond to the following values: ∗< .05, ∗∗< .01, ∗∗∗< .001.", "Table 5: Results obtained for the English-French and French-English translation tasks, scored on three test sets using BLEU and TER metrics. p-values are denoted by * and correspond to the following values: ∗< .05, ∗∗< .01, ∗∗∗< .001.", "Table 6: Translation examples for the French-English pair. We took the best models of all runs for both the Baseline and the Document enhanced model", "Table 7: Translation example for the English-French pair." ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "3-Table3-1.png", "4-Table4-1.png", "4-Table5-1.png", "5-Table6-1.png", "5-Table7-1.png" ] }
1610.09516
Finding Street Gang Members on Twitter
Most street gang members use Twitter to intimidate others, to present outrageous images and statements to the world, and to share recent illegal activities. Their tweets may thus be useful to law enforcement agencies to discover clues about recent crimes or to anticipate ones that may occur. Finding these posts, however, requires a method to discover gang member Twitter profiles. This is a challenging task since gang members represent a very small population of the 320 million Twitter users. This paper studies the problem of automatically finding gang members on Twitter. It outlines a process to curate one of the largest sets of verifiable gang member profiles that have ever been studied. A review of these profiles establishes differences in the language, images, YouTube links, and emojis gang members use compared to the rest of the Twitter population. Features from this review are used to train a series of supervised classifiers. Our classifier achieves a promising F1 score with a low false positive rate.
{ "section_name": [ "Introduction and Motivation", "Related Work", "Discovering Gang Member Profiles", "Data collection", "Data analysis", "Learning algorithms", "Evaluation", "Experimental results", "Evaluation Over Unseen Profiles", "Conclusion and Future Work", "Acknowledgement" ], "paragraphs": [ [ "The crime and violence street gangs introduce into neighborhoods is a growing epidemic in cities around the world. Today, over 1.23 million people in the United States are members of a street gang BIBREF0 , BIBREF1 , which is a coalition of peers, united by mutual interests, with identifiable leadership and internal organization, who act collectively to conduct illegal activity and to control a territory, facility, or enterprise BIBREF2 . They promote criminal activities such as drug trafficking, assault, robbery, and threatening or intimidating a neighborhood BIBREF1 . Moreover, data from the Centers for Disease Control in the United States suggests that the victims of at least 1.3% of all gang-related homicides are merely innocent bystanders who live in gang occupied neighborhoods BIBREF3 .", "Street gang members have established online presences coinciding with their physical occupation of neighborhoods. The National Gang Threat Assessment Report confirms that at least tens of thousands of gang members are using social networking websites such as Twitter and video sharing websites such as YouTube in their daily life BIBREF0 . They are very active online; the 2007 National Assessment Center's survey of gang members found that 25% of individuals in gangs use the Internet for at least 4 hours a week BIBREF4 . Gang members typically use social networking sites and social media to develop online respect for their street gang BIBREF5 and to post intimidating, threatening images or videos BIBREF6 . This “Cyber-” or “Internet banging” BIBREF7 behavior is precipitated by the fact that an increasing number of young members of the society are joining gangs BIBREF8 , and these young members have become enamored with technology and with the notion of sharing information quickly and publicly through social media. Stronger police surveillance in the physical spaces where gangs congregate further encourages gang members to seek out virtual spaces such as social media to express their affiliation, to sell drugs, and to celebrate their illegal activities BIBREF9 .", "Gang members are able to post publicly on Twitter without fear of consequences because there are few tools law enforcement can use to surveil this medium BIBREF10 . Police departments across the United States instead rely on manual processes to search social media for gang member profiles and to study their posts. For example, the New York City police department employs over 300 detectives to combat teen violence triggered by insults, dares, and threats exchanged on social media, and the Toronto police department teaches officers about the use of social media in investigations BIBREF11 . Officer training is broadly limited to understanding policies on using Twitter in investigations and best practices for data storage BIBREF12 . The safety and security of city neighborhoods can thus be improved if law enforcement were equipped with intelligent tools to study social media for gang activity.", "The need for better tools for law enforcement cannot be underscored enough. 
Recent news reports have shown that many incidents involving gangs start on Twitter, escalate over time, and lead to an offline event that could have been prevented by an early warning. For example, the media reported on a possible connection between the death of a teenage rapper from Illinois and the final set of tweets he posted. One of his last tweets linked to a video of him shouting vulgar words at a rival gang member who, in return, replied “I'ma kill you” on social media. In a following tweet, the teenage rapper posted “im on 069”, revealing his location, and was shot dead soon after that post. Subsequent investigation revealed that the rivalry leading to his death began and was carried out entirely on social media. Other reporting has revealed how innocent bystanders have also become targets in online fights, leaving everyone in a neighborhood at risk.", "This paper investigates whether gang member profiles can be identified automatically on Twitter, which can enable better surveillance of gang members on social media. Classifying Twitter profiles into particular types of users has been done in other contexts BIBREF13 , BIBREF14 , BIBREF15 , but gang member profiles pose unique challenges. For example, many Twitter profile classifiers search for contextual clues in tweets and profile descriptions BIBREF16 , but gang member profiles use a rapidly changing lexicon of keywords and phrases that often have only a local, geographic context. This is illustrated in Figure FIGREF6 , which shows the Twitter profile descriptions of two verified deceased gang members. The profile of @OsoArrogantJoJo provides evidence that he belongs to a rival gang of the Black Disciples by #BDK, a hashtag that is only known to those involved with gang culture in Chicago. @PappyNotPapi's profile mentions #PBG and our investigations revealed that this hashtag is newly founded and stands for the Pooh Bear Gang, a gang that was formerly known as the Insane Cutthroat Gangsters. Given the very local, rapidly changing lexicon of gang members on social media, building a database of keywords, phrases, and other identifiers to find gang members nationally is not feasible. Instead, this study proposes heterogeneous sets of features derived not only from profile and tweet text but also from the emoji usage, profile images, and links to YouTube videos reflecting their music culture. A large set of gang member profiles, obtained through a careful data collection process, is compared against non-gang member profiles to find contrasting features. Experimental results show that using these sets of features, we can build a classifier that has a low false positive rate and a promising INLINEFORM0 -score of 0.7755.", "This paper is organized as follows. Section SECREF2 discusses the related literature and positions how this work differs from other related works. Section SECREF3 discusses the data collection, manual feature selection and our approach to identify gang member profiles. Section SECREF4 gives a detailed explanation for evaluation of the proposed method and the results in detail. Section SECREF5 concludes the work reported while discussing the future work planned." ], [ "Gang violence is a well studied social science topic dating back to 1927 BIBREF17 . 
However, the notions of “Cyber-” or “Internet banging”, which is defined as “the phenomenon of gang affiliates using social media sites to trade insults or make violent threats that lead to homicide or victimization” BIBREF7 , was only recently introduced BIBREF18 , BIBREF10 . Patton et al. introduced the concept of “Internet banging” and studied how social media is now being used as a tool for gang self-promotion and as a way for gang members to gain and maintain street credibility BIBREF7 . They also discussed the relationship between gang-related crime and hip-hop culture, giving examples on how hip-hop music shared on social media websites targeted at harassing rival gang members often ended up in real-world collisions among those gangs. Decker et al. and Patton et al. have also reported that street gangs perform Internet banging with social media posts of videos depicting their illegal behaviors, threats to rival gangs, and firearms BIBREF19 , BIBREF20 .", "The ability to take action on these discoveries is limited by the tools available to discover gang members on social media and to analyze the content they post BIBREF18 . Recent attempts to improve our abilities include a proposed architecture for a surveillance system that can learn the structure, function, and operation of gangs through what they post on social media BIBREF10 . However, the architecture requires a set of gang member profiles for input, thus assuming that they have already been discovered. Patton et al. BIBREF20 devised a method to automatically collect tweets from a group of gang members operating in Detroit, MI. However, their approach required the profile names of the gang members to be known beforehand, and data collection was localized to a single city in the country.", "This work builds upon existing methods to automatically discover gang member profiles on Twitter. This type of user profile classification problem has been explored in a diverse set of applications such as political affiliation BIBREF13 , ethnicity BIBREF13 , gender BIBREF15 , predicting brand loyalty BIBREF13 , and user occupations BIBREF16 . However, these approaches may utilize an abundance of positive examples in their training data, and only rely on a single feature type (typically, tweet text). Whereas most profile classifiers focus on a single type of feature (e.g. profile text), we consider the use of a variety of feature types, including emoji, YouTube links, and photo features." ], [ "This section discusses the methodology we followed to study and classify the Twitter profiles of gang members automatically. It includes a semi-automatic data collection process to discover a large set of verifiable gang member profiles, an evaluation of the tweets of gang and non-gang member posts to identify promising features, and the deployment of multiple supervised learning algorithms to perform the classification." ], [ "Discovering gang member profiles on Twitter to build training and testing datasets is a challenging task. Past strategies to find these profiles were to search for keywords, phrases, and events that are known to be related to gang activity in a particular city a priori BIBREF10 , BIBREF20 . However, such approaches are unlikely to yield adequate data to train an automatic classifier since gang members from different geographic locations and cultures use local languages, location-specific hashtags, and share information related to activities in a local region BIBREF10 . 
Such region-specific tweets and profiles may be used to train a classifier to find gang members within a small region but not across the Twitterverse. To overcome these limitations, we adopted a semi-automatic workflow, illustrated in Figure FIGREF7 , to build a dataset of gang member profiles suitable for training a classifier. The steps of the workflow are:", "1. Seed Term Discovery: Following the success of identifying gang member profiles from Chicago BIBREF10 , we began our data collection with discovering universal terms used by gang members. We first searched for profiles with hashtags for Chicago gangs noted in BIBREF10 , namely #BDK (Black Disciple Killers) and #GDK (Gangster Disciples Killers). Those profiles were analyzed and manually verified as explained in Step 3. Analysis of these profiles identified a small set of hashtags they all use in their profile descriptions. Searching Twitter profiles using those hashtags, we observed that gang members across the U.S. use them, thus we consider those terms to be location neutral. For example, gang members post #FreeDaGuys in their profile to support their fellow members who are in jail, #RIPDaGuys to convey the grieving for fallen gang members, and #FuckDaOpps to show their hatred towards police officers. We used these terms as keywords to discover Twitter profiles irrespective of geographical location. We used the Followerwonk Web service API and Twitter REST API to search Twitter profile descriptions by keywords #FreeDaGuys, #FreeMyNigga, #RIPDaGuys, and #FuckDaOpps. Since there are different informal ways people spell a word in social media, we also considered variations on the spelling of each keyword; for example, for #FreeDaGuys, we searched both #FreeDaGuys, and #FreeTheGuys.", "2. Gang Affiliated Rappers' Twitter Profile Discovery: Finding profiles by a small set of keywords is unlikely to yield sufficient data. Thus, we sought additional gang member profiles with an observation from Patton et al. BIBREF7 that the influence of hip-hop music and culture on offline gang member activities can also be seen in their social media posts. We thus also consider the influence of hip-hop culture on Twitter by exploring the Twitter network of known gangster rappers who were murdered in 2015 due to gang-related incidents. We searched for these rapper profiles on Twitter and manually checked that the rapper was affiliated to a gang.", "3. Manual verification of Twitter profiles: We verified each profile discovered manually by examining the profile picture, profile background image, recent tweets, and recent pictures posted by a user. During these checks, we searched for terms, activities, and symbols that we believed could be associated with a gang. For example, profiles whose image or background included guns in a threatening way, stacks of money, showing gang hand signs and gestures, and humans holding or posing with a gun, appeared likely to be from a gang member. Such images were often identified in profiles of users who submitted tweets that contain messages of support or sadness for prisoners or recently fallen gang members, or used a high volume of threatening and intimidating slang language. Only profiles where the images, words, and tweets all suggested gang affiliation were labeled as gang affiliates and added to our dataset. 
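Steps 1 and 2 above are, at their core, keyword filters over profile descriptions. A minimal sketch of such a filter is shown below; the seed hashtags and the #FreeTheGuys spelling variant come from the collection procedure described above, while the candidate profile data (and any further variants) are illustrative placeholders.

```python
# Hedged sketch of the seed-term filter used to flag candidate profiles for
# manual review. Seed hashtags follow the paper; profile data is a placeholder.
SEED_TERMS = {
    "#freedaguys": ["#freedaguys", "#freetheguys"],  # documented spelling variant
    "#freemynigga": ["#freemynigga"],
    "#ripdaguys": ["#ripdaguys"],
    "#fuckdaopps": ["#fuckdaopps"],
}

def matching_seed_terms(profile_description):
    """Return the seed terms whose variants appear in a profile description."""
    text = profile_description.lower()
    return [seed for seed, variants in SEED_TERMS.items()
            if any(variant in text for variant in variants)]

candidates = [
    {"handle": "user_a", "description": "#FreeDaGuys #RIPDaGuys long live the guys"},
    {"handle": "user_b", "description": "dog lover, coffee, and books"},
]
# Profiles matching any seed term are queued for the manual verification in Step 3.
flagged_for_review = [c for c in candidates if matching_seed_terms(c["description"])]
```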
Although this manual verification does have a degree of subjectivity, in practice, the images and words used by gang members on social media are so pronounced that we believe any reasonable analyst would agree that they are gang members. We found that not all the profiles collected belonged to gang members; we observed relatives and followers of gang members posting the same hashtags as in Step 1 to convey similar feelings in their profile descriptions.", "4. Using Retweets to discover more profiles: From the set of verified profiles, we explored their retweet and follower networks as a way to expand the dataset. We first considered authors of tweets which were retweeted by a gang member in our seed set. In Twitter, “retweeting” is a mechanism by which a user can share someone else's tweet to their follower audience. Assuming that a user only retweets things that they believe or their audience would be interested in, it may be reasonable to assume that gang members would only be interested in sharing what other gang members have to say, and hence, the authors of gang members' retweets could also be gang members.", "5. Using Followers and Followees to discover more profiles: We analyzed followers and followees of our seed gang member profiles to find more gang member profiles. A Twitter user can follow other Twitter users so that the individual will be subscribed to their tweets as a follower and they will be able to start a private conversation by sending direct messages to the individual. Motivated by the sociological concept of homophily, which claims that individuals have a tendency to associate and bond with similar others, we hypothesized that the followers and followees of Twitter profiles from the seed set may also be gang members. Manual verification of Twitter profiles collected from retweets, followers, and followees of gang members showed that a majority of those profiles are non-gang members who are either family members, hip-hop artists, women or profiles with pornographic content. To ensure that our dataset is not biased towards a specific gang or geographic location, only a limited number of profiles were collected via retweets, followers and followees.", "Table TABREF8 summarizes the number of profiles manually verified as gang members from Twitter profiles collected in step 1, 2, 4 and 5. Altogether we collected 400 gang member's Twitter profiles. This is a large number compared to previous studies of gang member activities on social media that curated a maximum of 91 profiles BIBREF10 . Moreover, we believe the profiles collected represent a diverse set of gang members that are not biased toward a particular geographic area or lingo as our data collection process used location-independent terms proven to be used by gang members when they express themselves." ], [ "We next explore differences between gang and non-gang member Twitter usage to find promising features for classifying profiles. For this purpose, profiles of non-gang members were collected from the Twitter Streaming API. We collected a random sample of tweets and the profiles of the users who authored the tweets in the random sample. We manually verified that all Twitter profiles collected in this approach belong to non-gang members. The profiles selected were then filtered by location to remove non-U.S. profiles by reverse geo-coding the location stated in their profile description by the Google Maps API. Profiles with location descriptions that were unspecified or did not relate to a location in the U.S. 
were discarded. We collected 2,000 non-gang member profiles in this manner. In addition, we added 865 manually verified non-gang member profiles collected using the location neutral keywords discussed in Section SECREF3 . Introducing these profiles, which have some characteristics of gang members (such as cursing frequently or cursing at law enforcement) but are not, captures local languages used by family/friends of gang members and ordinary people in a neighborhood where gangs operate.", "With the Twitter REST API, we collected the maximum number of most recent tweets that can be retrieved (3,200) along with profile descriptions and images (profile and cover photos) of every gang and non-gang member profile. The resulting dataset consists of 400 gang member Twitter profiles and 2,865 non-gang member Twitter profiles. The dataset has a total of 821,412 tweets from gang member profiles and 7,238,758 tweets from non-gang member profiles. Prior to analyzing any text content, we removed all of the seed words used to find gang member profiles, all stop words, and performed stemming across all tweets and profile descriptions.", "Figure FIGREF14 summarizes the words seen most often in the gang and non-gang members' tweets as clouds. They show a clear difference in language. For example, we note that gang members more frequently use curse words in comparison to ordinary users. Although cursing is frequent in tweets, they represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word, which is nearly five times more than the average curse word usage on Twitter. The clouds also reflect the fact that gang members often talk about drugs and money with terms such as smoke, high, hit, and money, while ordinary users hardly speak about finances and drugs. We also noticed that gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us. These differences make it clear that the individual words used by gang and non-gang members will be relevant features for gang profile classification.", "On Twitter, a user can give a self-description as a part of the user's profile. A comparison of the top 10 words in gang members' and non-gang members' Twitter profile descriptions is shown in Figure FIGREF21 . The first 10 words are the most frequently used words in non-gang members' profiles and the latter 10 words are the most frequently used words in gang members' profiles. Word comparison shows that gang members prefer to use curse words (nigga, fuck, shit) in their profile descriptions while non-gang members use words related to their feelings or interests (love, life, live, music, book). The terms rip and free which appear in approximately INLINEFORM0 of all gang member Twitter profiles, suggest that gang members use their profile descriptions as a space to grieve for their fallen or incarcerated gang members. The term gang in gang members' profile descriptions suggest that gang members like to self-identify themselves on Twitter. Such lexical features may therefore be of great importance for automatically identifying gang member profiles. 
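A minimal sketch of how such lexical signals can be turned into classifier inputs is given below: unigram count vectors over tweets and profile descriptions, plus a per-user curse-word rate. The curse-word list, tokenization, and profile data are illustrative placeholders rather than the exact resources used in the study.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical curse-word lexicon; the study's actual lexicon is not specified here.
CURSE_WORDS = {"fuck", "shit", "bitch", "nigga"}

def curse_word_rate(text):
    """Fraction of whitespace tokens that are curse words."""
    tokens = text.lower().split()
    return sum(t in CURSE_WORDS for t in tokens) / len(tokens) if tokens else 0.0

# Placeholder profiles with already-collected, preprocessed text.
profiles = [
    {"tweets": "free da guys rip the fallen smoke money", "description": "free my guys rip"},
    {"tweets": "love this new song so much want more", "description": "music books life"},
]

# Unigram term-frequency features over tweets and profile descriptions.
tweet_vectorizer = CountVectorizer(lowercase=True, stop_words="english")
desc_vectorizer = CountVectorizer(lowercase=True, stop_words="english")
X_tweets = tweet_vectorizer.fit_transform([p["tweets"] for p in profiles])
X_desc = desc_vectorizer.fit_transform([p["description"] for p in profiles])
curse_rates = [curse_word_rate(p["tweets"]) for p in profiles]
```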
We take counts of unigrams from gang and non-gang members' Twitter profile descriptions as classification features.", "It has been recognized that music is a key cultural component in an urban lifestyle and that gang members often want to emulate the scenarios and activities the music conveys BIBREF7 . Our analysis confirms that the influence of gangster rap is expressed in gang members' Twitter posts. We found that 51.25% of the gang members collected have a tweet that links to a YouTube video. Following these links, a simple keyword search for the terms gangsta and hip-hop in the YouTube video description found that 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre. Moreover, this high proportion is not driven by a small number of profiles that prolifically share YouTube links; eight YouTube links are shared on average by a gang member.", "Recognizing the frequency with which gang members post YouTube links on gangster rap and hip-hop, we consider the YouTube videos posted in a user's tweets as features for the classifier. In particular, for each YouTube video tweeted, we used the YouTube API to retrieve the video's description and its comments. Further analysis of YouTube data showed a difference between terms in gang members' YouTube data and non-gang members' YouTube data. For example, the top 5 terms (after stemming and stop word removal) used in YouTube videos shared by gang members are shit, like, nigga, fuck, lil while like, love, peopl, song, get are the top 5 terms in non-gang member video data. To represent a user profile based on their music interests, we generated a bag of words from the video descriptions and comments from all shared videos.", "Motivated by recent work involving the use of emojis by gang members BIBREF22 , we also studied if and how gang and non-gang members use emoji symbols in their tweets. Our analysis found that gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior through their tweets. Figure FIGREF24 illustrates the emoji distribution for the top 20 most frequent emojis used by gang member profiles in our dataset. The fuel pump emoji was the most frequently used emoji by the gang members, which is often used in the context of selling or consuming marijuana. The pistol emoji is the second most frequent in our dataset, which is often used with the guardsman emoji or the police cop emoji in an `emoji chain'. Figure FIGREF28 presents some prototypical `chaining' of emojis used by gang members. The chains may reflect their anger at law enforcement officers, as a cop emoji is often followed up with the emoji of a weapon, bomb, or explosion. We found that 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members. Moreover, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them. A variety of the angry face emoji such as devil face emoji and imp emoji were also common in gang member tweets. The frequency of each emoji symbol used across the set of user's tweets are thus considered as features for our classifier.", "In our profile verification process, we observed that most gang member profiles portray a context representative of gang culture. 
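Before turning to the image features in detail, a minimal sketch of the emoji-frequency and emoji-chain statistics described above is given here; the emoji code points listed and the tweet data are illustrative placeholders.

```python
from collections import Counter

POLICE = "\U0001F46E"   # police officer emoji
PISTOL = "\U0001F52B"   # pistol emoji
FUEL_PUMP = "\u26FD"    # fuel pump emoji

def emoji_counts(tweets, emoji_set):
    """Frequency of each tracked emoji across a user's tweets."""
    counts = Counter()
    for tweet in tweets:
        counts.update(ch for ch in tweet if ch in emoji_set)
    return counts

def has_police_pistol_chain(tweets):
    """True if a police emoji is immediately followed by a pistol emoji in any tweet."""
    return any(POLICE + PISTOL in tweet for tweet in tweets)

# Placeholder tweets for a single user.
user_tweets = ["on the block " + FUEL_PUMP, POLICE + PISTOL + " smh"]
print(emoji_counts(user_tweets, {POLICE, PISTOL, FUEL_PUMP}))
print(has_police_pistol_chain(user_tweets))
```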
Some examples of these profile pictures are shown in Figure FIGREF32 , where the user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash. Descriptions of these images may thus empower our classifier. Thus, we translated profile images into features with the Clarifai web service. Clarifai offers a free API to query a deep learning system that tags images with a set of scored keywords that reflect what is seen in the image. We tagged the profile image and cover image for each profile using 20 tags identified by Clarifai. Figure FIGREF36 offers the 20 most often used tags applied to gang and non-gang member profiles. Since we take all the tags returned for an image, we see common words such as people and adult coming up in the top 20 tag set. However, gang member profile images were assigned unique tags such as trigger, bullet, worship while non-gang images were uniquely tagged with beach, seashore, dawn, wildlife, sand, pet. The set of tags returned by Clarifai were thus considered as features for the classifier." ], [ "The unigrams of tweets, profile text, and linked YouTube video descriptions and comments, along with the distribution of emoji symbols and the profile image tags were used to train four different classification models: a Naive Bayes net, a Logistic Regression, a Random Forest, and a Support Vector Machine (SVM). These four models were chosen because they are known to perform well over text features, which is the dominant type of feature considered. The performance of the models are empirically compared to determine the most suitable classification technique for this problem. Data for the models are represented as a vector of term frequencies where the terms were collected from one or more feature sets described above." ], [ "We next evaluate the performance of classifiers that use the above features to discover gang member profiles on Twitter. For this purpose, we use the training set discussed in Section SECREF3 with 400 gang member profiles (the `positive'/`gang' class) and 2,865 non-gang member profiles (the `negative'/`non-gang' class). We trained and evaluated the performance of the classifiers mentioned in Section SECREF31 under a 10-fold cross validation scheme. For each of the four learning algorithms, we consider variations involving only tweet text, emoji, profile, image, or music interest (YouTube comments and video description) features, and a final variant that considers all types of features together. The classifiers that use a single feature type were intended to help us study the quality of their predictive power by itself. When building these single-feature classifiers, we filtered the training dataset based on the availability of the single feature type in the training data. For example, we only used the twitter profiles that had at least a single emoji in their tweets to train classifiers that consider emoji features. We found 3,085 such profiles out of the 3,265 profiles in the training set. When all feature types were considered, we developed two different models:", "Because a Twitter profile may not have every feature type, Model(1) represents a practical scenario where not every Twitter profile contains every type of feature. In this model, the non-occurrence of a feature is represented by `zeroing out' the feature value during model training. Model(2) represents the ideal scenario where all profiles contain every feature type. 
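As a rough illustration of this setup, the sketch below trains the four classifiers on placeholder term-frequency vectors and evaluates them with 10-fold cross validation using scikit-learn (which the paper reports using, albeit an older version); the data, hyperparameters, and zero-filled feature blocks are stand-ins, not the authors' exact configuration.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report

# Placeholder term-frequency matrix (rows: profiles, columns: unigrams/emoji/tags)
# and labels (1 = gang, 0 = non-gang). As in Model(1), profiles lacking a feature
# type would simply carry zeros in that block of columns.
rng = np.random.RandomState(0)
X = rng.poisson(0.3, size=(200, 500)).astype(float)
y = np.array([1] * 30 + [0] * 170)

classifiers = {
    "Naive Bayes": MultinomialNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": LinearSVC(),
}

for name, clf in classifiers.items():
    # 10-fold cross validation; classification_report gives per-class precision,
    # recall and F1 (Precision = TP/(TP+FP), Recall = TP/(TP+FN)).
    preds = cross_val_predict(clf, X, y, cv=10)
    print(name)
    print(classification_report(y, preds, target_names=["non-gang", "gang"]))
```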
For Model(2), we used 1,358 training instances (42% of all training instances), out of which 172 were gang members (43% of all gang members) and 1,186 were non-gang members (41% of all non-gang members). We used version 0.17.1 of the scikit-learn machine learning library to implement the classifiers.", "For each 10-fold cross validation experiment, we report three evaluation metrics for the `gang' and `non-gang' classes, namely, the Precision $= TP/(TP+FP)$, Recall $= TP/(TP+FN)$, and F1-score $= 2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}/(\mathrm{Precision}+\mathrm{Recall})$, where $TP$ is the number of true positives, $FP$ is the number of false positives, $TN$ is the number of true negatives, and $FN$ is the number of false negatives. We report these metrics for the positive `gang' and negative `non-gang' classes separately because of class imbalance in our dataset." ], [ "Table TABREF37 presents the average precision, recall, and F1-score over the 10 folds for the single-feature and combined feature classifiers. The table includes, in braces (`{ }'), the number of gang and non-gang profiles that contain a particular feature type, and hence the number of profiles used for the 10-fold cross validation. Since it is reasonable to expect that any given Twitter profile is not that of a gang member, predicting a Twitter user as a non-gang member is much easier than predicting a Twitter user as a gang member. Moreover, false positive classifications of the `gang' class may be detrimental to law enforcement investigations, which may go awry as they surveil an innocent person based on the classifier's suggestion. We thus consider a small false positive rate for the `gang' class to be an especially important evaluation metric. We say that a classifier is `ideal' if it demonstrates high precision, recall, and F1-score for the `gang' class while performing well on the `non-gang' class as well.", "The best performing classifier that considers single features is a Random Forest model over tweet features (T), with a reasonable F1-score of 0.7229 for the `gang' class. It also features the highest F1-score for the `non-gang' class (0.9671). Its strong performance is intuitive given the striking differences in language as shown in Figure FIGREF14 and discussed in Section UID22. We also noted that music features offer promising results, with an F1-score of 0.6505 with a Naive Bayes classifier, as well as emoji features with an F1-score of 0.6067, also achieved by a Naive Bayes classifier. However, the use of profile data and image tags by themselves yields relatively poor F1-scores no matter which classifier is considered. There may be two reasons for this despite the differences we observed in Section SECREF17. First, these two feature types did not generate a large number of specific features for learning. For example, descriptions are limited to just 160 characters per profile, leading to a limited number of unigrams (in our dataset, 10 on average) that can be used to train the classifiers. Second, the profile images were tagged by a third-party Web service which is not specifically designed to identify gang hand signs, drugs, and guns, which are often shared by gang members. This led to a small set of image tags in their profiles that were fairly generic, i.e., the image tags in Figure FIGREF36 such as `people', `man', and `adult'.", "Combining these diverse sets of features into a single classifier yields even better results. 
Our results for Model(1) show that the Random Forest achieves the highest F1-scores for both `gang' (0.7364) and `non-gang' (0.9690) classes and yields the best precision of 0.8792, which corresponds to a low false positive rate when labeling a profile as a gang member. Despite the fact that it has lower positive recall compared to the second best performing classifier (a Random Forest trained over only tweet text features (T)), for this problem setting, we should be willing to increase the chance that a gang member will go unclassified if it means reducing the chance of applying a `gang' label to a non-gang member. When we tested Model(2), a Random Forest classifier achieved an F1-score of 0.7755 (an improvement of 7.28% with respect to the best performing single feature type classifier (T)) for the `gang' class, with a precision of 0.8961 (an improvement of 6.26% with respect to (T)) and a recall of 0.6994 (an improvement of 9.26% with respect to (T)). Model(2) thus outperforms Model(1), and we expect its performance to improve with the availability of more training data with all feature types.", "px" ], [ "We also tested the trained classifiers using a set of Twitter profiles from a separate data collection process that may emulate the classifier's operation in a real-time setting. For this experiment, we captured real-time tweets from Los Angeles, CA and from ten South Side, Chicago neighborhoods that are known for gang-related activities BIBREF10 using the Twitter streaming API. We consider these areas with known gang presence on social media to ensure that some positive profiles would appear in our test set. We ultimately collected 24,162 Twitter profiles: 15,662 from Los Angeles, and 8,500 from Chicago. We populated data for each profile by using the 3,200 most recent tweets (the maximum that can be collected from Twitter's API) for each profile. Since the 24,162 profiles are far too many to label manually, we qualitatively study those profiles the classifier placed into the `gang' class.", "We used the training dataset to train our best performing random forest classifier (which uses all feature types) and tested it on the test dataset. We then analyzed the Twitter profiles that our classifier labeled as belonging to the `gang' class. Each of those profiles had several features which overlap with gang members, such as displaying hand signs and weapons in their profile images or in videos posted by them, gang names or gang-related hashtags in their profile descriptions, frequent use of curse words, and the use of terms such as “my homie” to refer to self-identified gang members. Representative tweets extracted from those profiles are depicted in Figure FIGREF41. The most frequent words found in tweets from those profiles were shit, nigga, got, bitch, go, fuck, etc., and their user profiles had terms such as free, artist, shit, fuck, freedagang, and ripthefallen. They had frequently used emojis such as face with tears of joy, hundred points symbol, fire, skull, money bag, and pistol. For some profiles, it was less obvious that the classifier correctly identified a gang member. Such profiles used the same emojis and curse words commonly found in gang members' profiles, but their profile picture and tweet content were not indicative of a gang affiliation. In conclusion, we find that in a real-time-like setting, the classifier is able to extract profiles with features that strongly suggest gang affiliation. 
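In practice, this screening step amounts to running the trained classifier over the newly collected, unlabeled profiles and keeping only those predicted as `gang' for manual review. A minimal sketch is given below; the feature matrices and profile identifiers are placeholders standing in for the output of the feature extraction pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder features for the labeled training profiles and the unseen profiles,
# assumed to come from the same feature extraction pipeline (tweets, emoji,
# profile text, image tags and music interests).
rng = np.random.RandomState(1)
X_train = rng.poisson(0.3, size=(200, 500)).astype(float)
y_train = np.array([1] * 30 + [0] * 170)           # 1 = gang, 0 = non-gang
X_unseen = rng.poisson(0.3, size=(50, 500)).astype(float)
unseen_ids = [f"profile_{i}" for i in range(50)]   # anonymized placeholders

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
flagged = [pid for pid, label in zip(unseen_ids, clf.predict(X_unseen)) if label == 1]
# Flagged profiles are only candidates; they still require manual review.
print(len(flagged))
```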
Of course, these profiles demand further investigation and extensive evidence from other sources in order to draw a concrete conclusion, especially in the context of a law enforcement investigation. We refrain from reporting any profile names or specific details about the profiles labeled as a `gang' member to comply with the applicable IRB governing this human subject research.", "px" ], [ "This paper presented an approach to address the problem of automatically identifying gang member profiles on Twitter. Despite the challenges in developing such automated systems, mainly due to difficulties in finding online gang member profiles for developing training datasets, we proposed an approach that uses features extracted from textual descriptions, emojis, images and videos shared on Twitter (textual features extracted from images, and videos). Exploratory analysis of these types of features revealed interesting, and sometimes striking differences in the ways gang and non-gang members use Twitter. Classifiers trained over features that highlight these differences, were evaluated under 10-fold cross validation. Our best classifier achieved a promising INLINEFORM0 -score of 0.7755 over the `gang' profiles when all types of features were considered.", "Future work will strengthen our training dataset by including more gang member Twitter profiles by searching for more location-independent keywords. We also plan to develop our own image classification system specifically designed to classify images found on gang member profiles. We would also like to experiment with building dictionaries that contain gang names to understand whether “having a gang name in the profile description” as a feature can improve our results. Finally, we would also like to study how can we further improve our classifier models using word embeddings BIBREF23 and social networks of known gang members.", "px" ], [ "We are thankful to Uday Kiran Yeda for helping us with data collection. We acknowledge partial support from the National Science Foundation (NSF) award: CNS-1513721: “Context-Aware Harassment Detection on Social Media”, National Institutes of Health (NIH) award: MH105384-01A1: “Modeling Social Behavior for Healthcare Utilization in Depression” and Grant No. 2014-PS-PSN-00006 awarded by the Bureau of Justice Assistance. The Bureau of Justice Assistance is a component of the U.S. Department of Justice's Office of Justice Programs, which also includes the Bureau of Justice Statistics, the National Institute of Justice, the Office of Juvenile Justice and Delinquency Prevention, the Office for Victims of Crime, and the SMART Office. Points of view or opinions in this document are those of the authors and do not necessarily represent the official position or policies of the U.S. Department of Justice, NSF or NIH.", "px" ] ] }
{ "question": [ "Do they evaluate only on English datasets?", "What are the differences in the use of emojis between gang member and the rest of the Twitter population?", "What are the differences in the use of YouTube links between gang member and the rest of the Twitter population?", "What are the differences in the use of images between gang member and the rest of the Twitter population?", "What are the differences in language use between gang member and the rest of the Twitter population?", "How is gang membership verified?", "Do the authors provide evidence that 'most' street gang members use Twitter to intimidate others?" ], "question_id": [ "3460393d6888dd34113fa0813a1b3a1514c66aa6", "d491ee69db39ec65f0f6da9ec03450520389699a", "d3839c7acee4f9c8db0a4a475214a8dcbd0bc26f", "a6d00f44ff8f83b6c1787e39333e759b0c3daf15", "0d4aa05eb00d9dee74000ea5b21b08f693ba1e62", "382bef47d316d7c12ea190ae160bf0912a0f4ffe", "32a232310babb92991c4b1b75f7aa6b4670ec447" ], "nlp_background": [ "five", "five", "five", "five", "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no" ], "search_query": [ "twitter", "twitter", "twitter", "twitter", "twitter", "twitter", "twitter" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "1bb70f9de3ac347c51c1e86b8c0965aceb087785" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members", "only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them", "gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Motivated by recent work involving the use of emojis by gang members BIBREF22 , we also studied if and how gang and non-gang members use emoji symbols in their tweets. Our analysis found that gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior through their tweets. Figure FIGREF24 illustrates the emoji distribution for the top 20 most frequent emojis used by gang member profiles in our dataset. The fuel pump emoji was the most frequently used emoji by the gang members, which is often used in the context of selling or consuming marijuana. The pistol emoji is the second most frequent in our dataset, which is often used with the guardsman emoji or the police cop emoji in an `emoji chain'. Figure FIGREF28 presents some prototypical `chaining' of emojis used by gang members. The chains may reflect their anger at law enforcement officers, as a cop emoji is often followed up with the emoji of a weapon, bomb, or explosion. 
We found that 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members. Moreover, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them. A variety of the angry face emoji such as devil face emoji and imp emoji were also common in gang member tweets. The frequency of each emoji symbol used across the set of user's tweets are thus considered as features for our classifier." ], "highlighted_evidence": [ "Our analysis found that gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior through their tweets. Figure FIGREF24 illustrates the emoji distribution for the top 20 most frequent emojis used by gang member profiles in our dataset. The fuel pump emoji was the most frequently used emoji by the gang members, which is often used in the context of selling or consuming marijuana. The pistol emoji is the second most frequent in our dataset, which is often used with the guardsman emoji or the police cop emoji in an `emoji chain'. Figure FIGREF28 presents some prototypical `chaining' of emojis used by gang members. The chains may reflect their anger at law enforcement officers, as a cop emoji is often followed up with the emoji of a weapon, bomb, or explosion. We found that 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members. Moreover, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them. A variety of the angry face emoji such as devil face emoji and imp emoji were also common in gang member tweets." ] } ], "annotation_id": [ "47b06d96c2bd0cd02e9e565771cf21112c5202d1" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre" ], "yes_no": null, "free_form_answer": "", "evidence": [ "It has been recognized that music is a key cultural component in an urban lifestyle and that gang members often want to emulate the scenarios and activities the music conveys BIBREF7 . Our analysis confirms that the influence of gangster rap is expressed in gang members' Twitter posts. We found that 51.25% of the gang members collected have a tweet that links to a YouTube video. Following these links, a simple keyword search for the terms gangsta and hip-hop in the YouTube video description found that 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre. Moreover, this high proportion is not driven by a small number of profiles that prolifically share YouTube links; eight YouTube links are shared on average by a gang member." ], "highlighted_evidence": [ "We found that 51.25% of the gang members collected have a tweet that links to a YouTube video. Following these links, a simple keyword search for the terms gangsta and hip-hop in the YouTube video description found that 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre." 
] } ], "annotation_id": [ "c6649ed21441071f096966248ea46bd534f99d64" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In our profile verification process, we observed that most gang member profiles portray a context representative of gang culture. Some examples of these profile pictures are shown in Figure FIGREF32 , where the user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash. Descriptions of these images may thus empower our classifier. Thus, we translated profile images into features with the Clarifai web service. Clarifai offers a free API to query a deep learning system that tags images with a set of scored keywords that reflect what is seen in the image. We tagged the profile image and cover image for each profile using 20 tags identified by Clarifai. Figure FIGREF36 offers the 20 most often used tags applied to gang and non-gang member profiles. Since we take all the tags returned for an image, we see common words such as people and adult coming up in the top 20 tag set. However, gang member profile images were assigned unique tags such as trigger, bullet, worship while non-gang images were uniquely tagged with beach, seashore, dawn, wildlife, sand, pet. The set of tags returned by Clarifai were thus considered as features for the classifier." ], "highlighted_evidence": [ "In our profile verification process, we observed that most gang member profiles portray a context representative of gang culture. Some examples of these profile pictures are shown in Figure FIGREF32 , where the user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash." ] } ], "annotation_id": [ "d55b22ac52e5ac28a3fd75dbd3cbd1b1aa911214" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Although cursing is frequent in tweets, they represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word", "gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Figure FIGREF14 summarizes the words seen most often in the gang and non-gang members' tweets as clouds. They show a clear difference in language. For example, we note that gang members more frequently use curse words in comparison to ordinary users. Although cursing is frequent in tweets, they represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word, which is nearly five times more than the average curse word usage on Twitter. The clouds also reflect the fact that gang members often talk about drugs and money with terms such as smoke, high, hit, and money, while ordinary users hardly speak about finances and drugs. 
We also noticed that gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us. These differences make it clear that the individual words used by gang and non-gang members will be relevant features for gang profile classification." ], "highlighted_evidence": [ "Although cursing is frequent in tweets, they represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word, which is nearly five times more than the average curse word usage on Twitter.", "The clouds also reflect the fact that gang members often talk about drugs and money with terms such as smoke, high, hit, and money, while ordinary users hardly speak about finances and drugs. We also noticed that gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us." ] } ], "annotation_id": [ "46a19d10f241998aa11df56a8bd01b65a798ce87" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Manual verification" ], "yes_no": null, "free_form_answer": "", "evidence": [ "3. Manual verification of Twitter profiles: We verified each profile discovered manually by examining the profile picture, profile background image, recent tweets, and recent pictures posted by a user. During these checks, we searched for terms, activities, and symbols that we believed could be associated with a gang. For example, profiles whose image or background included guns in a threatening way, stacks of money, showing gang hand signs and gestures, and humans holding or posing with a gun, appeared likely to be from a gang member. Such images were often identified in profiles of users who submitted tweets that contain messages of support or sadness for prisoners or recently fallen gang members, or used a high volume of threatening and intimidating slang language. Only profiles where the images, words, and tweets all suggested gang affiliation were labeled as gang affiliates and added to our dataset. Although this manual verification does have a degree of subjectivity, in practice, the images and words used by gang members on social media are so pronounced that we believe any reasonable analyst would agree that they are gang members. We found that not all the profiles collected belonged to gang members; we observed relatives and followers of gang members posting the same hashtags as in Step 1 to convey similar feelings in their profile descriptions." ], "highlighted_evidence": [ "Manual verification of Twitter profiles: We verified each profile discovered manually by examining the profile picture, profile background image, recent tweets, and recent pictures posted by a user." ] } ], "annotation_id": [ "5bc94f2327bfef2d25b36f70a8a4afe6889ef547" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "Street gang members have established online presences coinciding with their physical occupation of neighborhoods. 
The National Gang Threat Assessment Report confirms that at least tens of thousands of gang members are using social networking websites such as Twitter and video sharing websites such as YouTube in their daily life BIBREF0 . They are very active online; the 2007 National Assessment Center's survey of gang members found that 25% of individuals in gangs use the Internet for at least 4 hours a week BIBREF4 . Gang members typically use social networking sites and social media to develop online respect for their street gang BIBREF5 and to post intimidating, threatening images or videos BIBREF6 . This “Cyber-” or “Internet banging” BIBREF7 behavior is precipitated by the fact that an increasing number of young members of the society are joining gangs BIBREF8 , and these young members have become enamored with technology and with the notion of sharing information quickly and publicly through social media. Stronger police surveillance in the physical spaces where gangs congregate further encourages gang members to seek out virtual spaces such as social media to express their affiliation, to sell drugs, and to celebrate their illegal activities BIBREF9 ." ], "highlighted_evidence": [ "The National Gang Threat Assessment Report confirms that at least tens of thousands of gang members are using social networking websites such as Twitter and video sharing websites such as YouTube in their daily life BIBREF0 . They are very active online; the 2007 National Assessment Center's survey of gang members found that 25% of individuals in gangs use the Internet for at least 4 hours a week BIBREF4 ." ] } ], "annotation_id": [ "926e72ab5fb5ac9bec34f0e1a547212a7163ea04" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Fig. 1: Twitter profile descriptions of known gang members. Pursuant to an IRB governing human subject research, we are prohibited from revealing personally identifiable information in this paper. We only report Twitter handles that have already been revealed in widely reported publications and were not collected by the research team for this work.", "Fig. 2: Gang member dataset creation.", "TABLE I: Number of gang member profiles captured.", "Fig. 3: Comparison of words used in tweets.", "Fig. 4: Word usage in profile descriptions: gang vs non-gang.", "Fig. 6: Examples for gang members’ tweets with emojis.", "Fig. 5: Emoji usage distribution: gang vs non-gang.", "Fig. 7: Sample gang member profile images.", "Fig. 8: Image tags distribution: gang vs non-gang.", "TABLE II: Classification results based on 10-fold cross validation." ], "file": [ "2-Figure1-1.png", "2-Figure2-1.png", "3-TableI-1.png", "4-Figure3-1.png", "4-Figure4-1.png", "5-Figure6-1.png", "5-Figure5-1.png", "6-Figure7-1.png", "6-Figure8-1.png", "7-TableII-1.png" ] }
2001.05493
A Unified System for Aggression Identification in English Code-Mixed and Uni-Lingual Texts
Wide usage of social media platforms has increased the risk of aggression, which results in mental stress and affects people's lives negatively through psychological agony, fighting behavior, and disrespect toward others. A majority of such conversations contain code-mixed language [28]. Additionally, the communication style used to express thoughts changes from one social media platform to another (e.g., communication styles differ between Twitter and Facebook). All of these factors increase the complexity of the problem. To address these problems, we introduce a unified and robust multi-modal deep learning architecture that works for both an English code-mixed dataset and a uni-lingual English dataset. The devised system uses psycho-linguistic features and very basic linguistic features. Our multi-modal deep learning architecture comprises Deep Pyramid CNN, Pooled BiLSTM, and Disconnected RNN (each with both Glove and FastText embeddings). Finally, the system makes its decision based on model averaging. We evaluated our system on the English code-mixed TRAC 2018 dataset and a uni-lingual English dataset obtained from Kaggle. Experimental results show that our proposed system outperforms all previous approaches on both the English code-mixed dataset and the uni-lingual English dataset.
{ "section_name": [ "Introduction", "Related work", "Methodology", "Methodology ::: Data Preprocessing", "Methodology ::: NLP Features", "Methodology ::: Deep Pyramid CNN(DPCNN)", "Methodology ::: Disconnected RNN(DRNN)", "Methodology ::: Pooled BiLSTM", "Methodology ::: Classification Model", "Experiment and Evaluation ::: Dataset Description", "Experiment and Evaluation ::: Experimental Setup", "Experiment and Evaluation ::: Evaluation Strategy", "Experiment and Evaluation ::: Results and Discussion", "Conclusion and Future Work" ], "paragraphs": [ [ "The exponential increase of interactions on the various social media platforms has generated the huge amount of data on social media platforms like Facebook and Twitter, etc. These interactions resulted not only positive effect but also negative effect over billions of people owing to the fact that there are lots of aggressive comments (like hate, anger, and bullying). These cause not only mental and psychological stress but also account deactivation and even suicideBIBREF1. In this paper we concentrate on problems related to aggressiveness.", "The fine-grained definition of the aggressiveness/aggression identification is provided by the organizers of TRAC-2018 BIBREF0, BIBREF2. They have classified the aggressiveness into three labels (Overtly aggressive(OAG), Covertly aggressive(CAG), Non-aggressive(NAG)). The detailed description for each of the three labels is described as follows:", "Overtly Aggressive(OAG) - This type of aggression shows direct verbal attack pointing to the particular individual or group. For example, \"Well said sonu..you have courage to stand against dadagiri of Muslims\".", "Covertly Aggressive(CAG) - This type of aggression the attack is not direct but hidden, subtle and more indirect while being stated politely most of the times. For example, \"Dear India, stop playing with the emotions of your people for votes.\"", "Non-Aggressive(NAG) - Generally these type of text lack any kind of aggression it is basically used to state facts, wishing on occasions and polite and supportive.", "The additional discussion on aggressiveness task can be found in Kaggle task , which just divided the task into two classes - i.e., presence or absence of aggression in tweets.", "The informal setting/environment of social media often encourage multilingual speakers to switch back and forth between languages when speaking or writing. These all resulted in code-mixing and code-switching. Code-mixing refers to the use of linguistic units from different languages in a single utterance or sentence, whereas code-switching refers to the co-occurrence of speech extracts belonging to two different grammatical systemsBIBREF3. This language interchange makes the grammar more complex and thus it becomes tough to handle it by traditional algorithms. Thus the presence of high percentage of code-mixed content in social media text has increased the complexity of the aggression detection task. For example, the dataset provided by the organizers of TRAC-2018 BIBREF0, BIBREF2 is actually a code-mixed dataset.", "The massive increase of the social media data rendered the manual methods of content moderation difficult and costly. 
Machine Learning and Deep Learning methods to identify such phenomena have attracted more attention to the research community in recent yearsBIBREF4.", "Based on the current context, we can divide the problem into three sub-problems: (a) detection of aggression levels, (b) handling code-mixed data and (c) handling styles (due to differences in social media platforms and text entry rules/restrictions).", "A lot of the previous approachesBIBREF5 have used an ensemble model for the task. For example, some of them uses ensemble of statistical modelsBIBREF6, BIBREF7, BIBREF8, BIBREF9 some used ensemble of statistical and deep learning modelsBIBREF10, BIBREF11, BIBREF12 some used ensemble of deep learning models BIBREF13. There are approaches which proposed unified architecture based on deep learningBIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 while some proposed unified statistical modelBIBREF7. Additionally, there are some approaches uses data augmentation either through translation or labeling external data to make the model generalize across domainsBIBREF14, BIBREF10, BIBREF7.", "Most of the above-discussed systems either shows high performance on (a) Twitter dataset or (b) Facebook dataset (given in the TRAC-2018), but not on both English code-mixed datasets. This may be due to the text style or level of complexities of both datasets. So, we concentrated to develop a robust system for English code-mixed texts, and uni-lingual texts, which can also handle different writing styles. Our approach is based on three main ideas:", "Deep-Text Learning. The goal is to learn long range associations, dependencies between regions of text, N-grams, key-patterns, topical information, and sequential dependencies.", "Exploiting psycho-linguistic features with basic linguistic features as meta-data. The main aim is to minimize the direct dependencies on in-depth grammatical structure of the language (i.e., to support code-mixed data). We have also included emoticons, and punctuation features with it. We use the term \"NLP Features\" to represent it in the entire paper.", "Dual embedding based on FastText and Glove. This dual embedding helps in high vocabulary coverage and to capture the rare and partially incorrect words in the text (specially by FastText BIBREF20).", "Our \"Deep-text architecture\" uses model averaging strategy with three different deep learning architectures. Model averaging belongs to the family of ensemble learning techniques that uses multiple models for the same problem and combines their predictions to produce a more reliable and consistent prediction accuracy BIBREF21. This is the simplest form of weighted average ensemble based predictionBIBREF22 where, each ensemble member contribute equally to predictions. Specifically in our case, three different models have been used. The following contains the intuition behind the selection of these three models:", "Deep Pyramid CNN BIBREF23 being deeper helps to learn long range associations between temporal regions of text using two-view embeddings.", "Disconnected RNN BIBREF24 is very helpful in encoding the sequential information with temporal key patterns in the text.", "Pooled BiLSTM In this architecture the last hidden state of BiLSTM is concatenated with mean and max-pooled representation of the hidden states obtained over all the time steps of Bi-LSTM. 
The idea of using mean and max pooling layers together is taken from BIBREF25 to avoid the loss of information in longer sequences of texts and max-pooling is taken to capture the topical informationBIBREF26.", "NLP Features In each of the individual models, the NLP features are concatenated with last hidden state before the softmax classification layer as meta-data. The main aim is to provide additional information to the deep learning network.", "The intuition behind the NLP features are the following:", "Emotion Sensor Dataset We have introduced to use of emotion sensor features, as a meta-data information. We have obtained the word sensor dataset from Kaggle. In this dataset each word is statistically classified into 7 distinct classes (Disgust, Surprise, Neutral, Anger, Sad, Happy and Fear) using Naive Bayes, based on sentences collected from twitter and blogs.", "Controlled Topical Signals from Empath. Empath can analyse the text across 200 gold standard topics and emotions. Additionally, it uses neural embedding to draw connotation among words across more than 1.8 billion words. We have used only selected categories like violence, hate, anger, aggression, social media and dispute from 200 Empath categories useful for us unlikeBIBREF12 which takes 194 categories.", "Emoticons frequently used on social media indicates the sense of sentenceBIBREF17, BIBREF19, BIBREF9.", "Normalized frequency of POS tags According to BIBREF12, BIBREF11, BIBREF7, BIBREF15 POS Tags provide the degree of target aggressiveness. LikeBIBREF12, we have used only four tags (a) adjective (JJ, JJR, JJS), (b) adverb (RB, RBR, RBS), (c) verb (VB, VBD, VBG, VBN, VBP, VBZ) and (d) noun (NN, NNS, NNP, NNPS) (See Penn-Treebank POS Tags for abbreviations and the full list). The main reason behind the selection of these four tags is to just identify words related to persons, activities, quality, etc, in the text.", "Sentiment polarity obtained from VADER Sentiment Analysis BIBREF27 (positive, negative and neutral) like used in BIBREF15, BIBREF10, BIBREF11, BIBREF7. It helps to demarcate aggressiveness with non-aggressiveness in the text.", "The block diagram of the proposed system is shown in Figure FIGREF22. The proposed system does not use any data augmentation techniques like BIBREF14, which is the top performer in TRAC (in English code-mixed Facebook data). This means the performance achieved by our system totally depends on the training dataset provided by TRAC. This also proves the effectiveness of our approach. Our system outperforms all the previous state of the art approaches used for aggression identification on English code-mixed TRAC data, while being trained only from Facebook comments the system outperforms other approaches on the additional Twitter test set. The remaining part of this paper is organized as follows: Section SECREF2 is an overview of related work. Section SECREF3 presents the methodology and algorithmic details. Section SECREF4 discusses the experimental evaluation of the system, and Section SECREF5 concludes this paper." ], [ "There are several works for aggression identification submitted at TRAC 2018 among them some approaches use the ensemble of multiple statistical modelsBIBREF6, BIBREF7, BIBREF8, BIBREF9. Similarly, some of the models likeBIBREF10, BIBREF11, BIBREF12 have used ensemble of statistical and deep learning models. In these models the statistical part of the model uses additional features from text analysis like parts-of-speech tags, punctuation, emotion, emoticon etc. 
Model like: BIBREF13 has used the ensemble of deep learning models based on majority voting.", "Some other models like: BIBREF28, BIBREF12, BIBREF9 have used different models for Facebook and twitter. While approaches like:BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 have proposed unified architecture based on deep learning. Systems likeBIBREF14, BIBREF10, BIBREF7 have used data augmentation either through translation or labelling external data to make the model generalize across domains. While BIBREF7 has proposed a unified statistical model.", "Among approaches likeBIBREF6 extracted features from TF-IDF of character n-grams whileBIBREF28 uses LSTM with pre-trained embeddings from FastText. BIBREF15 have used the BiLSTM based model and the SVM metaclassifier model for the Facebook and Twitter test sets, respectively. While BIBREF13 tried ensembling of CNN, LSTM, and BILSTM.", "Some approaches like:BIBREF12 has used emotions frequency as one of the features, while some others use sentiment emotion as featureBIBREF11. Also,BIBREF17, BIBREF19 have converted emoticons to their description. BIBREF9 have used TF-IDF of emoticons per-class as one of the features. Compared to all these approaches, we have concentrated to capture multiple linguistic/pattern based relations, key-terms and key-patters (with their association in text) through a combination of deep learning architectures with model averaging. We have also used NLP features as additional features with our deep learning architecture, obtained from psycho-linguistic and basic linguistic features." ], [ "In this section, we describe our system architecture for aggressiveness classifier. In section SECREF23 we describe data preprocessing applied on the input text before feeding it to each of the classification models. Section SECREF26 describes the computation of NLP features. In Sections SECREF30, SECREF34 and SECREF45 we have described the architecture of different deep learning models like Deep Pyramid CNN, Disconnected RNN and Pooled BiLSTM respectively. Finally, in Section SECREF49, we describe model averaging based classification model which combines the prediction probabilities from three deep learninig architectures discussed above. (see Figure FIGREF22. for block diagram of system architecture)." ], [ "We consider the text to be well formatted before applying the text to the embedding layer. First, we detect non-English text(which are few) and translate all of them to English using Google Translate. Still, there is some code mixed words like \"mc\", \"bc\" and other English abbreviations and spelling errors like \"nd\" in place of \"and\", \"u\" in place of \"you\" causes deep learning model to confuse with sentences of the same meaning. We follow the strategy of preprocessor as inBIBREF17 to normalize the abbreviations and remove spelling errors, URLs and punctuation marks, converting emojis to their description.", "https://spacy.io/usage/linguistic-features#pos-tagging" ], [ "We have identified a novel combination of features which are highly effective in aggression classification when applied in addition to the features obtained from the deep learning classifier at the classification layer. We have introduced two new features in addition to the previously available features. The first one is the Emotion Sensor Feature which use a statistical model to classify the words into 7 different classes based on the sentences obtained from twitter and blogs which contain total 1,185,540 words. 
The second one is the collection of selected topical signal from text collected using Empath (see Table 1.).", "Different from previous approachesBIBREF8, BIBREF12 where BIBREF12 have used Emotion features in the form of frequency while BIBREF8 have used emotion feature vector obtained from LIWC 2007BIBREF30. UnlikeBIBREF12 we have used only 6 topical signals from EmapthBIBREF29. We have borrowed the idea of using other features like punctuation features and parts-of-speech tags from BIBREF12. The Table 1. lists and describes features, tools used to obtain them and the number of features resulted from each type." ], [ "Since it has been proved that CNNs are great feature extractors for text classificationBIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF23 while deeper networks(whether RNNs or CNN's) has been proven for learning long-range association like deeper character level CNN'sBIBREF36, BIBREF37, and complex combination of RNN and CNNBIBREF38, BIBREF39, BIBREF40, BIBREF41, BIBREF42. Deep Pyramid CNN (DPCNN)BIBREF23 has 15 layers of word-level CNN's and contains similar pre-activation as proposed in improved ResnetBIBREF43. DPCNN outperforms the 32-layer character CNNBIBREF37 and Hierarchical attention networksBIBREF42 it has added advantage that due to its pyramid structure it does not require dimension matching in shortcut connections defined as z + h(z) as inBIBREF43 where h(z) represents the skipped layers essentially contains two convolutional layers with pre-activation. It uses enhanced region embedding which consumes pre-trained embeddings (in our case it is FastText+Glove based dual embedding).", "Enhanced Region Embedding. The current DPCNNBIBREF23, uses two view type enhanced region embedding. For the text categorization, it defines a region of text as view-1 and its adjacent regions as view-2. Then using unlabeled data, it trains a neural network of one hidden layer with an artificial task of predicting view-2 from view-1. The obtained hidden layer, which is an embedding function that takes view-1 as input, serves as an unsupervised embedding function in the model for text categorization. The detailed architecture has been shown in Figure FIGREF29.", "Let each word input $x_j \\in R^d$ be the d-dimensional vector for the $j^{th}$ word $w_{j}$ and the sentence $s_i$ contains sequence of $n$ words $\\lbrace w_{1},w_{2},w_{3},......,w_{n}\\rbrace $ as shown in Figure FIGREF29. In comparision to conventional convolution layer, DPCNN proposes to use pre-activation, thus essentially the convolutional layer of DPCNN is $\\textbf {W}\\sigma (\\textbf {x})+\\textbf {b}$, where $\\textbf {W}$ and $\\textbf {b}$(unique to each layer) are the weights matrix and bias respectively, we use $\\sigma $ as PReLUBIBREF44. During implementation we use kernel size of 3(represented by $\\textbf {x}$ to denote the small overlapping regions of text.), The number of filters(number of feature maps denoted by the number of rows of $\\textbf {W}$) is 128 as depicted in Figure FIGREF29. With the number of filters same in each convolution layer and max-pooling with stride 2 makes the computation time halved, and doubles the net coverage of convolution kernel. Thus the deeper layers cause to learn long-range associations between regions of text. Let's say $h_{dpcnn} \\in R^{p_1}$ be the hidden state obtained from DPCNN just before the classification layer and $f_{nlp} \\in R^{24}$ be the NLP features computed from the text. 
Lets $z_1 \\in R^{p_1 + 24}$ be another hidden state obtained as", "where, $\\oplus $ denotes concatenation. The vector $z_1$ obtained, then fed to the fully connected layer with softmax activation. Let $y_{i1}^*$ be the softmax probabilities, specifically for class label $k$ is given as:", "where $K$ is the number of classes, $W_{dpcnn}$ and $b_{dpcnn}$ are the weight matrix and bias respectively." ], [ "Given a sequence $s_i = [x_{1}, x_{2}, x_{3},....x_{n}]$ where $x_{j} \\in R^d$ represents the d-dimensional word vector for word $w_{j}$ and $n$ is the length of input text applied to a variant of RNN called Long Short-Term Memory (LSTM)BIBREF45 as shown in Figure FIGREF33. It is widely used for sequential modelling with long-term dependencies. For sequence modelling it keeps on updating the memory cell with current input using an adaptive gating mechanism. At time step $t$ the memory $c_t$ and the hidden state $h_t$ are updated as follows:", "where $\\hat{c}_t$ is the current cell state obtained from current input $x_t$ and previous hidden state $h_{t-1}$, $i_t$, $f_t$ and $o_t$ are the activation corresponding to input gate, forget gate and output gate respectively, $\\sigma $ denotes the logistic sigmoid function and $\\odot $ denotes the element-wise multiplication. Hence the hidden state representation at time step $t$ depends on all the previous input vectors given as", "Specifically we have used Bi-directional LSTM BIBREF45 to capture both past and future context. It provides $h_t$ from both directions(forward & backward). The forward LSTM takes the natural order of words from $x_{1}$ to $x_{n}$ to obtain $\\overrightarrow{h_t}$, while backward-LSTM $x_{n}$ to $x_{1}$ to obtain $\\overleftarrow{h_t}$. then $h_t$ is calculated as", "where $\\oplus $ is the concatenation and $L$ is the size for one-directional LSTM. Therefore we denote the hidden state in equation DISPLAY_FORM37 with BiLSTM as", "To avoid handling of long sequence and to capture local information for each word we define the window size $k$ for each word such that the BiLSTM only sees the the previous $k-1$ words with the current word, where $k$ is a hyperparameterBIBREF24. We use padding <PAD> to make the slices of fixed size k(as shown in Figure FIGREF33). It provides each hidden state $h_t$ with sequence of $k$ previous words. Since the phrase of $k$ words can lie anywhere in the text it helps to model the position invariant phrase representation due to which the it identifies key phrases important for identifying particular category. In this case, the equation of $h_t$ is given as", "The output hidden vectors, $H = [h_1, h_2, h_3, ...... h_n] \\in R^{n \\times 2L}$ are converted to fixed-length vector $h_{drnn} \\in R^{2L}$ with max pooling over time:", "Let's say $f_{nlp} \\in R^{24}$ be the NLP features computed from the text. Let's $z_2 \\in R^{2L + 24}$ be another hidden state obtained as", "where $\\oplus $ denotes concatenation. The vector $z_2$ obtained, then fed to the fully connected layer with softmax activation. Let $y_{i2}^*$ be the softmax probabilities, specifically for class label $k$ is given as:", "where $K$ is the number of classes, $W_{drnn}$ is the weight matrix, and $b_{drnn}$ is the bias." ], [ "The architecture has been shown in Figure FIGREF44. Given a sequence $s_i = [x_{1}, x_{2}, x_{3}, ..... 
x_{j}]$, where $x_j \\in R^d$ is the d-dimensional word vector for word $w_j$, the hidden state obtained after BiLSTM is given as", "To avoid the loss of information because of modelling the entire sequence, we have concatenated the max-pooled($c_{max}$) and mean-pooled($c_{mean}$) representation of hidden states calculated over all time steps BIBREF25. We have also concatenated the nlp features, $f_{nlp} \\in R^{24}$ the final feature vector $z_{3}$ is given as", "where $\\oplus $ denotes concatenation. The final feature $z_3$ vector is fed to the fully connected layer with softmax activation. Let $y_{i3}^*$ be the softmax probablities, specifically for class label $k$ given as:", "where $K$ is the number of classes and $W_{bilstm}$ and $b_{bilstm}$ are the weight matrix and bias respectively." ], [ "According to deep learning literature BIBREF46, BIBREF47, BIBREF48, unweighted averaging might be a reasonable ensemble for similar base learners of comparable performance. Now, similar to the information discussed in BIBREF21, we can compute the model averaging (unweighted) by combining the softmax probabilities of three different classification models obtained from equations DISPLAY_FORM32, DISPLAY_FORM43, DISPLAY_FORM48. The averaged class probabilities are computed as:", "where K is the number of classes, and $\\hat{y_i}$ is the predicted label for sentence $s_i$." ], [ "We have used two datasets in our experimental evaluations: (1) TRAC 2018 Dataset and (2) Kaggle Dataset.", "TRAC 2018 Dataset: We have used the English code-mixed dataset provided by TRAC 2018. This dataset contains three labels, (a) Non-Aggressive(NAG), (b) Overtly-Aggressive (OAG) and (c) Covertly-Aggressive(CAG). The distribution of training, validation and test sets are described in Table TABREF56.", "Kaggle Dataset: This dataset contains 20001 tweets which are manually labeled. The labels are divided into two categories (indicating presence or absence of aggression in tweets) AGG(Aggressive) or NAG(Non-Aggressive). We have used the same test split available in the baseline code. The distribution for each of the training and test is given in Table TABREF56." ], [ "We have used Glove EmbeddingsBIBREF49 concatenated with FastText EmbeddingsBIBREF20 in all the three classification models presented in this paper. Specifically, we used Glove pre-trained vectors obtained from Twitter corpus containing 27 billion tokens and 1.2 million vocabulary entries where each word is represented using 100-dimensional vector. In the case of FastText the word is represented using 300-dimensional vector. Also, we have applied spatial dropoutBIBREF50 of 0.3 at embedding layer for DPCNN(in section SECREF30) and Pooled BiLSTM(in section SECREF45). For DPCNN model(in SECREF30) we have learnt 128-dimensional vector representation for unsupervised embeddings implicitly for task specific representation as in BIBREF23. Additionally, for DPCNN all the convolutional layers used 128 filters, kernel size of 3 and max-pooling stride 2. Additionally, in the case of DPCNN we have used kernel and bias regularizer of value 0.00001 for all convolutional kernels. The pre-activation function used in DPCNN is Parametric ReLU (PReLU) proposed in BIBREF44 while the activation at each of the convolutional kernel is linear. For, DRNN(in section SECREF34) we have used the window size of 8 and rest of the parameters related to LSTM units are same as given inBIBREF24. For, Pooled BiLSTM(in section SECREF45) we have used LSTM hidden units size as 256. 
The maximum sequence length is 200 in all three models. In each of the classification model the classification layer contains the fully connected layer with softmax activation with output size of 3 equal to number of classes in case of TRAC 2018 dataset and its 2 in case of Kaggle dataset. Training has been done using ADAM optimizerBIBREF51 for DPCNN and RMSPROPBIBREF52 for DRNN and Pooled Bi-LSTM models. All the models are trained end-to-end using softmax cross entropy lossBIBREF53 for TRAC 2018 dataset and binary cross entropy lossBIBREF53 for Kaggle dataset.", "To train our model for TRAC 2018 dataset, we merged the training and validation dataset and then used 10% split from shuffled dataset to save the best model, for all classifiers. We have used only 20 NLP features (except TF-IDF Emoticon feature and Punctuation feature as given in Table TABREF25) for Kaggle dataset (as these are not present in the Kaggle dataset)." ], [ "To compare our experimental results we have used top-5 systems from the published results of TRAC-2018BIBREF5. To compare our results on Kaggle dataset, we have used the last & the best published result on Kaggle website as a baseline. We have conducted the separate experiments, to properly investigate the performance of (a) each of the classifiers (used in our model averaging based system), (b) impact of the NLP features on each of these classifiers and finally, (c) the performance of our proposed system. In Table TABREF57, TABREF57 and TABREF57, models, named as DPCNN(ref SECREF30), DRNN (ref SECREF34) and Pooled BiLSTM(ref SECREF45) are corresponding models without NLP features. Similarly, DPCNN+NLP Features, DRNN + NLP Features and Pooled BiLSTM + NLP Features are corresponding models with NLP features. The Model Averaging (A+B+C) is the ensemble of three models (i.e., model averaging of DPCNN, DRNN and Pooled BiLSTM) without NLP features. Finally, Our Proposed Method, which represents the model averaging of three models with NLP features." ], [ "In this paper, we have evaluated our model using weighted macro-averaged F-score. The measure is defined as in (See BIBREF5, BIBREF2). It weights the F-score computed per class based on the class composition in the test set and then takes the average of these per-class F-score gives the final F-score. Table TABREF57, TABREF57 and TABREF57. presents the comparative experimental results for the proposed method in this paper with respect to the state-of-the-art. The top 5 modelsBIBREF5 given in Table TABREF57 and TABREF57. are the best performing models for Facebook and Twitter test dataset respectively on TRAC 2018. We have followed all the experimental guidelines as discussed in TRAC contest guideline paperBIBREF2, BIBREF5. From the results given in Table TABREF57, TABREF57 and TABREF57 it is clear that our proposed model shows the best performance among all of the approaches. These results also state that all the deep learning architectures with NLP features, perform better than individual corresponding deep learning architectures. This means NLP features, adds some value to the architectures, even if it is not very high.", "" ], [ "In this paper, we have briefly described the approach we have taken to solve the aggressive identification on online social media texts which is very challenging since the dataset is noisy and code-mixed. 
We presented an ensemble of deep learning models that outperforms previous approaches by a sufficient margin while retaining the ability to generalize across domains.", "In future work, we will explore other methods to deepen the understanding of deep learning models on group-targeted text; although the categories are well defined, we will also examine whether fine-tuning the categories further with more data is beneficial. We also plan to focus on a generalized language model for code-mixed texts that can handle Hindi code-mixed and other multilingual code-mixed datasets (i.e., reducing the dependency on language-specific code-mixed resources)." ] ] }
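For illustration, the unweighted model-averaging step described in the classification-model section above can be sketched as follows. This is a minimal sketch under our own naming, not the authors' code: it assumes each base model (DPCNN, DRNN, Pooled BiLSTM) has already produced per-class softmax probabilities.

```python
import numpy as np

def average_predictions(prob_dpcnn, prob_drnn, prob_pooled_bilstm):
    # Unweighted model averaging: mean of the per-class softmax outputs
    # of the three base models, followed by an argmax over classes.
    stacked = np.stack([prob_dpcnn, prob_drnn, prob_pooled_bilstm], axis=0)
    avg_probs = stacked.mean(axis=0)            # shape: (n_samples, n_classes)
    return avg_probs, avg_probs.argmax(axis=1)  # predicted label per sample

# Hypothetical usage with one sample over the three TRAC classes (OAG, CAG, NAG)
p1 = np.array([[0.6, 0.3, 0.1]])
p2 = np.array([[0.5, 0.2, 0.3]])
p3 = np.array([[0.4, 0.4, 0.2]])
probs, labels = average_predictions(p1, p2, p3)  # labels -> array([0]), i.e. OAG
```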
{ "question": [ "What is English mixed with in the TRAC dataset?", "Which psycholinguistic and basic linguistic features are used?", "How have the differences in communication styles between Twitter and Facebook increase the complexity of the problem?", "What are the key differences in communication styles between Twitter and Facebook?", "What data/studies do the authors provide to support the assertion that the majority of aggressive conversations contain code-mixed languages?" ], "question_id": [ "5845d1db7f819dbadb72e7df69d49c3f424b5730", "e829f008d62312357e0354a9ed3b0827c91c9401", "54fe8f05595f2d1d4a4fd77f4562eac519711fa6", "61404466cf86a21f0c1783ce535eb39a01528ce8", "fbe5e513745d723aad711ceb91ce0c3c2ceb669e" ], "nlp_background": [ "five", "five", "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ "twitter", "twitter", "twitter", "twitter", "twitter" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Hindi" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In future, we will explore other methods to increase the understanding of deep learning models on group targeted text, although the categories are well defined we will look after if we further fine-tune the categories with more data. In the future, we are planning to pay attention on a generalized language model for code-mixed texts which can also handle Hindi-code-mixed and other multi-lingual code-mixed datasets (i.e., trying to reduce the dependencies on language-specific code-mixed resources).", "The block diagram of the proposed system is shown in Figure FIGREF22. The proposed system does not use any data augmentation techniques like BIBREF14, which is the top performer in TRAC (in English code-mixed Facebook data). This means the performance achieved by our system totally depends on the training dataset provided by TRAC. This also proves the effectiveness of our approach. Our system outperforms all the previous state of the art approaches used for aggression identification on English code-mixed TRAC data, while being trained only from Facebook comments the system outperforms other approaches on the additional Twitter test set. The remaining part of this paper is organized as follows: Section SECREF2 is an overview of related work. Section SECREF3 presents the methodology and algorithmic details. Section SECREF4 discusses the experimental evaluation of the system, and Section SECREF5 concludes this paper.", "The fine-grained definition of the aggressiveness/aggression identification is provided by the organizers of TRAC-2018 BIBREF0, BIBREF2. They have classified the aggressiveness into three labels (Overtly aggressive(OAG), Covertly aggressive(CAG), Non-aggressive(NAG)). The detailed description for each of the three labels is described as follows:", "Overtly Aggressive(OAG) - This type of aggression shows direct verbal attack pointing to the particular individual or group. For example, \"Well said sonu..you have courage to stand against dadagiri of Muslims\".", "Covertly Aggressive(CAG) - This type of aggression the attack is not direct but hidden, subtle and more indirect while being stated politely most of the times. 
For example, \"Dear India, stop playing with the emotions of your people for votes.\"", "Non-Aggressive(NAG) - Generally these type of text lack any kind of aggression it is basically used to state facts, wishing on occasions and polite and supportive." ], "highlighted_evidence": [ " In the future, we are planning to pay attention on a generalized language model for code-mixed texts which can also handle Hindi-code-mixed and other multi-lingual code-mixed datasets (i.e., trying to reduce the dependencies on language-specific code-mixed resources).", "Our system outperforms all the previous state of the art approaches used for aggression identification on English code-mixed TRAC data, while being trained only from Facebook comments the system outperforms other approaches on the additional Twitter test set.", "The fine-grained definition of the aggressiveness/aggression identification is provided by the organizers of TRAC-2018 BIBREF0, BIBREF2. They have classified the aggressiveness into three labels (Overtly aggressive(OAG), Covertly aggressive(CAG), Non-aggressive(NAG)). The detailed description for each of the three labels is described as follows:\n\nOvertly Aggressive(OAG) - This type of aggression shows direct verbal attack pointing to the particular individual or group. For example, \"Well said sonu..you have courage to stand against dadagiri of Muslims\".\n\nCovertly Aggressive(CAG) - This type of aggression the attack is not direct but hidden, subtle and more indirect while being stated politely most of the times. For example, \"Dear India, stop playing with the emotions of your people for votes.\"\n\nNon-Aggressive(NAG) - Generally these type of text lack any kind of aggression it is basically used to state facts, wishing on occasions and polite and supportive." ] } ], "annotation_id": [ "bf48a718d94133ed24e7ea54cb050ffaa688cf7b" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Emotion Sensor Feature, Part of Speech, Punctuation, Sentiment Analysis, Empath, TF-IDF Emoticon features", "evidence": [ "Exploiting psycho-linguistic features with basic linguistic features as meta-data. The main aim is to minimize the direct dependencies on in-depth grammatical structure of the language (i.e., to support code-mixed data). We have also included emoticons, and punctuation features with it. We use the term \"NLP Features\" to represent it in the entire paper.", "We have identified a novel combination of features which are highly effective in aggression classification when applied in addition to the features obtained from the deep learning classifier at the classification layer. We have introduced two new features in addition to the previously available features. The first one is the Emotion Sensor Feature which use a statistical model to classify the words into 7 different classes based on the sentences obtained from twitter and blogs which contain total 1,185,540 words. The second one is the collection of selected topical signal from text collected using Empath (see Table 1.).", "FLOAT SELECTED: Table 1: Details of NLP features" ], "highlighted_evidence": [ "Exploiting psycho-linguistic features with basic linguistic features as meta-data. The main aim is to minimize the direct dependencies on in-depth grammatical structure of the language (i.e., to support code-mixed data). We have also included emoticons, and punctuation features with it. 
We use the term \"NLP Features\" to represent it in the entire paper.", "We have identified a novel combination of features which are highly effective in aggression classification when applied in addition to the features obtained from the deep learning classifier at the classification layer. We have introduced two new features in addition to the previously available features. The first one is the Emotion Sensor Feature which use a statistical model to classify the words into 7 different classes based on the sentences obtained from twitter and blogs which contain total 1,185,540 words. The second one is the collection of selected topical signal from text collected using Empath (see Table 1.).", "FLOAT SELECTED: Table 1: Details of NLP features" ] } ], "annotation_id": [ "1e6b29722f6026e7890d7fb1ebfae4bc024cf62c" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Systems do not perform well both in Facebook and Twitter texts", "evidence": [ "Most of the above-discussed systems either shows high performance on (a) Twitter dataset or (b) Facebook dataset (given in the TRAC-2018), but not on both English code-mixed datasets. This may be due to the text style or level of complexities of both datasets. So, we concentrated to develop a robust system for English code-mixed texts, and uni-lingual texts, which can also handle different writing styles. Our approach is based on three main ideas:" ], "highlighted_evidence": [ "Most of the above-discussed systems either shows high performance on (a) Twitter dataset or (b) Facebook dataset (given in the TRAC-2018), but not on both English code-mixed datasets. This may be due to the text style or level of complexities of both datasets." ] } ], "annotation_id": [ "1bd72739177cfb9c85bfecdba849231b7893062f" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "4bd6100879ff88f69bd3197930b3035fe4463808" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "None", "evidence": [ "The informal setting/environment of social media often encourage multilingual speakers to switch back and forth between languages when speaking or writing. These all resulted in code-mixing and code-switching. Code-mixing refers to the use of linguistic units from different languages in a single utterance or sentence, whereas code-switching refers to the co-occurrence of speech extracts belonging to two different grammatical systemsBIBREF3. This language interchange makes the grammar more complex and thus it becomes tough to handle it by traditional algorithms. Thus the presence of high percentage of code-mixed content in social media text has increased the complexity of the aggression detection task. For example, the dataset provided by the organizers of TRAC-2018 BIBREF0, BIBREF2 is actually a code-mixed dataset." ], "highlighted_evidence": [ "The informal setting/environment of social media often encourage multilingual speakers to switch back and forth between languages when speaking or writing. These all resulted in code-mixing and code-switching. 
Code-mixing refers to the use of linguistic units from different languages in a single utterance or sentence, whereas code-switching refers to the co-occurrence of speech extracts belonging to two different grammatical systemsBIBREF3. This language interchange makes the grammar more complex and thus it becomes tough to handle it by traditional algorithms. Thus the presence of high percentage of code-mixed content in social media text has increased the complexity of the aggression detection task. For example, the dataset provided by the organizers of TRAC-2018 BIBREF0, BIBREF2 is actually a code-mixed dataset." ] } ], "annotation_id": [ "ee7f69ecf994d51d3535bf22d0004da1740e43cc" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] } ] }
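As a concrete illustration of the "NLP Features" referenced in the answers above (emotion-sensor scores, selected Empath topical signals, coarse POS-tag frequencies, and VADER sentiment polarity), a rough feature-extraction sketch follows. It is an assumption-laden sketch rather than the authors' pipeline: the emotion-sensor table must be loaded separately from the Kaggle resource, the Empath category identifiers follow the paper's wording and may differ from the library's exact names, coarse universal POS groups stand in for the Penn Treebank tags, and the emoticon/punctuation counts from Table 1 are omitted.

```python
# Sketch only: helper names and feature ordering are ours, not the authors'.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from empath import Empath
import spacy

nlp = spacy.load("en_core_web_sm")
vader = SentimentIntensityAnalyzer()
empath = Empath()
EMPATH_CATS = ["violence", "hate", "anger", "aggression", "social_media", "dispute"]
POS_GROUPS = ["ADJ", "ADV", "NOUN", "VERB"]   # coarse stand-ins for JJ*/RB*/NN*/VB*

def nlp_features(text, emotion_sensor):
    """emotion_sensor: dict word -> list of 7 class scores (Disgust, Surprise,
    Neutral, Anger, Sad, Happy, Fear), loaded from the Kaggle emotion-sensor data."""
    doc = nlp(text)
    n = max(len(doc), 1)
    # 7 emotion-sensor scores averaged over tokens
    emo = [sum(emotion_sensor.get(t.lower_, [0.0] * 7)[i] for t in doc) / n
           for i in range(7)]
    # 6 selected Empath topical signals, normalized by document length
    emp_raw = empath.analyze(text, categories=EMPATH_CATS, normalize=True) or {}
    emp = [emp_raw.get(c, 0.0) for c in EMPATH_CATS]
    # 4 normalized POS-group frequencies
    pos = [sum(t.pos_ == g for t in doc) / n for g in POS_GROUPS]
    # 3 VADER polarity scores
    scores = vader.polarity_scores(text)
    sent = [scores["pos"], scores["neg"], scores["neu"]]
    return emo + emp + pos + sent   # 20 of the 24 features; emoticon/punctuation omitted
```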
{ "caption": [ "Figure 1: Block diagram of the proposed system", "Table 1: Details of NLP features", "Figure 2: DPCNN", "Figure 3: DRNN", "Figure 4: Pooled BiLSTM", "Table 2: TRAC 2018, Details of English Code-Mixed Dataset", "Table 6: Results on Kaggle Test Dataset", "Figure 5: Confusion Matrix for Facebook, Twitter and Kaggle Datasets." ], "file": [ "3-Figure1-1.png", "4-Table1-1.png", "5-Figure2-1.png", "5-Figure3-1.png", "6-Figure4-1.png", "6-Table2-1.png", "7-Table6-1.png", "8-Figure5-1.png" ] }
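A minimal Keras sketch of the pooled-BiLSTM branch summarized in Figure 4 is given below; the vocabulary size, the concatenated GloVe+FastText embedding matrix, and the regularization details are assumptions based on the experimental-settings section rather than the authors' released code.

```python
from tensorflow.keras import layers, models, optimizers

MAX_LEN, EMB_DIM, VOCAB = 200, 400, 50000  # 400 = GloVe(100) + FastText(300); VOCAB is a placeholder

tokens = layers.Input(shape=(MAX_LEN,), name="tokens")
nlp_feats = layers.Input(shape=(24,), name="nlp_features")

x = layers.Embedding(VOCAB, EMB_DIM)(tokens)       # would be seeded with the dual embeddings
x = layers.SpatialDropout1D(0.3)(x)
seq, fwd_h, fwd_c, bwd_h, bwd_c = layers.Bidirectional(
    layers.LSTM(256, return_sequences=True, return_state=True))(x)

last = layers.Concatenate()([fwd_h, bwd_h])                       # last hidden state, both directions
z3 = layers.Concatenate()([last,
                           layers.GlobalMaxPooling1D()(seq),      # c_max
                           layers.GlobalAveragePooling1D()(seq),  # c_mean
                           nlp_feats])                            # f_nlp
out = layers.Dense(3, activation="softmax")(z3)                   # OAG / CAG / NAG

model = models.Model([tokens, nlp_feats], out)
model.compile(optimizer=optimizers.RMSprop(),
              loss="categorical_crossentropy", metrics=["accuracy"])
```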
1908.09951
An Emotional Analysis of False Information in Social Media and News Articles
Fake news is risky since it is created to manipulate readers' opinions and beliefs. In this work, we compared the language of false news to that of real news from an emotional perspective, considering a set of false information types (propaganda, hoax, clickbait, and satire) from social media and online news article sources. Our experiments showed that each type of false information exhibits a different emotional pattern and that emotions play a key role in deceiving the reader. Based on this, we proposed an emotionally-infused LSTM neural network model to detect false news.
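Before the full model description that follows, here is a minimal sketch of the lexicon-based emotion representation the model consumes: for every emotional lexicon, the frequency of each of its emotions in a document is counted and normalized by the document length, and the per-lexicon vectors are concatenated. The lexicon loaders are placeholders; only the construction logic follows the paper.

```python
from collections import Counter

def emotion_vector(tokens, lexicons):
    """tokens: lower-cased words of one document.
    lexicons: dict {name: {word: [emotion labels]}} - placeholder loaders for
    EmoSenticNet, EmoLex, SentiSense, LIWC and Empath."""
    n = max(len(tokens), 1)
    vec = []
    for name, lex in lexicons.items():
        emotions = sorted({e for labels in lex.values() for e in labels})
        counts = Counter(e for w in tokens for e in lex.get(w, []))
        vec.extend(counts[e] / n for e in emotions)  # word frequency, length-normalized
    return vec  # dimension q = sum over lexicons of their number of emotions
```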
{ "section_name": [ "Introduction", "Introduction ::: Hypothesis", "Related Work", "Emotionally-infused Model", "Emotionally-infused Model ::: Emotional Lexicons", "Emotionally-infused Model ::: Model", "Emotionally-infused Model ::: Input Representation", "Evaluation Framework ::: Datasets", "Evaluation Framework ::: Datasets ::: News Articles", "Evaluation Framework ::: Datasets ::: Twitter", "Evaluation Framework ::: Baselines", "Experiments and Results ::: Emotion-based Model", "Experiments and Results ::: Emotionally-Infused Model", "Experiments and Results ::: EIN as Clickbaits Detector", "Discussion", "Conclusions and Future Work" ], "paragraphs": [ [ "With the complicated political and economic situations in many countries, some agendas are publishing suspicious news to affect public opinions regarding specific issues BIBREF0. The spreading of this phenomenon is increasing recently with the large usage of social media and online news sources. Many anonymous accounts in social media platforms start to appear, as well as new online news agencies without presenting a clear identity of the owner. Twitter has recently detected a campaign organized by agencies from two different countries to affect the results of the last U.S. presidential elections of 2016. The initial disclosures by Twitter have included 3,841 accounts. A similar attempt was done by Facebook, as they detected coordinated efforts to influence U.S. politics ahead of the 2018 midterm elections.", "False information is categorized into 8 types according to BIBREF1. Some of these types are intentional to deceive where others are not. In this work, we are interested in analyzing 4 main types, i.e. hoaxes, propagandas, clickbaits, and satires. These types can be classified into two main categories - misinformation and disinformation - where misinformation considers false information that is published without the intent to deceive (e.g. satire). Disinformation can be seen as a specific kind of false information with the aim to mislead the reader (e.g. hoax, propaganda, and clickbait). Propagandas are fabricated stories spread to harm the interest of a particular party. Hoaxes are similar to propagandas but the main aim of the writer is not to manipulate the readers' opinions but to convince them of the validity of a paranoia-fueled story BIBREF2. Clickbait is another type of disinformation that refers to the deliberate use of misleading headlines, thumbnails, or stories' snippets to redirect attention (for traffic attention). Satire is the only type of misinformation, where the writer's main purpose is not to mislead the reader, but rather to deliver the story in an ironic way (to entertain or to be sarcastic).", "The topic of fake news is gaining attention due to its risky consequences. A vast set of campaigns has been organized to tackle fake news. The owner of Wikipedia encyclopedia created the news site WikiTribune to encourage the evidence-based journalism.", "Another way of addressing this issue is by fact-checking websites. These websites like politifact.com, snopes.com and factchecking.org aim to debunk false news by manually assess the credibility of claims that have been circulated massively in online platforms. These campaigns were not limited to the English language where other languages such as Arabic have been targeted by some sites like fatabyyano.net." ], [ "Trusted news is recounting its content in a naturalistic way without attempting to affect the opinion of the reader. 
On the other hand, false news is taking advantage of the presented issue sensitivity to affect the readers' emotions which sequentially may affect their opinions as well. A set of works has been done previously to investigate the language of false information. The authors in BIBREF3 have studied rumours in Twitter. They have investigated a corpus of true and false tweets rumours from different aspects. From an emotional point of view, they found that false rumours inspired fear, disgust, and surprise in their replies while the true ones inspired joy and anticipation. Some kinds of false information are similar to other language phenomena. For example, satire by its definition showed similarity with irony language. The work in BIBREF4 showed that affective features work well in the detection of irony. In addition, they confirmed that positive words are more relevant for identifying sarcasm and negative words for irony BIBREF5. The results of these works motivate us to investigate the impact of emotions on false news types. These are the research questions we aim to answer:", "RQ1 Can emotional features help detecting false information?", "RQ2 Do the emotions have similar importance distributions in both Twitter and news articles sources?", "RQ3 Which of the emotions have a statistically significant difference between false information and truthful ones?", "RQ4 What are the top-N emotions that discriminate false information types in both textual sources?", "In this work, we investigate suspicious news in two different sources: Twitter and online news articles. Concerning the news articles source, we focus on the beginning part of them, since they are fairly long, and the emotional analysis could be biased by their length. We believe that the beginning part of false news articles can present a unique emotional pattern for each false information type since the writer in this part is normally trying to trigger some emotions in the reader.", "Throughout the emotional analysis, we go beyond the superficial analysis of words. We hope that our findings in this work will contribute to fake news detection.", "The key contributions of this article are:", "Model: We propose an approach that combines emotional information from documents in a deep neural network. We compare the obtained results with a set of baselines. The results show that our approach is promising.", "Analysis: We show a comprehensive analysis on two false information datasets collected from social media and online news articles, based on a large set of emotions. We compare the differences from an affective perspective in both sources, and obtain valuable insights on how emotions can contribute to detect false news.", "The rest of the paper is structured as follows; After a brief review of related work in Section SECREF2, Section SECREF3 introduces our emotionally-infused model. Then, we present the evaluation framework in Section SECREF4. Section SECREF5 reports the experiments and the results, followed by an analysis on the false information types from emotional perspective in Section SECREF6. Finally, the conclusions of this work are summarized in Section SECREF7." ], [ "The work that has been done previously on the analysis of false information is rather small regarding the approaches that were proposed. In this section, we present some recent works on the language analysis and detection of false information. Recent attempts tried to analyze the language of false news to give a better understanding. 
A work done in BIBREF6 has studied the false information in Twitter from a linguistic perspective. The authors found that real tweets contain significantly fewer bias markers, hedges, subjective terms, and less harmful words. They also found that propaganda news targets morals more than satires and hoaxes but less than clickbaits. Furthermore, satirical news contains more loyalty and fewer betrayal morals compared to propaganda. In addition, they built a model that combined a set of features (graph-based, cues words, and syntax) and achieved a good performance comparing to other baselines (71% vs. 59% macro-F1). Another similar work BIBREF2 has been done to characterize the language of false information (propaganda, hoax, and satire) in online news articles. The authors have studied the language from different perspectives: the existence of weak and strong subjectivity, hedges, and the degree of dramatization using a lexicon from Wiktionary. As well, they employed in their study the LIWC dictionary to exploit the existence of personal pronouns, swear, sexual, etc. words. The results showed that false news types tend to use first and second personal pronouns more than truthful news. Moreover, the results showed that false news generally uses words to exaggerate (subjectives, superlatives, and modal adverbs), and specifically, the satire type uses more adverbs. Hoax stories tend to use fewer superlatives and comparatives, and propagandas use relatively more assertive verbs. Moving away from these previous false information types, the work in BIBREF3 has focused on analyzing rumours in Twitter (from factuality perspective: True or False). They analyzed about 126,000 rumours and found that falsehood widespread significantly further, faster, deeper, and more broadly than truth in many domains. In addition, they found that false rumours are more novel than truthful ones, which made people more likely to share them. From an emotional perspective, they found that false rumours triggered \"fear\", \"disgust\", and \"surprise\" in replies while truthful ones triggered \"anticipation\", \"sadness\", \"joy\", and \"trust\". Another work BIBREF7 has studied the problem of detecting hoaxes by analyzing features related to the content in Wikipedia. The work showed that some features like hoaxes articles' length as well as the ratio of wiki markups (images, references, links to other articles and to external URLs, etc.) are important to discriminate hoaxes from legitimate articles. Many approaches have been proposed on fake news detection. In general, they are divided into social media and news claims-based approaches. The authors in BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 have proposed supervised methods using recurrent neural networks or by extracting manual features like a set of regular expressions, content-based, network-based etc. As an example, the work by BIBREF13 assessed the credibility of tweets by analyzing trending topics. They used message-based, user-based, and propagation-based features, and they found that some features related to the user information like user's age, number of followers, statuse counts etc. have helped the most to discriminate truthful from deceitful tweets. Other news claims-based approaches BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18 have been mainly focusing on inferring the credibility of the claims by retrieving evidences from Google or Bing search engines. These approaches have employed a different set of features starting from manual features (e.g. 
cosine similarity between the claims and the results, Alexa Rank of the evidence source, etc.) to a fully automatic approach using deep learning networks. A recent trend started to appear and is trying to approach the detection of fake news from a stance perspective. The aim is to predict how other articles orient to a specific fact BIBREF19, BIBREF20, BIBREF21." ], [ "In this section we describe the Emotionally-Infused Network we propose (EIN)." ], [ "Several emotional models well-grounded in psychology science have been proposed, such as the ones by Magda Arnold BIBREF22, Paul Ekman BIBREF23, Robert Plutchik BIBREF24, and Gerrod Parrot BIBREF25. On the basis of each of them, many emotional resources (lexicons) were built in the literature. In this work, we consider several emotional resources to increase the coverage of the emotional words in texts as well to have a wider range of emotions in the analysis. Concretely, we use EmoSenticNet, EmoLex, SentiSense, LIWC and Empath:", "EmoSenticNet BIBREF26 is a lexical resource that assigns WordNet-Affect emotion labels to SenticNet concepts. It has a total of 13,189 entries annotated using the six Ekman's basic emotions.", "EmoLex BIBREF27 is a word-emotion association lexicon that is labeled using the eight Plutchik's emotions. This lexicon contains 14,181 words.", "SentiSense BIBREF28 is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. SentiSense has 5,496 words labeled with emotions from a set of 14 emotional categories, which is an edited version of the merge between Arnold, Plutchik, and Parrott models.", "LIWC BIBREF29 is a linguistic dictionary that contains 4,500 words categorized to analyze psycholinguistic patterns in text. Linguistic Inquiry and Word Count (LIWC) has 4 emotional categories: \"sadness\", \"anger\", \"positive emotion\", and \"negative emotion\".", "Empath BIBREF30 is a tool that uses deep learning and word embeddings to build a semantically meaningful lexicon for concepts. Empath uses Parrott's model for the emotional representation, but we use only the primary emotions (6 emotions) in the Pattrott's hierarchy (\"love\", \"joy\", \"surprise\", \"anger\", \"sadness\", \"fear\").", "In our study we consider the 17 emotions that we shown in Figure FIGREF14." ], [ "We choose an Long short-term memory (LSTM) BIBREF31 that takes the sequence of words as input and predicts the false information type. The input of our network is based on word embedding (content-based) and emotional features (see Figure FIGREF24)." ], [ "Our network consists of two branches. In the content-based one, we use an embedding layer followed by a LSTM layer. Then, we add an attention layer BIBREF32 to make this branch focus on (highlighting) particular words over others . The attention mechanism assigns a weight to each word vector result from the LSTM layer with a focus on the classification class. The input representation for this branch is represented as follows: the input sentence $S$ of length $n$ is represented as $[S\\textsubscript {1}, S\\textsubscript {2} .. S\\textsubscript {n}]$ where $S\\textsubscript {n} \\in {\\rm I\\!R}^d$; ${\\rm I\\!R}^d$ is a d-dimensional word embedding vector of the $i$-th word in the input sentence. The output vectors of the words are passed to the LSTM layer, where the LSTM learns the hidden state $h\\textsubscript {t}$ by capturing the previous timesteps (past features). 
The produced hidden state $h\\textsubscript {t}$ at each time step is passed to the attention layer which computes a \"context\" vector $c\\textsubscript {t}$ as the weighted mean of the state sequence $h$ by:", "Where $T$ is the total number of timesteps in the input sequence and $\\alpha \\textsubscript {tj}$ is a weight computed at each time step $j$ for each state hj. This output vector is then concatenated with the output from the densea (see Figure FIGREF24) layer and passed to the denseb layer, which precedes a final Softmax function to predict the output classes. Since the content-based branch is concatenated with the other emotional-based branch.", "On the other hand, the input representation for the emotional-based branch is defined as follows: we have $N$ emotional lexicons $L\\textsubscript {n}$ where $n\\in [1, 5]$, each lexicon has $M$ number of emotions depending on the emotion model that the lexicon uses (e.g. Plutchik, Arnold, etc.). The emotion vector $E\\textsubscript {m}$ of an input document using the $n$-th emotional lexicon is $L\\textsubscript {n}E\\textsubscript {m}$. In our implementation, the emotional vector $E\\textsubscript {m}$ of a Lexicon $L\\textsubscript {n}$ is built using word frequency and normalized by the input sentence's length. Each input sentence is represented using:", "Where $v \\in {\\rm I\\!R}^q$ and $q$ is:" ], [ "Annotated data is a crucial source of information to analyze false information. Current status of previous works lacks available datasets of false information, where the majority of the works focus on annotating datasets from a factuality perspective. However, to analyze the existence of emotions across different sources of news, we rely on two publicly available datasets and a list contains suspicious Twitter accounts." ], [ "Our dataset source of news articles is described in BIBREF2. This dataset was built from two different sources, for the trusted news (real news) they sampled news articles from the English Gigaword corpus. For the false news, they collected articles from seven different unreliable news sites. These news articles include satires, hoaxes, and propagandas but not clickbaits. Since we are interested also in analyzing clickbaits, we slice a sample from an available clickbait dataset BIBREF33 that was originally collected from two sources: Wikinews articles' headlines and other online sites that are known to publish clickbaits. The satire, hoax, and propaganda news articles are considerably long (some of them reach the length of 5,000 words). This length could affect the quality of the analysis as we mentioned before. We focus on analyzing the initial part of the article. Our intuition is that it is where emotion-bearing words will be more frequent. Therefore, we shorten long news articles into a maximum length of N words (N=300). We choose the value of N based on the length of the shortest articles. Moreover, we process the dataset by removing very short articles, redundant articles or articles that do not have a textual content." ], [ "For this dataset, we rely on a list of several Twitter accounts for each type of false information from BIBREF6. This list was created based on public resources that annotated suspicious Twitter accounts. The authors in BIBREF6 have built a dataset by collecting tweets from these accounts and they made it available. For the real news, we merge this list with another 32 Twitter accounts from BIBREF34. 
In this work we could not use the previous dataset and we decide to collect tweets again. For each of these accounts, we collected the last M tweets posted (M=1000). By investigating these accounts manually, we found that many tweets just contain links without textual news. Therefore, to ensure of the quality of the crawled data, we chose a high value for M (also to have enough data). After the collecting process, we processed these tweets by removing duplicated, very short tweets, and tweets without textual content. Table TABREF35 shows a summary for both datasets." ], [ "Emotions have been used in many natural language processing tasks and they showed their efficiency BIBREF35. We aim at investigating their efficiency to detect false information. In addition to EIN, we created a model (Emotion-based Model) that uses emotional features only and compare it to two baselines. Our aim is to investigate if the emotional features independently can detect false news. The two baselines of this model are Majority Class baseline (MC) and the Random selection baseline (RAN).", "For the EIN model, we compare it to different baselines: a) The first one is bag-of-words with a support vector machine classifier (BOW-SVM). We test different classifiers, and we choose SVM since it gives the highest result in the 10-fold Cross Validation (CV); b) We use another baseline that is based on word embeddings where for each input document we extract an average word embedding vector by taking the mean of the embeddings for the document's words. Similarly, we test different classifiers and the Logistic Regression classifier shows the best performance (WE-LR); c) The last baseline is the same as our neural architecture but without the emotional features branch: an LSTM layer followed by attention and dense layers." ], [ "In our experiments, we use $20\\%$ of each of the datasets for testing and we apply 10-fold cross-validation on the remain part for selecting the best classifier as well for tuning it. We tested many classifiers and we finally choose Random Forest for both datasets since it obtained the best results. Table TABREF39 presents the classification results on both datasets.", "The results in both datasets show that emotional features clearly detect false news, compared to the baselines (RQ1). The emotional features perform better in the news articles dataset compared with these of tweets. We are interested in investigating also how good are the emotional features in detecting each class comparing to the RAN baseline. We choose the RAN baseline since it shows better results with regard to macro-F1 score. For doing so, we investigated the True Positive (TP) classification ratio for each class in each dataset.", "The clickbait class shows the highest TPs comparing to the other classes. From this we can infer that clickbaits exploit emotions much more than the other classes to deceive the reader. It is worth to mention that for the hoax class the proposed approach is better than the random baselines with a small ratio ($4\\%$ difference). This could be justified by the fact that hoaxes, by definition, try to convince the reader of the credibility of a false story. Hence, the writer tries to deliver the story in a normal way without allowing the reader to fall under suspicion. The number of instances related to the false information classes in the news articles dataset is the same. Therefore, there is not a majority class that the classifier can be biased to. This is not the case in the Twitter dataset. 
For the Twitter dataset, the dataset is not balanced. Therefore, where the results are biased by the majority class (propaganda). But in general, all the classes' TP ratios are larger than the corresponding ones obtained with RAN baseline. From these results, we can conclude that suspicious news exploits emotions with the aim to mislead the reader. Following, we present the results obtained by the proposed emotionally-infused model." ], [ "In the neural model, to reduce the computational costs, instead of the cross-validation process we take another $20\\%$ from the training part as a validation set (other than the $20\\%$ that is prepared for testing). For the pretrained word embeddings, we use Google News Word2Vec 300-Embeddings in the neural network as well as in the W2V-LR baseline. For the classical machine learning classifiers for the baselines, we use the Scikit-Learn python library, and for the deep learning network, we use Keras library with Tensorflow as backend. To tune our deep learning network (hyper-parameters), we use the Hyperopt library. And to reduce the effect of overfitting, we use early stopping technique.", "In Table TABREF44 we summarize the parameters with respect to each dataset. We have to mention that we use Dropout after the dense layer in the emotional features branch (Dropc) as well as after the attention layer in the other one (Dropd) before the concatenation process. Since it is a multiclass classification process, we use categorical cross-entropy loss function. A summary of the models' parameters is presented in Table TABREF44.", "Table TABREF47 summarizes the performance of the proposed model in comparison to those obtained by the baselines. We report Macro- precision, recall, and F1, including also the metric of accuracy; for comparing the models' results we consider the macro of metrics since it shows an averaged result over all the classes. The baselines that we propose clearly show high results, where the LSTM baseline has the best performance in news articles dataset. In Twitter there is a different scenario, the BOW-SVM baseline shows a higher performance with respect to LSTM. We are interested in investigating the reason behind that. Therefore, we checked the coverage ratio of the used embeddings in the Twitter dataset. We have to mention that we excluded stop words during representing the input documents using the pre-trained Google News word embeddings. In the news articles dataset, we found that the coverage ratio of the embeddings is around $94\\%$ while in Twitter it is around $70\\%$. Therefore, we tuned the word embeddings during the training process to improve the document's representation since we have a larger dataset from Twitter. This process contributed with $1.9\\%$ on the final macro-F1 results in Twitter (the result without tuning is $53.51\\%$). Even though, the results obtained with the LSTM baseline is still lower than the one obtained with BOW-SVM. This experiment gives us some intuition that the weaker performance on Twitter may be due to the embeddings. Therefore, we tried different embeddings but none of them improved the result. The second baseline (W2V-LR) proved the same issue regarding the embeddings. The W2V-LR macro-F1 result in the news articles dataset is competitive, where it is much lower in Twitter. 
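The embedding-coverage check referred to above can be made concrete with a small sketch (our own helper, applying the same stop-word filtering the authors describe):

```python
def embedding_coverage(documents, embeddings, stop_words):
    # Share of non-stop-word tokens that have a pre-trained embedding vector.
    tokens = [t for doc in documents for t in doc.lower().split()
              if t not in stop_words]
    covered = sum(t in embeddings for t in tokens)
    return covered / max(len(tokens), 1)   # ~0.94 for news articles vs ~0.70 for Twitter
```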
The usage of LSTM is two folds: in addition to being a good baseline, it shows also how much the emotional features contribute in the emotionally-infused network.", "EIN results outperform the baselines with a large margin (around 2% in Twitter and 7% in news articles), especially in the news articles dataset. The margin between EIN and the best baseline is lower in the Twitter dataset. The results also show that combining emotional features clearly boosts the performance. We can figure out the improvement by comparing the results of EIN to LSTM. EIN shows superior results in news articles dataset with regard to the LSTM (79.43%). A similar case appears in the Twitter dataset but with a lower margin (59.70%). The results of EIN in Twitter dataset show that emotional features help the weak coverage of word embeddings to improve the performance as well as to overcome the BOW-SVM baseline.", "We observed before that clickbait TP's ratio of the news articles dataset is the highest one, and this result points out that the clickbait class is less difficult to detect specifically from an emotional perspective. Therefore, in order to assess how our model separates false information types, we employ dimensionality reduction using t-distributed Stochastic Neighbor Embedding (T-SNE) technique BIBREF36 to project the document's representation from a high dimensional space to a 2D plane. Thus, we project the embeddings in EIN by extracting them from the outputs of Denseb layer (see Figure FIGREF48). We extract the embeddings twice, once from a random epoch (epoch 10) at the beginning of the training phase and the other at the last epoch.", "Our aim from the early epoch projection is to validate what we have noticed: the clickbait class is less difficult to detect with regard to the other classes. As we can notice in the 10-epoch plot, the clickbait class needs few epochs to be separated from the other types, and this supports what we found previously in the manual investigation of the classes' TP ratios. Despite this clear separation, there is still an overlapping with some real-news records. This results points out that emotions in clickbaits play a key role in deceiving the reader. Also, the figure shows that the disinformation classes still need more training epochs for better separation. Real-news records are totally overlapped with the false information classes as well as the false information classes with each other. On the other hand, for the last epoch, clearly, the classes are separated from each other and the more important, from the real news. But generally, there still a small overlapping between satires and hoaxes as well few records from the propaganda class." ], [ "From the previous results in Section SECREF37 as well as from what we notice in Figure FIGREF48, EIN obtains a clear separability of the clickbait class. These observations motivate us to investigate EIN as clickbait detector. Concretely, we test EIN on the source of our clickbait instances BIBREF33 in the news articles dataset. As we mentioned previously, this dataset originally was built using two different text sources. For clickbaits, the authors have manually identified a set of online sites that publish many clickbait articles. Whereas for the negative class, they collected headlines from a corpus of Wikinews articles collected in other research work. They took 7,500 samples from each class for the final version of the dataset. 
The authors also proposed a clickbait detector model (Stop_Clickbait) that employed a combination of features: sentence structure (sentence length, average word length, the ratio of the number of stop words to the number of thematic words, and the longest separation between syntactically dependent words), word patterns (presence of a cardinal number at the beginning of the sentence, presence of unusual punctuation patterns), clickbait language (presence of hyperbolic words, common clickbait phrases, internet slang and determiners), and N-gram features (word, Part-Of-Speech, and syntactic n-grams). Using this set of feature groups, the authors tested different classifiers, among which SVM gave the state-of-the-art results. They considered Accuracy, Precision, Recall and F1 to compare their approach to a baseline (an online web browser extension for clickbait detection called Downworthy).", "In this experiment, we consider the third baseline (LSTM) to observe the contribution of the emotional features in the EIN model. Different from the previous experiments, this is a binary classification task; therefore, we use binary cross-entropy as the loss function and replace the Softmax layer with a Sigmoid function. The new parameters for both the LSTM and EIN models are given in Table TABREF44.", "In Table TABREF51 we present the results of the Stop_Clickbait approach, the LSTM baseline, and the EIN model. The results show that our baseline outperforms the proposed clickbait detector by a good margin. Furthermore, the results of EIN are superior to both the LSTM and the Stop_Clickbait detector. Considering emotions in the EIN deep learning approach improved the detection of false information, which is due to the fact that clickbaits employ emotions to deceive the reader." ], [ "The results show that detecting suspicious news on Twitter is harder than detecting them in news articles. Overall, the results of EIN showed that emotional features improve the performance of our model, especially in the case of the news articles dataset. We manually inspected the Twitter dataset and observed that the language of the tweets differs from that of the news articles. News on Twitter contains many abbreviations (amp, wrt, JFK, etc.), abbreviated swear words (WTF, LMFO, etc.), informal language, and typos. This reduces the coverage ratio of the word embeddings. We also noticed that suspicious news on Twitter is more related to sexual issues. To validate this observation, we computed the mean value of sexual words using a list of sexual terms BIBREF37: the mean value is the average number of times a sexual/bad word appears in a tweet, normalized by the length of the tweet. The mean value on Twitter is 0.003 while in news articles it is 0.0024. Similarly, suspicious news on Twitter contains more insulting words than news articles, with a mean value of 0.0027 on Twitter versus 0.0017 in news articles.", "In the following, we focus on analyzing false information from an emotional perspective, aiming to answer the remaining questions, RQ2, RQ3, and RQ4.", "RQ2 Do the emotions have similar importance distributions in both Twitter and news articles sources?", "Intuitively, the emotions do not all contribute equally to the classification process, since some words may signal the presence of specific kinds of emotions rather than others.
To investigate this point, we use Information Gain (IG) to quantify the importance of emotions in discriminating between real news and all other types of false news (multiclass task) in both the Twitter and news articles datasets (see Figure FIGREF54). Before going through the ranking of feature importance, we note that the emotion ranking shapes are very similar in Twitter and news articles. This indicates that, despite the language differences, both sources have a similar overall emotion distribution; in other words, false news employs a similar emotional pattern in both text sources. Since the news language on Twitter is not presented as clearly as in news articles, this observation can help to build a cross-source system that is trained on suspicious news from news articles to detect the corresponding ones on Twitter. Figure FIGREF54 also shows that \"joy\" is the most important emotion in both datasets, and that \"despair\" and \"hate\" are almost not used in the classification process. The ranking of the features differs between the two sources: in the news articles dataset the most important emotions are \"joy\", \"anticipation\", \"fear\", and \"disgust\", respectively, while the top ones in Twitter are \"joy\", \"sadness\", \"fear\", and \"disgust\".", "RQ3 Which of the emotions have a statistically significant difference between false information and truthful ones?", "We measure statistically significant differences using the t-test on emotions across real and false news (binary task) in both datasets in Figure FIGREF55. These findings provide a deeper understanding of the EIN performance. The results show that \"joy\", \"neg_emo\", \"ambiguous\", \"anticipation\", \"calmness\", \"disgust\", \"trust\" and \"surprise\" show statistically significant differences between real and suspicious news in both datasets, while other emotions such as \"despair\" and \"anger\" show no significant difference in either dataset. The results we obtain are generally consistent with the IG results of research question RQ2. In the IG analysis we noticed that some emotions have a higher importance in one of the news sources: \"sadness\", \"anger\", and \"fear\" are more important in Twitter than in news articles, and the opposite holds for \"hope\". We observe the same findings using the t-test.", "RQ4 What are the top-N emotions that discriminate false information types in both textual sources?", "False information types differ in the way they present the news to the reader. This raises a question: what are the top emotions employed in each type of false information? In Table TABREF57, we present the three emotions that contribute most to the classification of each type, which indicates which emotion types are used most in each type of false information.", "Table TABREF57 shows that clickbaits mostly express \"surprise\" and \"negative emotion\". This is in line with the definition of clickbaits as \"attention redirection\": they exploit the reader by suggesting that something unexpected and negatively charged has happened. The presence of \"fear\" among the top features in Twitter is interesting; a recent study presents the hypothesis that curiosity is the best remedy for fear BIBREF38, based on psychological interpretations.
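As a minimal sketch of the analyses behind RQ2 and RQ3, the snippet below ranks emotion features with scikit-learn's mutual information estimator (used here as a stand-in for Information Gain) and runs a per-emotion t-test between real and false news. The emotion list, array shapes and random data are illustrative assumptions, not the study's actual features.

```python
# Illustrative sketch of the RQ2/RQ3 analyses on synthetic data.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from scipy.stats import ttest_ind

emotions = ["joy", "sadness", "fear", "disgust", "anticipation", "trust",
            "surprise", "anger", "hope", "calmness", "despair", "hate"]
rng = np.random.default_rng(0)
X = rng.random((1000, len(emotions)))          # per-document emotion scores (toy)
y_type = rng.integers(0, 5, size=1000)         # real + 4 false-information classes
y_false = (y_type > 0).astype(int)             # binary: real (0) vs. false (1)

# RQ2: importance ranking (mutual information as a proxy for Information Gain)
ig = mutual_info_classif(X, y_type, random_state=0)
for name, score in sorted(zip(emotions, ig), key=lambda p: -p[1]):
    print(f"{name:12s} IG~{score:.4f}")

# RQ3: per-emotion t-test between real and false news
for i, name in enumerate(emotions):
    t, p = ttest_ind(X[y_false == 0, i], X[y_false == 1, i], equal_var=False)
    print(f"{name:12s} t={t:+.2f} p={p:.3f}")
```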
Taking into account the definition of clickbaits as \"attention redirection\", our results are consistent with this hypothesis. Furthermore, despite the language differences between the two datasets, we obtain almost the same results, which reinforces our findings. For hoaxes, it is not easy to identify a specific emotional pattern in the results. A possible explanation is that hoaxes are written to convince the reader of the validity of a story, so the writer tries to present the story in a normal (truthful) way, similar to a real story; as a consequence, the top emotions are not unique to the hoax type. What we do find is that the top hoax emotions in the two datasets are generally different, except for the emotion \"like\". Despite the natural narrative way of presenting the story, the analysis shows that the writer still uses \"like\" to smoothly grab the reader's attention. The propaganda type has a clearer emotional interpretation, considering its definition. We find that propaganda expresses \"joy\" and \"fear\" and, at the same time, \"calmness\" in the news articles. \"Joy\" and \"fear\" are opposites from an emotional polarity perspective, \"joy\" lying at the positive extreme and \"fear\" at the negative one, and yet \"calmness\" is present as well. This emotional shifting between the two extremes is a clear attempt at opinion manipulation from an emotional perspective. We obtain a similar emotion set from Twitter, but with \"hope\" instead of \"joy\". Lastly, satire is defined as a type of parody presented in the typical format of mainstream journalism, related to the irony and sarcasm phenomena BIBREF39. The results of the analysis show that \"disgust\" and \"positive emotion\" are present in both datasets, but we get \"negative emotion\" in the news articles and \"sadness\" in Twitter (both lying on the negative side of the emotion spectrum). We were interested in investigating the cause of the emotion \"disgust\", which appeared in the results from both datasets, and conducted a manual analysis of the satire texts in both datasets to shed some light on the possible causes. We notice that the satire language in the news often employs the emotion \"disgust\" to convey a sense of humor. Figure FIGREF58 shows some examples from the news articles dataset highlighting the words that triggered the emotion \"disgust\"." ], [ "In this article we have presented an emotionally-infused deep learning network that uses emotional features to identify false information in Twitter and news articles. We performed several experiments to investigate the effectiveness of the emotional features in identifying false information, and validated the performance of the model by comparing it to an LSTM network and other baselines. The results on the two datasets showed that clickbaits have a simpler manipulation language in which emotions help to detect them, demonstrating that emotions play a key role in deceiving the reader. Based on this result, we investigated our model's performance on a clickbait dataset and compared it to the state of the art. Our model showed superior results, with an F1 value close to 96%.", "The overall results confirmed that emotional features boost the EIN model's performance, leading to better results on 3 different datasets (RQ1). These results emphasize the importance of emotional features in the detection of false information. On Twitter, false news content is deliberately sexually oriented and uses many insulting words.
Our analysis showed that emotions can help to detect false information on Twitter as well. In the analysis section, we answered a set of questions regarding the distribution of emotions in false news. We found that emotions have a similar importance distribution in Twitter and news articles, regardless of the differences in the language used (RQ2). The analysis showed that most of the emotions used exhibit a statistically significant difference between real and false news (RQ3). Emotions play a different role in each type of false information, in line with its definition (RQ4). We found that clickbaits try to attract the attention of the reader mainly by employing the \"surprise\" emotion. Propaganda manipulates the feelings of the readers by using extreme positive and negative emotions, while triggering a sense of \"calmness\" to confuse the readers and enforce a feeling of confidence. Satire news instead uses the \"disgust\" emotion to convey a sense of humor. To sum up, the initial part of false news contains more emotions than the rest of the document, and our approach exploits this fact for their detection.", "To the best of our knowledge, this is the first work that analyzes the impact of emotions on the detection of false information considering both social media and news articles. As future work, the results of our approach as a clickbait detector motivate us to develop a clickbait detector as a web browser extension. We will also study how the emotions flow inside the articles of each kind of false information, which, as the results of this work confirm, is worth investigating." ] ] }
{ "question": [ "What is the baseline?", "What datasets did they use?" ], "question_id": [ "1571e16063b53409f2d1bd6ec143fccc5b29ebb9", "d71937fa5da853f7529f767730547ccfb70e5908" ], "nlp_background": [ "", "" ], "topic_background": [ "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no" ], "search_query": [ "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Majority Class baseline (MC) ", "Random selection baseline (RAN)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Emotions have been used in many natural language processing tasks and they showed their efficiency BIBREF35. We aim at investigating their efficiency to detect false information. In addition to EIN, we created a model (Emotion-based Model) that uses emotional features only and compare it to two baselines. Our aim is to investigate if the emotional features independently can detect false news. The two baselines of this model are Majority Class baseline (MC) and the Random selection baseline (RAN)." ], "highlighted_evidence": [ " In addition to EIN, we created a model (Emotion-based Model) that uses emotional features only and compare it to two baselines. Our aim is to investigate if the emotional features independently can detect false news. The two baselines of this model are Majority Class baseline (MC) and the Random selection baseline (RAN)." ] } ], "annotation_id": [ "cb321633dfd739b19e770fea4088c2d5fd8af189" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "News Articles", "Twitter" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Evaluation Framework ::: Datasets ::: News Articles", "Our dataset source of news articles is described in BIBREF2. This dataset was built from two different sources, for the trusted news (real news) they sampled news articles from the English Gigaword corpus. For the false news, they collected articles from seven different unreliable news sites. These news articles include satires, hoaxes, and propagandas but not clickbaits. Since we are interested also in analyzing clickbaits, we slice a sample from an available clickbait dataset BIBREF33 that was originally collected from two sources: Wikinews articles' headlines and other online sites that are known to publish clickbaits. The satire, hoax, and propaganda news articles are considerably long (some of them reach the length of 5,000 words). This length could affect the quality of the analysis as we mentioned before. We focus on analyzing the initial part of the article. Our intuition is that it is where emotion-bearing words will be more frequent. Therefore, we shorten long news articles into a maximum length of N words (N=300). We choose the value of N based on the length of the shortest articles. Moreover, we process the dataset by removing very short articles, redundant articles or articles that do not have a textual content.", "With the complicated political and economic situations in many countries, some agendas are publishing suspicious news to affect public opinions regarding specific issues BIBREF0. The spreading of this phenomenon is increasing recently with the large usage of social media and online news sources. Many anonymous accounts in social media platforms start to appear, as well as new online news agencies without presenting a clear identity of the owner. 
Twitter has recently detected a campaign organized by agencies from two different countries to affect the results of the last U.S. presidential elections of 2016. The initial disclosures by Twitter have included 3,841 accounts. A similar attempt was done by Facebook, as they detected coordinated efforts to influence U.S. politics ahead of the 2018 midterm elections.", "For this dataset, we rely on a list of several Twitter accounts for each type of false information from BIBREF6. This list was created based on public resources that annotated suspicious Twitter accounts. The authors in BIBREF6 have built a dataset by collecting tweets from these accounts and they made it available. For the real news, we merge this list with another 32 Twitter accounts from BIBREF34. In this work we could not use the previous dataset and we decide to collect tweets again. For each of these accounts, we collected the last M tweets posted (M=1000). By investigating these accounts manually, we found that many tweets just contain links without textual news. Therefore, to ensure of the quality of the crawled data, we chose a high value for M (also to have enough data). After the collecting process, we processed these tweets by removing duplicated, very short tweets, and tweets without textual content. Table TABREF35 shows a summary for both datasets." ], "highlighted_evidence": [ " News Articles\nOur dataset source of news articles is described in BIBREF2. This dataset was built from two different sources, for the trusted news (real news) they sampled news articles from the English Gigaword corpus. For the false news, they collected articles from seven different unreliable news sites.", "Twitter\nFor this dataset, we rely on a list of several Twitter accounts for each type of false information from BIBREF6. This list was created based on public resources that annotated suspicious Twitter accounts. The authors in BIBREF6 have built a dataset by collecting tweets from these accounts and they made it available. " ] } ], "annotation_id": [ "1bf5afa0a7d1ece4578d52c9772e5123cfe4ed9a" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ] }
{ "caption": [ "Fig. 1. The emotional lexicons with their own emotions.", "Fig. 2. Emotionally-infused neural network architecture for false information detection.", "Table 1. News articles and Twitter datasets’ statistics.", "Table 2. The results of the Emotion-based Model with the emotional features comparing to the baselines.", "Table 3. Models’ parameters used in the three datasets (News articles, Twitter, Stop_Clickbaits). LSTM: the 3rd baseline, EIN: Emotionally-Infused Network.", "Table 4. Results of the proposed model (EIN) vs. the baselines.", "Fig. 3. Projection of documents representation from the news articles dataset.", "Table 5. The performance of EIN on the clickbaits dataset using 10-fold CV.", "Fig. 4. Best ranked features according to Information Gain.", "Fig. 5. Statistical significant differences between false and real news on Twitter and news articles datasets using t-test.", "Table 6. The top 3 most important emotions in each false information type." ], "file": [ "4-Figure1-1.png", "5-Figure2-1.png", "7-Table1-1.png", "8-Table2-1.png", "9-Table3-1.png", "10-Table4-1.png", "10-Figure3-1.png", "11-Table5-1.png", "13-Figure4-1.png", "13-Figure5-1.png", "14-Table6-1.png" ] }
1606.08140
STransE: a novel embedding model of entities and relationships in knowledge bases
Knowledge bases of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform link prediction or knowledge base completion, i.e., predict whether a relationship not in the knowledge base is likely to be true. This paper combines insights from several previous link prediction models into a new embedding model STransE that represents each entity as a low-dimensional vector, and each relation by two matrices and a translation vector. STransE is a simple combination of the SE and TransE models, but it obtains better link prediction performance on two benchmark datasets than previous embedding models. Thus, STransE can serve as a new baseline for the more complex models in the link prediction task.
{ "section_name": [ "Introduction", "Our approach", "Related work", "Experiments", "Task and evaluation protocol", "Main results", "Conclusion and future work", "Acknowledgments" ], "paragraphs": [ [ "Knowledge bases (KBs), such as WordNet BIBREF0 , YAGO BIBREF1 , Freebase BIBREF2 and DBpedia BIBREF3 , represent relationships between entities as triples $(\\mathrm {head\\ entity, relation, tail\\ entity})$ . Even very large knowledge bases are still far from complete BIBREF4 , BIBREF5 . Link prediction or knowledge base completion systems BIBREF6 predict which triples not in a knowledge base are likely to be true BIBREF7 , BIBREF8 . A variety of different kinds of information is potentially useful here, including information extracted from external corpora BIBREF9 , BIBREF10 and the other relationships that hold between the entities BIBREF11 , BIBREF12 . For example, toutanova-EtAl:2015:EMNLP used information from the external ClueWeb-12 corpus to significantly enhance performance.", "While integrating a wide variety of information sources can produce excellent results BIBREF13 , there are several reasons for studying simpler models that directly optimize a score function for the triples in a knowledge base, such as the one presented here. First, additional information sources might not be available, e.g., for knowledge bases for specialized domains. Second, models that don't exploit external resources are simpler and thus typically much faster to train than the more complex models using additional information. Third, the more complex models that exploit external information are typically extensions of these simpler models, and are often initialized with parameters estimated by such simpler models, so improvements to the simpler models should yield corresponding improvements to the more complex models as well.", "Embedding models for KB completion associate entities and/or relations with dense feature vectors or matrices. Such models obtain state-of-the-art performance BIBREF14 , BIBREF8 , BIBREF15 , BIBREF16 , BIBREF4 , BIBREF17 , BIBREF18 and generalize to large KBs BIBREF19 . Table 1 summarizes a number of prominent embedding models for KB completion.", "Let $(h, r, t)$ represent a triple. In all of the models discussed here, the head entity $h$ and the tail entity $t$ are represented by vectors $\\textbf {h}$ and $\\textbf {t}\\in \\mathbb {R}^{k}$ respectively. The Unstructured model BIBREF15 assumes that $\\textbf {h} \\approx \\textbf {t}$ . As the Unstructured model does not take the relationship $r$ into account, it cannot distinguish different relation types. The Structured Embedding (SE) model BIBREF8 extends the unstructured model by assuming that $h$ and $t$ are similar only in a relation-dependent subspace. It represents each relation $r$ with two matrices $h$0 and $h$1 , which are chosen so that $h$2 . The TransE model BIBREF16 is inspired by models such as Word2Vec BIBREF20 where relationships between words often correspond to translations in latent feature space. The TransE model represents each relation $h$3 by a translation vector r $h$4 , which is chosen so that $h$5 .", "The primary contribution of this paper is that two very simple relation-prediction models, SE and TransE, can be combined into a single model, which we call STransE. 
Specifically, we use relation-specific matrices $\textbf {W}_{r,1}$ and $\textbf {W}_{r,2}$ as in the SE model to identify the relation-dependent aspects of both $h$ and $t$ , and use a vector $\textbf {r}$ as in the TransE model to describe the relationship between $h$ and $t$ in this subspace. Specifically, our new KB completion model STransE chooses $\textbf {W}_{r,1}$ , $\textbf {W}_{r,2}$ and $\textbf {r}$ so that $\textbf {W}_{r,1}\textbf {h} + \textbf {r} \approx \textbf {W}_{r,2}\textbf {t}$ . That is, a TransE-style relationship holds in some relation-dependent subspace, and crucially, this subspace may involve very different projections of the head $\textbf {h}$ and tail $\textbf {t}$ . So $\textbf {W}_{r,1}$ and $\textbf {W}_{r,2}$ can highlight, suppress, or even change the sign of, relation-specific attributes of $\textbf {h}$ and $\textbf {t}$ . For example, for the “purchases” relationship, certain attributes of individuals $h$ (e.g., age, gender, marital status) are presumably strongly correlated with very different attributes of objects $t$ (e.g., sports car, washing machine and the like).", "As we show below, STransE performs better than the SE and TransE models and other state-of-the-art link prediction models on two standard link prediction datasets WN18 and FB15k, so it can serve as a new baseline for KB completion. We expect that the STransE will also be able to serve as the basis for extended models that exploit a wider variety of information sources, just as TransE does." ], [ "Let $\mathcal {E}$ denote the set of entities and $\mathcal {R}$ the set of relation types. For each triple $(h, r, t)$ , where $h, t \in \mathcal {E}$ and $r \in \mathcal {R}$ , the STransE model defines a score function $f_r(h, t)$ of its implausibility. Our goal is to choose $f$ such that the score $f_r(h,t)$ of a plausible triple $(h,r,t)$ is smaller than the score $f_{r^{\prime }}(h^{\prime },t^{\prime })$ of an implausible triple $(h^{\prime },r^{\prime },t^{\prime })$ . We define the STransE score function $f_r(h, t)$ as follows:", " $
f_r(h, t) = \Vert \textbf {W}_{r,1}\textbf {h} + \textbf {r} - \textbf {W}_{r,2}\textbf {t}\Vert _{\ell _{1/2}}
$ ", "using either the $\ell _1$ or the $\ell _2$ -norm (the choice is made using validation data; in our experiments we found that the $\ell _1$ norm gave slightly better results).
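A minimal NumPy sketch of the score function just defined (here with the $\ell_1$ norm) is given below; the embedding size and the identity initialisation of the relation matrices are illustrative choices, not the paper's setup.

```python
# Minimal NumPy sketch of the STransE score (lower = more plausible).
import numpy as np

k = 100                                   # embedding size (illustrative)
h, r, t = np.random.randn(3, k) * 0.1     # entity vectors and relation translation
W_r1, W_r2 = np.eye(k), np.eye(k)         # relation matrices, identity-initialised

def stranse_score(h, r, t, W_r1, W_r2, norm=1):
    """f_r(h, t) = || W_r1 @ h + r - W_r2 @ t ||, with the l1 or l2 norm."""
    return np.linalg.norm(W_r1 @ h + r - W_r2 @ t, ord=norm)

print(stranse_score(h, r, t, W_r1, W_r2))
```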
To learn the vectors and matrices we minimize the following margin-based objective function: $\n\\mathcal {L} & = & \\sum _{\\begin{array}{c}(h,r,t) \\in \\mathcal {G} \\\\ (h^{\\prime },r,t^{\\prime }) \\in \\mathcal {G}^{\\prime }_{(h, r, t)}\\end{array}} [\\gamma + f_r(h, t) - f_r(h^{\\prime }, t^{\\prime })]_+\n$ ", "where $[x]_+ = \\max (0, x)$ , $\\gamma $ is the margin hyper-parameter, $\\mathcal {G}$ is the training set consisting of correct triples, and $\\mathcal {G}^{\\prime }_{(h, r, t)} = \\lbrace (h^{\\prime }, r, t) \\mid h^{\\prime } \\in \\mathcal {E}, (h^{\\prime }, r, t) \\notin \\mathcal {G} \\rbrace \\cup \\lbrace (h, r,\nt^{\\prime }) \\mid t^{\\prime } \\in \\mathcal {E}, (h, r, t^{\\prime }) \\notin \\mathcal {G} \\rbrace $ is the set of incorrect triples generated by corrupting a correct triple $(h, r, t)\\in \\mathcal {G}$ .", "We use Stochastic Gradient Descent (SGD) to minimize $\\mathcal {L}$ , and impose the following constraints during training: $\\Vert \\textbf {h}\\Vert _2 \\leqslant 1$ , $\\Vert \\textbf {r}\\Vert _2 \\leqslant 1$ , $\\Vert \\textbf {t}\\Vert _2 \\leqslant 1$ , $\\Vert \\textbf {W}_{r,1}\\textbf {h}\\Vert _2\n\\leqslant 1$ and $\\Vert \\textbf {W}_{r,2}\\textbf {t}\\Vert _2 \\leqslant 1$ ." ], [ "Table 1 summarizes related embedding models for link prediction and KB completion. The models differ in the score functions $f_r(h, t)$ and the algorithms used to optimize the margin-based objective function, e.g., SGD, AdaGrad BIBREF21 , AdaDelta BIBREF22 and L-BFGS BIBREF23 .", "DISTMULT BIBREF24 is based on a Bilinear model BIBREF14 , BIBREF15 , BIBREF25 where each relation is represented by a diagonal rather than a full matrix. The neural tensor network (NTN) model BIBREF4 uses a bilinear tensor operator to represent each relation while ProjE BIBREF26 could be viewed as a simplified version of NTN with diagonal matrices. Similar quadratic forms are used to model entities and relations in KG2E BIBREF27 , ComplEx BIBREF28 , TATEC BIBREF29 and RSTE BIBREF30 . In addition, HolE BIBREF31 uses circular correlation—a compositional operator—which could be interpreted as a compression of the tensor product.", "The TransH model BIBREF17 associates each relation with a relation-specific hyperplane and uses a projection vector to project entity vectors onto that hyperplane. TransD BIBREF32 and TransR/CTransR BIBREF33 extend the TransH model using two projection vectors and a matrix to project entity vectors into a relation-specific space, respectively. TransD learns a relation-role specific mapping just as STransE, but represents this mapping by projection vectors rather than full matrices, as in STransE. The lppTransD model BIBREF34 extends TransD to additionally use two projection vectors for representing each relation. In fact, our STransE model and TranSparse BIBREF35 can be viewed as direct extensions of the TransR model, where head and tail entities are associated with their own projection matrices, rather than using the same matrix for both, as in TransR and CTransR.", "Recently, several authors have shown that relation paths between entities in KBs provide richer information and improve the relationship prediction BIBREF36 , BIBREF37 , BIBREF18 , BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF43 , BIBREF44 . In addition, NickelMTG15 reviews other approaches for learning from KBs and multi-relational data." 
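Returning to the training objective defined above, the following is a minimal sketch (not the authors' code) of the margin-based loss with randomly corrupted heads or tails. The "Bernoulli" sampling trick, the check that corrupted triples are absent from the training set, and the constrained SGD update are omitted; data structures are assumptions.

```python
# Sketch of the margin-based objective with corrupted heads/tails.
import numpy as np

def margin_loss(batch, ent, r_vec, r_mat, gamma=5.0, rng=np.random.default_rng(0)):
    """batch: iterable of (h, r, t) ids; ent: id -> vector;
    r_vec: relation id -> translation; r_mat: relation id -> (W_r1, W_r2)."""
    ids, total = list(ent), 0.0
    for h, r, t in batch:
        W1, W2 = r_mat[r]
        score = lambda a, b: np.linalg.norm(W1 @ ent[a] + r_vec[r] - W2 @ ent[b], 1)
        if rng.random() < 0.5:                      # corrupt the head ...
            h_n, t_n = rng.choice(ids), t
        else:                                       # ... or the tail
            h_n, t_n = h, rng.choice(ids)
        # hinge term [gamma + f_r(h,t) - f_r(h',t')]_+
        total += max(0.0, gamma + score(h, t) - score(h_n, t_n))
    return total
```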
], [ "For link prediction evaluation, we conduct experiments and compare the performance of our STransE model with published results on the benchmark WN18 and FB15k datasets BIBREF16 . Information about these datasets is given in Table 2 ." ], [ "The link prediction task BIBREF8 , BIBREF15 , BIBREF16 predicts the head or tail entity given the relation type and the other entity, i.e. predicting $h$ given $(?, r, t)$ or predicting $t$ given $(h, r, ?)$ where $?$ denotes the missing element. The results are evaluated using the ranking induced by the score function $f_r(h,t)$ on test triples.", "For each test triple $(h, r, t)$ , we corrupted it by replacing either $h$ or $t$ by each of the possible entities in turn, and then rank these candidates in ascending order of their implausibility value computed by the score function. This is called as the “Raw” setting protocol. For the “Filtered” setting protocol described in BIBREF16 , we removed any corrupted triples that appear in the knowledge base, to avoid cases where a correct corrupted triple might be ranked higher than the test triple. The “Filtered” setting thus provides a clearer view on the ranking performance. Following BIBREF16 , we report the mean rank and the Hits@10 (i.e., the proportion of test triples in which the target entity was ranked in the top 10 predictions) for each model. In addition, we report the mean reciprocal rank, which is commonly used in information retrieval. In both “Raw” and “Filtered” settings, lower mean rank, higher mean reciprocal rank or higher Hits@10 indicates better link prediction performance.", "Following TransR BIBREF33 , TransD BIBREF32 , rTransE BIBREF37 , PTransE BIBREF36 , TATEC BIBREF29 and TranSparse BIBREF35 , we used the entity and relation vectors produced by TransE BIBREF16 to initialize the entity and relation vectors in STransE, and we initialized the relation matrices with identity matrices. We applied the “Bernoulli” trick used also in previous work for generating head or tail entities when sampling incorrect triples BIBREF17 , BIBREF33 , BIBREF27 , BIBREF32 , BIBREF36 , BIBREF34 , BIBREF35 . We ran SGD for 2,000 epochs to estimate the model parameters. Following NIPS20135071 we used a grid search on validation set to choose either the $l_1$ or $l_2$ norm in the score function $f$ , as well as to set the SGD learning rate $\\lambda \\in \\lbrace 0.0001, 0.0005, 0.001, 0.005, 0.01 \\rbrace $ , the margin hyper-parameter $\\gamma \\in \\lbrace 1, 3, 5 \\rbrace $ and the vector size $k\\in \\lbrace 50, 100 \\rbrace $ . The lowest filtered mean rank on the validation set was obtained when using the $l_1$ norm in $f$ on both WN18 and FB15k, and when $\\lambda = 0.0005, \\gamma = 5,\n\\text{ and } k = 50$ for WN18, and $\\lambda = 0.0001, \\gamma = 1,\n\\text{ and } k = 100$ for FB15k." ], [ "Table 3 compares the link prediction results of our STransE model with results reported in prior work, using the same experimental setup. The first 15 rows report the performance of the models that do not exploit information about alternative paths between head and tail entities. The next 5 rows report results of the models that exploit information about relation paths. The last 3 rows present results for the models which make use of textual mentions derived from a large external corpus.", "It is clear that the models with the additional external corpus information obtained best results. In future work we plan to extend the STransE model to incorporate such additional information. 
Table 3 also shows that the models employing path information generally achieve better results than models that do not use such information. In terms of models not exploiting path information or external information, the STransE model produces the best filtered mean rank on WN18 and the highest filtered Hits@10 and mean reciprocal rank on FB15k. Compared to the closely related models SE, TransE, TransR, CTransR, TransD and TranSparse, our STransE model does better than these models on both WN18 and FB15k.", "Following NIPS20135071, Table 4 analyzes Hits@10 results on FB15k with respect to the relation categories defined as follows: for each relation type $r$ , we computed the averaged number $a_h$ of heads $h$ for a pair $(r, t)$ and the averaged number $a_t$ of tails $t$ for a pair $(h, r)$ . If $a_h < 1.5$ and $a_t < 1.5$ , then $r$ is labeled 1-1. If $a_h \geq 1.5$ and $a_t < 1.5$ , then $r$ is labeled M-1. If $a_h < 1.5$ and $a_t \geq 1.5$ , then $r$ is labeled as 1-M. If $a_h \geq 1.5$ and $a_t \geq 1.5$ , then $r$ is labeled as M-M. 1.4%, 8.9%, 14.6% and 75.1% of the test triples belong to a relation type classified as 1-1, 1-M, M-1 and M-M, respectively.", "Table 4 shows that in comparison to prior models not using path information, STransE obtains the second highest Hits@10 result for the M-M relation category at $(80.1\% + 83.1\%) / 2 = 81.6\%$ which is 0.5% smaller than the Hits@10 result of TranSparse for M-M. However, STransE obtains a 2.5% higher Hits@10 result than TranSparse for M-1. In addition, STransE also performs better than TransD for the 1-M and M-1 relation categories. We believe the improved performance of the STransE model is due to its use of full matrices, rather than just projection vectors as in TransD. This permits STransE to model diverse and complex relation categories (such as 1-M, M-1 and especially M-M) better than TransD and other similar models. However, STransE is not as good as TransD for the 1-1 relations. Perhaps the extra parameters in STransE hurt performance in this case (note that 1-1 relations are relatively rare, so STransE does better overall)." ], [ "This paper presented a new embedding model for link prediction and KB completion. Our STransE combines insights from several simpler embedding models, specifically the Structured Embedding model BIBREF8 and the TransE model BIBREF16 , by using a low-dimensional vector and two projection matrices to represent each relation. STransE, while being conceptually simple, produces highly competitive results on standard link prediction evaluations, and scores better than the embedding-based models it builds on. Thus it is a suitable candidate for serving as a future baseline for more complex models in the link prediction task.", "In future work we plan to extend STransE to exploit relation path information in knowledge bases, in a manner similar to lin-EtAl:2015:EMNLP1, guu-miller-liang:2015:EMNLP or NguyenCoNLL2016." ], [ "This research was supported by a Google award through the Natural Language Understanding Focused Program, and under the Australian Research Council's Discovery Projects funding scheme (project number DP160102156).", "NICTA is funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Centre of Excellence Program. The first author is supported by an International Postgraduate Research Scholarship and a NICTA NRPA Top-Up Scholarship." ] ] }
{ "question": [ "What scoring function does the model use to score triples?", "What datasets are used to evaluate the model?" ], "question_id": [ "8d258899e36326183899ebc67aeb4188a86f682c", "955ca31999309685c1daa5cb03867971ca99ec52" ], "nlp_background": [ "five", "five" ], "topic_background": [ "familiar", "familiar" ], "paper_read": [ "no", "no" ], "search_query": [ "link prediction", "link prediction" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "$ f_r(h, t) & = & \\Vert \\textbf {W}_{r,1}\\textbf {h} + \\textbf {r} - \\textbf {W}_{r,2}\\textbf {t}\\Vert _{\\ell _{1/2}} $" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Let $\\mathcal {E}$ denote the set of entities and $\\mathcal {R}$ the set of relation types. For each triple $(h, r, t)$ , where $h, t \\in \\mathcal {E}$ and $r \\in \\mathcal {R}$ , the STransE model defines a score function $f_r(h, t)$ of its implausibility. Our goal is to choose $f$ such that the score $f_r(h,t)$ of a plausible triple $(h,r,t)$ is smaller than the score $f_{r^{\\prime }}(h^{\\prime },t^{\\prime })$ of an implausible triple $\\mathcal {R}$0 . We define the STransE score function $\\mathcal {R}$1 as follows:", "$ f_r(h, t) & = & \\Vert \\textbf {W}_{r,1}\\textbf {h} + \\textbf {r} - \\textbf {W}_{r,2}\\textbf {t}\\Vert _{\\ell _{1/2}} $", "using either the $\\ell _1$ or the $\\ell _2$ -norm (the choice is made using validation data; in our experiments we found that the $\\ell _1$ norm gave slightly better results). To learn the vectors and matrices we minimize the following margin-based objective function: $ \\mathcal {L} & = & \\sum _{\\begin{array}{c}(h,r,t) \\in \\mathcal {G} \\\\ (h^{\\prime },r,t^{\\prime }) \\in \\mathcal {G}^{\\prime }_{(h, r, t)}\\end{array}} [\\gamma + f_r(h, t) - f_r(h^{\\prime }, t^{\\prime })]_+ $" ], "highlighted_evidence": [ "We define the STransE score function $\\mathcal {R}$1 as follows:\n\n$ f_r(h, t) & = & \\Vert \\textbf {W}_{r,1}\\textbf {h} + \\textbf {r} - \\textbf {W}_{r,2}\\textbf {t}\\Vert _{\\ell _{1/2}} $\n\nusing either the $\\ell _1$ or the $\\ell _2$ -norm (the choice is made using validation data; in our experiments we found that the $\\ell _1$ norm gave slightly better results)." ] } ], "annotation_id": [ "1c1dfad3a62e0b5a77ea7279312f43e2b0f155c0" ], "worker_id": [ "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "WN18, FB15k", "evidence": [ "As we show below, STransE performs better than the SE and TransE models and other state-of-the-art link prediction models on two standard link prediction datasets WN18 and FB15k, so it can serve as a new baseline for KB completion. We expect that the STransE will also be able to serve as the basis for extended models that exploit a wider variety of information sources, just as TransE does." ], "highlighted_evidence": [ "As we show below, STransE performs better than the SE and TransE models and other state-of-the-art link prediction models on two standard link prediction datasets WN18 and FB15k, so it can serve as a new baseline for KB completion." ] } ], "annotation_id": [ "97bb27301de49b9136971207ffed30e1f9e2e8eb" ], "worker_id": [ "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] } ] }
{ "caption": [ "Table 1: The score functions fr(h, t) and the optimization methods (Opt.) of several prominent embedding models for KB completion. In all of these the entities h and t are represented by vectors h and t ∈ Rk respectively.", "Table 2: Statistics of the experimental datasets used in this study (and previous works). #E is the number of entities, #R is the number of relation types, and #Train, #Valid and #Test are the numbers of triples in the training, validation and test sets, respectively.", "Table 3: Link prediction results. MR and H10 denote evaluation metrics of mean rank and Hits@10 (in %), respectively. “NLFeat” abbreviates Node+LinkFeat. The results for NTN (Socher et al., 2013) listed in this table are taken from Yang et al. (2015) since NTN was originally evaluated on different datasets. The results marked with + are obtained using the optimal hyper-parameters chosen to optimize Hits@10 on the validation set; trained in this manner, STransE obtains a mean rank of 244 and Hits@10 of 94.7% on WN18, while producing the same results on FB15k.", "Table 4: Hits@10 (in %) by the relation category on FB15k. “Unstr.” abbreviates Unstructured." ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "4-Table3-1.png", "5-Table4-1.png" ] }
1911.11698
Doc2Vec on the PubMed corpus: study of a new approach to generate related articles
PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the "similar articles" section, allowing the end-user to find scientific articles related to the consulted document in terms of context. The aim of this study is to analyze whether it is possible to replace the statistical model PubMed Related Articles (pmra) with a document embedding method. The Doc2Vec algorithm was used to train models that vectorize documents. Six of its parameters were optimised by following a grid-search strategy to train more than 1,900 models. The parameter combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluation tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra. The two Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, word and stem contents of linked documents are highly similar between pmra and the Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. In contrast, the manual evaluation shows much better results for the pmra algorithm. While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly with the PV-DBOW architecture. However, the human evaluation, with no clear agreement between evaluators, calls for future studies to better understand this difference between PV-DBOW and the pmra algorithm.
{ "section_name": [ "Abstract", "Background ::: PubMed", "Background ::: The pmra model", "Background ::: Documents embedding", "Background ::: Related Work", "Methods ::: Material", "Methods ::: Optimisation", "Methods ::: Training", "Methods ::: Evaluation", "Methods ::: Evaluation ::: String length", "Methods ::: Evaluation ::: Words co-occurrences", "Methods ::: Evaluation ::: Stems co-occurrences", "Methods ::: Evaluation ::: MeSH similarity", "Methods ::: Evaluation ::: Manual evaluation", "Results ::: Optimisation", "Results ::: Evaluation ::: String length", "Results ::: Evaluation ::: Words co-occurrences", "Results ::: Evaluation ::: Stems co-occurrences", "Results ::: Evaluation ::: MeSH similarity", "Results ::: Evaluation ::: Manual evaluation", "Discussion", "Conclusion" ], "paragraphs": [ [ "Background PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the “similar articles” section, allowing the end-user to find scientific articles linked to the consulted document in term of context. The aim of this study is to analyze whether it is possible to replace the statistic model PubMed Related Articles (pmra) with a document embedding method.", "Methods Doc2Vec algorithm was used to train models allowing to vectorize documents. Six of its parameters were optimised by following a grid-search strategy to train more than 1,900 models. Parameters combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluations tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra.", "Results The two different Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, words and stems contents of linked documents are highly similar between pmra and Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. In contrary, the manual evaluation shows much better results for the pmra algorithm.", "Conclusions While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need a prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly regarding the PV-DBOW architecture. In contrary, the human evaluation, without any clear agreement between evaluators, implies future studies to better understand this difference between PV-DBOW and pmra algorithm." ], [ "PubMed is the largest database of bio-medical articles worldwide with more than 29,000,000 freely available abstracts. Each article is identified by an unique PubMed IDentifier (PMID) and is indexed with the Medical Subject Headings (MeSH) terminology. In order to facilitate the Information Retrieval (IR) process for the end-user, PubMed launched in 2007 a service of related articles search, available both through its Graphical User Interface (GUI) and its Application Programming Interface (API). Regarding the GUI, while the user is reading a publication, a panel presents title of articles that may be linked to the current reading. For the API, the user must query eLink with a given PMID BIBREF0. The output will be a list of others PMIDs, each associated with the similarity score computed by the pmra (pubmed related article) model BIBREF1." 
], [ "To do so, each document is tokenized into many topics $S_{i}$. Then, the probability $P(C|D)$ that the user will find relevant the document C when reading the document D will be calculated. For this purpose, the authors brought the concept of eliteness. Briefly, a topic $S_{i}$ is presented as elite topic for a given document if a word $W_{i}$ representing $S_{i}$ is used with a high frequency in this document. This work allows to bring closer documents sharing a maximum of elite topics. In the article presenting the pmra model, authors claim that “the deployed algorithm in PubMed also takes advantage of MeSH terms, which we do not discuss here”. We can thus assume that a similar score is computed thanks to the associated MeSH terms with both documents D and C. Such an indexing is highly time-consuming and has to be manually performed." ], [ "Nowadays, embedding models allow to represent a text into a vector of fixed dimensions. The primary purpose of this mathematical representation of documents was to be able to use texts as input of deep neural networks. However, these models have been used by the IR community as well: once all fitted in the same multidimensional space, the cosine distance between two documents vectors can estimate the proximity between these two texts. In 2013, Mikolov et al. released a word embedding method called Word2Vec (W2V) BIBREF2. Briefly, this algorithm uses unsupervised learning to train a model which embeds a word as a vector while preserving its semantic meaning. Following this work, Mikolov and Le released in 2014 a method to vectorize complete texts BIBREF3. This algorithm, called Doc2Vec (D2V), is highly similar to W2V and comes with two architectures. The Distributed Memory Model of Paragraph Vectors (PV-DM) first trains a W2V model. This word embedding will be common for all texts from a given corpus C on which it was trained. Then, each document $D_{x}$ from C will be assigned to a randomly initialised vector of fixed length, which will be concatenated with vectors of words composing $D_{x}$ during the training time (words and documents vectors are sharing the same number of dimensions). This concatenation will be used by a final classifier to predict the next token of a randomly selected window of words. The accuracy of this task can be calculated and used to compute a loss function, used to back-propagate errors to the model, which leads to a modification of the document’s representation. The Distributed Bag of Words version of Paragraph Vector (PV-DBOW) is highly similar to the PV-DM, the main difference being the goal of the final classifier. Instead of concatenating vector from the document with word vectors, the goal here is to output words from this window just by using the mathematical representation of the document." ], [ "Doc2Vec has been used for many cases of similar document retrieval. In 2016, Lee et al. used D2V to clusterize positive and negative sentiments with an accuracy of 76.4% BIBREF4. The same year, Lau and Baldwin showed that D2V provides a robust representation of documents, estimated with two tasks: document similarity to retrieve 12 different classes and sentences similarity scoring BIBREF5. Recently, studies started to use documents embedding on the PubMed corpus. In 2017, Gargiulo et al. used a combination of words vectors coming from the abstract to bring closer similar documents from Pubmed BIBREF6. Same year, Wang and Koopman used the PubMed database to compare D2V and their own document embedding method BIBREF7. 
Their accuracy measurement task consisted of retrieving documents having a small cosine distance to the embedding of a query. Recently, Chen et al. released BioSentVec, a set of sentence vectors created from PubMed with the sent2vec algorithm BIBREF8, BIBREF9. However, their evaluation task was based on public sentence similarity datasets, whereas the goal here is to embed entire abstracts as vectors and to use them to search for similar articles, in comparison with the pmra model. In 2008, the related articles feature of PubMed was compared (using a manual evaluation) with one that uses both a TF-IDF BIBREF10 representation of the documents and Lin's distance BIBREF11 to compare their MeSH terms BIBREF12. Thus, no study has been designed so far to compare document embedding and the pmra algorithm. The objectives of this study were to measure the ability of these two models to infer the similarity between documents from PubMed and to investigate what impacts this proximity the most. To do so, different evaluation tasks were defined to cover a wide range of aspects of document analogy, from their context to their morphological similarities." ], [ "In this study, the optimisation of the model's parameters and one of the evaluation tasks require MeSH terms associated with the abstracts from PubMed. Briefly, MeSH is a medical terminology used to index documents on PubMed in order to perform keyword-based queries. The MEDOC program was used to create a MySQL database filled with 26,345,267 articles from the PubMed bulk downloads on October 5th, 2018 BIBREF13. Then, 16,048,372 articles having both an abstract and at least one associated MeSH term were selected for this study. For each of them, the PMID, title, abstract and MeSH terms were extracted. The titles and abstracts were lowercased, tokenized and concatenated to compose the PubMed documents corpus." ], [ "Among all available parameters to tune the D2V algorithm released by Gensim, six were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training architecture used (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during training. Finally, the vector_size parameter sets the number of dimensions of the resulting vector.", "A list of possible values was defined for each of these six parameters. All possible combinations of these parameters were sent to slave nodes on a cluster, each node training a D2V model with a unique combination of parameters on 85% of 100,000 documents randomly selected from the corpus. Every article from the remaining 15% was then sent to each trained model and queried for the top-ten closest articles. For each model, a final accuracy score, defined as the average percentage of common MeSH terms between each document $D_{i}$ from the 15,000 extracted texts and its top-ten closest documents, was calculated. The combination of parameters with the highest score was kept for both PV-DBOW and PV-DM." ], [ "The final models were trained on a server powered by four XEON E7 CPUs (144 threads) and 1 TB of RAM. From the total corpus (16,048,372 documents), 1% (160,482) was extracted as a test set (named TeS) and was discarded from the training.
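For illustration, the following Gensim sketch trains a PV-DBOW model of the kind described above on a toy corpus and queries it for the closest documents; the parameter values and the toy data are placeholders, not the tuned configuration reported in Table TABREF16.

```python
# Toy Gensim sketch of PV-DBOW training and querying.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

pmid_to_tokens = {   # stand-in for the lowercased, tokenized title + abstract
    "10000001": "randomized trial of aspirin in cardiovascular prevention".split(),
    "10000002": "document embeddings for biomedical information retrieval".split(),
}
corpus = [TaggedDocument(words=toks, tags=[pmid])
          for pmid, toks in pmid_to_tokens.items()]

model = Doc2Vec(dm=0,             # 0 = PV-DBOW, 1 = PV-DM
                vector_size=256, window=5, alpha=0.025, sample=1e-5,
                hs=0, negative=5, min_count=1, workers=4, epochs=20)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# Infer a vector for an unseen abstract and retrieve its closest training documents
vec = model.infer_vector("aspirin reduces cardiovascular events".split())
print(model.dv.most_similar([vec], topn=2))   # list of (PMID, cosine similarity)
```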
The final models were trained on 15,887,890 documents representing the training set called TrS." ], [ "The goal here being to assess whether D2V could effectively replace the related-document function on PubMed, five different document similarity evaluations were designed, as shown in figure FIGREF9. These tasks were designed to cover all levels of similarity, from the most general (the context) to the character level.", "Indeed, a reliable algorithm to find related documents should be able to bring closer texts sharing a similar context, some important ideas (word stems) or an amount of non-stemmed vocabulary (e.g. verb tenses are taken into account), and should not be based on raw character similarity (two documents sharing the same proportion of the letter “A” or having a similar length should not be brought together if they do not exhibit higher-level similarity)." ], [ "To assess whether a similar length could lead to the convergence of two documents, the size of the query document $D_{x}$ has been compared with the size of the top-close document $C_{x}$ for 10,000 documents randomly selected from the TeS, after some pre-processing steps (stopwords and spaces were removed from both documents)." ], [ "A word co-occurrence matrix was constructed on the total corpus from PubMed. Briefly, each document was lowercased and tokenized, and the matrix was filled with the number of times two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$. All possible combinations between all words $WD_{x} \in D_{x}$ and all words $WC_{x} \in C_{x}$ (excluding stopwords) were extracted, 500 couples were randomly selected, and the number of times each of them co-occurs was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between D and C regarding their word content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm." ], [ "The evaluation task explained above was also applied to 10,000 stemmed texts (using Gensim's PorterStemmer to keep only the words' roots). In this way, the influence of conjugation forms or other suffixes can be assessed." ], [ "It is possible to compare the ability of both pmra and D2V to bring closer articles that were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both the pmra and the D2V architectures, and the top-five closest articles $C_{x}$ were extracted. The following rules were then applied to each MeSH term associated with $D_{x}$ for each document $C_{x_i}$: add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH term is defined as a major topic, and add 1 for each qualifier in common between $D_{x}$ and $C_{x_i}$ regarding this particular MeSH term. Then, the mean of these five scores was calculated for both pmra and D2V." ], [ "Among all documents contained in the TeS, 10 articles $D_{x}$ were randomly selected. All of them were sent to the pmra and to the most accurate of the two D2V architectures according to the automatic evaluations explained above. Each model was then queried for the ten closest articles for each $D_{x_i} \in D_{x}$, and the relevance between $D_{x_i}$ and each of the top-ten documents was blindly assessed with a three-modality scale used in other standard Information Retrieval test sets: bad (0), partial (1) or full relevance (2) BIBREF15.
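A minimal sketch of the MeSH score defined above is given below. The representation of a document's indexing (descriptor mapped to a major-topic flag and a set of qualifiers) and the interpretation of the +3 bonus (applied here when the shared descriptor is a major topic of the query document) are assumptions.

```python
# Sketch of the MeSH similarity score between a query D_x and a returned C_xi.
def mesh_score(doc_d, doc_c):
    """doc_d, doc_c: dict mapping MeSH descriptor -> (is_major_topic, set of qualifiers)."""
    score = 0
    for mesh, (major_d, quals_d) in doc_d.items():
        if mesh not in doc_c:
            continue
        _major_c, quals_c = doc_c[mesh]
        score += 1                          # descriptor shared by D_x and C_xi
        if major_d:
            score += 3                      # shared descriptor is a major topic
        score += len(quals_d & quals_c)     # one point per common qualifier
    return score

d = {"Neoplasms": (True, {"genetics", "therapy"}), "Humans": (False, set())}
c = {"Neoplasms": (False, {"therapy"}), "Mice": (False, set())}
print(mesh_score(d, c))   # 1 (shared) + 3 (major in D) + 1 (qualifier) = 5
```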
In addition, evaluators have been asked to rank publications according their relevant proximity with the query, the first being the closest from their perspective. Two medical doctors and two medical data librarians took part in this evaluation." ], [ "Regarding the optimisation, 1,920 different models were trained and evaluated. First, the dm parameter highly affects the accuracy. Indeed, the PV-DBOW architecture looks more precise with a highest accuracy of 25.78%, while the PV-DM reached only 18.08% of common MeSH terms in average between query and top-close documents. Then, embedding vectors having large number of dimensions ($> 256$) seem to lead to a better accuracy, for PV-DBOW at least. Finally, when set too low ($< 0.01$), the alpha parameter leads to poor accuracy. The best combination of parameters, obtained thanks to the PV-DBOW architecture, was selected. The best parameters regarding the PV-DM, but having the same vector_size value, were also kept (13.30% of accuracy). The concatenation of models is thus possible without dimensions reduction, this method being promoted by Mikolov and Lee BIBREF3. Selected values are listed on the table TABREF16." ], [ "By looking at the length difference in term of characters between documents brought closer by D2V, a difference is visible between the two architectures (Figure FIGREF19C). In fact, while a very low correlation is visible under the PV-DM architecture (coefficient $-2.6e10^{-5}$) and under the pmra model ($-5.4e10^{-5}$), a stronger negative one is observed between the cosine distance computed by the PV-DBOW for two documents and their difference in terms of length (coefficient $-1.1e10^{-4}$). This correlation suggests that two documents having a similar size are more likely to be closer in the vectorial space created by the PV-DBOW (cosine distance closer to 1)." ], [ "Once scores from pmra have been normalized, the correlation between words co-occurrences and scores returned by both D2V and pmra were studied (Figure FIGREF19B). The very low slopes of the D2V trend lines ($-1.1e10^{-5}$ for the PV-DBOW and $-3e10^{-6}$ for PV-DM) indicate that the vocabulary content does not influence (positively or negatively) the proximity between two documents for this algorithm. By looking at the green dots or line, the pmra seems to give less importance to the co-occurrence of terms. A low slope is observed ($-5.8e10^{-5}$), indicating a slight negative correlation between word co-occurrence and computed score." ], [ "This test assigns a score reflecting the proximity between two documents regarding their vocabulary content, the impact of the conjugation, plural forms, etc was lowered by a stemming step. The D2V model returns a cosine score S for a pair of documents ($0 < S < 1$, the top-close document is not likely to have a negative cosine value), while the pmra returns a score between 18M and 75M in our case BIBREF0. These scores were normalized to fit between the same limits than the cosine distance. For PV-DBOW, PV-DM and pmra, the influence of the stems is almost insignificant with very flat slopes looking at the trend lines ($1e10^{-6}$, $-2e10^{-6}$ and $-2e10^{-6}$ respectively, see figure FIGREF19A). This indicates that the stem content of two documents will not affect (negatively or positively) their proximity for these models." ], [ "By studying the common MeSH labels between two close documents, it is possible to assess whether the context influence or not this proximity. 
By looking at the figure FIGREF23A, we can see that PV-DBOW and pmra are very close in term of MeSH score, indicating that they bring closer documents sharing a similar number of common MeSH labels in average. The pmra model seems to be more likely to output documents sharing a higher MeSH score (the distribution tail going further 4 with a mean equal to 1.58, standard deviation: 1.06), while the PV-DM brings closer documents that are less likely to share an important number of MeSH terms, with a majority of score between 0 and 1 (mean equal to 1.16, standard deviation: 0.73). The figure FIGREF23B shows the correlation between the MeSH score for documents returned by the pmra and those returned by both PV-DM and PV-DBOW models. The PV-DBOW algorithm looks way closer to the pmra in terms of common MeSH labels between two close documents with a slope of 1.0064. The PV-DM model is much less correlated, with a slope of 0.1633, indicating less MeSH in common for close articles." ], [ "Regarding the results obtained by both PV-DBOW and PV-DM sub-architectures, the PV-DBOW model has been used versus the pmra. Its close score in the MeSH evaluation task compared to the pmra's one indicates an ability to bring closer documents sharing same concepts. Thus, 10 randomly chosen documents were sent to the pmra and to the PV-DBOW models and they were asked to output the 10 closest documents for each. Their relevance was then assessed by four evaluators.", "The agreement between all evaluators regarding the three-modalities scale was assessed by computing the Cohen's kappa score $K$ thanks to the SKlearn Python's library (Figure FIGREF25) BIBREF16. First, we can notice that the highest $K$ was obtained by the two medical data librarian (EL and GK) with $K=0.61$, indicating a substantial agreement BIBREF17. In contrary, the lowest $K$ was computed using evaluations from the two Medical Doctors (SJD and JPL) with $K=0.49$, indicating barely a moderate agreement. The average agreement is represented by $K=0.55$, indicating a moderate global agreement.", "Regarding the ranking of all results (the first being the most accurate compared to the query, the last the worst one), the agreement can also be seen as moderate. The concordance rate has been defined between two evaluators for a given pair of results $A/B$ as the probability for A to be better ranked than B for both judges. For each couple of evaluators the mean agreement was computed by averaging ten pairs $result/query$ randomly selected. In order to evaluate the 95% bilateral confidence interval associated with the average concordance rate of each pair of judges the Student confidence interval estimation method has been used. Deviation from normal has been reduced by hyperbolic arc-tangent transformation. The global mean concordance by pooling all judges together was 0.751 (sd = 0.08). The minimal concordance was equal to 0.73 and the maximal one to 0.88.", "Regarding the evaluation itself, based on the three-modality scale (bad, partial or full relevance), models are clearly not equivalents (Figure FIGREF26). The D2V model has been rated 80 times as \"bad relevance\" while the pmra returned only 24 times badly relevant documents. By looking at the results ranking, the mean position for D2V was 14.09 (ranging from 13.98 for JPL to 14.20 for EL). Regarding the pmra, this average position was equal to 6.89 (ranging from 6.47 for EL to 7.23 for SJD)." 
], [ "In this study, the ability of D2V to infer similarity between biomedical abstracts has been compared versus the pmra, the algorithm actually used in Pubmed.", "Regarding the strings length task, even if trending lines slopes are very close to zero, a slight negative correlation is observed between the difference in terms of character and scores calculated by PV-DBOW and pmra. This result can be relativized. Indeed, it was expected that two different abstracts regarding their number of characters are more likely to be different in term of context. The longest text can treat more subjects with different words (explaining D2V’s results) or to be associated with more MeSH labels (clarifying pmra ones’).", "Words or stems content analysis does not showed any particular correlation between common words/stems and scores computed by both D2V models or pmra. Inverse results could have been expected, regarding the way pmra is linking documents (using common terms between documents). The score brought to the pmra model by the MeSH terms should be quite important for the final scoring formula. However, among all possible couples of words between two documents, only 500 were randomly selected, due to computational limits. Random sampling effect could have led to these results.", "D2V takes in account many language features such as bi- or trigrams, synonyms, other related meanings and stopwords. No prior knowledge of analysis on the documents are needed. The pmra is based (in addition to words) on the manual MeSH indexing of the document, even if this aspect was not discussed in the Lin and Wilbur’s publication. This indexing step is highly time-consuming and employs more than 50 people to assign labels on documents from PubMed. The result displayed on the figure FIGREF23 could have been expected for the pmra algorithm, this model using the MeSH terms on the statistical formula used to link documents as well as elite or elitness terms. It was thus expected that two documents sharing a lot of indexing labels would have been seen close by the pmra. However, these MeSH descriptors were only used to select the appropriate parameters used to train the D2V models. The fact that D2V still manages, with the PV-DBOW architecture, to find documents that are close to each other regarding the MeSH indexing demonstrates its ability to capture an article’s subject solely with its abstract and title.", "Regarding the manual evaluation, D2V PV-DBOW model has been very largely underrated compared to the pmra model. Its results have been seen as not accurate more than three times compared to the Pubmed's model. Regarding the ranking of the results, the average position of the pmra is centred around 7, while D2V's one is around 14. However, the real signification of these results can be relativised. Indeed, the agreement between the four annotators is only moderate and no general consensus can be extracted.", "This study also has some limitations. First, the MeSH indexing of documents on PubMed can occur on full-text data, while both optimisation of the hyper-parameters and an evaluation task are based on abstracts' indexing. However, this bias should have a limited impact on the results. The indexing being based on the main topics from the documents, these subjects should also be cited in the abstract. About this manual indexing, a bias is brought by the indexers. 
It is well-known in the information retrieval community that intra- and inter-indexer biases exist.", "As the parameter optimisation step relied only on MeSH terms, it assumed that a model trained on articles’ abstracts can be optimised with MeSH terms which are selected according to the full text of the articles. In other words, this optimisation assumed that an abstract is enough to semantically represent the whole text. But this is not completely true; if it were, MeSH terms would not have been selected from full texts in the first place. Also, the principle that a PubMed related-articles feature has to return articles which have a lot of MeSH terms in common has been followed throughout this work.", "To go further, as mentioned in the paper presenting D2V, the concatenation of vectors from both PV-DM and PV-DBOW for a single document could lead to better accuracy. A third model could be designed by merging the two presented here. Another point of debate in the text embedding community concerns the part-of-speech tagging of the text before sending it to the model (during both training and use). This supplementary information could lead to a better understanding of the text, particularly due to the disambiguation of homonyms." ], [ "This study showed that Doc2Vec PV-DBOW, an unsupervised text embedding technique, can infer similarity between biomedical articles' abstracts. It requires no prior knowledge of the documents, such as text indexing, and is not impacted by raw word content or document structure. This algorithm was able to link documents sharing MeSH labels in a similar way to the pmra. A manual evaluation returned very low scores for the D2V PV-DBOW model, but only with a moderate agreement between evaluators. More investigation should be carried out to understand this difference between the evaluation based on the MeSH indexing (performed by humans) and the manual evaluation." ] ] }
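A minimal, hypothetical Python sketch of the PV-DBOW/PV-DM merge suggested in the discussion above: two Gensim Doc2Vec models sharing the same vector_size are trained on tokenized abstracts and their inferred vectors are concatenated. The toy corpus, hyper-parameter values and helper name are assumptions for illustration, not the authors' released code.

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus standing in for lowercased, tokenized PubMed titles + abstracts.
documents = [("PMID1", "congenital heart disease in adults".split()),
             ("PMID2", "treatment options for chronic heart failure".split())]
corpus = [TaggedDocument(words=toks, tags=[pmid]) for pmid, toks in documents]

shared = dict(vector_size=512, window=5, alpha=0.025, sample=1e-4,
              hs=0, min_count=1, workers=4, epochs=10)   # illustrative values

pv_dbow = Doc2Vec(dm=0, **shared)   # PV-DBOW architecture
pv_dm = Doc2Vec(dm=1, **shared)     # PV-DM architecture

for model in (pv_dbow, pv_dm):
    model.build_vocab(corpus)
    model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

def concatenated_vector(tokens):
    """Merge the two architectures by concatenating their inferred vectors,
    possible without dimension reduction since vector_size matches."""
    return np.concatenate([pv_dbow.infer_vector(tokens),
                           pv_dm.infer_vector(tokens)])

print(concatenated_vector("chronic heart disease".split()).shape)  # (1024,)
```

Such merged vectors could then be scored with the same MeSH-based procedure used above for hyper-parameter selection.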
{ "question": [ "How long it took for each Doc2Vec model to be trained?", "How better are results for pmra algorithm than Doc2Vec in human evaluation? ", "What Doc2Vec architectures other than PV-DBOW have been tried?", "What four evaluation tasks are defined to determine what influences proximity?", "What six parameters were optimized with grid search?" ], "question_id": [ "9b2b063e8a9938da195c9c0d6caa3e37a4a615a8", "ac3c88ace59bf75788370062db139f60499c2056", "26012f57cba21ba44b9a9f7ed8b1ed9e8ee7625d", "bd26a6d5d8b68d62e1b6eaf974796f3c34a839c4", "7d4fad6367f28c67ad22487094489680c45f5062" ], "nlp_background": [ "zero", "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "4286fa0b2bd14e834aa47849a0ecca3ae8f31fa0" ], "worker_id": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "The D2V model has been rated 80 times as \"bad relevance\" while the pmra returned only 24 times badly relevant documents." ], "yes_no": null, "free_form_answer": "", "evidence": [ "Regarding the evaluation itself, based on the three-modality scale (bad, partial or full relevance), models are clearly not equivalents (Figure FIGREF26). The D2V model has been rated 80 times as \"bad relevance\" while the pmra returned only 24 times badly relevant documents. By looking at the results ranking, the mean position for D2V was 14.09 (ranging from 13.98 for JPL to 14.20 for EL). Regarding the pmra, this average position was equal to 6.89 (ranging from 6.47 for EL to 7.23 for SJD)." ], "highlighted_evidence": [ "The D2V model has been rated 80 times as \"bad relevance\" while the pmra returned only 24 times badly relevant documents. " ] } ], "annotation_id": [ "24b1bfee43c130b0b369c61f60dec601becef8d4" ], "worker_id": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "PV-DM" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during the training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector." ], "highlighted_evidence": [ "Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. 
The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW). " ] } ], "annotation_id": [ "e87dd706d1ee6b47a4cb76fdd72e19966abd3c9b" ], "worker_id": [ "5e3e382b7704b26b88492038ec503e65307c11d5" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "String length", "Words co-occurrences", "Stems co-occurrences", "MeSH similarity" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The goal here being to assess if D2V could effectively replace the related-document function on PubMed, five different document similarity evaluations were designed as seen on figure FIGREF9. These tasks were designed to cover every similarities, from the most general (the context) to the character-level similarity.", "Methods ::: Evaluation ::: String length", "To assess whether a similar length could lead to convergence of two documents, the size of the query document $D_{x}$ has been compared with the top-close document $C_{x}$ for 10,000 document randomly selected from the TeS after some pre-processing steps (stopwords and spaces were removed from both documents).", "Methods ::: Evaluation ::: Words co-occurrences", "A matrix of words co-occurrence was constructed on the total corpus from PubMed. Briefly, each document was lowered and tokenized. A matrix was filled with the number of times that two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$. All possible combinations between all words $WD_{x} \\in D_{x}$ and all words $WC_{x} \\in C_{x}$ (excluding stopwords) were extracted, 500 couples were randomly selected and the number of times each of them was co-occurring was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between D and C regarding their words content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm.", "Methods ::: Evaluation ::: Stems co-occurrences", "The evaluation task explained above was also applied on 10,000 stemmed texts (using the Gensim’s PorterStemmer to only keep word’s roots). The influence of the conjugation form or other suffixes can be assessed.", "Methods ::: Evaluation ::: MeSH similarity", "It is possible to compare the ability of both pmra and D2V to bring closer articles which were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both pmra and D2V architectures, and the top-five closer articles $C_{x}$ were extracted. The following rules were then applied to each MeSH found associated with $D_{x}$ for each document $C_{x_i}$ : add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH is defined as major topic and add 1 for each qualifier in common between $D_{x}$ and Cxi regarding this particular MeSH term. Then, the mean of these five scores was calculated for both pmra and D2V." ], "highlighted_evidence": [ "The goal here being to assess if D2V could effectively replace the related-document function on PubMed, five different document similarity evaluations were designed as seen on figure FIGREF9. 
", "Methods ::: Evaluation ::: String length\nTo assess whether a similar length could lead to convergence of two documents, the size of the query document $D_{x}$ has been compared with the top-close document $C_{x}$ for 10,000 document randomly selected from the TeS after some pre-processing steps (stopwords and spaces were removed from both documents).", "Methods ::: Evaluation ::: Words co-occurrences\nA matrix of words co-occurrence was constructed on the total corpus from PubMed. Briefly, each document was lowered and tokenized. A matrix was filled with the number of times that two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$. All possible combinations between all words $WD_{x} \\in D_{x}$ and all words $WC_{x} \\in C_{x}$ (excluding stopwords) were extracted, 500 couples were randomly selected and the number of times each of them was co-occurring was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between D and C regarding their words content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm.", "Methods ::: Evaluation ::: Stems co-occurrences\nThe evaluation task explained above was also applied on 10,000 stemmed texts (using the Gensim’s PorterStemmer to only keep word’s roots). The influence of the conjugation form or other suffixes can be assessed.", "Methods ::: Evaluation ::: MeSH similarity\nIt is possible to compare the ability of both pmra and D2V to bring closer articles which were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both pmra and D2V architectures, and the top-five closer articles $C_{x}$ were extracted. The following rules were then applied to each MeSH found associated with $D_{x}$ for each document $C_{x_i}$ : add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH is defined as major topic and add 1 for each qualifier in common between $D_{x}$ and Cxi regarding this particular MeSH term. Then, the mean of these five scores was calculated for both pmra and D2V." ] } ], "annotation_id": [ "4d0f42ae6364a4c54db290264fbd00f1a46e675f" ], "worker_id": [ "5e3e382b7704b26b88492038ec503e65307c11d5" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "window_size", "alpha", "sample", "dm", "hs", "vector_size" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during the training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector." ], "highlighted_evidence": [ "Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. 
The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during the training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector." ] } ], "annotation_id": [ "1c34a50ccdc0c2e7a575a848b76edbd6aa59284f" ], "worker_id": [ "5e3e382b7704b26b88492038ec503e65307c11d5" ] } ] }
{ "caption": [ "Figure 1. Ranking of the five designed documents similarity evaluation tasks.", "Figure 2. Analysis of stems, words and length differences between texts broughts closer by D2V and pmra. Correlation plot between the stems co-occurrence score (A), words co-occurrence score (B), length difference (C) and scores returned by two D2V architectures (PV-DBOW, blue and PV-DM, orange) or the pmra model (green, normalized values). Outliers with z-score ¿ 3 were discarded from the plot.", "Figure 3. Study of both pmra and D2V models regarding their ability to bring closer documents sharing many MeSH labels. A (upper panel): frequency of the different MeSH scores for the pmra, PV-DM and PV-DBOW models. PV-DBOW and pmra are centred on the same value and have a similar distribution, indicating a common ability to link documents regarding their topic. However, the PV-DM algorithm looks less efficient. B (lower panel): correlation between MeSH scores calculated from the pmra and those from D2V. The slopes of the trend lines support the precedent result with a slope close to 1 for PV-DBOW while the PV-DM only reach 0.1, indicating a weaker correlation. Outliers with z-score ¿ 3 were discarded from the plot.", "Figure 4. Global agreement between four evaluators rating the accuracy of the D2V and pmra models. Colour scale indicates the strength of the agreement between two annotators. It ranges from 0.49 between the two medical doctors SJD and JPL to 0.61 between the two medical data librarian EL and GK.", "Figure 5. Pulled rating of both models D2V and pmra. The height indicates the number of times each model has been rated as bad, moderate or strong accuracy result by the evaluators. D2V has been mostly rated as badly relevant (80 times) while the pmra was mostly rated as good relevance." ], "file": [ "4-Figure1-1.png", "6-Figure2-1.png", "8-Figure3-1.png", "9-Figure4-1.png", "10-Figure5-1.png" ] }
1901.02257
Multi-Perspective Fusion Network for Commonsense Reading Comprehension
Commonsense Reading Comprehension (CRC) is a significantly challenging task, aiming at choosing the right answer for a question referring to a narrative passage, which may require commonsense knowledge inference. Most of the existing approaches only fuse the interaction information of choice, passage, and question in a simple combination manner from a \emph{union} perspective, which lacks comparison information at a deeper level. Instead, we propose a Multi-Perspective Fusion Network (MPFN), extending the single fusion method with multiple perspectives by introducing the \emph{difference} and \emph{similarity} fusion. More comprehensive and accurate information can be captured through the three types of fusion. We design several groups of experiments on the MCScript dataset \cite{Ostermann:LREC18:MCScript} to evaluate the effectiveness of the three types of fusion respectively. From the experimental results, we can conclude that the difference fusion is comparable with the union fusion, and that the similarity fusion needs to be activated by the union fusion. The experimental results also show that our MPFN model achieves the state-of-the-art with an accuracy of 83.52\% on the official test set.
{ "section_name": [ "paragraph 1", "paragraph 2", "Related Work", "Model", "Encoding Layer", "Context Fusion Layer", "Output Layer", "Experimental Settings", "Experimental Results", "Discussion of Multi-Perspective", "Encoding Inputs Ablation", "Influence of Word-level Interaction", "Visualization", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "Content: Task Definition", "1. Describe the task of commonsense reading comprehension(CRC) belongs to which filed and how important it is.", "2. Define the task of CRC", "3. Data feature of CRC", "4. Figure 1 shows an example.", "Machine Reading Comprehension (MRC) is an extremely challenging topic in natural language processing field. It requires a system to answer the question referring to a given passage.no matter whether the answer is mentioned in the passage. MRC consists of several sub-tasks, such as cloze-style reading comprehension, span-extraction reading comprehension, and open-domain reading comprehension. Most of existing datasets emphasize the question whose answer is mentioned in the passage since it does not need any commonsense. In real reading comprehension, the human reader can fully understand the passage with the prior knowledge to answer the question. To directly relate commonsense knowledge to reading comprehension, SemEval2018 Task 11 defines a new sub-task called Commonsense Reading Comprehension, aiming at answering the questions that requires both commonsense knowledge and the understanding of the passage. The challenge of this task is how tolies in answer questions requires a system to draw inferences from multiple sentences from the passage and requireswith the commonsense knowledge that does not appear in the passage explicitly. Table 1 shows an example of CRC." ], [ "Content: Previous Research", "1. Category the methods in SemEval2018 task 11", "2. Describe the first method", "3. Describe the second method", "4. State that your work is belong to which method", "Most studies on CRC task are neural network based (NN-based) models, which typically have the following characteristics. Firstly, word representations are augmented by additional lexical information. , such as pre-trained embedding, POS and NER embedding, Relation embedding and some other handcraft features. Secondly, the interaction process is usually implemented by the attention mechanism, which can provide the interaction representations like choice-aware passage, choice-aware question, and question-aware passage. Thirdly, the original representations and interaction representations are fused together and then aggregated by a Bidirectional Long Short-Term Memory Network (BiLSTM) BIBREF1 to get high-order semantic information. Fourthly, the final output based on their bilinear interactions. is the sum scores of passage to choice and question to choice.", "The NN-based models have shown powerfulness on this task. However, there are still some limitations. Firstly, the two fusion processes of passage and question to choice are implemented separately, until producing the final output. Secondly, the existing fusion method used in reading comprehension task is usually implemented by concatenation BIBREF2 , BIBREF3 , which is monotonous and cannot capture the partial comparison information between two parts. 
Studies on Natural Language Inference (NLI) have explored more functions BIBREF4 , BIBREF5 , such as element-wise subtraction and element-wise multiplication, to capture more comparison information, which have been proved to be effective.", "In this paper, we introduce a Multi-Perspective Fusion Network (MPFN) to tackle these limitations. The model can fuse the choice with the passage and question simultaneously to get a multi-perspective fusion representation. Furthermore, inspired by the element-wise subtraction and element-wise multiplication functions used in BIBREF5 , we define three kinds of fusion functions from multiple perspectives to fuse the choice, choice-aware passage, and choice-aware question. The three fusions are union fusion, difference fusion, and similarity fusion. Note that we refer to the concatenation fusion method as union fusion in this paper, which collects the global information. The difference fusion and the similarity fusion can discover the different parts and similar parts among the choice, choice-aware passage, and choice-aware question respectively.", "MPFN comprises an encoding layer, a context fusion layer, and an output layer. In the encoding layer, we employ a BiLSTM as the encoder to convert the embeddings of passage, question, and choice into their corresponding context representations. To acquire better semantic representations, we apply union fusion at the word level to the choice, the choice-aware passage embedding, and the choice-aware question embedding. In the context fusion layer, we apply union fusion, difference fusion, and similarity fusion to obtain a multi-perspective fusion representation. In the output layer, a self-attention layer and a feed-forward neural network are used to make the final prediction.", "We conduct experiments on the MCScript dataset released by BIBREF0 . Our single and ensemble models achieve accuracies of 83.52% and 84.84% on the official test set respectively. Our main contributions are as follows:", "We propose a general fusion framework with two-layer fusion, which can fuse the passage, question, and choice simultaneously.", "To collect multi-perspective fusion representations, we define three types of fusion, consisting of union fusion, difference fusion, and similarity fusion.", "We extend the fusion method to multiple perspectives to obtain a deeper understanding of the passage, question, and choice.", "We design several groups of experiments to evaluate the effectiveness of the three types of fusion and show that our MPFN model outperforms all the other models, with an accuracy of 83.52%." ], [ "MRC has gained significant popularity over the past few years. Several datasets have been constructed for testing the comprehension ability of a system, such as MCTest BIBREF6 , SQuAD BIBREF7 , BAbI BIBREF8 , TriviaQA BIBREF9 , RACE BIBREF10 , and NewsQA BIBREF11 . The types of passage, question and answer in these datasets vary. Each dataset focuses on one specific aspect of reading comprehension. Particularly, the MCScript BIBREF0 dataset concerns answering questions which require using commonsense knowledge.", "The passages of these datasets come from various sources, including Wikipedia articles, examinations, narrative stories, and news articles. Meanwhile, the question types and answer types also vary, with answer types including multiple choice, span answers, and exact match.", "Many architectures on MRC follow the process of representation, attention, fusion, and aggregation BIBREF12 , BIBREF2 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 .
BiDAF BIBREF12 fuses the passage-aware question, the question-aware passage, and the original passage in the context layer by concatenation, and then uses a BiLSTM for aggregation. The fusion levels in current advanced models are categorized into three types by BIBREF14 , including word-level fusion, high-level fusion, and self-boosted fusion. They further propose a FusionNet to fuse the attention information from bottom to top to obtain a fully-aware representation for answer span prediction.", " BIBREF16 present a DFN model to fuse the passage, question, and choice by dynamically determining the attention strategy.", "On SemEval2018 Task 11, most of the models use the attention mechanism to build interactions among the passage, the question, and the choice BIBREF17 , BIBREF3 , BIBREF18 , BIBREF19 . The most competitive models are BIBREF17 , BIBREF3 , and both of them employ concatenation fusion to integrate the information. BIBREF17 utilizes the choice-aware passage and choice-aware question to fuse the choice at the word level. In addition, they apply the question-aware passage to fuse the passage at the context level. Different from BIBREF17 , both the choice-aware passage and choice-aware question are fused into the choice at the context level in BIBREF3 , which holds the current state-of-the-art result on the MCScript dataset.", "On the NLI task, fusing the premise-aware hypothesis into the hypothesis is an effective and commonly-used method. BIBREF20 , BIBREF21 leverage the concatenation of the hypothesis and the hypothesis-aware premise to help improve the performance of their model. The element-wise subtraction and element-wise multiplication between the hypothesis and the hypothesis-aware premise are employed in BIBREF5 to enhance the concatenation, which further achieved state-of-the-art results on the Stanford Natural Language Inference BIBREF22 benchmark.", "Almost all the models on CRC only use the union fusion. In our MPFN model, we design another two fusion methods to extend the perspective of fusion. We evaluate the MPFN model on the CRC task and achieve the state-of-the-art result." ], [ "The overview of our Multi-Perspective Fusion Network (MPFN) is shown in Fig. 1 . Given a narrative passage about a series of daily activities and several corresponding questions, a system is required to select the correct choice from two options for each question. In this paper, we denote $\\bf {p=\\lbrace p_1,p_2,...,p_{|p|}\\rbrace }$ as the passage, $\\bf {q=\\lbrace q_1,q_2,...,q_{|q|}\\rbrace }$ as a question, $\\bf {c=\\lbrace c_1,c_2,...,c_{|c|}\\rbrace }$ as one of the candidate choices, and a true label $y^{*} \\in \\lbrace 0,1\\rbrace $ . Our model aims to compute a probability for each choice and take the one with the higher probability as the prediction label. Our model consists of three layers: an encoding layer, a context fusion layer, and an output layer. The details of each layer are described in the following subsections." ], [ "This layer aims to encode the passage embedding $p$ , the question embedding $q$ , and the choice embedding $c$ into context embeddings. Specifically, we use a one-layer BiLSTM as the context encoder. ", "$$&\\bar{c}_i = \\text{BiLSTM}(c, i) , & i \\in [1,2, \\cdots ,|c|] \\\\\n&\\bar{p}_j = \\text{BiLSTM}(p, j) , & j \\in [1,2, \\cdots ,|p|] \\\\\n&\\bar{q}_k = \\text{BiLSTM}(q, k) , & k \\in [1,2, \\cdots ,|q|] $$ (Eq. 18) ", "The embeddings of $p$ , $q$ and $c$ are semantically rich word representations consisting of several kinds of embeddings.
Specifically, the embeddings of the passage and question are the concatenation of the GloVe word embedding, POS embedding, NER embedding, Relation embedding and Term Frequency feature. The embeddings of the choice comprise the GloVe word embedding, the choice-aware passage embedding $c^p$ , and the choice-aware question embedding $c^q$ . The details about each embedding are as follows:", "GloVe word embedding We use the 300-dimensional GloVe word embeddings trained from 840B Web crawl data BIBREF23 . The out-of-vocabulary words are initialized randomly. The embedding matrix is fixed during training.", "POS&NER embedding We leverage the Part-of-Speech (POS) embeddings and Named-Entity Recognition (NER) embeddings. The two embeddings $c_i^{pos} \\text{and} c_i^{ner}$ are randomly initialized to 12d and 8d respectively, and updated during training.", "Relation embedding Relations are extracted from ConceptNet. For each word in the choice, if it satisfies any relation with another word in the passage or the question, the corresponding relation will be taken out. If there are multiple relations between two words, we just randomly choose one. The relation embeddings $c_i^{rel}$ are generated in a similar way to the POS embeddings, randomly initialized and updated during training as well.", "Term Frequency Following BIBREF17 , we introduce the term frequency feature to enrich the embedding of each word. The calculation is based on English Wikipedia.", "Choice-aware passage embedding The information in the passage that is relevant to the choice can help encode the choice BIBREF24 . To acquire the choice-aware passage embedding $c_i^p$ , we utilize the dot product between non-linear mappings of word embeddings to compute the attention scores for the passage BIBREF25 . ", "$$& c_i^p = Attn(c_i,\\lbrace p_j\\rbrace _1^{|p|}) = \\sum _{j=1}^{|p|} {\\alpha }_{ij} p_j \\\\\n& {\\alpha }_{ij} \\propto exp(S(c_i, p_j)), \\quad S(c_i, p_j) = {ReLU(W{c_i})}^{T} ReLU(W {p_j})$$ (Eq. 19) ", "Choice-aware question embedding The choice-relevant question information is also important for the choice. Therefore, we adopt a similar attention method as above to get the choice-aware question embedding $c_i^q=Attn(c_i, \\lbrace q_k\\rbrace _{1}^{|q|})$ .", "The embeddings delivered to the BiLSTM are the concatenation of the above components, where $p_j = [p_j^{glove}, p_j^{pos},p_j^{ner},p_j^{rel}, p_j^{tf} ]$ , $c_i = [c_i^{glove}, c_i^{p},c_i^{q}]$ , and $q_k = [q_k^{glove}, q_k^{pos}, q_k^{ner}, q_k^{rel},q_k^{tf} ]$ ." ], [ "This is the core layer of our MPFN model. To take the union, difference, and similarity information of the choice, passage, and question into consideration, three fusion functions are defined in this layer.", "Since we have obtained the choice context $\\bar{c}_i$ , the passage context $\\bar{p}_j$ , and the question context $\\bar{q}_k$ in the encoding layer, we can calculate the choice-aware passage contexts $\\tilde{c}^p_i$ and choice-aware question contexts $\\tilde{c}^q_i$ . Then we deliver them together with the choice contexts $\\bar{c}_i$ to the three fusion functions.", "The three fusion functions fuse $\\bar{c}_i$ , $\\tilde{c}^p_i$ , and $\\tilde{c}^q_i$ simultaneously and from multiple perspectives.
They consider the union information, the difference information, and the similarity information of the choice, passage, and question. To better integrate this information, we feed the three fusion outputs to an FNN for aggregation.", "Choice-aware passage context In this part, we calculate the choice-aware passage representations $\\tilde{c}_i^p= \\sum _{j}{\\beta }_{ij} \\bar{p}_j$ . For model simplicity, we use the dot product between choice contexts and passage contexts here to compute the attention scores ${\\beta }_{ij}$ : ", "$$&{\\beta }_{ij}= \\frac{exp(\\bar{c}_i^T \\bar{p}_j)}{\\sum \\limits _{j^\\prime =1}^{|p|} exp(\\bar{c}_i^T \\bar{p}_{j^\\prime })}$$ (Eq. 21) ", "Choice-aware question context In a similar way as above, we get the choice-aware question context $\\tilde{c}_i^q= \\sum _{k}{\\beta }_{ik} \\bar{q}_k$ . The ${\\beta }_{ik}$ is the dot product of the choice context $\\bar{c}_i$ and the question context $\\bar{q}_k$ .", "Multi-perspective Fusion This is the key module in our MPFN model. The goal of this part is to produce a multi-perspective fusion representation for the choice $\\bar{c}_i$ , the choice-aware passage $\\tilde{c}^p_i$ , and the choice-aware question $\\tilde{c}^q_i$ . In this paper, we define fusion from three perspectives: union, difference, and similarity. Accordingly, we define three fusion functions $f^u$ , $f^d$ , and $f^s$ to describe the three perspectives, based on concatenation, element-wise subtraction, and element-wise multiplication. All of the three fusion functions take the choice context, the choice-aware passage, and the choice-aware question as input, and their outputs are calculated as follows: ", "$$&u_i = [\\bar{c}_i \\, ; \\tilde{c}_i^p \\,; \\tilde{c}^q_i] ,\\\\\n&d_i = ( \\bar{c}_i - \\tilde{c}_i^p)\\odot (\\bar{c_i} - \\tilde{c}_i^q) ,\\\\\n&s_i = \\bar{c}_i \\odot \\tilde{c}_i^p \\odot \\tilde{c}_i^q ,$$ (Eq. 22) ", " where $; \\,$ , $-$ , and $\\odot $ represent concatenation, element-wise subtraction, and element-wise multiplication respectively, and $u_i$ , $d_i$ , and $s_i$ are the representations from the union, difference and similarity perspectives respectively.", "The union perspective is commonly used in a large number of tasks BIBREF21 , BIBREF14 , BIBREF2 . It can see the whole picture of the passage, the question, and the choice by concatenating $\\tilde{c}^p_i$ and $\\tilde{c}^q_i$ together with $c_i$ . The difference perspective captures the different parts between the choice and the passage, and the different parts between the choice and the question, by $\\bar{c_i} - \\tilde{c}_i^p$ and $\\bar{c_i} - \\tilde{c}_i^q$ respectively. The $\\odot $ in the difference perspective can detect the two different parts at the same time and emphasize them. In addition, the similarity perspective is capable of discovering the similar parts among the passage, the question, and the choice.", "To map the three fusion representations to the same lower dimension, we apply three different FNNs with the ReLU activation to $u_i$ , $d_i$ , and $s_i$ . The final output $g_i$ is the concatenation of the results of the three FNNs, which represents a global perspective representation. ", "$$g_i=[f^u(u_i),f^d(d_i),f^s(s_i)] $$ (Eq. 23) " ], [ " The output layer includes a self-attention layer and a prediction layer. Following BIBREF26 , we summarize the global perspective representation $\\lbrace g_i\\rbrace _1^{|c|}$ into a fixed-length vector $r$ .
We compute $r= \\sum _{i=1}^{|c|} b_i g_i$ , where $b_i$ is the self-weighted attention score: ", "$$&b_i = \\frac{exp(W{g}_i)}{\\sum \\limits _{i^\\prime =1}^{|c|} exp(W {g}_{i^\\prime })}$$ (Eq. 25) ", "In the prediction layer, we utilize the output of the self-attention layer, $r$ , to make the final prediction.", "The final output $y$ is obtained by transforming $r$ into a scalar and then applying a sigmoid activation to map it to a probability." ], [ "Data We conduct experiments on MCScript BIBREF0 , which is used as the official dataset of SemEval2018 Task 11. This dataset consists of a collection of text passages about daily life activities and a series of questions referring to each passage, and each question is equipped with two answer choices. The MCScript comprises 9731, 1411, and 2797 questions in the training, development, and test sets respectively. For data preprocessing, we use spaCy for sentence tokenization, Part-of-Speech tagging, and Named Entity Recognition. The relations between two words are generated by ConceptNet. The MCScript is a recently released dataset, which collects 2,119 narrative texts about daily events along with 13,939 questions. In this dataset, 27.4% of the questions require commonsense inference.", "Parameters We use the standard cross-entropy function as the loss function. We choose Adam BIBREF27 with initial momentum values for parameter optimization. As for hyper-parameters, we set the batch size to 32, the learning rate to 0.001, and the dimension of the BiLSTM and the hidden layer of the FNN to 123. The embedding sizes of the GloVe, NER, POS, and Relation embeddings are 300, 8, 12, and 10 respectively. The dropout rates of the word embedding and BiLSTM output are 0.386 and 0.40 respectively." ], [ "Table 2 shows the results of our MPFN model along with the competitive models on the MCScript dataset. The TriAN achieves 81.94% in terms of test accuracy, which is the best result among the single models. The best performing ensemble result is 84.13%, provided by HMA, which is the voting result of 7 single systems.", "Our single MPFN model achieves 83.52% in terms of accuracy, outperforming all the previous models. The model exceeds the HMA and TriAN by approximately 2.58% and 1.58% absolute respectively. Our ensemble model surpasses the current state-of-the-art model with an accuracy of 84.84%. We obtained the final ensemble result by voting on 4 single models. Every single model uses the same architecture but different parameters." ], [ "To study the effectiveness of each perspective, we conduct several experiments on the three single perspectives and their combination perspectives. Table 3 presents their comparison results. The first group of models is based on the three single perspectives, and we can observe that the union perspective performs best compared with the difference and similarity perspectives. Moreover, the union perspective achieves 82.73% in accuracy, exceeding the TriAN by 0.79% absolute. We can also see that the similarity perspective is inferior to the other two perspectives.", "The second group of models in Table 3 is formed from two perspectives. Compared with the single union perspective, combining the difference perspective with the union perspective improves accuracy by 0.11%. Composing the union and similarity fusion together doesn't help the training. To our surprise, the combination of the similarity perspective and the difference perspective obtains an 83.09% accuracy score.", "The last model is our MPFN model, which performs best.
The final result indicates that training with the union, difference, and similarity perspectives together is helpful.", "Many advanced models employ a BiLSTM to further aggregate the fusion results. To investigate whether a BiLSTM can assist the model, we apply another BiLSTM to each of the three fusion representations in Formula 23 and then put them together. The results are shown in the second column of Table 3 , and indicate that the BiLSTM does not help improve the performance of the models." ], [ "In this section, we conduct an ablation study on the encoding inputs to examine the effectiveness of each component. The experiment results are listed in Table 3 . In Section \"Encoding Layer\" , we describe that our encoding inputs comprise six components: the POS embedding, NER embedding, Relation embedding, Term Frequency, choice-aware passage embedding $C^p$ and choice-aware question embedding $C^q$ .", "Starting from the best model, if we remove the POS embedding or the NER embedding, the accuracy drops by 0.82% and 0.9% respectively. Without the Relation embedding, the accuracy drops to 81.98%, revealing that the external relations are helpful to the context fusions. Without Term Frequency, the accuracy drops by approximately 1.61%. This behavior suggests that the Term Frequency feature has a powerful capability to guide the model.", "After removing $C^p$ , we find the performance degrades to 81.62%. This demonstrates that information in the passage is significantly important to the final performance. If we remove $C^q$ from the MPFN, the accuracy drops to 82.16%. If we remove the word-level fusion completely, we obtain an 81.66% accuracy score. These results demonstrate that each component is indispensable and that the bottom embeddings are the basic foundations of the top-layer fusions." ], [ "In this section, we explore the influence of word-level interaction on each perspective. Fig 2 reports the overall results of how each perspective can be affected by the lower-level interaction. $C^p$ and $C^q$ represent the choice-aware passage embedding and the choice-aware question embedding respectively. We can observe that the results of $[C;C^p]$ , $[C;C^q]$ , and $[C;C^p;C^q]$ are all higher than the result of $C$ alone, indicating the effectiveness of word embedding interaction.", "Both the union fusion and difference fusion can achieve more than 80% accuracy, while the similarity fusion is very unstable. We also observe that the difference fusion is comparable with the union fusion, and even works better than the union fusion when the information of $C^p$ is not introduced into the input of the encoding. The similarity fusion performs poorly on $C$ and $[C;C^q]$ , while yielding a huge increase in the remaining two groups of experiments, which is an interesting phenomenon. We infer that the similarity fusion needs to be activated by the union fusion.", "In summary, we can conclude that integrating the information of $C^p$ into $C$ can greatly improve the performance of the model, and that combining $C^q$ together with $C^p$ can further increase the accuracy, suggesting that the information in the passage is richer than that in the question." ], [ "In this section, we visualize the union and difference fusion representations and show them in Fig 3 . We also try to analyze their characteristics and compare them to discover some connections. The values of the similarity fusion are too small to observe useful information intuitively, so we do not show it here.
We use the example presented in Table 1 for visualization, where the question is Why didn't the child go to bed by themselves? and the corresponding True choice is The child wanted to continue playing.", "The left region in Fig 3 is the union fusion. The most intuitive observation is that it captures comprehensive information. The values of child, wanted, and playing are obviously higher than those of other words. This is consistent with our prior cognition, because the concatenation operation adopted in the union fusion does not lose any content. The difference fusion, shown in the right region of Fig 3 , focuses on some specific words. By further comparison, we find that the difference fusion can pay attention to the content ignored by the union fusion. What's more, the content acquired by the union would not be focused on by the difference again. In other words, the union fusion and difference fusion can indeed emphasize information from different perspectives." ], [ "In this paper, we propose the Multi-Perspective Fusion Network (MPFN) for the Commonsense Reading Comprehension (CRC) task. We propose a more general framework for CRC by designing the difference and similarity fusion to assist the union fusion. Our MPFN model achieves an accuracy of 83.52% on MCScript, outperforming the previous models. The experimental results show that union fusion based on the choice-aware passage, the choice-aware question, and the choice can surpass the TriAN and HMA models. The difference fusion performs stably and is comparable with the union fusion. We find that the word-level union fusion can significantly influence the context-level fusion. The choice-aware passage word embedding can activate the similarity fusion. We find that combining the similar parts and the difference parts together can obtain the best performance among the two-perspective models. By taking the three types of fusion methods into consideration, our MPFN model achieves a state-of-the-art result." ], [ "This work is funded by Beijing Advanced Innovation for Language Resources of BLCU, the Fundamental Research Funds for the Central Universities in BLCU (17PT05), the Natural Science Foundation of China (61300081), and the Graduate Innovation Fund of BLCU (No.18YCX010)." ] ] }
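For readers who want a concrete picture of the Context Fusion Layer and Output Layer described above (Eq. 22, Eq. 23 and Eq. 25), below is a minimal PyTorch sketch of the union/difference/similarity fusion followed by self-attentive pooling and the sigmoid prediction; the hidden size and module layout are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPerspectiveFusion(nn.Module):
    """Union / difference / similarity fusion (Eq. 22-23) followed by the
    self-attentive output layer (Eq. 25), in simplified form."""
    def __init__(self, hidden=123):
        super().__init__()
        d = 2 * hidden                              # BiLSTM output size
        self.f_u = nn.Linear(3 * d, d)              # union branch
        self.f_d = nn.Linear(d, d)                  # difference branch
        self.f_s = nn.Linear(d, d)                  # similarity branch
        self.att = nn.Linear(3 * d, 1, bias=False)  # W in Eq. 25
        self.out = nn.Linear(3 * d, 1)              # scalar score

    def forward(self, c, c_p, c_q):
        # c, c_p, c_q: (batch, |c|, 2*hidden) choice / choice-aware contexts.
        u = torch.cat([c, c_p, c_q], dim=-1)        # union perspective
        d = (c - c_p) * (c - c_q)                   # difference perspective
        s = c * c_p * c_q                           # similarity perspective
        g = torch.cat([F.relu(self.f_u(u)),
                       F.relu(self.f_d(d)),
                       F.relu(self.f_s(s))], dim=-1)     # g_i, Eq. 23
        b = torch.softmax(self.att(g), dim=1)            # b_i, Eq. 25
        r = (b * g).sum(dim=1)                           # fixed-length vector
        return torch.sigmoid(self.out(r)).squeeze(-1)    # choice probability

# Example: probs = MultiPerspectiveFusion()(c, c_p, c_q) with three tensors
# of shape (batch, choice_len, 246) produced by the encoding layer.
```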
{ "question": [ "What baseline models do they compare against?" ], "question_id": [ "3aa7173612995223a904cc0f8eef4ff203cbb860" ], "nlp_background": [ "infinity" ], "topic_background": [ "familiar" ], "paper_read": [ "no" ], "search_query": [ "reading comprehension" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "SLQA, Rusalka, HMA Model (single), TriAN (single), jiangnan (ensemble), MITRE (ensemble), TriAN (ensemble), HMA Model (ensemble)", "evidence": [ "FLOAT SELECTED: Table 2: Experimental Results of Models" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Experimental Results of Models" ] } ], "annotation_id": [ "1cbbd80eee1c4870bf7827e2e3bb278186731b7d" ], "worker_id": [ "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86" ] } ] }
{ "caption": [ "Fig. 1: Architecture of our MPFN Model.", "Table 2: Experimental Results of Models", "Table 3: Test Accuracy of Multi-Perspective", "Fig. 2: Influence of Word-level Interaction.", "Fig. 3: Visualization of Fusions" ], "file": [ "4-Figure1-1.png", "7-Table2-1.png", "8-Table3-1.png", "9-Figure2-1.png", "10-Figure3-1.png" ] }
1710.01507
Identifying Clickbait: A Multi-Strategy Approach Using Neural Networks
Online media outlets, in a bid to expand their reach and subsequently increase revenue through ad monetisation, have begun adopting clickbait techniques to lure readers to click on articles. Such articles often fail to fulfill the promise made by the headline. Traditional methods for clickbait detection have relied heavily on feature engineering which, in turn, is dependent on the dataset it is built for. The application of neural networks for this task has only been explored partially. We propose a novel approach considering all information found in a social media post. We train a bidirectional LSTM with an attention mechanism to learn the extent to which a word contributes to the post's clickbait score in a differential manner. We also employ a Siamese net to capture the similarity between source and target information. Information gleaned from images has not been considered in previous approaches. We learn image embeddings from large amounts of data using Convolutional Neural Networks to add another layer of complexity to our model. Finally, we concatenate the outputs from the three separate components, serving them as input to a fully connected layer. We conduct experiments over a test corpus of 19538 social media posts, attaining an F1 score of 65.37% on the dataset, bettering the previous state-of-the-art as well as other proposed approaches, feature engineering or otherwise.
{ "section_name": [ "Introduction", "Related Work", "Model Architecture", "Bidirectional LSTM with Attention", "Siamese Net with Text Embeddings", "Siamese Neural Network with Visual Embeddings", "Fusion of the components", "Learning the Parameters", "Evaluation Results", "Training", "Comparison with other models", "Conclusion" ], "paragraphs": [ [ "The Internet provides instant access to a wide variety of online content, news included. Formerly, users had static preferences, gravitating towards their trusted sources, incurring an unwavering sense of loyalty. The same cannot be said for current trends since users are likely to go with any source readily available to them.", "In order to stay in business, news agencies have switched, in part, to a digital front. Usually, they generate revenue by (1) advertisements on their websites, or (2) a subscription based model for articles that might interest users. However, since the same information is available via multiple sources, no comment can be made on the preference of the reader. To lure in more readers and increase the number of clicks on their content, subsequently increasing their agency's revenue, writers have begun adopting a new technique - clickbait.", "The concept of clickbait is formalised as something to encourage readers to click on hyperlinks based on snippets of information accompanying it, especially when those links lead to content of dubious value or interest. Clickbaiting is the intentional act of over-promising or purposely misrepresenting - in a headline, on social media, in an image, or some combination - what can be expected while reading a story on the web. It is designed to create and, consequently, capitalise on the Loewenstein information gap BIBREF0 . Sometimes, especially in cases where such headlines are found on social media, the links can redirect to a page with an unoriginal story which contains repeated or distorted facts from the original article itself.", "Our engine is built on three components. The first leverages neural networks for sequential modeling of text. Article title is represented as a sequence of word vectors and each word of the title is further converted into character level embeddings. These features serve as input to a bidirectional LSTM model. An affixed attention layer allows the network to treat each word in the title in a differential manner. The next component focuses on the similarity between the article title and its actual content. For this, we generate Doc2Vec embeddings for the pair and act as input for a Siamese net, projecting them into a highly structured space whose geometry reflects complex semantic relationships. The last part of this system attempts to quantify the similarity of the attached image, if any, to the article title. Finally, the output of each component is concatenated and sent as input to a fully connected layer to generate a score for the task." ], [ "The task of automating clickbait detection has risen to prominence fairly recently. Initial attempts for the same have worked on (1) news headlines, and (2) heavy feature engineering for the particular dataset. BIBREF1 's work is one of the earliest pieces of literature available in the field, focusing on an aggregation of news headlines from previously categorised clickbait and non-clickbait sources. Apart from defining different types of clickbait, they emphasise on the presence of language peculiarities exploited by writers for this purpose. 
These include qualitative informality metrics and use of forward references in the title to keep the reader on the hook. The first instance of detecting clickbait across social media can be traced to BIBREF2 , hand-crafting linguistic features, including a reference dictionary of clickbait phrases, over a dataset of crowdsourced tweets BIBREF3 . However, BIBREF4 argued that work done specifically for Twitter had to be expanded since clickbait was available throughout the Internet, and not just social networks.", "It was not until BIBREF5 that neural networks were tried out for the task as the authors used the same news dataset as BIBREF4 to develop a deep learning based model to detect clickbait. They used distributional semantics to represent article titles, and BiLSTM to model sequential data and its dependencies. Since then, BIBREF6 has also experimented with Twitter data BIBREF3 deploying a BiLSTM for each of the textual features (post-text, target-title, target-paragraphs, target-description, target-keywords, post-time) available in the corpus, and finally concatenating the dense output layers of the network before forwarding it to a fully connected layer. Since it was proposed in BIBREF7 , the attention mechanism has been used for a variety of text-classification tasks, such as fake news detection and aspect-based sentiment analysis. BIBREF8 used a self-attentive BiGRU to infer the importance of tweet tokens in predicting the annotation distribution of the task.", "One common point in all the approaches yet has been the use of only textual features available in the dataset. Our model not only incorporates textual features, modeled using BiLSTM and augmented with an attention mechanism, but also considers related images for the task." ], [ "In this section, we present our hybrid approach to clickbait detection. We first explain the three individual components followed by their fusion, which is our proposed model. These components are (1) BiLSTM with attention, (2) Siamese Network on Text Embeddings, and (3) Siamese Network on Visual Embeddings. An overview of the architecture can be seen in Figure 1.", "We start with an explanation of the features used in the first component of the model.", "Distributed Word Embeddings", "Considering the effectiveness of distributional semantics in modeling language data, we use a pre-trained 300 dimensional Word2Vec BIBREF9 model trained over 100 billion words in the Google News corpus using the Continuous Bag of Words architecture. These map the words in a language to a high dimensional real-valued vectors to capture hidden semantic and syntactic properties of words, and are typically learned from large, unannotated text corpora. For each word in the title, we obtain its equivalent Word2Vec embeddings using the model described above.", "Character Level Word Embeddings", "Character level word embeddings BIBREF10 capture the orthographic and morphological features of a word. Apart from this, using them is a step toward mitigating the problem of out-of-vocabulary (OoV) words. In such a case, the word can be embedded by its characters using character level embedding. We follow BIBREF5 and first initialize a vector for every character in the corpus. 
The vector representation of each word is learned by applying 3 layers of a 1-dimensional Convolutional Neural Network BIBREF11 with ReLU non-linearity on each vector of character sequence of that word and finally max-pooling the sequence for each convolutional feature.", "Document Embeddings", "Doc2Vec BIBREF12 is an unsupervised approach to generate vector representations for slightly larger bodies of text, such as sentences, paragraphs and documents. It has been adapted from Word2Vec BIBREF9 which is used to generate vectors for words in large unlabeled corpora. The vectors generated by this approach come handy in tasks like calculating similarity metrics for sentences, paragraphs and documents. In sequential models like RNNs, the word sequence is captured in the generated sentence vectors. However, in Doc2Vec, the representations are order independent. We use GenSim BIBREF13 to learn 300 dimensional Doc2Vec embeddings for each target description and post title available.", "Pre-trained CNN Features", "As seen in various visual understanding problems recently, image descriptors trained using Convolutional Neural Networks over large amounts of data such as ImageNet have proven to be very effective. The implicit learning of spatial layout and object semantics in the later layers of the network from very large datasets has contributed to the success of these features. We use a pre-trained network of VGG-19 architecture BIBREF14 trained over the ImageNet database (ILSVRC-2012) and extract CNN features. We use the output of the fully-connected layer (FC7), which has 4096 dimensions, as feature representations for our architecture.", "We now go into detail about the components of the model, individual and combined, and how the parameters are learned." ], [ "Recurrent Neural Network (RNN) is a class of artificial neural networks which utilizes sequential information and maintains history through its intermediate layers. A standard RNN has an internal state whose output at every time-step which can be expressed in terms of that of previous time-steps. However, it has been seen that standard RNNs suffer from a problem of vanishing gradients BIBREF15 . This means it will not be able to efficiently model dependencies and interactions between words that are a few steps apart. LSTMs are able to tackle this issue by their use of gating mechanisms. For each record in the dataset, the content of the post as well as the content of the related web page is available. We convert the words from the title of both attributes into the previously mentioned types of embeddings to act as input to our bidirectional LSTMs.", " $(\\overrightarrow{h}_1, \\overrightarrow{h}_2, \\dots , \\overrightarrow{h}_R)$ represent forward states of the LSTM and its state updates satisfy the following equations: ", "$$\\big [\\overrightarrow{f_t},\\overrightarrow{i_t},\\overrightarrow{o_t}\\big ] = \\sigma \\big [ \\overrightarrow{W} \\big [\\overrightarrow{h}_{t-1},\\overrightarrow{r_t}\\big ] + \\overrightarrow{b}\\big ]$$ (Eq. 3) ", "$$\\overrightarrow{l_t} = \\tanh \\big [\\overrightarrow{V} \\big [\\overrightarrow{h}_{t-1}, \\overrightarrow{r_t}\\big ] + \\overrightarrow{d}\\big ]$$ (Eq. 4) ", "here $\\sigma $ is the logistic sigmoid function, $\\overrightarrow{f_t}$ , $\\overrightarrow{i_t}$ , $\\overrightarrow{o_t}$ represent the forget, input and output gates respectively. 
$\\overrightarrow{r_t}$ denotes the input at time $t$ and $\\overrightarrow{h_t}$ denotes the latent state, $\\overrightarrow{b_t}$ and $\\overrightarrow{d_t}$ represent the bias terms. The forget, input and output gates control the flow of information throughout the sequence. $\\overrightarrow{W}$ and $\\overrightarrow{f_t}$0 are matrices which represent the weights associated with the connections.", " $(\\overleftarrow{h}_1, \\overleftarrow{h}_2, \\dots , \\overleftarrow{h}_R)$ denote the backward states and its updates can be computed similarly.", "The number of bidirectional LSTM units is set to a constant K, which is the maximum length of all title lengths of records used in training. The forward and backward states are then concatenated to obtain $(h_1, h_2, \\dots , h_K)$ , where ", "$$h_i = \\begin{bmatrix}\n\\overrightarrow{h}_i \\\\\n\\overleftarrow{h}_i\n\\end{bmatrix}$$ (Eq. 7) ", "Finally, we are left with the task of figuring out the significance of each word in the sequence i.e. how much a particular word influences the clickbait-y nature of the post. The effectiveness of attention mechanisms have been proven for the task of neural machine translation BIBREF7 and it has the same effect in this case. The goal of attention mechanisms in such tasks is to derive context vectors which capture relevant source side information and help predict the current target word. The sequence of annotations generated by the encoder to come up with a context vector capturing how each word contributes to the record's clickbait quotient is of paramount importance to this model. In a typical RNN encoder-decoder framework BIBREF7 , a context vector is generated at each time-step to predict the target word. However, we only need it for calculation of context vector for a single time-step. ", "$$c_{attention} = \\sum _{j=1}^{K}\\alpha _jh_j$$ (Eq. 8) ", "where, $h_1$ ,..., $h_K$ represents the sequence of annotations to which the encoder maps the post title vector and each $\\alpha _j$ represents the respective weight corresponding to each annotation $h_j$ . This component is represented on the leftmost in Figure 1." ], [ "The second component of our model is a Siamese net BIBREF16 over two textual features in the dataset. Siamese networks are designed around having symmetry and it is important because it's required for learning a distance metric. We use them to find the similarity between the title of the record and its target description. The words in the title and in the target description are converted into their respective Doc2Vec embeddings and concatenated, after which they are considered as input into a Siamese network. A visual representation of this can be found in the middle of Figure 1." ], [ "The final component of our hybrid model is also a Siamese net. However, it considers visual information available in the dataset, and sets our model apart from other approaches in this field. The relevance of the image attached to the post can be quantified by capturing its similarity with the target description. The VGG-19 architecture outputs a 4096 dimensional vector for each image which, in turn, is fed as input into a dense layer to convert each representation to a 300 dimensional vector. This serves as one input to the visual Siamese net. The target description is converted into its 300 dimensional vector representation by passing it through the pre-trained Doc2Vec model, which acts as the second input for the network. It is the rightmost part of Figure 1." 
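The attention-pooled BiLSTM of the first component can be sketched compactly. The snippet below is a minimal PyTorch-style illustration, not the authors' implementation: the scoring layer that produces the weights alpha_j, the hidden size, and the embedding dimension are assumptions, since the extracted text only specifies that the context vector is c_attention = sum_j alpha_j * h_j over the concatenated forward/backward states.

```python
# Minimal sketch (assumed PyTorch): BiLSTM over the title tokens followed by
# a single attention-pooling step c = sum_j alpha_j * h_j.  The linear scoring
# layer used to produce alpha is an assumption; the extracted text does not
# specify its exact form.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveBiLSTM(nn.Module):
    def __init__(self, emb_dim=300, hidden_dim=128):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # One score per concatenated forward/backward state h_j.
        self.attn_score = nn.Linear(2 * hidden_dim, 1, bias=False)

    def forward(self, title_embeddings):
        # title_embeddings: (batch, K, emb_dim) pre-computed word (+char) vectors
        h, _ = self.bilstm(title_embeddings)          # (batch, K, 2*hidden)
        alpha = F.softmax(self.attn_score(h), dim=1)  # (batch, K, 1) weights
        context = (alpha * h).sum(dim=1)              # (batch, 2*hidden)
        return context

# Illustrative usage on random stand-in embeddings.
encoder = AttentiveBiLSTM()
c = encoder(torch.randn(8, 20, 300))
```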
], [ "To combine the components and complete our hybrid model, the output from each of the three parts is concatenated and subsequently acts as input for a fully connected layer. This layer finally gives as its output the probability/extent that a post, together with its related information, can be considered clickbait." ], [ "We use binary cross-entropy as the loss optimization function for our model. The cross-entropy method BIBREF17 is an iterative procedure where each iteration can be divided into two stages:", "(1) Generate a random data sample (vectors, trajectories etc.) according to a specified mechanism.", "(2) Update the parameters of the random mechanism based on the data to produce a \"better\" sample in the next iteration." ], [ "The model was evaluated over a collection of 19538 social media posts BIBREF3 , each containing supplementary information like target description, target keywords and linked images. We performed our experiments with the aim of increasing the accuracy and F1 score of the model. Other metrics like mean squared error (MSE) were also considered." ], [ "We randomly partition the training set into training and validation set in a 4:1 ratio. This ensures that the two sets do not overlap. The model hyperparameters are tuned over the validation set. We initialise the fully connected network weights with the uniform distribution in the range $-\\sqrt{{6}/{(fanin + fanout)}}$ and $\\sqrt{{6}/{(fanin + fanout)}}$ BIBREF18 . We used a batch size of 256 and adadelta BIBREF19 as a gradient based optimizer for learning the parameters of the model." ], [ "In Table 1, we compare our model with the existing state-of-the-art for the dataset used and other models which have employed similar techniques to accomplish the task. Calculation and comparison across these metrics was conducted on TIRA BIBREF2 , a platform that offers evaluation as a service. It is clear that our proposed model outperforms the previous feature engineering benchmark and other work done in the field both in terms of F1 score and accuracy of detection." ], [ "In this work, we have come up with a multi-strategy approach to tackle the problem of clickbait detection across the Internet. Our model takes into account both textual and image features, a multimedia approach, to score the classify headlines. A neural attention mechanism is utilised over BIBREF5 to improve its performance, simultaneously adding Siamese nets for scoring similarity between different attributes of the post. To build on this approach, we would like to explore better image embedding techniques to better relate it to the article." ] ] }
{ "question": [ "What are the differences with previous applications of neural networks for this task?" ], "question_id": [ "acc8d9918d19c212ec256181e51292f2957b37d7" ], "nlp_background": [ "infinity" ], "topic_background": [ "unfamiliar" ], "paper_read": [ "no" ], "search_query": [ "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "This approach considers related images", "evidence": [ "One common point in all the approaches yet has been the use of only textual features available in the dataset. Our model not only incorporates textual features, modeled using BiLSTM and augmented with an attention mechanism, but also considers related images for the task." ], "highlighted_evidence": [ "One common point in all the approaches yet has been the use of only textual features available in the dataset. Our model not only incorporates textual features, modeled using BiLSTM and augmented with an attention mechanism, but also considers related images for the task." ] } ], "annotation_id": [ "1cbfdce25dfdc7c55ded63bbade870a96b66c848" ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b" ] } ] }
{ "caption": [ "Figure 1: Model Architecture", "Table 1: Comparison of our model with existing methods" ], "file": [ "3-Figure1-1.png", "4-Table1-1.png" ] }
2002.02492
Consistency of a Recurrent Language Model With Respect to Incomplete Decoding
Despite strong performance on a variety of tasks, neural sequence models trained with maximum likelihood have been shown to exhibit issues such as length bias and degenerate repetition. We study the related issue of receiving infinite-length sequences from a recurrent language model when using common decoding algorithms. To analyze this issue, we first define inconsistency of a decoding algorithm, meaning that the algorithm can yield an infinite-length sequence that has zero probability under the model. We prove that commonly used incomplete decoding algorithms - greedy search, beam search, top-k sampling, and nucleus sampling - are inconsistent, despite the fact that recurrent language models are trained to produce sequences of finite length. Based on these insights, we propose two remedies which address inconsistency: consistent variants of top-k and nucleus sampling, and a self-terminating recurrent language model. Empirical results show that inconsistency occurs in practice, and that the proposed methods prevent inconsistency.
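Before the formal treatment that follows, the failure mode claimed in this abstract can be previewed with a toy simulation: if a model's conditional distribution always ranks <eos> last, greedy search (and any top-k with k < |V|) can never select it, while ancestral sampling still terminates. The probabilities below are made up purely for illustration and are unrelated to the paper's actual construction.

```python
# Toy illustration of non-termination under incomplete decoding.
import random

def conditional_probs(prefix):
    # Hypothetical model: <eos> always receives the smallest probability,
    # regardless of the prefix.  Top-k with k <= 3 behaves like greedy here,
    # since <eos> is always ranked last among the 4 tokens.
    return {"<eos>": 0.05, "a": 0.40, "b": 0.35, "c": 0.20}

def greedy_decode(max_steps=1500):
    prefix = []
    for _ in range(max_steps):
        probs = conditional_probs(prefix)
        token = max(probs, key=probs.get)   # top-1; <eos> is never the argmax
        prefix.append(token)
        if token == "<eos>":
            return prefix, True
    return prefix, False                    # hit the cap: non-terminated

def ancestral_sample(max_steps=1500):
    prefix = []
    for _ in range(max_steps):
        probs = conditional_probs(prefix)
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        prefix.append(token)
        if token == "<eos>":
            return prefix, True
    return prefix, False

_, greedy_terminated = greedy_decode()
_, sampled_terminated = ancestral_sample()
print("greedy terminated:", greedy_terminated)             # False
print("ancestral sampling terminated:", sampled_terminated) # True with prob. ~1
```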
{ "section_name": [ "Introduction", "Background", "Background ::: Recurrent Language Models", "Background ::: Decoding Algorithms", "Background ::: Decoding Algorithms ::: Stochastic decoding.", "Background ::: Decoding Algorithms ::: Deterministic decoding.", "Background ::: Decoding Algorithms ::: Incompleteness.", "Consistency of a Decoding Algorithm ::: Definition of consistency.", "Consistency of a Decoding Algorithm ::: Inconsistency of incomplete decoding.", "Fixing the inconsistency", "Fixing the inconsistency ::: Consistent Sampling Algorithms", "Fixing the inconsistency ::: A Self-Terminating Recurrent Language Model", "Empirical Validation", "Empirical Validation ::: Sequence completion.", "Empirical Validation ::: Dataset.", "Empirical Validation ::: Context distribution.", "Empirical Validation ::: Evaluation metrics.", "Empirical Validation ::: Training.", "Empirical Validation ::: Models.", "Empirical Validation ::: Inconsistency of Recurrent Language Models", "Empirical Validation ::: Consistency of the Proposed Methods", "Empirical Validation ::: Consistency of the Proposed Methods ::: Consistent sampling.", "Empirical Validation ::: Consistency of the Proposed Methods ::: Self-terminating RNN.", "Future Directions", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "Neural sequence models trained with maximum likelihood estimation (MLE) have become a standard approach to modeling sequences in a variety of natural language applications such as machine translation BIBREF0, dialogue modeling BIBREF1, and language modeling BIBREF2. Despite this success, MLE-trained neural sequence models have been shown to exhibit issues such as length bias BIBREF3, BIBREF4 and degenerate repetition BIBREF5. These issues are suspected to be related to the maximum likelihood objective's local normalization, which results in a discrepancy between the learned model's distribution and the distribution induced by the decoding algorithm used to generate sequences BIBREF6, BIBREF7. This has prompted the development of alternative decoding methods BIBREF8, BIBREF5 and training objectives BIBREF9, BIBREF10. In this paper, we formalize and study this discrepancy between the model and the decoding algorithm.", "We begin by formally defining recurrent neural language models, a family that encompasses neural models used in practice, such as recurrent neural networks BIBREF11, BIBREF12, BIBREF13, and transformers BIBREF14. Next, we formally define a decoding algorithm – a function that induces a distribution over sequences given a recurrent language model and a context distribution – which is used to obtain probable sequences from a model. In this paper, we show that the distribution induced by a decoding algorithm can contradict this intended use; instead, the decoding algorithm may return improbable, infinite-length sequences.", "Our main finding is that a sequence which receives zero probability under a recurrent language model's distribution can receive nonzero probability under the distribution induced by a decoding algorithm. This occurs when the recurrent language model always ranks the sequence termination token outside of the set of tokens considered at each decoding step, yielding an infinite-length, zero probability sequence. 
This holds whenever the decoding algorithm is incomplete, in the sense that the algorithm excludes tokens from consideration at each step of decoding, which is the case for common methods such as greedy search, beam search, top-$k$ sampling BIBREF15, and nucleus sampling BIBREF5. We formalize our main finding using the notion of consistency BIBREF16 – whether a distribution assigns probability mass only to finite sequences – and prove that a consistent recurrent language model paired with an incomplete decoding algorithm can induce an inconsistent sequence distribution.", "Based on the insight that inconsistency occurs due to the behavior of the termination token under incomplete decoding, we develop two methods for addressing inconsistency. First, we propose consistent sampling methods which guarantee that the termination token is not excluded from selection during decoding. Second, we introduce a self-terminating recurrent language model which ensures that the termination token is eventually ranked above all others, guaranteeing consistency under incomplete decoding.", "To empirically measure inconsistency, we decode sequences from trained recurrent language models and measure the proportion of sequences with lengths far exceeding the maximum training sequence length. Our experiments on the Wikitext2 dataset BIBREF17 suggest that inconsistency occurs in practice when using incomplete decoding methods, while the proposed consistent sampling methods and self-terminating model parameterization prevent inconsistency and maintain language modeling quality.", "The theoretical analysis reveals defects of existing decoding algorithms, providing a way to develop future models, inference procedures, and learning algorithms. We present methods related to sampling and model parameterization, but there are more directions which we leave to the future; we close with directions related to sequence-level learning." ], [ "We begin our discussion by establishing background definitions. First, we define a sequence which is the main object of our investigation.", "Definition 2.1 (Sequence) A sequence $Y$ is an ordered collection of items from a predefined finite vocabulary $V$. A sequence of finite length always ends with a special token $\\left<\\text{eos}\\right>\\in V$ that only appears at the end of a sequence.", "Each model we consider generates a sequence conditioned on context information, such as a prefix in sentence completion. To consider this, we define a context distribution.", "Definition 2.2 (Context distribution) A context distribution $p(C)$ is a probability distribution defined over a set $\\mathcal {C}$. An element $C\\in \\mathcal {C}$ is called a context." ], [ "A recurrent language model is an autoregressive model of a sequence distribution, where each conditional probability is parameterized with a neural network. Importantly, we assume that all tokens in a sequence are dependent on each other under a recurrent language model. This allows us to avoid cases in which the model degenerates to a Markovian language model, such as an $n$-gram model with a finite $n$.", "Definition 2.3 (Recurrent language model) A recurrent language model $p_\\theta $ is a neural network that computes the following conditional probability at each time step", "where $h_t = f_{\\theta }(y_t, h_{t-1})$ and $h_0 = g_{\\theta }(C)$, and $u,c,\\theta $ are parameters. A recurrent language model thereby computes the probability of a sequence $Y=(y_1, \\ldots , y_T)$ by", "where $y_{<t}=(y_1,\\ldots ,y_{t-1})$. 
This distribution satisfies", "Practical variants of the recurrent language model differ by the choice of transition function $f_{\\theta }$ BIBREF11, BIBREF13, BIBREF12, BIBREF14. The use of softmax BIBREF18 implies that every unique token in the vocabulary is considered at every location of a sequence.", "Remark 2.1 Under the conditional distribution of a recurrent language model, every token $v\\in V$ is assigned a positive probability. This implies that $0 < p_\\theta (v\\,|\\,y_{<t}, C) < 1.$ In addition, it follows that any finite sequence is probable by a recurrent language model under any context, i.e., $p_{\\theta }(Y\\,|\\,C) > 0$ for any sequence $Y$ of finite length." ], [ "Because it is intractable to decode the most probable sequence, it is necessary in practice to use an approximate decoding algorithm.", "Definition 2.4 (Decoding algorithm) A decoding algorithm $\\mathcal {F}(p_{\\theta }, C)$ is a function that generates a sequence $\\tilde{Y}$ given a recurrent language model $p_{\\theta }$ and context $C$. Let $q_{\\mathcal {F}}$ denote the distribution induced by the decoding algorithm $\\mathcal {F}$.", "We consider two families of decoding algorithms. In our analysis we only consider decoding algorithms that decode in a single pass, forward in time, without modifying previously selected tokens." ], [ "The first family consists of stochastic algorithms. Among them, ancestral sampling is asymptotically unbiased and can be used for finding the most probable sequence, although it requires a substantial number of samples to achieve a low-variance estimate.", "Definition 2.5 (Ancestral sampling) Ancestral sampling $\\mathcal {F}_{\\text{anc}}$ generates a sequence from a recurrent language model $p_{\\theta }$ given context $C$ by recursively sampling from $p_{\\theta }(y_t\\,|\\,\\tilde{y}_{<t}, C)$ until $\\tilde{y}_t = \\left<\\text{eos}\\right>$:", "In order to avoid the high variance, two approximate stochastic decoding algorithms have recently been proposed and tested with recurrent language models. Top-$k$ sampling considers only a subset of the $k$ most probable tokens from the vocabulary at a time, while nucleus sampling considers only the minimal subset of most probable tokens whose total probability is higher than a predefined threshold.", "Definition 2.6 (Top-$k$ sampling BIBREF15) Top-$k$ sampling $\\mathcal {F}_{\\text{top-k}}$ generates a sequence from a recurrent language model $p_{\\theta }$ given context $C$ by recursively sampling from the following proposal distribution:", "Definition 2.7 (Nucleus sampling BIBREF5) Nucleus sampling $\\mathcal {F}_{\\text{nuc-}\\mu }$ generates a sequence from a recurrent language model $p_{\\theta }$ given context $C$ by recursively sampling from the following proposal distribution. Let $v_1,\\ldots ,v_{|V|}$ denote tokens in $V$ such that $p_{\\theta }(v_i\\,|\\,y_{<t},C) \\ge p_{\\theta }(v_j\\,|\\,y_{<t},C)$ for all $i < j$, and define", "where $V_{\\mu } = \\left\\lbrace v_1, \\cdots , v_{k_\\mu } \\right\\rbrace $ with" ], [ "The other family consists of deterministic decoding algorithms, where a token is selected deterministically according to a rule at each decoding step. 
The most naive algorithm, called greedy decoding, simply takes the most probable token at each step.", "Definition 2.8 (Greedy decoding) Greedy decoding $\\mathcal {F}_{\\text{greedy}}$ generates a sequence from a recurrent language model $p_{\\theta }$ given context $C$ by recursively selecting the most likely token from $p_{\\theta }(y_t | \\tilde{y}_{<t}, C)$ until $\\tilde{y}_t = \\left<\\text{eos}\\right>$:", "In contrast to greedy decoding, beam search operates on the level of partial sequences or prefixes.", "Definition 2.9 (Prefix) A prefix $\\rho _t$ is an ordered collection of items from $V$. The score of a prefix is", "where $\\rho _t[\\tau ]$ is a token at time $\\tau $ from $\\rho _t$.", "Starting from a set of empty prefixes, at each iteration a new prefix set is formed by expanding each prefix, then choosing the highest scoring expanded prefixes.", "Definition 2.10 (Beam search) Beam search with width $k$, $\\mathcal {F}_{\\text{beam}-k}$, generates a sequence from a recurrent language model $p_{\\theta }$ by maintaining a size-$k$ prefix set $\\mathrm {P}_t^{\\text{top}}$. Starting with $P_0^{top}=\\varnothing $, at each iteration $t\\in \\lbrace 1,2,\\ldots \\rbrace $ beam search forms a new prefix set $\\mathrm {P}_t^{\\text{top}}$ by expanding the current set, $\\mathrm {P}_t = \\bigcup _{\\rho \\in \\mathrm {P}_{t-1}^{\\text{top}}} \\lbrace \\rho \\circ v\\, |\\, v\\in V\\rbrace $ (where $\\rho \\circ v$ is concatenation), then choosing the $k$ highest scoring elements,", "Any $\\rho \\in \\mathrm {P}_t^{\\text{top}}$ ending with $\\left<\\text{eos}\\right>$ is restricted from being expanded further, and is added to a set $S$. Beam search ends when $S$ contains $k$ sequences, and returns the highest scoring sequence in $S$." ], [ "Other than ancestral sampling, the decoding algorithms above are incomplete in that they only consider a strict subset of the the full vocabulary $V$ at each time step, aside from the trivial case of $k=|V|$.", "Definition 2.11 (Incomplete Decoding) A decoding algorithm $\\mathcal {F}$ is incomplete when for each context $C$ and prefix $y_{<t}$, there is a strict subset $V^{\\prime }_t\\subsetneq V$ such that" ], [ "A recurrent language model $p_{\\theta }$ may assign a positive probability to an infinitely long sequence, in which case we call the model inconsistent. This notion of consistency was raised and analyzed earlier, for instance by BIBREF19 and BIBREF16, in terms of whether the distribution induced by $p_{\\theta }$ is concentrated on finite sequences. We extend their definition to account for the context $C$.", "Definition 3.1 (Consistency of a recurrent language model) A recurrent language model is consistent under a context distribution $p(C)$ if $p_{\\theta }(|Y|=\\infty ) = 0$. Otherwise, the recurrent language model is said to be inconsistent.", "Any sequence decoded from a consistent model for a given probable context is guaranteed to terminate.", "Lemma 3.1 If a recurrent language model $p_{\\theta }$ is consistent, $p_{\\theta }(|Y|=\\infty \\,|\\,C)=0$ for any probable context $C$.", "Next, we establish a practical condition under which a recurrent language model is consistent.", "Lemma 3.2 A recurrent language model $p_{\\theta }$ is consistent if $\\Vert h_t\\Vert _p$ is uniformly bounded for some $p\\ge 1$.", "[Proof sketch] If $\\Vert h_t\\Vert _p$ is bounded, then each $u_v^\\top h_t$ is bounded, hence $p_{\\theta }(\\left<\\text{eos}\\right>| y_{<t}, C)>\\xi >0$ for a constant $\\xi $. 
Thus $p_{\\theta }(|Y|=\\infty ) \\le \\lim _{t\\rightarrow \\infty } (1 - \\xi )^t = 0$, meaning that $p_{\\theta }$ is consistent.", "Although this condition is practical because layer normalization or bounded activation functions BIBREF11, BIBREF12, BIBREF14 result in bounded $h_t$, we show that even if a recurrent language model is consistent, a decoding algorithm may produce an infinite-length sequence. We formalize this discrepancy using the consistency of a decoding algorithm.", "Definition 3.2 (Consistency of a decoding algorithm) A decoding algorithm $\\mathcal {F}$ is consistent with respect to a consistent recurrent language model $p_{\\theta }$ under a context distribution $p(C)$ if the decoding algorithm $\\mathcal {F}$ preserves the consistency of the model $p_{\\theta }$, that is, $q_{\\mathcal {F}}(|Y|=\\infty )=0$.", "When a consistent recurrent language model $p_{\\theta }$ and a decoding algorithm $\\mathcal {F}$ induce a consistent distribution $q_{\\mathcal {F}}$, we say that $p_{\\theta }$ paired with $\\mathcal {F}$ is consistent. For instance, any consistent recurrent language model paired with ancestral sampling is consistent, because the induced distribution $q_{\\mathcal {F}_{\\text{anc}}}$ is the same as the distribution of the original model. We also have an analogue of Lemma UNKREF21.", "Lemma 3.3 A consistent decoding algorithm with respect to a consistent recurrent language model decodes only probable sequences. That is, if $q_{\\mathcal {F}}(Y\\,|\\,C)>0$, then $p_{\\theta }(Y\\,|\\,C)>0$ for any probable context $C$." ], [ "Any incomplete decoding algorithm (Definition UNKREF18) can be inconsistent regardless of the context distribution, because there is a recurrent language model that places $\\left<\\text{eos}\\right>$ outside of $V^{\\prime }_t$ at every step of decoding. To show this, we construct a consistent recurrent language model whose distribution induced by an incomplete decoding algorithm is inconsistent.", "Theorem 3.4 (Inconsistency of an incomplete decoding algorithm) There exists a consistent recurrent language model $p_{\\theta }$ from which an incomplete decoding algorithm $\\mathcal {F}$, that considers only up to $(|V|-1)$-most likely tokens according to $p_{\\theta }(y_t\\,|\\,y_{<t},C)$ at each step $t$, finds a sequence $\\tilde{Y}$ whose probability under $p_{\\theta }$ is 0 for any context distribution.", "We prove this theorem by constructing a $\\tanh $ recurrent network. We define the recurrent function $f_{\\theta }$ as", "where $e(y_{t}) \\in \\mathbb {R}^{|V|}$ is a one-hot representation of $y_t$, $W_h \\in \\mathbb {R}^{d \\times d}$ where every entry is positive, and $I$ is an identity matrix of size $|V| \\times |V|$. $h_0 = g_{\\theta }(C)$ is constructed to consist of positive values only. Because each element of $|h_t|$ is bounded by 1, the constructed recurrent language model $p_{\\theta }$ is consistent by Lemma UNKREF23.", "For $v \\ne \\left<\\text{eos}\\right>$, we set $u_v$ (see Definition UNKREF4) to be", "where all elements of $\\bar{u}_v$ are positive and $e(v)$ is a one-hot representation of $v$. $c_v$ is set to zero. Next, let", "where all elements of $\\bar{u}_{\\left<\\text{eos}\\right>}$ are negative.", "This defines a valid recurrent language model (Definition UNKREF4), since the conditional distribution at each time $t$ is influenced by all the previous tokens. 
More specifically, the logit of a token $v$ depends on $\\sum _{t^{\\prime }=1}^t {1}(y_{t^{\\prime }} = v)$, where 1 is an indicator function.", "This recurrent language model always outputs positive logits for non-$\\left<\\text{eos}\\right>$ tokens, and outputs negative logits for the $\\left<\\text{eos}\\right>$ token. This implies $p(\\left<\\text{eos}\\right>|\\,y_{<t}, C) < p(v\\,|\\,y_{<t}, C)$ for all $v \\in V \\backslash \\left\\lbrace \\left<\\text{eos}\\right>\\right\\rbrace $. This means that $\\left<\\text{eos}\\right>$ is always ranked last at each time step, so an incomplete decoding algorithm that considers at most $(|V|-1)$ most probable tokens at each time step from $p_{\\theta }(y_t\\,|\\,y_{<t}, C)$ cannot decode $\\left<\\text{eos}\\right>$ and thus always decodes an infinitely long sequence.", "The log-probability of this infinitely long sequence $\\hat{Y}$ is", "For any $v\\in V$,", "where $b_v = \\sum _{v^{\\prime }\\ne v} \\exp (-\\Vert u_{v^{\\prime }}\\Vert _1)$. The last inequality holds because $x/(x+b_v)$ is increasing in $x>0$. Therefore, the log-probability $\\log p_{\\theta }(\\hat{Y}\\,|\\,C)$ diverges as $|\\hat{Y}| \\rightarrow \\infty $, and thus $p_{\\theta }(\\hat{Y}\\,|\\,C) = 0$, which implies the decoding algorithm $\\mathcal {F}$ is inconsistent by Lemma UNKREF25. Greedy decoding, beam search, top-$k$ sampling, and nucleus sampling are all inconsistent according to this theorem; there are consistent models $p_{\\theta }$ that induce inconsistent distributions when paired with these decoding algorithms." ], [ "In this section, we consider two ways to prevent inconsistency arising from incomplete decoding algorithms. First, we introduce consistent versions of top-$k$ and nucleus sampling. Second, we introduce the self-terminating recurrent language model, which is consistent when paired with any of the decoding algorithms considered in this paper." ], [ "The proof of Theorem UNKREF27 suggests that inconsistency of incomplete decoding algorithms arises from the fact that $\\left<\\text{eos}\\right>$ may be excluded indefinitely from the set of top-ranked tokens. We propose a simple modification to top-$k$ and nucleus sampling that forces $\\left<\\text{eos}\\right>$ to be included at each step of decoding. First, we give a condition for when a particular model $p_{\\theta }$ paired with a decoding algorithm $\\mathcal {F}$ is consistent.", "Theorem 4.1 Let $p_{\\theta }$ be a consistent recurrent language model. If a decoding algorithm $\\mathcal {F}$ satisfies $q_{\\mathcal {F}}(\\left<\\text{eos}\\right>|\\,y_{<t}, C) \\ge p_{\\theta }(\\left<\\text{eos}\\right>|\\,y_{<t}, C)$ for every prefix $y_{<t}$ and context $C$, then the decoding algorithm $\\mathcal {F}$ is consistent with respect to the model $p_{\\theta }$.", "Let $P^{\\prime }_{t-1}$ denote a set of all prefixes $y_{<t}$ of length $t-1$. 
For $t\\ge 1$,", "Taking the limit $t\\rightarrow \\infty $ and expectation over $C$ on both sides, we have", "from which the decoding algorithm is consistent.", "We define consistent variants of top-$k$ and nucleus sampling which satisfy this condition.", "Definition 4.1 (Consistent top-$k$ sampling) Consistent top-$k$ sampling is top-$k$ sampling with the following modified proposal distribution:", "where $V^{\\prime } = \\left\\lbrace \\left<\\text{eos}\\right>\\right\\rbrace \\cup \\underset{v^{\\prime }}{\\arg \\text{top-k}}\\ p_{\\theta }(v^{\\prime }\\,|\\,y_{<t}, C)$.", "Definition 4.2 (Consistent nucleus sampling) Consistent nucleus sampling is nucleus sampling with the following modified proposal distribution:", "The induced probability of $\\left<\\text{eos}\\right>$ under these two algorithms is always equal to or larger than the model's probability. By Theorem UNKREF29, these algorithms are consistent with respect to any consistent recurrent language model." ], [ "Although these consistent sampling algorithms can be used with any recurrent language model, their stochastic nature may not be suitable for finding a single, highly probable sequence. To avoid this limitation, we propose the self-terminating recurrent language model (STRLM).", "Definition 4.3 (Self-terminating recurrent language model) A self-terminating recurrent language model computes the following conditional probability at each time step:", "where", "with $\\sigma : \\mathbb {R} \\rightarrow [0,1-\\epsilon ]$ and $\\epsilon \\in (0,1)$. $h_t$ is computed as in the original recurrent language model.", "The underlying idea is that the probability of $\\left<\\text{eos}\\right>$ increases monotonically. The model is consistent when paired with greedy decoding.", "Theorem 4.2 Greedy decoding is consistent with respect to any self-terminating recurrent language model.", "Let $p_{t}^{\\left<\\text{eos}\\right>}$ denote $p_{\\theta }(\\left<\\text{eos}\\right>|\\,y_{<t}, C)$ and $a_{t}^{\\left<\\text{eos}\\right>}$ denote $u_{\\left<\\text{eos}\\right>}^\\top h_t + c_{\\left<\\text{eos}\\right>}$. By Definition UNKREF33 we have", "Take $B=-\\log 2 / \\log (1-\\epsilon )$. We then have $p_{t}^{\\left<\\text{eos}\\right>}>1/2$ for all $t > B$, which implies that $\\left<\\text{eos}\\right>$ is always the most probable token after time step $B$. Hence, the sequence length is less than $B$ with probability 1. Beam search is also consistent with respect to any self-terminating recurrent language model according to a similar argument; see Appendix for the proof." ], [ "The theoretical results rely on the existence of a model that results in inconsistency; it remains to be shown that inconsistency with respect to incomplete decoding occurs with recurrent language models encountered in practice. Moreover, while the proposed consistent sampling methods and self-terminating recurrent language model carry theoretical guarantees in terms of consistency, we must check whether they retain language modeling quality. To do so, we perform two experiments using a sequence completion task. In each experiment, we use the beginning of a sequence as context, then decode continuations from a trained recurrent language model and measure the proportion of non-terminated sequences in order to approximately measure inconsistency. The first experiment (§SECREF45) shows that inconsistency occurs in practice, and the second experiment (§SECREF47) shows the effectiveness of the proposed approaches." 
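The consistent variants of Definitions 4.1 and 4.2 amount to one extra step over the standard filters: add <eos> back into the candidate set before renormalising, so that q(<eos> | y_<t, C) >= p_theta(<eos> | y_<t, C) at every step. The NumPy sketch below illustrates this; the probability vector, vocabulary size, and eos index are placeholders rather than outputs of a real model.

```python
# Sketch of consistent top-k and consistent nucleus filtering.
import numpy as np

def consistent_top_k(p, k, eos_id):
    # p: 1-D array of conditional token probabilities p_theta(. | y_<t, C).
    keep = set(np.argsort(p)[-k:].tolist())   # k most probable tokens
    keep.add(eos_id)                          # consistency fix: always keep <eos>
    q = np.zeros_like(p)
    idx = list(keep)
    q[idx] = p[idx]
    return q / q.sum()

def consistent_nucleus(p, mu, eos_id):
    order = np.argsort(p)[::-1]               # tokens sorted by probability
    cutoff = np.searchsorted(np.cumsum(p[order]), mu) + 1
    keep = set(order[:cutoff].tolist())       # smallest prefix with mass >= mu
    keep.add(eos_id)                          # consistency fix: always keep <eos>
    q = np.zeros_like(p)
    idx = list(keep)
    q[idx] = p[idx]
    return q / q.sum()

# Example: a 5-token vocabulary where <eos> (index 0) would otherwise be cut.
p = np.array([0.02, 0.40, 0.30, 0.18, 0.10])
print(consistent_top_k(p, k=2, eos_id=0))     # <eos> keeps nonzero mass
print(consistent_nucleus(p, mu=0.6, eos_id=0))
```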
], [ "We evaluate recurrent language models on a sequence completion task, which has previously been used to evaluate the effectiveness of sequence models, e.g. BIBREF20, BIBREF21, BIBREF2, BIBREF5, BIBREF10. Sequence completion is a general setting for studying the behavior of language models, encompassing machine translation BIBREF0, story generation BIBREF15, and dialogue modeling BIBREF1. The task consists of decoding a continuation $\\hat{Y}\\sim \\mathcal {F}(p_{\\theta }, C)$ given a length-$k$ prefix $C=(c_1,\\ldots ,c_k)$, resulting in a completion $(c_1,\\ldots ,c_k,\\hat{y}_1\\ldots ,\\hat{y}_T)$." ], [ "We use the Wikitext2 dataset BIBREF17 consisting of paragraphs from Wikipedia, since it has frequently been used to evaluate language models BIBREF22, BIBREF23, BIBREF24. We split each paragraph into sentences using Spacy, resulting in roughly 100k sequences (78,274 train, 8,464 valid, 9,708 test). We split each sequence, using the first $k$ tokens as a context and the remaining tokens as a continuation. To ensure that each sequence contains a prefix, we prepend padding tokens to make it length $k$. Special $\\left<\\text{bos}\\right>$ and $\\left<\\text{eos}\\right>$ tokens are then inserted at the beginning and end of every sequence. Our experiments use $k=10$. We model sequences at the word level with a vocabulary size of 33,182. The average training sequence length is 24 tokens, with a maximum of 137." ], [ "We define empirical context distributions with prefixes from the train, valid, and test sets,", "where $\\mathcal {D}=\\lbrace (C^{(n)},Y^{(n)})\\rbrace _{n=1}^{N}$ is a dataset split." ], [ "We use finite sequences to approximately measure the consistency of a model paired with a decoding algorithm, since decoding an infinite-length sequence is impossible. We use the proportion of decoded continuations that are longer than a predefined limit,", "where $\\hat{Y}^{(n)}\\sim \\mathcal {F}(p_{\\theta }, C^{(n)})$ for each context $C^{(n)}$ in $\\mathcal {D}$. We call $r_L$ the non-termination ratio of the decoding algorithm $\\mathcal {F}$ for an underlying model and context distribution. A value of $r_L$ greater than zero means that some sequences did not terminate within $L$ steps. When $L$ is infinity, this implies that the model paired with the decoding algorithm is inconsistent. In practice, we use a finite $L$ that is substantially larger than the maximum training sequence length, and we interpret a non-zero $r_L$ as evidence that the model paired with the decoding algorithm is inconsistent. We use $L=1500$, which is more than 10 times the maximum training sequence length.", "In each experiment, we report the mean and standard deviation of metrics across 10 independent initializations. Unless specified otherwise, we report metrics using the test context distribution, since the train, valid, and randomly generated context distributions had similar results." ], [ "We train recurrent language models for sequence completion with maximum likelihood, using the following loss on each sequence $Y=(c_1,\\ldots ,c_k,y_1,\\ldots ,y_T)$:", "This amounts to running the full training sequence through a recurrent model and zeroing the loss for the first $k$ tokens, so that the first $k$ steps correspond to learning a $g_{\\theta }$ that encodes the context. Each model is trained on a single Nvidia P40 GPU for up to 100 epochs, stopping early when validation perplexity does not decrease for 10 consecutive epochs." 
], [ "We consider recurrent neural networks with hyperbolic tangent activations ($\\tanh $-RNN) BIBREF11 and LSTM units (LSTM-RNN) BIBREF13. We perform an initial hyper-parameter sweep and select the best set of hyper-parameters for each of $\\tanh $-RNN and LSTM-RNN based on the validation perplexities. With this best set of hyperparameters, we train each of these models with 10 different initializations. The choice of $\\tanh $ and LSTM RNNs implies that all of the recurrent language models that we train are consistent according to Lemma UNKREF23. Our LSTM models achieve similar test perplexity ($91.86 \\pm 0.4$) to those reported in previous work BIBREF24; see Appendix for further details.", "Additionally, we train self-terminating $\\tanh $-RNN and LSTM-RNN variants (Definition UNKREF33) at various values of $\\epsilon $, which controls a lower bound on the termination probability at each step. We use $\\sigma (x)=(1-\\epsilon )\\text{sigmoid}(x)$. We use the hyper-parameters selected in the preceding grid search." ], [ "In this experiment, we demonstrate evidence of inconsistency with incomplete decoding methods (Theorem UNKREF27).", "Table TABREF43 shows non-termination ratios for the recurrent language models using the incomplete decoding algorithms considered in this work, along with ancestral sampling. Decoding with ancestral sampling always resulted in sequences that terminated within $L$ steps, since the induced distribution is the same as that of the consistent model. On the other hand, the non-zero non-termination ratios for the incomplete decoding algorithms suggest inconsistency with respect to each algorithm, providing evidence for Theorem UNKREF27.", "In particular, greedy search, beam search, and nucleus sampling yielded non-terminating sequences with both the $\\tanh $ and LSTM RNNs. Using greedy decoding, roughly 6% of all contexts resulted in a non-terminating continuation with the $\\tanh $-RNN, and roughly 1% with the LSTM-RNN. Nucleus sampling also produced non-terminating sequences with the $\\tanh $-RNN (2.49%, nuc-0.2) and LSTM-RNN (0.76%, nuc-0.2), with the amount of non-termination decreasing as $\\mu $ increased (see Definition UNKREF11), likely due to $\\left<\\text{eos}\\right>$ having a higher chance of being included in $V_{\\mu }$. Top-$k$ sampling resulted in non-terminating sequences with the $\\tanh $-RNN, but not with the LSTM, implying that $\\left<\\text{eos}\\right>$ was ranked within the top $k$ positions on at least one timestep during each decoding. Beam search produced non-terminating sequences with both the $\\tanh $-RNN (beam-2,4) and LSTM-RNN (beam-2) models. This means that $\\left<\\text{eos}\\right>$ was outside of the top tokens (determined by the beam width) considered at each step, since in our experiments we terminated the beam search when a single beam prefix contained $\\left<\\text{eos}\\right>$. With the LSTM-RNN, a larger beam width (beam-4) prevented non-termination." ], [ "In this experiment, we evaluate the consistent variants of top-$k$ and nucleus sampling (§SECREF28) as well as the self-terminating recurrent language model (§SECREF32) in terms of consistency and language modeling quality." ], [ "Table TABREF44 shows that consistent nucleus and top-$k$ sampling (§SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. 
The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\\left<\\text{eos}\\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. when the constant is close to 1), though the cycle is guaranteed to eventually terminate." ], [ "As seen in Table TABREF50, the self-terminating recurrent language models with $\\epsilon \\in \\lbrace 10^{-2},10^{-3}\\rbrace $ are consistent with respect to greedy decoding, at the expense of perplexity compared to the vanilla model. The value of $\\epsilon $ from Definition UNKREF33, which controls a lower-bound on termination probability at each step, influences both $r_L$ and perplexity. When $\\epsilon $ is too large ($\\epsilon =10^{-2}$), perplexity degrades. When $\\epsilon $ is too small ($\\epsilon =10^{-4}$), the lower-bound grows slowly, so $\\left<\\text{eos}\\right>$ is not guaranteed to be top-ranked within $L$ steps, and the metrics resemble the baseline's. An $\\epsilon $ of $10^{-3}$ balanced consistency and language modeling quality, with a zero non-termination ratio and perplexity within 3 points of the baseline.", "For the example decoded sequences in Table TABREF46, generation quality is similar when both the self-terminating and baseline models terminate (first row). For prefixes that led to non-termination with the baseline, the self-terminating variant can yield a finite sequence with reasonable quality (second row). This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition." ], [ "The methods we proposed in this paper have focused on how to resolve inconsistency from the viewpoint of decoding algorithms or model parameterization. Another approach is to address the issue of inconsistency in the learning phase.", "One interesting direction is to investigate whether maximum likelihood learning is a cause of inconsistency. Given a training set $\\left\\lbrace (C^{(n)}, Y^{(n)}) \\right\\rbrace _{n=1}^N$ drawn from a data distribution, maximum likelihood learning solves:", "where $\\Omega (\\theta )$ is a regularizer and $\\lambda $ is a regularization weight.", "Inconsistency may arise from the lack of decoding in solving this optimization problem. Maximum likelihood learning fits the model $p_{\\theta }$ using the data distribution, whereas a decoded sequence from the trained model follows the distribution $q_{\\mathcal {F}}$ induced by a decoding algorithm. Based on this discrepancy, we make a strong conjecture: we cannot be guaranteed to obtain a good consistent sequence generator using maximum likelihood learning and greedy decoding. Sequence-level learning, however, uses a decoding algorithm during training BIBREF25, BIBREF26. We hypothesize that sequence-level learning can result in a good sequence generator that is consistent with respect to incomplete decoding." 
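For concreteness, the self-terminating <eos> gate evaluated above can be sketched as follows. The display equations of Definition 4.3 are not reproduced in this extraction, so the snippet is one plausible parameterisation consistent with its stated constraints (sigma(x) = (1-eps)*sigmoid(x) as used in the experiments, a monotonically increasing <eos> probability, and the B = -log 2 / log(1-eps) bound from the consistency proof) rather than the authors' exact formulation.

```python
# Sketch (assumed PyTorch) of a self-terminating <eos> head.
import torch
import torch.nn.functional as F

def self_terminating_step(logits, eos_id, prev_eos_survival, eps=1e-3):
    # logits: (vocab,) unnormalised scores u_v^T h_t + c_v for the current step.
    # prev_eos_survival: running product of the per-step gates sigma so far
    # (start at 1.0 for the first step).
    sigma = (1.0 - eps) * torch.sigmoid(logits[eos_id])   # bounded by 1 - eps
    eos_survival = prev_eos_survival * sigma
    p_eos = 1.0 - eos_survival          # non-decreasing across time steps
    # Remaining probability mass is shared among non-<eos> tokens via softmax.
    non_eos_logits = torch.cat([logits[:eos_id], logits[eos_id + 1:]])
    p_rest = (1.0 - p_eos) * F.softmax(non_eos_logits, dim=0)
    probs = torch.cat([p_rest[:eos_id], p_eos.unsqueeze(0), p_rest[eos_id:]])
    return probs, eos_survival

# Toy usage: even with a gate held near its (1 - eps) cap, the running product
# decays at least geometrically, so p_eos crosses 1/2 within roughly
# B = -log 2 / log(1 - eps) ~ 693 steps for eps = 1e-3, at which point <eos>
# holds the majority of the mass and greedy decoding terminates.
survival = torch.tensor(1.0)
logits = torch.tensor([8.0, 2.0, 1.0, 0.5])   # index 0 stands in for <eos>
for _ in range(700):
    probs, survival = self_terminating_step(logits, eos_id=0,
                                            prev_eos_survival=survival)
print(float(probs[0]) > 0.5)   # True
```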
], [ "We extended the notion of consistency of a recurrent language model put forward by BIBREF16 to incorporate a decoding algorithm, and used it to analyze the discrepancy between a model and the distribution induced by a decoding algorithm. We proved that incomplete decoding is inconsistent, and proposed two methods to prevent this: consistent decoding and the self-terminating recurrent language model. Using a sequence completion task, we confirmed that empirical inconsistency occurs in practice, and that each method prevents inconsistency while maintaining the quality of generated sequences. We suspect the absence of decoding in maximum likelihood estimation as a cause behind this inconsistency, and suggest investigating sequence-level learning as an alternative in the future." ], [ "We thank Chris Dyer, Noah Smith and Kevin Knight for valuable discussions. This work was supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Research (Improving Deep Learning using Latent Structure). KC thanks eBay and NVIDIA for their support." ] ] }
{ "question": [ "How much improvement is gained from the proposed approaches?", "Is the problem of determining whether a given model would generate an infinite sequence is a decidable problem? ", "Is infinite-length sequence generation a result of training with maximum likelihood?" ], "question_id": [ "6f2f304ef292d8bcd521936f93afeec917cbe28a", "82fa2b99daa981fc42a882bb6db8481bdbbb9675", "61fb982b2c67541725d6db76b9c710dd169b533d" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f", "fa716cd87ce6fd6905e2f23f09b262e90413167f", "fa716cd87ce6fd6905e2f23f09b262e90413167f" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "It eliminates non-termination in some models fixing for some models up to 6% of non-termination ratio.", "evidence": [ "Table TABREF44 shows that consistent nucleus and top-$k$ sampling (§SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\\left<\\text{eos}\\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. when the constant is close to 1), though the cycle is guaranteed to eventually terminate.", "For the example decoded sequences in Table TABREF46, generation quality is similar when both the self-terminating and baseline models terminate (first row). For prefixes that led to non-termination with the baseline, the self-terminating variant can yield a finite sequence with reasonable quality (second row). This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition.", "FLOAT SELECTED: Table 2. Non-termination ratio (rL (%)) of decoded sequences using consistent sampling methods.", "FLOAT SELECTED: Table 1. Non-termination ratio (rL (%)) of decoded sequences using ancestral sampling and incomplete decoding methods." ], "highlighted_evidence": [ "Table TABREF44 shows that consistent nucleus and top-$k$ sampling (§SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\\left<\\text{eos}\\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. 
when the constant is close to 1), though the cycle is guaranteed to eventually terminate.", " This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition.", "FLOAT SELECTED: Table 2. Non-termination ratio (rL (%)) of decoded sequences using consistent sampling methods.", "FLOAT SELECTED: Table 1. Non-termination ratio (rL (%)) of decoded sequences using ancestral sampling and incomplete decoding methods." ] } ], "annotation_id": [ "cfd1e076d4a9b5356e4b4202f216399e66547e50" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "1cffe76ed7d5f8f9ba0dd6ee3592f71b0cf46488" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "There are is a strong conjecture that it might be the reason but it is not proven.", "evidence": [ "We extended the notion of consistency of a recurrent language model put forward by BIBREF16 to incorporate a decoding algorithm, and used it to analyze the discrepancy between a model and the distribution induced by a decoding algorithm. We proved that incomplete decoding is inconsistent, and proposed two methods to prevent this: consistent decoding and the self-terminating recurrent language model. Using a sequence completion task, we confirmed that empirical inconsistency occurs in practice, and that each method prevents inconsistency while maintaining the quality of generated sequences. We suspect the absence of decoding in maximum likelihood estimation as a cause behind this inconsistency, and suggest investigating sequence-level learning as an alternative in the future.", "Inconsistency may arise from the lack of decoding in solving this optimization problem. Maximum likelihood learning fits the model $p_{\\theta }$ using the data distribution, whereas a decoded sequence from the trained model follows the distribution $q_{\\mathcal {F}}$ induced by a decoding algorithm. Based on this discrepancy, we make a strong conjecture: we cannot be guaranteed to obtain a good consistent sequence generator using maximum likelihood learning and greedy decoding. Sequence-level learning, however, uses a decoding algorithm during training BIBREF25, BIBREF26. We hypothesize that sequence-level learning can result in a good sequence generator that is consistent with respect to incomplete decoding." ], "highlighted_evidence": [ "We suspect the absence of decoding in maximum likelihood estimation as a cause behind this inconsistency, and suggest investigating sequence-level learning as an alternative in the future.", "Inconsistency may arise from the lack of decoding in solving this optimization problem. Maximum likelihood learning fits the model $p_{\\theta }$ using the data distribution, whereas a decoded sequence from the trained model follows the distribution $q_{\\mathcal {F}}$ induced by a decoding algorithm. Based on this discrepancy, we make a strong conjecture: we cannot be guaranteed to obtain a good consistent sequence generator using maximum likelihood learning and greedy decoding." 
] } ], "annotation_id": [ "a830540b9688e3ea11b8b8b9185415022c4f3fb1" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Table 1. Non-termination ratio (rL (%)) of decoded sequences using ancestral sampling and incomplete decoding methods.", "Table 2. Non-termination ratio (rL (%)) of decoded sequences using consistent sampling methods.", "Table 3. Example continuations using nucleus and consistent nucleus (µ = 0.4) sampling with the LSTM-RNN.", "Table 4. Example continuations with the LSTM-RNN and a self-terminating LSTM-RNN ( = 10−3).", "Table 5. Non-termination ratio (rL (%)) of greedy-decoded sequences and test perplexity for self-terminating recurrent models.", "Table 6. More example continuations from the LSTM-RNN and a self-terminating LSTM-RNN ( = 10−3).", "Table 7. Grid search specification. The values selected for the LSTM-RNN and tanh-RNN models are shown in bold and italics, respectively.", "Table 8. Perplexities of trained recurrent language models." ], "file": [ "7-Table1-1.png", "7-Table2-1.png", "8-Table3-1.png", "8-Table4-1.png", "8-Table5-1.png", "12-Table6-1.png", "12-Table7-1.png", "12-Table8-1.png" ] }